1
Cao R, Zhang J, Zheng J, Wang Y, Brunner P, Willie JT, Wang S. A neural computational framework for face processing in the human temporal lobe. Curr Biol 2025;35:1765-1778.e6. PMID: 40118061; PMCID: PMC12014353; DOI: 10.1016/j.cub.2025.02.063
Abstract
A key question in cognitive neuroscience is how unified identity representations emerge from visual inputs. Here, we recorded intracranial electroencephalography (iEEG) from the human ventral temporal cortex (VTC) and medial temporal lobe (MTL), as well as single-neuron activity in the MTL, to demonstrate how dense feature-based representations in the VTC are translated into sparse identity-based representations in the MTL. First, we characterized the spatiotemporal neural dynamics of face coding in the VTC and MTL. The VTC, particularly the fusiform gyrus, exhibits robust axis-based feature coding. Remarkably, MTL neurons encode a receptive field within the VTC neural feature space, constructed using VTC neural axes, thereby bridging dense feature and sparse identity representations. We further validated our findings using recordings from a macaque. Lastly, inter-areal interactions between the VTC and MTL provide the physiological basis of this computational framework. Together, we reveal the neurophysiological underpinnings of a computational framework that explains how perceptual information is translated into face identities.
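The contrast the abstract draws between dense, axis-based feature coding (VTC) and sparse, receptive-field-like identity coding (MTL) can be illustrated with a toy simulation. Everything here is hypothetical (the feature space, dimensions, and tuning widths are invented for illustration; this is not the authors' code or data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 50-dimensional face-feature space (e.g., a DNN embedding of faces).
faces = rng.normal(size=(200, 50))

# Axis coding (VTC-like): response is a linear projection onto a preferred axis,
# so most faces evoke some graded response (a dense code).
axis = rng.normal(size=50)
axis /= np.linalg.norm(axis)
vtc_response = faces @ axis

# Receptive-field coding (MTL-like): response peaks near a preferred point in the
# same feature space and falls off with distance (Gaussian tuning, a sparse code).
center = faces[0]          # prefer the first identity
width = 5.0
dist = np.linalg.norm(faces - center, axis=1)
mtl_response = np.exp(-dist**2 / (2 * width**2))

print(mtl_response.argmax())  # 0: the preferred identity drives the strongest response
```

The sketch shows only the representational distinction; the paper's claim is that the MTL receptive field is constructed within a feature space built from VTC neural axes.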
Affiliation(s)
- Runnan Cao
- Department of Radiology, Washington University in St. Louis, St. Louis, MO 63110, USA
- Jie Zhang
- Department of Radiology, Washington University in St. Louis, St. Louis, MO 63110, USA
- Jie Zheng
- Department of Biomedical Engineering, University of California, Davis, Davis, CA 95618, USA
- Yue Wang
- Department of Radiology, Washington University in St. Louis, St. Louis, MO 63110, USA
- Peter Brunner
- Department of Neurosurgery, Washington University in St. Louis, St. Louis, MO 63110, USA
- Jon T Willie
- Department of Neurosurgery, Washington University in St. Louis, St. Louis, MO 63110, USA
- Shuo Wang
- Department of Radiology, Washington University in St. Louis, St. Louis, MO 63110, USA; Department of Neurosurgery, Washington University in St. Louis, St. Louis, MO 63110, USA
2
An NM, Roh H, Kim S, Kim JH, Im M. Machine Learning Techniques for Simulating Human Psychophysical Testing of Low-Resolution Phosphene Face Images in Artificial Vision. Adv Sci (Weinh) 2025;12:e2405789. PMID: 39985243; PMCID: PMC12005743; DOI: 10.1002/advs.202405789
Abstract
To evaluate the quality of artificial visual percepts generated by emerging methodologies, researchers often rely on labor-intensive and tedious human psychophysical experiments. These experiments necessitate repeated iterations upon any major/minor modifications in the hardware/software configurations. Here, the capacity of standard machine learning (ML) models is investigated to accurately replicate quaternary match-to-sample tasks using low-resolution facial images represented by arrays of phosphenes as input stimuli. Initially, the performance of ML models trained to approximate innate human facial recognition abilities is analyzed across a dataset comprising 3600 phosphene images of human faces. Subsequently, owing to time constraints and the potential for subject fatigue, the psychophysical test is limited to presenting only 720 low-resolution phosphene images to 36 human subjects. Notably, the superior model adeptly mirrors the behavioral trend of human subjects, offering precise predictions for 8 out of 9 phosphene quality levels on the overlapping test queries. Finally, human recognition performances for untested phosphene images are predicted, streamlining the process and minimizing the need for additional psychophysical tests. The findings underscore the transformative potential of ML in reshaping the research paradigm of visual prosthetics, facilitating the expedited advancement of prostheses.
Affiliation(s)
- Na Min An
- Brain Science Institute, Korea Institute of Science and Technology (KIST), Seoul 02792, Republic of Korea
- Present address: Kim Jaechul Graduate School of AI, KAIST, Seoul 02455, Republic of Korea
- Hyeonhee Roh
- Brain Science Institute, Korea Institute of Science and Technology (KIST), Seoul 02792, Republic of Korea
- Sein Kim
- Brain Science Institute, Korea Institute of Science and Technology (KIST), Seoul 02792, Republic of Korea
- Jae Hun Kim
- Brain Science Institute, Korea Institute of Science and Technology (KIST), Seoul 02792, Republic of Korea
- Sensor System Research Center, Advanced Materials and Systems Research Division, KIST, Seoul 02792, Republic of Korea
- Maesoon Im
- Brain Science Institute, Korea Institute of Science and Technology (KIST), Seoul 02792, Republic of Korea
- Division of Bio-Medical Science and Technology, University of Science and Technology (UST), Seoul 02792, Republic of Korea
- KHU-KIST Department of Converging Science and Technology, Kyung Hee University, Seoul 02447, Republic of Korea
3
Casile A, Cordier A, Kim JG, Cometa A, Madsen JR, Stone S, Ben-Yosef G, Ullman S, Anderson W, Kreiman G. Neural correlates of minimal recognizable configurations in the human brain. Cell Rep 2025;44:115429. PMID: 40096088; PMCID: PMC12045337; DOI: 10.1016/j.celrep.2025.115429
Abstract
Inferring object identity from incomplete information is a ubiquitous challenge for the visual system. Here, we study the neural mechanisms underlying processing of minimally recognizable configurations (MIRCs) and their subparts, which are unrecognizable (sub-MIRCs). MIRCs and sub-MIRCs are very similar at the pixel level, yet they lead to a dramatic gap in recognition performance. To evaluate how the brain processes such images, we invasively record human neurophysiological responses. Correct identification of MIRCs is associated with a dynamic interplay of feedback and feedforward mechanisms between frontal and temporal areas. Interpretation of sub-MIRC images improves dramatically after exposure to the corresponding full objects. This rapid and unsupervised learning is accompanied by changes in neural responses in the temporal cortex. These results are at odds with purely feedforward models of object recognition and suggest a role for the frontal lobe in providing top-down signals related to object identity in difficult visual tasks.
Affiliation(s)
- Antonino Casile
- Department of Biomedical and Dental Sciences and Morphofunctional Imaging, University of Messina, 98122 Messina, Italy
- Aurelie Cordier
- Children's Hospital, Harvard Medical School, Boston, MA 02115, USA
- Jiye G Kim
- Children's Hospital, Harvard Medical School, Boston, MA 02115, USA
- Andrea Cometa
- MoMiLab, IMT School for Advanced Studies, 55100 Lucca, Italy
- Joseph R Madsen
- Children's Hospital, Harvard Medical School, Boston, MA 02115, USA
- Scellig Stone
- Children's Hospital, Harvard Medical School, Boston, MA 02115, USA
- Shimon Ullman
- Weizmann Institute, Rehovot, Israel; Center for Brains, Minds and Machines, Cambridge, MA 02142, USA
- William Anderson
- Department of Neurosurgery, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Gabriel Kreiman
- Children's Hospital, Harvard Medical School, Boston, MA 02115, USA; Center for Brains, Minds and Machines, Cambridge, MA 02142, USA
4
Moran C, Johnson PA, Hogendoorn H, Landau AN. The Representation of Stimulus Features during Stable Fixation and Active Vision. J Neurosci 2025;45:e1652242024. PMID: 39880676; PMCID: PMC11924989; DOI: 10.1523/jneurosci.1652-24.2024
Abstract
Predictive updating of an object's spatial coordinates from presaccade to postsaccade contributes to stable visual perception. Whether object features are predictively remapped remains contested. We set out to characterize the spatiotemporal dynamics of feature processing during stable fixation and active vision. To do so, we applied multivariate decoding methods to EEG data collected while human participants (male and female) viewed brief visual stimuli. Stimuli appeared at different locations across the visual field at either high or low spatial frequency (SF). During fixation, classifiers were trained to decode SF presented at one parafoveal location and cross-tested on SF from either the same, adjacent, or more peripheral locations. When training and testing on the same location, SF was classified shortly after stimulus onset (∼79 ms). Decoding of SF at locations farther from the trained location emerged later (∼144-295 ms), with decoding latency modulated by eccentricity. This analysis provides a detailed time course for the spread of feature information across the visual field. Next, we investigated how active vision impacts the emergence of SF information. In the presence of a saccade, the decoding time of peripheral SF at parafoveal locations was earlier, indicating predictive anticipation of SF due to the saccade. Crucially, however, this predictive effect was not limited to the specific remapped location. Rather, peripheral SF was correctly classified, at an accelerated time course, at all parafoveal positions. This indicates spatially coarse, predictive anticipation of stimulus features during active vision, likely enabling a smooth transition on saccade landing.
Affiliation(s)
- Caoimhe Moran
- Melbourne School of Psychological Sciences, The University of Melbourne, Parkville, Melbourne, Victoria 3052, Australia
- Department of Psychology, The Hebrew University of Jerusalem, Mount Scopus, Jerusalem 9190501, Israel
- Department of Cognitive and Brain Sciences, The Hebrew University of Jerusalem, Mount Scopus, Jerusalem 9190501, Israel
- Philippa A Johnson
- Cognitive Psychology Unit, Institute of Psychology & Leiden Institute for Brain and Cognition, Leiden University, Leiden 2333 AK, Netherlands
- Hinze Hogendoorn
- Melbourne School of Psychological Sciences, The University of Melbourne, Parkville, Melbourne, Victoria 3052, Australia
- School of Psychology and Counselling, Queensland University of Technology, St Lucia, Brisbane, Queensland 4072, Australia
- Ayelet N Landau
- Department of Psychology, The Hebrew University of Jerusalem, Mount Scopus, Jerusalem 9190501, Israel
- Department of Cognitive and Brain Sciences, The Hebrew University of Jerusalem, Mount Scopus, Jerusalem 9190501, Israel
- Department of Experimental Psychology, University College London, London WC1H 0AP, United Kingdom
5
Mengers V, Roth N, Brock O, Obermayer K, Rolfs M. A robotics-inspired scanpath model reveals the importance of uncertainty and semantic object cues for gaze guidance in dynamic scenes. J Vis 2025;25:6. PMID: 39928323; PMCID: PMC11812614; DOI: 10.1167/jov.25.2.6
Abstract
The objects we perceive guide our eye movements when observing real-world dynamic scenes. Yet, gaze shifts and selective attention are critical for perceiving details and refining object boundaries. Object segmentation and gaze behavior are, however, typically treated as two independent processes. Here, we present a computational model that simulates these processes in an interconnected manner and allows for hypothesis-driven investigations of distinct attentional mechanisms. Drawing on an information processing pattern from robotics, we use a Bayesian filter to recursively segment the scene, which also provides an uncertainty estimate for the object boundaries that we use to guide active scene exploration. We demonstrate that this model closely resembles observers' free viewing behavior on a dataset of dynamic real-world scenes, measured by scanpath statistics, including foveation duration and saccade amplitude distributions used for parameter fitting and higher-level statistics not used for fitting. These include how object detections, inspections, and returns are balanced and a delay of returning saccades without an explicit implementation of such temporal inhibition of return. Extensive simulations and ablation studies show that uncertainty promotes balanced exploration and that semantic object cues are crucial to forming the perceptual units used in object-based attention. Moreover, we show how our model's modular design allows for extensions, such as incorporating saccadic momentum or presaccadic attention, to further align its output with human scanpaths.
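The core loop described above, recursive Bayesian belief updating over a segmentation with fixations directed at the most uncertain locations, can be sketched in a minimal 1-D simulation. The scene, noise model, and entropy rule below are illustrative assumptions, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(2)
n_loc = 20
true_object = (np.arange(n_loc) >= 8) & (np.arange(n_loc) < 14)  # object spans 8..13

# Belief that each location belongs to the object; 0.5 = maximal uncertainty.
belief = np.full(n_loc, 0.5)

def fixate(belief, loc, acuity=3):
    """Bayesian update from a noisy observation: reliable at the fovea,
    unreliable in the periphery."""
    noise = 0.05 + 0.3 * np.clip(np.abs(np.arange(n_loc) - loc) / acuity, 0, 1)
    obs = np.where(rng.random(n_loc) < noise, ~true_object, true_object)
    lik = np.where(obs, 1 - noise, noise)          # P(obs | object)
    return lik * belief / (lik * belief + (1 - lik) * (1 - belief))

# Active exploration: always fixate where boundary uncertainty (entropy) is highest.
for _ in range(15):
    entropy = -(belief * np.log(belief + 1e-9)
                + (1 - belief) * np.log(1 - belief + 1e-9))
    belief = fixate(belief, int(entropy.argmax()))

print((belief.round() == true_object).mean())  # most locations correctly segmented
```

The design point this illustrates: the same uncertainty estimate that the filter maintains for segmentation doubles as the drive for gaze selection, coupling the two processes rather than treating them independently.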
Affiliation(s)
- Vito Mengers
- Technische Universität Berlin, Berlin, Germany
- Science of Intelligence, Research Cluster of Excellence, Berlin, Germany
- Nicolas Roth
- Technische Universität Berlin, Berlin, Germany
- Science of Intelligence, Research Cluster of Excellence, Berlin, Germany
- Oliver Brock
- Technische Universität Berlin, Berlin, Germany
- Science of Intelligence, Research Cluster of Excellence, Berlin, Germany
- Klaus Obermayer
- Technische Universität Berlin, Berlin, Germany
- Science of Intelligence, Research Cluster of Excellence, Berlin, Germany
- Martin Rolfs
- Humboldt-Universität zu Berlin, Berlin, Germany
- Science of Intelligence, Research Cluster of Excellence, Berlin, Germany
6
Hu Y, Mohsenzadeh Y. Neural processing of naturalistic audiovisual events in space and time. Commun Biol 2025;8:110. PMID: 39843939; PMCID: PMC11754444; DOI: 10.1038/s42003-024-07434-5
Abstract
Our brain seamlessly integrates distinct sensory information to form a coherent percept. However, when real-world audiovisual events are perceived, the specific brain regions and timings for processing different levels of information remain under-investigated. To address this, we curated naturalistic videos and recorded functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) data while participants viewed videos with accompanying sounds. Our findings reveal early asymmetrical cross-modal interaction, with acoustic information represented in both early visual and auditory regions, whereas visual information was identified only in visual cortices. The visual and auditory features were processed with similar onsets but different temporal dynamics. High-level categorical and semantic information emerged in multisensory association areas later in time, indicating late cross-modal integration and its distinct role in converging conceptual information. Comparing neural representations to a two-branch deep neural network model highlighted the necessity of early cross-modal connections to build a biologically plausible model of audiovisual perception. With EEG-fMRI fusion, we provide a spatiotemporally resolved account of neural activity during the processing of naturalistic audiovisual stimuli.
Affiliation(s)
- Yu Hu
- Western Institute for Neuroscience, Western University, London, ON, Canada
- Vector Institute for Artificial Intelligence, Toronto, ON, Canada
- Yalda Mohsenzadeh
- Western Institute for Neuroscience, Western University, London, ON, Canada
- Vector Institute for Artificial Intelligence, Toronto, ON, Canada
- Department of Computer Science, Western University, London, ON, Canada
7
Nakamura D, Kaji S, Kanai R, Hayashi R. Unsupervised method for representation transfer from one brain to another. Front Neuroinform 2024;18:1470845. PMID: 39669979; PMCID: PMC11634869; DOI: 10.3389/fninf.2024.1470845
Abstract
Although the anatomical arrangement of brain regions and the functional structures within them are similar across individuals, the representation of neural information, such as recorded brain activity, varies among individuals owing to various factors. Therefore, appropriate conversion and translation of brain information is essential when decoding neural information using a model trained on another person's data, or to achieve brain-to-brain communication. We propose a brain representation transfer method that transforms a data representation obtained from one person's brain into that obtained from another person's brain, without relying on corresponding label information between the transferred datasets. We defined the requirements for such brain representation transfer and developed an algorithm that distills the assumption of a common similarity structure across brain datasets into a rotational and reflectional transformation across low-dimensional hyperspheres, using encoders for non-linear dimensionality reduction. We first validated our proposed method using data from artificial neural networks as substitute neural activity, examining various experimental factors. We then evaluated the applicability of our method to real brain activity using functional magnetic resonance imaging response data acquired from human participants. The results of these validation experiments showed that our method successfully performed representation transfer and, in some cases, achieved transformations similar to those obtained using corresponding label information. Additionally, we reconstructed images from individuals' data without training personalized decoders by performing brain representation transfer. These results suggest that our unsupervised transfer method is useful for reapplying existing models, personalized to specific participants and datasets, to decode brain information from other individuals. Our findings also serve as a proof of concept for the methodology, enabling the exchange of the latent properties of neural information representing individuals' sensations.
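The rotational/reflectional alignment at the heart of the method described above can be illustrated with orthogonal Procrustes on synthetic data. For brevity this sketch assumes the correspondence between the two datasets is known, which is precisely the supervision the authors' unsupervised method avoids needing; dimensions and data are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Shared latent representations on a low-dimensional hypersphere.
z = rng.normal(size=(100, 3))
z /= np.linalg.norm(z, axis=1, keepdims=True)

# "Brain B" carries the same structure under an unknown orthogonal map
# (a rotation, possibly with a reflection).
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
brain_a, brain_b = z, z @ q.T

# Orthogonal Procrustes: recover the map between the two spaces via SVD.
u, _, vt = np.linalg.svd(brain_b.T @ brain_a)
r = u @ vt   # best orthogonal transform mapping brain_a -> brain_b

print(np.allclose(brain_a @ r.T, brain_b, atol=1e-8))  # True
```

Because the transform is constrained to be orthogonal, it can only rotate or reflect the hypersphere, matching the transformation class the abstract describes.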
Affiliation(s)
- Daiki Nakamura
- Human Informatics and Interaction Research Institute, National Institute of Advanced Industrial Science and Technology, Ibaraki, Japan
- Shizuo Kaji
- Institute of Mathematics for Industry, Kyushu University, Fukuoka, Japan
- Ryusuke Hayashi
- Human Informatics and Interaction Research Institute, National Institute of Advanced Industrial Science and Technology, Ibaraki, Japan
8
Jovanović V, Petrušić I, Savić A, Ković V. Processing of visual hapaxes in picture naming task: An event-related potential study. Int J Psychophysiol 2024;203:112394. PMID: 39053735; DOI: 10.1016/j.ijpsycho.2024.112394
Abstract
Object recognition and visual categorization are typically swift and seemingly effortless tasks that involve numerous underlying processes. In our investigation, we utilized a picture naming task to explore the processing of rarely encountered objects (visual hapaxes) in comparison to common objects. Our aim was to determine the stage at which these rare objects are classified as unnamable. Contrary to our expectations, and in contrast to some prior research on event-related potentials (ERPs) with novel and atypical objects, no differences between conditions were observed in the late time windows corresponding to the P300 or N400 components. However, distinctive patterns between hapaxes and common objects surfaced in three early time windows, corresponding to the posterior N1 and P2 waves, as well as a widespread N2 wave. According to the ERP data, the differentiation between hapaxes and common objects occurs within the first 380 ms of the processing stream, involving only limited and indirect top-down influence.
Affiliation(s)
- Vojislav Jovanović
- University of Belgrade, Faculty of Philosophy, Department of Psychology, Laboratory for Neurocognition and Applied Cognition, 11000 Belgrade, Serbia
- Igor Petrušić
- University of Belgrade, Faculty of Physical Chemistry, Laboratory for Advanced Analysis of Neuroimages, 11000 Belgrade, Serbia
- Andrej Savić
- University of Belgrade, School of Electrical Engineering, Science and Research Centre, 11000 Belgrade, Serbia
- Vanja Ković
- University of Belgrade, Faculty of Philosophy, Department of Psychology, Laboratory for Neurocognition and Applied Cognition, 11000 Belgrade, Serbia
9
Chen Y, Beech P, Yin Z, Jia S, Zhang J, Yu Z, Liu JK. Decoding dynamic visual scenes across the brain hierarchy. PLoS Comput Biol 2024;20:e1012297. PMID: 39093861; PMCID: PMC11324145; DOI: 10.1371/journal.pcbi.1012297
Abstract
Understanding the computational mechanisms that underlie the encoding and decoding of environmental stimuli is a crucial investigation in neuroscience. Central to this pursuit is the exploration of how the brain represents visual information across its hierarchical architecture. A prominent challenge resides in discerning the neural underpinnings of the processing of dynamic natural visual scenes. Although considerable research efforts have been made to characterize individual components of the visual pathway, a systematic understanding of the distinctive neural coding associated with visual stimuli, as they traverse this hierarchical landscape, remains elusive. In this study, we leverage the comprehensive Allen Visual Coding-Neuropixels dataset and utilize the capabilities of deep learning neural network models to study neural coding in response to dynamic natural visual scenes across an expansive array of brain regions. Our study reveals that our decoding model adeptly deciphers visual scenes from neural spiking patterns exhibited within each distinct brain area. A compelling observation arises from the comparative analysis of decoding performances, which manifests as a notable encoding proficiency within the visual cortex and subcortical nuclei, in contrast to a relatively reduced encoding activity within hippocampal neurons. Strikingly, our results unveil a robust correlation between our decoding metrics and well-established anatomical and functional hierarchy indexes. These findings corroborate existing knowledge in visual coding related to artificial visual stimuli and illuminate the functional role of these deeper brain regions using dynamic stimuli. Consequently, our results suggest a novel perspective on the utility of decoding neural network models as a metric for quantifying the encoding quality of dynamic natural visual scenes represented by neural responses, thereby advancing our comprehension of visual coding within the complex hierarchy of the brain.
Affiliation(s)
- Ye Chen
- School of Computer Science, Peking University, Beijing, China
- Institute for Artificial Intelligence, Peking University, Beijing, China
- Peter Beech
- School of Computing, University of Leeds, Leeds, United Kingdom
- Ziwei Yin
- School of Computer Science, Centre for Human Brain Health, University of Birmingham, Birmingham, United Kingdom
- Shanshan Jia
- School of Computer Science, Peking University, Beijing, China
- Institute for Artificial Intelligence, Peking University, Beijing, China
- Jiayi Zhang
- Institutes of Brain Science, State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institute for Medical and Engineering Innovation, Eye & ENT Hospital, Fudan University, Shanghai, China
- Zhaofei Yu
- School of Computer Science, Peking University, Beijing, China
- Institute for Artificial Intelligence, Peking University, Beijing, China
- Jian K. Liu
- School of Computing, University of Leeds, Leeds, United Kingdom
- School of Computer Science, Centre for Human Brain Health, University of Birmingham, Birmingham, United Kingdom
10
Motlagh SC, Joanisse M, Wang B, Mohsenzadeh Y. Unveiling the neural dynamics of conscious perception in rapid object recognition. Neuroimage 2024;296:120668. PMID: 38848982; DOI: 10.1016/j.neuroimage.2024.120668
Abstract
Our brain excels at recognizing objects, even when they flash by in a rapid sequence. However, the neural processes determining whether a target image in a rapid sequence can be recognized or not remain elusive. We used electroencephalography (EEG) to investigate the temporal dynamics of the brain processes that shape perceptual outcomes under these challenging viewing conditions. Using naturalistic images and advanced multivariate pattern analysis (MVPA) techniques, we probed the brain dynamics governing conscious object recognition. Our results show that although initially similar, the processes for when an object can or cannot be recognized diverge around 180 ms post-appearance, coinciding with feedback neural processes. Decoding analyses indicate that gist perception (partial conscious perception) can occur at ∼120 ms through feedforward mechanisms. In contrast, object identification (full conscious perception of the image) is resolved at ∼190 ms after target onset, suggesting involvement of recurrent processing. These findings underscore the importance of recurrent neural connections in object recognition and awareness during rapid visual presentations.
Affiliation(s)
- Saba Charmi Motlagh
- Western Center for Brain and Mind, Western University, London, Ontario, Canada; Vector Institute for Artificial Intelligence, Toronto, Ontario, Canada
- Marc Joanisse
- Western Center for Brain and Mind, Western University, London, Ontario, Canada; Department of Psychology, Western University, London, Ontario, Canada
- Boyu Wang
- Western Center for Brain and Mind, Western University, London, Ontario, Canada; Vector Institute for Artificial Intelligence, Toronto, Ontario, Canada; Department of Computer Science, Western University, London, Ontario, Canada
- Yalda Mohsenzadeh
- Western Center for Brain and Mind, Western University, London, Ontario, Canada; Vector Institute for Artificial Intelligence, Toronto, Ontario, Canada; Department of Computer Science, Western University, London, Ontario, Canada
11
Abdel-Ghaffar SA, Huth AG, Lescroart MD, Stansbury D, Gallant JL, Bishop SJ. Occipital-temporal cortical tuning to semantic and affective features of natural images predicts associated behavioral responses. Nat Commun 2024;15:5531. PMID: 38982092; PMCID: PMC11233618; DOI: 10.1038/s41467-024-49073-8
Abstract
In everyday life, people need to respond appropriately to many types of emotional stimuli. Here, we investigate whether human occipital-temporal cortex (OTC) shows co-representation of the semantic category and affective content of visual stimuli. We also explore whether OTC transformation of semantic and affective features extracts information of value for guiding behavior. Participants viewed 1620 emotional natural images while functional magnetic resonance imaging data were acquired. Using voxel-wise modeling we show widespread tuning to semantic and affective image features across OTC. The top three principal components underlying OTC voxel-wise responses to image features encoded stimulus animacy, stimulus arousal and interactions of animacy with stimulus valence and arousal. At low to moderate dimensionality, OTC tuning patterns predicted behavioral responses linked to each image better than regressors directly based on image features. This is consistent with OTC representing stimulus semantic category and affective content in a manner suited to guiding behavior.
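The voxel-wise modeling pipeline described above (fit a regularized tuning vector per voxel, then summarize tuning patterns with principal components) can be sketched on synthetic data. The feature set, ridge penalty, and dimensions are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(4)
n_stim, n_feat, n_vox = 300, 20, 50

# Hypothetical stimulus features (semantic/affective descriptors per image).
features = rng.normal(size=(n_stim, n_feat))

# Simulated voxel responses: each voxel linearly tuned to the features, plus noise.
true_weights = rng.normal(size=(n_feat, n_vox))
bold = features @ true_weights + 0.5 * rng.normal(size=(n_stim, n_vox))

# Voxel-wise ridge regression: one tuning vector per voxel, fit jointly.
lam = 1.0
w = np.linalg.solve(features.T @ features + lam * np.eye(n_feat),
                    features.T @ bold)

# Principal components of the voxel tuning patterns summarize the dominant
# feature dimensions represented across voxels (cf. animacy/arousal axes).
w_centered = w.T - w.T.mean(0)
_, s, _ = np.linalg.svd(w_centered, full_matrices=False)
var_explained = s**2 / (s**2).sum()

corr = np.corrcoef(w.ravel(), true_weights.ravel())[0, 1]
print(round(corr, 2))  # recovered tuning correlates highly with the simulated tuning
```

In this toy version the low-dimensional summary is a plain PCA of the fitted weights; the paper additionally relates such components to behavioral responses.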
Affiliation(s)
- Samy A Abdel-Ghaffar
- Department of Psychology, UC Berkeley, Berkeley, CA 94720, USA
- Google LLC, San Francisco, CA, USA
- Alexander G Huth
- Centre for Theoretical and Computational Neuroscience, UT Austin, Austin, TX 78712, USA
- Mark D Lescroart
- Department of Psychology, University of Nevada, Reno, Reno, NV 89557, USA
- Dustin Stansbury
- Program in Vision Sciences, UC Berkeley, Berkeley, CA 94720, USA
- Jack L Gallant
- Department of Psychology, UC Berkeley, Berkeley, CA 94720, USA
- Program in Vision Sciences, UC Berkeley, Berkeley, CA 94720, USA
- Helen Wills Neuroscience Institute, UC Berkeley, Berkeley, CA 94720, USA
- Sonia J Bishop
- Department of Psychology, UC Berkeley, Berkeley, CA 94720, USA
- Helen Wills Neuroscience Institute, UC Berkeley, Berkeley, CA 94720, USA
- School of Psychology, Trinity College Dublin, Dublin, Ireland
- Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin D02 PX31, Ireland
12
Subramaniam V, Conwell C, Wang C, Kreiman G, Katz B, Cases I, Barbu A. Revealing Vision-Language Integration in the Brain with Multimodal Networks. arXiv 2024; arXiv:2406.14481v1. PMID: 38947929; PMCID: PMC11213144
Abstract
We use (multi)modal deep neural networks (DNNs) to probe for sites of multimodal integration in the human brain by predicting stereoencephalography (SEEG) recordings taken while human subjects watched movies. We operationalize sites of multimodal integration as regions where a multimodal vision-language model predicts recordings better than unimodal language, unimodal vision, or linearly integrated language-vision models. Our target DNN models span different architectures (e.g., convolutional networks and transformers) and multimodal training techniques (e.g., cross-attention and contrastive learning). As a key enabling step, we first demonstrate that trained vision and language models systematically outperform their randomly initialized counterparts in their ability to predict SEEG signals. We then compare unimodal and multimodal models against one another. Because our target DNN models often have different architectures, numbers of parameters, and training sets (possibly obscuring differences attributable to integration), we carry out a controlled comparison of two models (SLIP and SimCLR) that keep all of these attributes the same aside from input modality. Using this approach, we identify a sizable number of neural sites (on average 141 out of 1090 total sites, or 12.94%) and brain regions where multimodal integration seems to occur. Additionally, we find that among the variants of multimodal training techniques we assess, CLIP-style training is the best suited for downstream prediction of the neural activity in these sites.
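The operationalization described above, flagging a site as multimodal when a model with cross-modal features predicts it better than a linearly integrated unimodal model, can be illustrated with a toy linear-readout comparison. The simulated site and features are invented stand-ins (the paper uses DNN embeddings and real SEEG):

```python
import numpy as np

rng = np.random.default_rng(5)
n_t, d_vis, d_lang = 500, 10, 10

# Simulated movie features and a neural site that integrates both modalities
# nonlinearly, so no purely unimodal or linear combination fully explains it.
vis = rng.normal(size=(n_t, d_vis))
lang = rng.normal(size=(n_t, d_lang))
site = vis[:, 0] * lang[:, 0] + 0.5 * vis[:, 1] + 0.5 * lang[:, 1]

def r2(x, y):
    """Held-out R^2 of a linear readout from features x to signal y."""
    half = n_t // 2
    w, *_ = np.linalg.lstsq(x[:half], y[:half], rcond=None)
    resid = y[half:] - x[half:] @ w
    return 1 - resid.var() / y[half:].var()

multimodal = np.hstack([vis, lang, vis[:, :1] * lang[:, :1]])  # cross-modal feature
unimodal = np.hstack([vis, lang])                              # linearly integrated

# The site is flagged as "multimodal" because the multimodal model predicts it better.
print(r2(multimodal, site) > r2(unimodal, site))  # True
```

The linearly integrated baseline matters: without it, a predictivity gap could reflect mere access to both feature sets rather than genuine cross-modal interaction.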
Affiliation(s)
- Colin Conwell
- Department of Cognitive Science, Johns Hopkins University
13
Djambazovska S, Zafer A, Ramezanpour H, Kreiman G, Kar K. The Impact of Scene Context on Visual Object Recognition: Comparing Humans, Monkeys, and Computational Models. bioRxiv 2024; 2024.05.27.596127. PMID: 38854011; PMCID: PMC11160639; DOI: 10.1101/2024.05.27.596127.
Abstract
During natural vision, we rarely see objects in isolation but rather embedded in rich and complex contexts. Understanding how the brain recognizes objects in natural scenes by integrating contextual information remains a key challenge. To elucidate neural mechanisms compatible with human visual processing, we need an animal model that behaves similarly to humans, so that inferred neural mechanisms can provide hypotheses relevant to the human brain. Here we assessed whether rhesus macaques could model human context-driven object recognition by quantifying visual object identification abilities across variations in the amount, quality, and congruency of contextual cues. Behavioral metrics revealed strikingly similar context-dependent patterns between humans and monkeys. However, neural responses in the inferior temporal (IT) cortex of monkeys that were never explicitly trained to discriminate objects in context, as well as current artificial neural network models, could only partially explain this cross-species correspondence. The shared behavioral variance unexplained by context-naive neural data or computational models highlights fundamental knowledge gaps. Our findings demonstrate an intriguing alignment of human and monkey visual object processing that defies full explanation by either brain activity in a key visual region or state-of-the-art models.
Affiliation(s)
- Sara Djambazovska
- York University, Department of Biology and Centre for Vision Research, Toronto, Canada
- Children’s Hospital, Harvard Medical School, MA, USA
- Anaa Zafer
- York University, Department of Biology and Centre for Vision Research, Toronto, Canada
- Hamidreza Ramezanpour
- York University, Department of Biology and Centre for Vision Research, Toronto, Canada
- Kohitij Kar
- York University, Department of Biology and Centre for Vision Research, Toronto, Canada
14
Panagiotaropoulos TI. An integrative view of the role of prefrontal cortex in consciousness. Neuron 2024; 112:1626-1641. PMID: 38754374; DOI: 10.1016/j.neuron.2024.04.028.
Abstract
The involvement of the prefrontal cortex (PFC) in consciousness is an ongoing focus of intense investigation. An important question is whether representations of conscious contents and experiences in the PFC are confounded by post-perceptual processes related to cognitive functions. Here, I review recent findings suggesting that neuronal representations of consciously perceived contents (in the absence of post-perceptual processes) can indeed be observed in the PFC. Slower ongoing fluctuations in the electrophysiological state of the PFC seem to control the stability and updates of these prefrontal representations of conscious awareness. In addition to conscious perception, the PFC has been shown to play a critical role in controlling the levels of consciousness as observed during anesthesia, while prefrontal lesions can result in severe loss of perceptual awareness. Together, the convergence of these processes in the PFC suggests its integrative role in consciousness and highlights the complex nature of consciousness itself.
15
Cone JJ, Mitchell AO, Parker RK, Maunsell JHR. Stimulus-dependent differences in cortical versus subcortical contributions to visual detection in mice. Curr Biol 2024; 34:1940-1952.e5. PMID: 38640924; PMCID: PMC11080572; DOI: 10.1016/j.cub.2024.03.061.
Abstract
The primary visual cortex (V1) and the superior colliculus (SC) both occupy stations early in the processing of visual information. They have long been thought to perform distinct functions, with the V1 supporting the perception of visual features and the SC regulating orienting to visual inputs. However, growing evidence suggests that the SC supports the perception of many of the same visual features traditionally associated with the V1. To distinguish V1 and SC contributions to visual processing, it is critical to determine whether both areas causally contribute to the detection of specific visual stimuli. Here, mice reported changes in visual contrast or luminance near their perceptual threshold while white noise patterns of optogenetic stimulation were delivered to V1 or SC inhibitory neurons. We then performed a reverse correlation analysis on the optogenetic stimuli to estimate a neuronal-behavioral kernel (NBK), a moment-to-moment estimate of the impact of V1 or SC inhibition on stimulus detection. We show that the earliest moments of stimulus-evoked activity in the SC are critical for the detection of both luminance and contrast changes. Strikingly, there was a robust stimulus-aligned modulation in the V1 contrast-detection NBK but no sign of a comparable modulation for luminance detection. The data suggest that behavioral detection of visual contrast depends on both V1 and SC spiking, whereas mice preferentially use SC activity to detect changes in luminance. Electrophysiological recordings showed that neurons in both the SC and V1 responded strongly to both visual stimulus types, while the reverse correlation analysis reveals when these neuronal signals actually contribute to visually guided behaviors.
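The reverse-correlation step described here (relating white-noise optogenetic stimulation to trial outcomes) can be sketched in a few lines. This is a hypothetical toy version of the neuronal-behavioral kernel (NBK) estimate: the simulated kernel, trial counts, and logistic detection model are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground truth for the toy: perturbing activity EARLY in the trial impairs detection.
n_bins = 20
true_kernel = np.zeros(n_bins)
true_kernel[3:7] = -1.0                    # causally important early window

# White-noise optogenetic stimulation, one row per trial.
n_trials = 20_000
stim = rng.standard_normal((n_trials, n_bins))
p_detect = 1.0 / (1.0 + np.exp(-(0.5 + stim @ true_kernel)))
detected = rng.random(n_trials) < p_detect

# Reverse correlation: mean stimulus on detected minus non-detected trials.
nbk = stim[detected].mean(axis=0) - stim[~detected].mean(axis=0)
# nbk dips in the causally important window and stays near zero elsewhere.
```

The estimated kernel is a moment-to-moment map of when the perturbation mattered for behavior, which is the logic behind the NBK.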
Affiliation(s)
- Jackson J Cone
- Department of Neurobiology and Neuroscience Institute, University of Chicago, 5812 S. Ellis Ave. MC 0912, Suite P-400, Chicago, IL 60637, USA.
- Autumn O Mitchell
- Department of Neurobiology and Neuroscience Institute, University of Chicago, 5812 S. Ellis Ave. MC 0912, Suite P-400, Chicago, IL 60637, USA
- Rachel K Parker
- Department of Neurobiology and Neuroscience Institute, University of Chicago, 5812 S. Ellis Ave. MC 0912, Suite P-400, Chicago, IL 60637, USA
- John H R Maunsell
- Department of Neurobiology and Neuroscience Institute, University of Chicago, 5812 S. Ellis Ave. MC 0912, Suite P-400, Chicago, IL 60637, USA
16
Lee K, Dora S, Mejias JF, Bohte SM, Pennartz CMA. Predictive coding with spiking neurons and feedforward gist signaling. Front Comput Neurosci 2024; 18:1338280. PMID: 38680678; PMCID: PMC11045951; DOI: 10.3389/fncom.2024.1338280.
Abstract
Predictive coding (PC) is an influential theory in neuroscience, which suggests the existence of a cortical architecture that is constantly generating and updating predictive representations of sensory inputs. Owing to its hierarchical and generative nature, PC has inspired many computational models of perception in the literature. However, the biological plausibility of existing models has not been sufficiently explored due to their use of artificial neurons that approximate neural activity with firing rates in the continuous time domain and propagate signals synchronously. Therefore, we developed a spiking neural network for predictive coding (SNN-PC), in which neurons communicate using event-driven and asynchronous spikes. Adopting the hierarchical structure and Hebbian learning algorithms from previous PC neural network models, SNN-PC introduces two novel features: (1) a fast feedforward sweep from the input to higher areas, which generates a spatially reduced and abstract representation of input (i.e., a neural code for the gist of a scene) and provides a neurobiological alternative to an arbitrary choice of priors; and (2) a separation of positive and negative error-computing neurons, which counters the biological implausibility of a bi-directional error neuron with a very high baseline firing rate. After training with the MNIST handwritten digit dataset, SNN-PC developed hierarchical internal representations and was able to reconstruct samples it had not seen during training. SNN-PC suggests biologically plausible mechanisms by which the brain may perform perceptual inference and learning in an unsupervised manner. In addition, it may be used in neuromorphic applications that can utilize its energy-efficient, event-driven, local learning, and parallel information processing nature.
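Feature (2), the split of the error signal into separate non-negative populations, can be sketched in a rate-based toy. This is not the SNN-PC implementation (which uses spiking neurons and Hebbian learning); the dimensions, learning rate, and orthonormal toy weights below are illustrative assumptions.

```python
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

def pc_step(x, r, W, lr=0.1):
    """One rate-based inference step with split error populations.

    Instead of one signed, bi-directional error neuron, two non-negative
    populations carry the error, mirroring feature (2) of SNN-PC.
    """
    pred = W @ r                   # top-down prediction of the input
    e_pos = relu(x - pred)         # fires when input exceeds prediction
    e_neg = relu(pred - x)         # fires when prediction exceeds input
    # The signed error is recovered downstream as e_pos - e_neg.
    return r + lr * (W.T @ (e_pos - e_neg)), e_pos, e_neg

rng = np.random.default_rng(6)
W, _ = np.linalg.qr(rng.standard_normal((20, 5)))  # well-conditioned toy weights
r_true = np.array([1.0, -0.5, 0.3, 0.0, 0.8])
x = W @ r_true                     # input that the generative model can explain

r = np.zeros(5)
for _ in range(200):
    r, e_pos, e_neg = pc_step(x, r, W)
err = np.linalg.norm(x - W @ r)    # prediction error after inference
```

Both error populations stay non-negative throughout (plausible firing rates), yet their difference still drives inference to a near-zero residual.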
Affiliation(s)
- Kwangjun Lee
- Cognitive and Systems Neuroscience Group, Swammerdam Institute for Life Sciences, Faculty of Science, University of Amsterdam, Amsterdam, Netherlands
- Shirin Dora
- Cognitive and Systems Neuroscience Group, Swammerdam Institute for Life Sciences, Faculty of Science, University of Amsterdam, Amsterdam, Netherlands
- Department of Computer Science, School of Science, Loughborough University, Loughborough, United Kingdom
- Jorge F. Mejias
- Cognitive and Systems Neuroscience Group, Swammerdam Institute for Life Sciences, Faculty of Science, University of Amsterdam, Amsterdam, Netherlands
- Sander M. Bohte
- Cognitive and Systems Neuroscience Group, Swammerdam Institute for Life Sciences, Faculty of Science, University of Amsterdam, Amsterdam, Netherlands
- Machine Learning Group, Centre of Mathematics and Computer Science, Amsterdam, Netherlands
- Cyriel M. A. Pennartz
- Cognitive and Systems Neuroscience Group, Swammerdam Institute for Life Sciences, Faculty of Science, University of Amsterdam, Amsterdam, Netherlands
17
Ziereis A, Schacht A. Additive effects of emotional expression and stimulus size on the perception of genuine and artificial facial expressions: an ERP study. Sci Rep 2024; 14:5574. PMID: 38448642; PMCID: PMC10918072; DOI: 10.1038/s41598-024-55678-2.
Abstract
Seeing an angry individual in close physical proximity can not only result in a larger retinal representation of that individual and an enhanced resolution of emotional cues, but may also increase motivation for rapid visual processing and action preparation. The present study investigated the effects of stimulus size and emotional expression on the perception of happy, angry, non-expressive, and scrambled faces. We analyzed event-related potentials (ERPs) and behavioral responses of N = 40 participants who performed a naturalness classification task on real and artificially created facial expressions. While the emotion-related effects on accuracy for recognizing authentic expressions were modulated by stimulus size, ERPs showed only additive effects of stimulus size and emotional expression, with no significant interaction with size. This contrasts with previous research on emotional scenes and words. Effects of size were present in all included ERPs, whereas emotional expressions affected the N170, EPN, and LPC, irrespective of size. These results imply that the decoding of emotional valence in faces can occur even for small stimuli. Supra-additive effects in faces may necessitate larger size ranges or dynamic stimuli that increase arousal.
Affiliation(s)
- Annika Ziereis
- Department for Cognition, Emotion and Behavior, Affective Neuroscience and Psychophysiology Laboratory, Georg-August-University of Göttingen, 37073, Göttingen, Germany.
- Anne Schacht
- Department for Cognition, Emotion and Behavior, Affective Neuroscience and Psychophysiology Laboratory, Georg-August-University of Göttingen, 37073, Göttingen, Germany
18
Peelen MV, Berlot E, de Lange FP. Predictive processing of scenes and objects. Nat Rev Psychol 2024; 3:13-26. PMID: 38989004; PMCID: PMC7616164; DOI: 10.1038/s44159-023-00254-0.
Abstract
Real-world visual input consists of rich scenes that are meaningfully composed of multiple objects which interact in complex, but predictable, ways. Despite this complexity, we recognize scenes, and objects within these scenes, from a brief glance at an image. In this review, we synthesize recent behavioral and neural findings that elucidate the mechanisms underlying this impressive ability. First, we review evidence that visual object and scene processing is partly implemented in parallel, allowing for a rapid initial gist of both objects and scenes concurrently. Next, we discuss recent evidence for bidirectional interactions between object and scene processing, with scene information modulating the visual processing of objects, and object information modulating the visual processing of scenes. Finally, we review evidence that objects also combine with each other to form object constellations, modulating the processing of individual objects within the object pathway. Altogether, these findings can be understood by conceptualizing object and scene perception as the outcome of a joint probabilistic inference, in which "best guesses" about objects act as priors for scene perception and vice versa, in order to concurrently optimize visual inference of objects and scenes.
Affiliation(s)
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Eva Berlot
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Floris P de Lange
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
19
Amaral L, Besson G, Caparelli-Dáquer E, Bergström F, Almeida J. Temporal differences and commonalities between hand and tool neural processing. Sci Rep 2023; 13:22270. PMID: 38097608; PMCID: PMC10721913; DOI: 10.1038/s41598-023-48180-8.
Abstract
Object recognition is a complex cognitive process that relies on how the brain organizes object-related information. While spatial principles have been extensively studied, the less-studied temporal dynamics may also offer valuable insights into this process, particularly when neural processing overlaps for different categories, as is the case for hands and tools. Here we focus on the differences and similarities between the time courses of hand and tool processing measured with electroencephalography (EEG). Using multivariate pattern analysis, we compared, for different time points, classification accuracy for images of hands or tools against images of animals. We show that for particular time intervals (~136-156 ms and ~252-328 ms), classification accuracy for hands and for tools differs. Furthermore, we show that classifiers trained to differentiate between tools and animals generalize their learning to the classification of hand stimuli between ~260-320 ms and ~376-500 ms after stimulus onset. Classifiers trained to distinguish between hands and animals, on the other hand, were able to extend their learning to the classification of tools at ~150 ms. These findings suggest variations in semantic features and domain-specific differences between the two categories, with later-stage similarities potentially related to shared action processing for hands and tools.
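The cross-category generalization logic (train a tools-vs-animals classifier, test it on hands-vs-animals) can be sketched with a time-resolved decoder on synthetic data. Everything below is illustrative: channel counts, time windows, the shared-pattern assumption, and the nearest-centroid decoder stand in for the real EEG and MVPA pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)
N_CH, N_T, WIN = 16, 40, slice(15, 30)    # channels, time points, effect window

def trials(pattern, n=120):
    """EEG-like trials: noise plus a category pattern inside the time window."""
    X = rng.standard_normal((n, N_CH, N_T))
    X[:, :, WIN] += pattern[None, :, None]
    return X

shared = rng.standard_normal(N_CH)         # pattern shared by hands and tools
anim = rng.standard_normal(N_CH)
hands = trials(shared + 0.3 * rng.standard_normal(N_CH))
tools = trials(shared + 0.3 * rng.standard_normal(N_CH))
animals_train, animals_test = trials(anim), trials(anim)

def cross_decode(train_a, train_b, test_a, test_b):
    """Nearest-centroid decoder per time point, trained on one category pair
    (tools vs. animals) and tested on another (hands vs. animals)."""
    acc = np.empty(N_T)
    for t in range(N_T):
        ca, cb = train_a[:, :, t].mean(0), train_b[:, :, t].mean(0)
        w, b = ca - cb, (ca @ ca - cb @ cb) / 2.0
        hit_a = (test_a[:, :, t] @ w > b).mean()   # class-a trials correct
        hit_b = (test_b[:, :, t] @ w <= b).mean()  # class-b trials correct
        acc[t] = (hit_a + hit_b) / 2.0
    return acc

acc = cross_decode(tools, animals_train, hands, animals_test)
# Generalization appears only inside the window; chance (~0.5) elsewhere.
```

Because hands and tools share part of their pattern, the tools-trained decoder transfers to hands exactly where that shared signal is present, which is the signature the study looks for in time.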
Affiliation(s)
- L Amaral
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal.
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA.
- G Besson
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- E Caparelli-Dáquer
- Laboratory of Electrical Stimulation of the Nervous System (LabEEL), Rio de Janeiro State University, Rio de Janeiro, Brazil
- F Bergström
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- Department of Psychology, University of Gothenburg, Gothenburg, Sweden
- J Almeida
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal.
- CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal.
20
Yin M, Lee EJ. Planet earth calling: unveiling the brain's response to awe and driving eco-friendly consumption. Front Neurosci 2023; 17:1251685. PMID: 37849890; PMCID: PMC10577226; DOI: 10.3389/fnins.2023.1251685.
Abstract
Eco-friendly consumption is important for solving the climate crisis and moving humanity toward a better future. However, few consumers are willing to pay premiums for eco-friendly products. We investigated the psychological and neural factors that can increase eco-friendly consumption. We propose the experience of awe, in which the individual self is temporarily attenuated as the importance of beings other than oneself increases. Behavioral (Study 1) and functional magnetic resonance imaging (fMRI; Study 2) experiments were conducted to explore the awe mechanisms through which climate crisis messages lead to eco-friendly consumption. In Study 1, we found that participants felt awe when exposed to climate crisis messages and that their choice of eco-friendly consumption increased. In Study 2, we found that when individuals were exposed to messages depicting the climate crisis (as opposed to a control stimulus), their brains exhibited lower activation in self-awareness processing areas and higher activation in external attention processing areas. These results suggest that the awe experience plays an important role in promoting eco-friendly consumption. Marketing must evolve from satisfying basic individual needs toward the well-being of humanity, the planet, and the biosphere. This study sheds light on our understanding of human perceptions of the climate crisis and suggests an effective communication strategy to increase individuals' eco-friendly actions.
Affiliation(s)
- Meiling Yin
- Business School, Sungkyunkwan University, Seoul, Republic of Korea
- Eun-Ju Lee
- Business School, Sungkyunkwan University, Seoul, Republic of Korea
- Neuro Intelligence Center, Sungkyunkwan University, Seoul, Republic of Korea
21
Jimenez M, Prieto A, Gómez P, Hinojosa JA, Montoro PR. Masked priming under the Bayesian microscope: Exploring the integration of local elements into global shape through Bayesian model comparison. Conscious Cogn 2023; 115:103568. PMID: 37708623; DOI: 10.1016/j.concog.2023.103568.
Abstract
To investigate whether local elements are grouped into global shapes in the absence of awareness, we introduced two different masked priming designs (the classic dissociation paradigm and a trial-wise probe and prime discrimination task) and collected both objective (i.e., performance-based) and subjective (using the perceptual awareness scale [PAS]) awareness measures. Prime visibility was manipulated using three different prime-mask stimulus onset asynchronies (SOAs) and an unmasked condition. Our results showed that assessing prime visibility trial-wise heavily interfered with masked priming, preventing any prime facilitation effect. The implementation of Bayesian regression models, which predict priming effects for participants whose awareness levels are at chance, provided strong evidence for the hypothesis that local elements group into a global shape in the absence of awareness for SOAs longer than 50 ms, suggesting that prime-mask SOA is a crucial factor in the processing of global shape without awareness.
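The key inferential move (regress priming on awareness, then ask whether the predicted effect at chance-level awareness is nonzero) can be sketched with a conjugate Bayesian linear regression on simulated data. All numbers below are made up for illustration; the paper's actual models were richer.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated participants: objective awareness (d') near chance, priming in ms.
n = 60
dprime = rng.normal(0.0, 0.3, n)
priming = 12.0 + 8.0 * dprime + rng.normal(0.0, 6.0, n)  # true effect at d'=0: 12 ms

# Conjugate Bayesian linear regression with a vague N(0, 100) prior on
# [intercept, slope] and (for simplicity) a known residual variance.
X = np.column_stack([np.ones(n), dprime])
noise_var = 36.0
prior_prec = np.eye(2) / 100.0
post_cov = np.linalg.inv(prior_prec + X.T @ X / noise_var)
post_mean = post_cov @ (X.T @ priming / noise_var)

# The intercept's posterior is the predicted priming at zero awareness (d' = 0):
# if its credible interval excludes 0, priming survives at chance-level awareness.
icpt_mean, icpt_sd = post_mean[0], float(np.sqrt(post_cov[0, 0]))
```

Here the posterior for the intercept concentrates well away from zero, the pattern that would count as evidence for priming without awareness under this regression logic.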
Affiliation(s)
- Mikel Jimenez
- Department of Psychology, University of Durham, Durham, United Kingdom.
- Pablo Gómez
- California State University San Bernardino, Palm Desert Campus, USA
- José Antonio Hinojosa
- Facultad de Lenguas y Educación, Universidad de Nebrija, Madrid, Spain; Instituto Pluridisciplinar, Universidad Complutense de Madrid, Spain; Departamento de Psicología Experimental, Procesos Psicológicos y Logopedia, Universidad Complutense de Madrid, Spain
22
Babenko VV, Yavna DV, Ermakov PN, Anokhina PV. Nonlocal contrast calculated by the second order visual mechanisms and its significance in identifying facial emotions. F1000Res 2023; 10:274. PMID: 37767361; PMCID: PMC10521119; DOI: 10.12688/f1000research.28396.2.
Abstract
Background: Previously obtained results indicate that faces are preattentively detected in the visual scene very fast, and information on facial expression is rapidly extracted at the lower levels of the visual system. At the same time, different facial attributes make different contributions to facial expression recognition. However, it is known that among the preattentive mechanisms there are none that would be selective for certain facial features, such as the eyes or mouth. The aim of our study was to identify a candidate for the role of such a mechanism. Our assumption was that the most informative areas of the image are those characterized by spatial heterogeneity, particularly with nonlocal contrast changes. These areas may be identified in the human visual system by second-order visual filters selective to contrast modulations of brightness gradients. Methods: We developed a software program imitating the operation of these filters and finding areas of contrast heterogeneity in the image. Using this program, we extracted areas with maximum, minimum, and medium contrast modulation amplitudes from the initial face images, then used these to make three variants of one and the same face. The faces were demonstrated to observers along with other objects synthesized the same way. The participants had to identify faces and define facial emotional expressions. Results: It was found that the greater the contrast modulation amplitude of the areas shaping the face, the more precisely the emotion is identified. Conclusions: The results suggest that areas with a greater increase in nonlocal contrast are more informative in facial images, and second-order visual filters can claim the role of elements that detect areas of interest, attract visual attention, and serve as windows through which subsequent levels of visual processing receive valuable information.
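Second-order filters of this kind are commonly modeled as a filter-rectify-filter cascade. A crude sketch on a synthetic contrast-modulated texture is given below; the filter scales and image size are illustrative assumptions, not the parameters of the authors' program.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(5)

# A texture whose CONTRAST (not mean luminance) is modulated along y.
h = w = 128
carrier = rng.standard_normal((h, w))
envelope = 1.0 + 0.8 * np.sin(np.linspace(0.0, 2.0 * np.pi, h))[:, None]
img = carrier * envelope

# Filter-rectify-filter cascade, a standard model of second-order filters.
first = img - gaussian_filter(img, 2)      # crude band-pass first stage
rect = first ** 2                          # rectification
second = gaussian_filter(rect, 12)         # coarse second stage

# The second-stage output tracks local contrast (envelope squared), while
# the mean luminance profile carries no usable signal.
contrast_profile = second.mean(axis=1)
luminance_profile = img.mean(axis=1)
target = envelope[:, 0] ** 2
r_contrast = np.corrcoef(contrast_profile, target)[0, 1]
r_luminance = np.corrcoef(luminance_profile, target)[0, 1]
```

The cascade recovers the contrast modulation that a purely first-order (luminance) analysis misses, which is the sense in which such filters can flag heterogeneous, informative image areas.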
Affiliation(s)
- Vitaly V. Babenko
- Department of Psychophysiology and Clinical Psychology, Academy of Psychology and Education Sciences, Southern Federal University, Rostov-on-Don, Russian Federation
- Denis V. Yavna
- Department of Psychophysiology and Clinical Psychology, Academy of Psychology and Education Sciences, Southern Federal University, Rostov-on-Don, Russian Federation
- Pavel N. Ermakov
- Department of Psychophysiology and Clinical Psychology, Academy of Psychology and Education Sciences, Southern Federal University, Rostov-on-Don, Russian Federation
- Polina V. Anokhina
- Department of Psychophysiology and Clinical Psychology, Academy of Psychology and Education Sciences, Southern Federal University, Rostov-on-Don, Russian Federation
23
Ramon C, Graichen U, Gargiulo P, Zanow F, Knösche TR, Haueisen J. Spatiotemporal phase slip patterns for visual evoked potentials, covert object naming tasks, and insight moments extracted from 256 channel EEG recordings. Front Integr Neurosci 2023; 17:1087976. PMID: 37384237; PMCID: PMC10293627; DOI: 10.3389/fnint.2023.1087976.
Abstract
Phase slips arise from state transitions of the coordinated activity of cortical neurons, which can be extracted from EEG data. Phase slip rates (PSRs) were studied from high-density (256 channel) EEG data, sampled at 16.384 kHz, of five adult subjects during covert visual object naming tasks. Artifact-free data from 29 trials were averaged for each subject. The analysis looked for phase slips in the theta (4-7 Hz), alpha (7-12 Hz), beta (12-30 Hz), and low gamma (30-49 Hz) bands. The phase was calculated with the Hilbert transform, then unwrapped and detrended to look for phase slip rates in a 1.0 ms wide stepping window with a step size of 0.06 ms. Spatiotemporal plots of the PSRs were made using a montage layout of 256 equidistant electrode positions. The spatiotemporal profiles of EEG and PSRs during the stimulus and the first second of the post-stimulus period were examined in detail to study the visual evoked potentials and different stages of visual object recognition in the visual, language, and memory areas. The areas of PSR activity differed from the areas of EEG activity during the stimulus and post-stimulus periods. Different stages of the insight moments during the covert object naming tasks were examined from the PSRs; the 'Eureka' moment occurred at about 512 ± 21 ms. Overall, these results indicate that information about cortical phase transitions can be derived from measured EEG data and used in a complementary fashion to study the cognitive behavior of the brain.
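The pipeline described here (Hilbert phase, unwrap, detrend, slip detection in a short stepping window) can be sketched on a synthetic oscillation with one injected slip. The sampling rate, band, and detection margin below are illustrative, not the study's 16.384 kHz settings.

```python
import numpy as np
from scipy.signal import hilbert, detrend

fs = 1000                                  # Hz (toy rate; the study used 16.384 kHz)
t = np.arange(0.0, 2.0, 1.0 / fs)
phase = 2.0 * np.pi * 10.0 * t             # 10 Hz carrier (alpha-band-like)
phase[t >= 1.0] += np.pi                   # inject one phase slip at t = 1 s
x = np.cos(phase) + 0.05 * np.random.default_rng(3).standard_normal(t.size)

phi = np.unwrap(np.angle(hilbert(x)))      # instantaneous phase (Hilbert transform)
resid = detrend(phi)                       # remove the steady 10 Hz phase ramp

# A slip appears as a step in the detrended phase: scan its change per 1 ms step.
dphi = np.abs(np.diff(resid))
margin = 100                               # skip Hilbert edge artifacts
slip_sample = margin + int(np.argmax(dphi[margin:-margin]))
```

The detected sample falls at the injected slip, and the unwrapped phase across it gains the extra π on top of the carrier's steady advance, which is the quantity the sliding-window PSR counts.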
Affiliation(s)
- Ceon Ramon
- Department of Electrical and Computer Engineering, University of Washington, Seattle, WA, United States
- Regional Epilepsy Center, Harborview Medical Center, University of Washington, Seattle, WA, United States
- Uwe Graichen
- Department of Biostatistics and Data Science, Karl Landsteiner University of Health Sciences, Krems an der Donau, Austria
- Paolo Gargiulo
- Institute of Biomedical and Neural Engineering, Reykjavik University, Reykjavik, Iceland
- Department of Science, Landspitali University Hospital, Reykjavik, Iceland
- Thomas R. Knösche
- Max Planck Institute for Human Cognitive and Neurosciences, Leipzig, Germany
- Jens Haueisen
- Institute of Biomedical Engineering and Informatics, Technische Universität Ilmenau, Ilmenau, Germany
24
Barborica A, Mindruta I, López-Madrona VJ, Alario FX, Trébuchon A, Donos C, Oane I, Pistol C, Mihai F, Bénar CG. Studying memory processes at different levels with simultaneous depth and surface EEG recordings. Front Hum Neurosci 2023; 17:1154038. PMID: 37082152; PMCID: PMC10110965; DOI: 10.3389/fnhum.2023.1154038.
Abstract
Investigating cognitive brain functions using non-invasive electrophysiology can be challenging due to the particularities of task-related EEG activity, the depth of the activated brain areas, and the extent of the networks involved. Stereoelectroencephalographic (SEEG) investigations in patients with drug-resistant epilepsy offer an extraordinary opportunity to validate information derived from non-invasive recordings at macro-scales. The SEEG approach can provide brain activity with high spatial specificity during tasks that target specific cognitive processes (e.g., memory). Full validation is possible only when performing simultaneous scalp and SEEG recordings, which allows recording signals in exactly the same brain state. This is the approach we have taken in 12 subjects performing a visual memory task that requires the recognition of previously viewed objects. The intracranial signals on 965 contact pairs were compared to 391 simultaneously recorded scalp signals at a regional and whole-brain level, using multivariate pattern analysis. The results show that the task conditions are best captured by intracranial sensors, despite the limited spatial coverage of SEEG electrodes, compared to the whole-brain non-invasive recordings. Applying beamformer source reconstruction or independent component analysis did not improve the multivariate task-decoding performance using surface sensor data. By analyzing a joint scalp and SEEG dataset, we investigated whether the two types of signals carry complementary information that might improve the machine-learning classifier performance. This joint analysis revealed that the results are driven by the modality exhibiting the best individual performance, namely SEEG.
Affiliation(s)
- Andrei Barborica
- Department of Physics, University of Bucharest, Bucharest, Romania
- Correspondence: Andrei Barborica
- Ioana Mindruta
- Epilepsy Monitoring Unit, Department of Neurology, Emergency University Hospital Bucharest, Bucharest, Romania
- Department of Neurology, Medical Faculty, Carol Davila University of Medicine and Pharmacy Bucharest, Bucharest, Romania
- Agnès Trébuchon
- APHM, Timone Hospital, Epileptology and Cerebral Rhythmology, Marseille, France
- APHM, Timone Hospital, Functional and Stereotactic Neurosurgery, Marseille, France
- Cristian Donos
- Department of Physics, University of Bucharest, Bucharest, Romania
- Irina Oane
- Epilepsy Monitoring Unit, Department of Neurology, Emergency University Hospital Bucharest, Bucharest, Romania
- Felicia Mihai
- Department of Physics, University of Bucharest, Bucharest, Romania
- Christian G. Bénar
- Aix Marseille University, INSERM, INS, Institute of Neuroscience System, Marseille, France
25
Quian Quiroga R. An integrative view of human hippocampal function: Differences with other species and capacity considerations. Hippocampus 2023; 33:616-634. [PMID: 36965048 DOI: 10.1002/hipo.23527] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2022] [Revised: 02/11/2023] [Accepted: 03/09/2023] [Indexed: 03/27/2023]
Abstract
We describe an integrative model that encodes associations between related concepts in the human hippocampal formation, constituting the skeleton of episodic memories. The model, based on partially overlapping assemblies of "concept cells," contrasts markedly with the well-established notion of pattern separation, which relies on conjunctive, context-dependent single-neuron responses rather than the invariant, context-independent responses found in the human hippocampus. We argue that the model of partially overlapping assemblies is better suited to cope with memory capacity limitations; that the finding of different types of neurons and functions in this area reflects a flexible and temporary use of the extraordinary machinery of the hippocampus to deal with the task at hand; and that only information that is relevant and frequently revisited will consolidate into long-term hippocampal representations, using partially overlapping assemblies. Finally, we propose that concept cells are uniquely human and that they may constitute the neuronal underpinnings of cognitive abilities that are much further developed in humans than in other species.
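The core idea of partially overlapping assemblies can be made concrete with a toy sketch (all cell counts invented): associated concepts share a small subset of neurons, which encodes the association while keeping the two representations distinct, whereas unrelated concepts share none.

```python
# Illustrative sketch (cell counts invented): concepts as partially
# overlapping assemblies of concept cells. Shared cells encode the
# association without collapsing the two representations into one.

def overlap(a, b):
    """Jaccard overlap between two assemblies (sets of cell indices)."""
    return len(a & b) / len(a | b)

person    = set(range(0, 100))     # assembly for one concept
landmark  = set(range(90, 190))    # associated concept: 10 shared cells
unrelated = set(range(300, 400))   # no shared cells

print(round(overlap(person, landmark), 3))  # 0.053: small, nonzero overlap
print(overlap(person, unrelated))           # 0.0
```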
Affiliation(s)
- Rodrigo Quian Quiroga
- Hospital del Mar Medical Research Institute (IMIM), Barcelona, Spain
- Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
- Centre for Systems Neuroscience, University of Leicester, Leicester, UK
- Department of Neurosurgery, Clinical Neuroscience Center, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
26
Jozwik KM, Kietzmann TC, Cichy RM, Kriegeskorte N, Mur M. Deep Neural Networks and Visuo-Semantic Models Explain Complementary Components of Human Ventral-Stream Representational Dynamics. J Neurosci 2023; 43:1731-1741. [PMID: 36759190 PMCID: PMC10010451 DOI: 10.1523/jneurosci.1424-22.2022] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2022] [Revised: 11/08/2022] [Accepted: 12/20/2022] [Indexed: 02/11/2023] Open
Abstract
Deep neural networks (DNNs) are promising models of the cortical computations supporting human object recognition. However, despite their ability to explain a significant portion of variance in neural data, the agreement between models and brain representational dynamics is far from perfect. We address this issue by asking which representational features are currently unaccounted for in neural time series data, estimated for multiple areas of the ventral stream via source-reconstructed magnetoencephalography data acquired in human participants (nine females, six males) during object viewing. We focus on the ability of visuo-semantic models, consisting of human-generated labels of object features and categories, to explain variance beyond the explanatory power of DNNs alone. We report a gradual reversal in the relative importance of DNN versus visuo-semantic features as ventral-stream object representations unfold over space and time. Although lower-level visual areas are better explained by DNN features starting early in time (at 66 ms after stimulus onset), higher-level cortical dynamics are best accounted for by visuo-semantic features starting later in time (at 146 ms after stimulus onset). Among the visuo-semantic features, object parts and basic categories drive the advantage over DNNs. These results show that a significant component of the variance unexplained by DNNs in higher-level cortical dynamics is structured and can be explained by readily nameable aspects of the objects. We conclude that current DNNs fail to fully capture dynamic representations in higher-level human visual cortex and suggest a path toward more accurate models of ventral-stream computations.
SIGNIFICANCE STATEMENT When we view objects such as faces and cars in our visual environment, their neural representations dynamically unfold over time at a millisecond scale. These dynamics reflect the cortical computations that support fast and robust object recognition. DNNs have emerged as a promising framework for modeling these computations but cannot yet fully account for the neural dynamics. Using magnetoencephalography data acquired in human observers during object viewing, we show that readily nameable aspects of objects, such as 'eye', 'wheel', and 'face', can account for variance in the neural dynamics over and above DNNs. These findings suggest that DNNs and humans may in part rely on different object features for visual recognition and provide guidelines for model improvement.
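The "variance beyond the explanatory power of DNNs" logic is a nested-model comparison, which can be sketched on toy numbers (the study worked on MEG source time series, not six scalar responses; everything below is invented for illustration):

```python
# Toy sketch of the variance-partitioning logic: compare R^2 of a nested
# regression (DNN features alone) against the full model (DNN plus
# visuo-semantic features); the increase is the variance uniquely
# explained by the semantic predictors.

def r2(predictors, y):
    """R^2 of ordinary least squares with an intercept (tiny solver)."""
    X = [[1.0] + list(row) for row in predictors]
    k, n = len(X[0]), len(X)
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)]
         for p in range(k)]
    c = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    for p in range(k):                      # forward elimination
        for q in range(p + 1, k):
            f = A[q][p] / A[p][p]
            for r in range(k):
                A[q][r] -= f * A[p][r]
            c[q] -= f * c[p]
    b = [0.0] * k                           # back substitution
    for p in reversed(range(k)):
        b[p] = (c[p] - sum(A[p][q] * b[q] for q in range(p + 1, k))) / A[p][p]
    yhat = [sum(bj * xj for bj, xj in zip(b, row)) for row in X]
    ybar = sum(y) / n
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

dnn = [0.1, 0.4, 0.35, 0.8, 0.55, 0.9]   # toy DNN feature per stimulus
sem = [0.0, 1.0, 0.0, 1.0, 1.0, 0.0]     # toy semantic label per stimulus
y   = [0.1, 0.9, 0.3, 1.3, 1.0, 0.8]     # toy neural response

unique_semantic = r2(list(zip(dnn, sem)), y) - r2([[d] for d in dnn], y)
print(unique_semantic > 0)  # semantic features add explained variance
```

In the actual analyses this comparison is run per region and per time point, which is what produces the reported early-DNN versus late-semantic reversal.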
Affiliation(s)
- Kamila M Jozwik
- Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom
- Tim C Kietzmann
- Institute of Cognitive Science, University of Osnabrück, 49069 Osnabrück, Germany
- Radoslaw M Cichy
- Department of Education and Psychology, Freie Universität Berlin, 14195 Berlin, Germany
- Nikolaus Kriegeskorte
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York 10027
- Marieke Mur
- Department of Psychology, Western University, London, Ontario N6A 3K7, Canada
- Department of Computer Science, Western University, London, Ontario N6A 3K7, Canada
27
Xiao Y, Chou CC, Cosgrove GR, Crone NE, Stone S, Madsen JR, Reucroft I, Shih YC, Weisholtz D, Yu HY, Anderson WS, Kreiman G. Cross-task specificity and within-task invariance of cognitive control processes. Cell Rep 2023; 42:111919. [PMID: 36640346 PMCID: PMC9993332 DOI: 10.1016/j.celrep.2022.111919] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2022] [Revised: 08/09/2022] [Accepted: 12/12/2022] [Indexed: 01/11/2023] Open
Abstract
Cognitive control involves flexibly combining multiple sensory inputs with task-dependent goals during decision making. Several tasks involving conflicting sensory inputs and motor outputs have been proposed to examine cognitive control, including the Stroop, Flanker, and multi-source interference task. Because these tasks have been studied independently, it remains unclear whether the neural signatures of cognitive control reflect abstract control mechanisms or specific combinations of sensory and behavioral aspects of each task. To address these questions, we record invasive neurophysiological signals from 16 patients with pharmacologically intractable epilepsy and compare neural responses within and between tasks. Neural signals differ between incongruent and congruent conditions, showing strong modulation by conflicting task demands. These neural signals are mostly specific to each task, generalizing within a task but not across tasks. These results highlight the complex interplay between sensory inputs, motor outputs, and task demands underlying cognitive control processes.
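The within-task versus cross-task generalization test can be sketched with hypothetical data: a classifier trained to separate congruent from incongruent trials in one "task" is tested within-task and on a second "task" whose conflict signal lies along a different neural axis.

```python
# Toy illustration (invented data): train a nearest-centroid decoder on
# one conflict task, then test it within-task and across tasks. Failure
# to generalize across tasks mirrors the task specificity reported above.

def train_centroids(trials):
    groups = {}
    for x, label in trials:
        groups.setdefault(label, []).append(x)
    return {lbl: [sum(col) / len(col) for col in zip(*xs)]
            for lbl, xs in groups.items()}

def accuracy(centroids, trials):
    def d2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    hits = 0
    for x, label in trials:
        pred = min(centroids, key=lambda lbl: d2(x, centroids[lbl]))
        hits += (pred == label)
    return hits / len(trials)

# Conflict is signaled along different toy "neural axes" in each task.
stroop  = [([1.0, 0.0], 'incong'), ([0.9, 0.1], 'incong'),
           ([0.0, 0.0], 'cong'),   ([0.1, 0.1], 'cong')]
flanker = [([0.0, 1.0], 'incong'), ([0.1, 0.9], 'incong'),
           ([0.0, 0.0], 'cong'),   ([0.1, 0.1], 'cong')]

within = accuracy(train_centroids(stroop), stroop)   # 1.0
across = accuracy(train_centroids(stroop), flanker)  # 0.5 (chance)
print(within, across)
```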
Affiliation(s)
- Chien-Chen Chou
- Department of Neurology, Taipei Veterans General Hospital, Taipei, Taiwan; School of Medicine, National Yang Ming Chiao Tung University College of Medicine, Taipei, Taiwan
- Scellig Stone
- Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
- Joseph R Madsen
- Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
- Ian Reucroft
- Johns Hopkins School of Medicine, Baltimore, MD, USA
- Yen-Cheng Shih
- Department of Neurology, Taipei Veterans General Hospital, Taipei, Taiwan; School of Medicine, National Yang Ming Chiao Tung University College of Medicine, Taipei, Taiwan
- Daniel Weisholtz
- Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Hsiang-Yu Yu
- Department of Neurology, Taipei Veterans General Hospital, Taipei, Taiwan; School of Medicine, National Yang Ming Chiao Tung University College of Medicine, Taipei, Taiwan
- Gabriel Kreiman
- Boston Children's Hospital, Harvard Medical School, Boston, MA, USA; Center for Brains, Minds and Machines, Cambridge, MA, USA.
28
Zhang M, Armendariz M, Xiao W, Rose O, Bendtz K, Livingstone M, Ponce C, Kreiman G. Look twice: A generalist computational model predicts return fixations across tasks and species. PLoS Comput Biol 2022; 18:e1010654. [PMID: 36413523 PMCID: PMC9681066 DOI: 10.1371/journal.pcbi.1010654] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2021] [Accepted: 10/13/2022] [Indexed: 11/23/2022] Open
Abstract
Primates constantly explore their surroundings via saccadic eye movements that bring different parts of an image into high resolution. In addition to exploring new regions in the visual field, primates also make frequent return fixations, revisiting previously foveated locations. We systematically studied a total of 44,328 return fixations out of 217,440 fixations. Return fixations were ubiquitous across different behavioral tasks, in monkeys and humans, both when subjects viewed static images and when subjects performed natural behaviors. Return fixation locations were consistent across subjects, tended to occur within short temporal offsets, and typically followed a 180-degree turn in saccadic direction. To understand the origin of return fixations, we propose a proof-of-principle, biologically inspired, and image-computable neural network model. The model combines five key modules: an image feature extractor, bottom-up saliency cues, task-relevant visual features, finite inhibition-of-return, and saccade size constraints. Even though there are no free parameters that are fine-tuned for each specific task, species, or condition, the model produces fixation sequences resembling the universal properties of return fixations. These results provide initial steps towards a mechanistic understanding of the trade-off between rapid foveal recognition and the need to scrutinize previous fixation locations.
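A simple operational definition of a return fixation, of the kind used to tally revisits in scanpath data, can be sketched as follows (toy scanpath, arbitrary units and radius):

```python
# Sketch: a return fixation is one that lands within a small radius of a
# previously foveated location. The scanpath and radius below are invented.
import math

def return_fixations(fixations, radius=1.0):
    """Indices of fixations within `radius` of any earlier fixation."""
    returns = []
    for i, (x, y) in enumerate(fixations):
        if any(math.hypot(x - px, y - py) <= radius
               for px, py in fixations[:i]):
            returns.append(i)
    return returns

scanpath = [(0, 0), (5, 0), (10, 2), (5.2, 0.3), (20, 20)]
print(return_fixations(scanpath))  # [3]: fixation 3 revisits fixation 1
```

Note that fixation 2 to fixation 3 reverses the preceding saccade direction, the 180-degree turn signature described above.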
Affiliation(s)
- Mengmi Zhang
- Boston Children’s Hospital, Harvard Medical School, Boston, Massachusetts, United States of America
- Center for Brains, Minds and Machines, Cambridge, Massachusetts, United States of America
- CFAR and I2R, Agency for Science, Technology and Research, Singapore
- Marcelo Armendariz
- Boston Children’s Hospital, Harvard Medical School, Boston, Massachusetts, United States of America
- Center for Brains, Minds and Machines, Cambridge, Massachusetts, United States of America
- Laboratory for Neuro- and Psychophysiology, KU Leuven, Leuven, Belgium
- Will Xiao
- Department of Neurobiology, Harvard Medical School, Boston, Massachusetts, United States of America
- Olivia Rose
- Department of Neurobiology, Harvard Medical School, Boston, Massachusetts, United States of America
- Katarina Bendtz
- Boston Children’s Hospital, Harvard Medical School, Boston, Massachusetts, United States of America
- Center for Brains, Minds and Machines, Cambridge, Massachusetts, United States of America
- Margaret Livingstone
- Department of Neurobiology, Harvard Medical School, Boston, Massachusetts, United States of America
- Carlos Ponce
- Department of Neurobiology, Harvard Medical School, Boston, Massachusetts, United States of America
- Gabriel Kreiman
- Boston Children’s Hospital, Harvard Medical School, Boston, Massachusetts, United States of America
- Center for Brains, Minds and Machines, Cambridge, Massachusetts, United States of America
29
Ebrahiminia F, Cichy RM, Khaligh-Razavi SM. A multivariate comparison of electroencephalogram and functional magnetic resonance imaging to electrocorticogram using visual object representations in humans. Front Neurosci 2022; 16:983602. [PMID: 36330341 PMCID: PMC9624066 DOI: 10.3389/fnins.2022.983602] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2022] [Accepted: 09/23/2022] [Indexed: 09/07/2024] Open
Abstract
Today, most neurocognitive studies in humans employ the non-invasive neuroimaging techniques functional magnetic resonance imaging (fMRI) and electroencephalogram (EEG). However, how the data provided by fMRI and EEG relate exactly to the underlying neural activity remains incompletely understood. Here, we aimed to understand the relation between EEG and fMRI data at the level of neural population codes using multivariate pattern analysis. In particular, we assessed whether this relation is affected when we change stimuli or introduce identity-preserving variations to them. For this, we recorded EEG and fMRI data separately from 21 healthy participants while participants viewed everyday objects in different viewing conditions, and then related the data to electrocorticogram (ECoG) data recorded for the same stimulus set from epileptic patients. The comparison of EEG and ECoG data showed that object category signals emerge swiftly in the visual system and can be detected by both EEG and ECoG at similar temporal delays after stimulus onset. The correlation between EEG and ECoG was reduced when object representations tolerant to changes in scale and orientation were considered. The comparison of fMRI and ECoG overall revealed a tighter relationship in occipital than in temporal regions, related to differences in fMRI signal-to-noise ratio. Together, our results reveal a complex relationship between fMRI, EEG, and ECoG signals at the level of population codes that critically depends on the time point after stimulus onset, the region investigated, and the visual contents used.
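The modality comparison above rests on correlating population codes, typically by comparing representational dissimilarity matrices (RDMs) across recording methods. A minimal sketch with invented RDM values (the study used many conditions and time points):

```python
# Sketch of the representational-similarity comparison: correlate the
# upper-triangle entries of an EEG and an ECoG representational
# dissimilarity matrix at one latency, using a simple Spearman
# correlation (plain ranking, no tie correction).

def ranks(v):
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r

def spearman(a, b):
    ra, rb = ranks(a), ranks(b)
    ma, mb = sum(ra) / len(ra), sum(rb) / len(rb)
    num = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    den = (sum((x - ma) ** 2 for x in ra)
           * sum((y - mb) ** 2 for y in rb)) ** 0.5
    return num / den

eeg_rdm  = [0.1, 0.4, 0.8, 0.3, 0.7, 0.9]   # toy pairwise dissimilarities
ecog_rdm = [0.2, 0.5, 0.9, 0.2, 0.8, 1.0]
print(round(spearman(eeg_rdm, ecog_rdm), 3))  # 1.0: same rank order
```

Repeating this correlation at each latency yields the time-resolved EEG-ECoG correspondence described in the abstract.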
Affiliation(s)
- Fatemeh Ebrahiminia
- Department of Stem Cells and Developmental Biology, Cell Science Research Center, Royan Institute for Stem Cell Biology and Technology, Academic Center for Education, Culture and Research (ACECR), Tehran, Iran
- School of Electrical and Computer Engineering, University of Tehran, Tehran, Iran
- Seyed-Mahdi Khaligh-Razavi
- Department of Stem Cells and Developmental Biology, Cell Science Research Center, Royan Institute for Stem Cell Biology and Technology, Academic Center for Education, Culture and Research (ACECR), Tehran, Iran
30
Day-Cooney J, Cone JJ, Maunsell JHR. Perceptual Weighting of V1 Spikes Revealed by Optogenetic White Noise Stimulation. J Neurosci 2022; 42:3122-3132. [PMID: 35232760 PMCID: PMC8994541 DOI: 10.1523/jneurosci.1736-21.2022] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2021] [Revised: 01/17/2022] [Accepted: 01/19/2022] [Indexed: 11/21/2022] Open
Abstract
During visually guided behaviors, mere hundreds of milliseconds can elapse between a sensory input and its associated behavioral response. How spikes occurring at different times are integrated to drive perception and action remains poorly understood. We delivered random trains of optogenetic stimulation (white noise) to excite inhibitory interneurons in V1 of mice of both sexes while they performed a visual detection task. We then performed a reverse correlation analysis on the optogenetic stimuli to generate a neuronal-behavioral kernel, an unbiased, temporally precise estimate of how suppression of V1 spiking at different moments around the onset of a visual stimulus affects detection of that stimulus. Electrophysiological recordings enabled us to capture the effects of optogenetic stimuli on V1 responsivity and revealed that the earliest stimulus-evoked spikes are preferentially weighted for guiding behavior. These data demonstrate that white noise optogenetic stimulation is a powerful tool for understanding how patterns of spiking in neuronal populations are decoded in generating perception and action.
SIGNIFICANCE STATEMENT During visually guided actions, continuous chains of neurons connect our retinas to our motoneurons. To unravel circuit contributions to behavior, it is crucial to establish the relative functional position(s) that different neural structures occupy in processing and relaying the signals that support rapid, precise responses. To address this question, we randomly inhibited activity in mouse V1 throughout the stimulus-response cycle while the animals did many repetitions of a visual task. The period that led to impaired performance corresponded to the earliest stimulus-driven response in V1, with no effect of inhibition immediately before or during late stages of the stimulus-driven response. This approach offers experimenters a powerful method for uncovering the temporal weighting of spikes from stimulus to response.
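The reverse-correlation logic behind the neuronal-behavioral kernel can be sketched with synthetic trials: average the random stimulus trains separately for detected ("hit") and undetected ("miss") trials, and take the difference at each time bin.

```python
# Sketch (synthetic trials, 3 time bins): the hit-minus-miss difference
# of the mean optogenetic stimulus train is a toy neuronal-behavioral
# kernel. 1 = light on (inhibiting V1) in that bin.

def kernel(trials):
    """trials: list of (stim_train, outcome), outcome 'hit' or 'miss'."""
    def mean_train(outcome):
        rows = [s for s, o in trials if o == outcome]
        return [sum(col) / len(rows) for col in zip(*rows)]
    hit, miss = mean_train('hit'), mean_train('miss')
    return [h - m for h, m in zip(hit, miss)]

# In the toy data, suppression early in the trial co-occurs with misses,
# so the kernel dips in bin 0 (early spikes matter most for detection).
trials = [([1, 0, 0], 'miss'), ([1, 1, 0], 'miss'),
          ([0, 0, 1], 'hit'),  ([0, 1, 0], 'hit')]
print(kernel(trials))  # [-1.0, 0.0, 0.5]
```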
Affiliation(s)
- Julian Day-Cooney
- Department of Neurobiology and Neuroscience Institute, University of Chicago, Chicago, Illinois 60637
- Jackson J Cone
- Department of Neurobiology and Neuroscience Institute, University of Chicago, Chicago, Illinois 60637
- John H R Maunsell
- Department of Neurobiology and Neuroscience Institute, University of Chicago, Chicago, Illinois 60637
31
Rybář M, Daly I. Neural decoding of semantic concepts: A systematic literature review. J Neural Eng 2022; 19. [PMID: 35344941 DOI: 10.1088/1741-2552/ac619a] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2021] [Accepted: 03/27/2022] [Indexed: 11/12/2022]
Abstract
Objective: Semantic concepts are coherent entities within our minds. They underpin our thought processes and are part of the basis for our understanding of the world. Modern neuroscience research is increasingly exploring how individual semantic concepts are encoded within our brains, and a number of studies are beginning to reveal key patterns of neural activity that underpin specific concepts. Building upon this basic understanding of semantic neural encoding, neural engineers are beginning to explore tools and methods for semantic decoding: identifying which semantic concepts an individual is focused on at a given moment from recordings of their neural activity. In this paper we review the current literature on semantic neural decoding. Approach: We conducted this review according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Specifically, we assessed the eligibility of published peer-reviewed reports via a search of PubMed and Google Scholar. We identified a total of 74 studies in which semantic neural decoding is used to attempt to identify individual semantic concepts from neural activity. Results: Our review reveals how modern neuroscientific tools have been developed to allow decoding of individual concepts from a range of neuroimaging modalities. We discuss specific neuroimaging methods, experimental designs, and machine learning pipelines employed to aid the decoding of semantic concepts. We quantify the efficacy of semantic decoders by measuring information transfer rates. We also discuss current challenges presented by this research area and present some possible solutions. Finally, we discuss possible emerging and speculative future directions for this research area. Significance: Semantic decoding is a rapidly growing area of research. However, despite its increasingly widespread popularity and use in neuroscientific research, this is the first literature review focusing on this topic across neuroimaging modalities and with a focus on quantifying the efficacy of semantic decoders.
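The information transfer rate used above to quantify decoder efficacy is commonly computed with the Wolpaw formula, which gives the bits conveyed per selection for N equiprobable classes decoded with accuracy P:

```python
# Wolpaw information transfer rate: bits per selection for an N-class
# decoder with accuracy P (errors assumed uniform over wrong classes).
import math

def itr_bits_per_trial(n_classes, accuracy):
    n, p = n_classes, accuracy
    if p <= 1 / n:                  # at or below chance: no information
        return 0.0
    bits = math.log2(n) + p * math.log2(p)
    if p < 1:
        bits += (1 - p) * math.log2((1 - p) / (n - 1))
    return bits

print(round(itr_bits_per_trial(2, 0.9), 3))   # 0.531 bits per selection
print(itr_bits_per_trial(4, 0.25))            # 0.0 (chance-level decoder)
```

Multiplying by the number of selections per minute converts this to the bits-per-minute figures typically reported.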
Affiliation(s)
- Milan Rybář
- School of Computer Science and Electronic Engineering, University of Essex, Wivenhoe Park, Colchester, Essex, CO4 3SQ, United Kingdom
- Ian Daly
- School of Computer Science and Electronic Engineering, University of Essex, Wivenhoe Park, Colchester, Essex, CO4 3SQ, United Kingdom
32
Voluntary control of semantic neural representations by imagery with conflicting visual stimulation. Commun Biol 2022; 5:214. [PMID: 35304588 PMCID: PMC8933408 DOI: 10.1038/s42003-022-03137-x] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2021] [Accepted: 02/08/2022] [Indexed: 12/04/2022] Open
Abstract
Neural representations of visual perception are affected by mental imagery and attention. Although attention is known to modulate neural representations, it is unknown how imagery changes neural representations when imagined and perceived images semantically conflict. We hypothesized that imagining an image would activate a neural representation during its perception even while watching a conflicting image. To test this hypothesis, we developed a closed-loop system to show images inferred from electrocorticograms using a visual semantic space. The successful control of the feedback images demonstrated that the semantic vector inferred from electrocorticograms became closer to the vector of the imagined category, even while watching images from different categories. Moreover, modulation of the inferred vectors by mental imagery depended asymmetrically on the perceived and imagined categories. Shared neural representation between mental imagery and perception was still activated by the imagery under semantically conflicting perceptions depending on the semantic category. In this study, intracranial EEG recordings show that neural representations of imagined images can still be present in humans even when they are shown conflicting images.
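The feedback rule can be sketched in miniature (vectors below are invented two-dimensional stand-ins for the study's high-dimensional visual semantic space): the semantic vector inferred from the electrocorticogram is compared with category vectors by cosine similarity, and the closest category drives the feedback, so imagery can pull the inferred vector toward the imagined category even while a conflicting image is shown.

```python
# Minimal sketch of closed-loop semantic feedback (invented vectors):
# pick the category whose vector is most cosine-similar to the vector
# decoded from neural activity.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

categories = {'face': (1.0, 0.0), 'word': (0.0, 1.0)}
inferred = (0.8, 0.3)   # decoded while viewing a word but imagining a face
closest = max(categories, key=lambda c: cosine(inferred, categories[c]))
print(closest)  # face
```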
33
Karimi-Rouzbahani H, Woolgar A. When the Whole Is Less Than the Sum of Its Parts: Maximum Object Category Information and Behavioral Prediction in Multiscale Activation Patterns. Front Neurosci 2022; 16:825746. [PMID: 35310090 PMCID: PMC8924472 DOI: 10.3389/fnins.2022.825746] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2021] [Accepted: 01/24/2022] [Indexed: 11/19/2022] Open
Abstract
Neural codes are reflected in complex neural activation patterns. Conventional electroencephalography (EEG) decoding analyses summarize activations by averaging/down-sampling signals within the analysis window. This diminishes informative fine-grained patterns. While previous studies have proposed distinct statistical features capable of capturing variability-dependent neural codes, it has been suggested that the brain could use a combination of encoding protocols not reflected in any one mathematical feature alone. To test this, we combined 30 features using state-of-the-art supervised and unsupervised feature selection procedures (n = 17). Across three datasets, we compared decoding of visual object category between these 17 sets of combined features, and between combined and individual features. Object category could be robustly decoded using the combined features from all of the 17 algorithms. However, the combined feature sets, which were equalized in dimension to the individual features, were outperformed across most of the time points by the multiscale feature of wavelet coefficients. Moreover, the wavelet coefficients also explained the behavioral performance more accurately than the combined features. These results suggest that a single but multiscale encoding protocol may capture the EEG neural codes better than any combination of protocols. Our findings put new constraints on the models of neural information encoding in EEG.
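The winning multiscale feature can be illustrated with one level of a Haar wavelet transform (synthetic epoch; the study used full wavelet decompositions): the signal is split into coarse approximation and fine detail coefficients.

```python
# Illustrative multiscale feature: a single-level Haar transform splits
# an epoch into coarse (approximation) and fine (detail) coefficients.
# The epoch below is synthetic.

SQRT2 = 2 ** 0.5

def haar_level(signal):
    """One level of the Haar wavelet transform (even-length input)."""
    approx = [(signal[i] + signal[i + 1]) / SQRT2
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / SQRT2
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail

epoch = [1.0, 1.0, 2.0, 0.0, 3.0, 3.0, 0.0, 4.0]
approx, detail = haar_level(epoch)
print(approx)  # slow trend within each pair of samples
print(detail)  # fast fluctuation within each pair of samples
```

Recursing `haar_level` on the approximation yields progressively coarser scales; concatenating coefficients across scales produces the kind of multiscale feature vector compared against the combined single-scale features above.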
Affiliation(s)
- Hamid Karimi-Rouzbahani
- Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom
- Department of Cognitive Science, Perception in Action Research Centre, Macquarie University, Sydney, NSW, Australia
- Department of Computing, Macquarie University, Sydney, NSW, Australia
- Alexandra Woolgar
- Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom
- Department of Cognitive Science, Perception in Action Research Centre, Macquarie University, Sydney, NSW, Australia
34
Decramer T, Premereur E, Zhu Q, Van Paesschen W, van Loon J, Vanduffel W, Taubert J, Janssen P, Theys T. Single-Unit Recordings Reveal the Selectivity of a Human Face Area. J Neurosci 2021; 41:9340-9349. [PMID: 34732521 PMCID: PMC8580152 DOI: 10.1523/jneurosci.0349-21.2021] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2021] [Revised: 08/24/2021] [Accepted: 08/26/2021] [Indexed: 11/21/2022] Open
Abstract
The exquisite capacity of primates to detect and recognize faces is crucial for social interactions. Although disentangling the neural basis of human face recognition remains a key goal in neuroscience, direct evidence at the single-neuron level is limited. We recorded from face-selective neurons in human visual cortex in a region characterized by functional magnetic resonance imaging (fMRI) activations for faces compared with objects. The majority of visually responsive neurons in this fMRI activation showed strong selectivity at short latencies for faces compared with objects. Feature-scrambled faces and face-like objects could also drive these neurons, suggesting that this region is not tightly tuned to the visual attributes that typically define whole human faces. These single-cell recordings within the human face processing system provide vital experimental evidence linking previous imaging studies in humans and invasive studies in animal models.
SIGNIFICANCE STATEMENT We present the first recordings of face-selective neurons in or near an fMRI-defined patch in human visual cortex. Our unbiased multielectrode array recordings (i.e., no selection of neurons based on a search strategy) confirmed the validity of the BOLD contrast (faces-objects) in humans, a finding with implications for all human imaging studies. By presenting faces, feature-scrambled faces, and face-pareidolia (perceiving faces in inanimate objects) stimuli, we demonstrate that neurons at this level of the visual hierarchy are broadly tuned to the features of a face, independent of spatial configuration and low-level visual attributes.
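A common way to summarize the kind of single-neuron selectivity described above is a face-selectivity index contrasting mean responses to faces versus objects (the firing rates below are invented; the paper's own statistics may differ):

```python
# Face-selectivity index on toy firing rates: ranges from -1
# (object-preferring) through 0 (unselective) to 1 (face-preferring).

def face_selectivity_index(face_rate, object_rate):
    return (face_rate - object_rate) / (face_rate + object_rate)

print(face_selectivity_index(30.0, 10.0))  # 0.5: strongly face-selective
print(face_selectivity_index(12.0, 12.0))  # 0.0: not selective
```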
Affiliation(s)
- Thomas Decramer
- Research Group Experimental Neurosurgery and Neuroanatomy, Katholieke Universiteit Leuven, Leuven Brain Institute, 3000 Leuven, Belgium
- Departments of Neurosurgery and
- Laboratory for Neuro- and Psychophysiology, Katholieke Universiteit Leuven, Leuven Brain Institute, 3000 Leuven, Belgium
- Elsie Premereur
- Laboratory for Neuro- and Psychophysiology, Katholieke Universiteit Leuven, Leuven Brain Institute, 3000 Leuven, Belgium
- Qi Zhu
- Laboratory for Neuro- and Psychophysiology, Katholieke Universiteit Leuven, Leuven Brain Institute, 3000 Leuven, Belgium
- Wim Van Paesschen
- Neurology, University Hospitals Leuven, 3000 Leuven, Belgium
- Laboratory for Epilepsy Research, Katholieke Universiteit Leuven, 3000 Leuven, Belgium
- Johannes van Loon
- Research Group Experimental Neurosurgery and Neuroanatomy, Katholieke Universiteit Leuven, Leuven Brain Institute, 3000 Leuven, Belgium
- Departments of Neurosurgery and
- Wim Vanduffel
- Laboratory for Neuro- and Psychophysiology, Katholieke Universiteit Leuven, Leuven Brain Institute, 3000 Leuven, Belgium
- Jessica Taubert
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, Maryland 20892
- Peter Janssen
- Laboratory for Neuro- and Psychophysiology, Katholieke Universiteit Leuven, Leuven Brain Institute, 3000 Leuven, Belgium
- Tom Theys
- Research Group Experimental Neurosurgery and Neuroanatomy, Katholieke Universiteit Leuven, Leuven Brain Institute, 3000 Leuven, Belgium
- Departments of Neurosurgery and
35
Yao Y, Wu Y, Xu T, Chen F. Mining Temporal Dynamics With Support Vector Machine for Predicting the Neural Fate of Target in Attentional Blink. Front Syst Neurosci 2021; 15:734660. [PMID: 34776884 PMCID: PMC8589014 DOI: 10.3389/fnsys.2021.734660] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2021] [Accepted: 10/04/2021] [Indexed: 12/04/2022] Open
Abstract
Our brains do not mechanically process incoming stimuli; on the contrary, the physiological state of the brain preceding a stimulus has substantial consequences for subsequent behavior and neural processing. Although previous studies have acknowledged the importance of this top-down process, interest in exploring the underlying neural mechanism quantitatively has grown only recently. By utilizing the attentional blink (AB) effect, this study aimed to identify the neural mechanism of brain states preceding T2 and to predict its behavioral outcome. Interarea phase synchronization and its role in prediction were explored using the phase-locking value and support vector machine classifiers. Our results showed that phase coupling in the alpha and beta frequency bands before T1 and during the T1-T2 interval could predict the detection of T2 at lag 3 with high accuracy. These findings indicate the important role of the brain state before stimuli appear in predicting behavioral performance in the AB, thus supporting attention control theories.
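The phase-locking value (PLV) used above to quantify interarea phase synchronization is the magnitude of the mean phase-difference vector across trials: 1 for perfect locking, near 0 for random phase relations. A minimal sketch (phases below are invented):

```python
# Phase-locking value between two channels across trials (toy phases,
# in radians). A constant phase lag still yields PLV = 1.
import cmath
import math

def plv(phases_a, phases_b):
    total = sum(cmath.exp(1j * (pa - pb))
                for pa, pb in zip(phases_a, phases_b))
    return abs(total) / len(phases_a)

locked = [0.1, 1.2, 2.0, -0.5]
lagged = [p + 0.7 for p in locked]      # constant phase lag across trials
noisy  = [0.0, math.pi, 0.3, -2.9]      # inconsistent phase relation

print(round(plv(locked, lagged), 3))  # 1.0: perfectly phase-locked
print(plv(locked, noisy) < 0.5)       # True: weak locking
```

Feeding such PLV values, channel pair by channel pair, into a classifier is the shape of the SVM prediction analysis described in the abstract.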
Affiliation(s)
- Yuan Yao
- Bio-X Laboratory, Department of Physics, Zhejiang University, Hangzhou, China
- Department of Education, Suzhou University of Science and Technology, Suzhou, China
- Yunying Wu
- Institute of Psychological Sciences, Hangzhou Normal University, Hangzhou, China
- Center for Cognition and Brain Disorders, The Affiliated Hospital of Hangzhou Normal University, Hangzhou, China
- Zhejiang Key Laboratory for Research in Assessment of Cognitive Impairments, Hangzhou, China
- Tianyong Xu
- Bio-X Laboratory, Department of Physics, Zhejiang University, Hangzhou, China
- Feiyan Chen
- Bio-X Laboratory, Department of Physics, Zhejiang University, Hangzhou, China
36
Wang J, Tao A, Anderson WS, Madsen JR, Kreiman G. Mesoscopic physiological interactions in the human brain reveal small-world properties. Cell Rep 2021; 36:109585. [PMID: 34433053 PMCID: PMC8457376 DOI: 10.1016/j.celrep.2021.109585] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2020] [Revised: 06/22/2021] [Accepted: 07/28/2021] [Indexed: 11/23/2022] Open
Abstract
Cognition depends on rapid and robust communication between neural circuits spanning different brain areas. We investigated the mesoscopic network of cortico-cortical interactions in the human brain in an extensive dataset consisting of 6,024 h of intracranial field potential recordings from 4,142 electrodes in 48 subjects. We evaluated communication between brain areas at the network level across different frequency bands. The interaction networks were validated against known anatomical measurements and neurophysiological interactions in humans and monkeys. The resulting human brain interactome is characterized by a broad, spatially specific, dynamic, and extensive network. The physiological interactome reveals small-world properties, which we conjecture might facilitate efficient and reliable information transmission. The interaction dynamics correlate with the brain's sleep/wake state. These results constitute initial steps toward understanding how the interactome orchestrates cortical communication and provide a reference for future efforts assessing how dysfunctional interactions may lead to mental disorders.

Cognition relies on rapid and robust communication between brain areas. Wang et al. leverage multi-day intracranial field potential recordings to characterize the human mesoscopic functional interactome, validate the methods using monkey anatomical and physiological data, and show that the interactome exhibits small-world properties and is modulated by sleep versus wake state.
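The "small-world" property claimed here combines high local clustering with near-random path lengths. A hedged toy with networkx sketches the comparison: a Watts-Strogatz graph stands in for the measured interaction network, and a fully rewired (p = 1) version serves as the random baseline. All graph sizes and probabilities are made up; this is not the paper's iEEG-derived interactome.

```python
import networkx as nx

n, k, p = 200, 10, 0.1  # nodes, ring neighbours, rewiring probability (toy values)

ws = nx.connected_watts_strogatz_graph(n, k, p, seed=1)     # "small-world" network
rnd = nx.connected_watts_strogatz_graph(n, k, 1.0, seed=1)  # random baseline

C_ws, C_rnd = nx.average_clustering(ws), nx.average_clustering(rnd)
L_ws, L_rnd = (nx.average_shortest_path_length(g) for g in (ws, rnd))

# Small-world coefficient sigma = (C/C_rand) / (L/L_rand); sigma > 1 means
# clustering well above random achieved at near-random path lengths.
sigma = (C_ws / C_rnd) / (L_ws / L_rnd)
print(f"C={C_ws:.2f} vs {C_rnd:.2f}, L={L_ws:.2f} vs {L_rnd:.2f}, sigma={sigma:.1f}")
```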
Affiliation(s)
- Gabriel Kreiman
- Harvard Medical School, Boston, MA, USA; Center for Brains, Minds and Machines, Cambridge, MA, USA.

37
Wischnewski M, Peelen MV. Causal neural mechanisms of context-based object recognition. eLife 2021; 10:e69736. [PMID: 34374647] [PMCID: PMC8354632] [DOI: 10.7554/eLife.69736]
Abstract
Objects can be recognized based on their intrinsic features, including shape, color, and texture. In daily life, however, such features are often not clearly visible, for example when objects appear in the periphery, in clutter, or at a distance. Interestingly, object recognition can still be highly accurate under these conditions when objects are seen within their typical scene context. What are the neural mechanisms of context-based object recognition? According to parallel processing accounts, context-based object recognition is supported by the parallel processing of object and scene information in separate pathways. Output of these pathways is then combined in downstream regions, leading to contextual benefits in object recognition. Alternatively, according to feedback accounts, context-based object recognition is supported by (direct or indirect) feedback from scene-selective to object-selective regions. Here, in three pre-registered transcranial magnetic stimulation (TMS) experiments, we tested a key prediction of the feedback hypothesis: that scene-selective cortex causally and selectively supports context-based object recognition before object-selective cortex does. Early visual cortex (EVC), object-selective lateral occipital cortex (LOC), and scene-selective occipital place area (OPA) were stimulated at three time points relative to stimulus onset while participants categorized degraded objects in scenes and intact objects in isolation, in different trials. Results confirmed our predictions: relative to isolated object recognition, context-based object recognition was selectively and causally supported by OPA at 160–200 ms after onset, followed by LOC at 260–300 ms after onset. These results indicate that context-based expectations facilitate object recognition by disambiguating object representations in the visual cortex.
Affiliation(s)
- Miles Wischnewski
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands; Department of Biomedical Engineering, University of Minnesota, Minneapolis, United States
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands

38
Kalafatis C, Modarres MH, Apostolou P, Marefat H, Khanbagi M, Karimi H, Vahabi Z, Aarsland D, Khaligh-Razavi SM. Validity and Cultural Generalisability of a 5-Minute AI-Based, Computerised Cognitive Assessment in Mild Cognitive Impairment and Alzheimer's Dementia. Front Psychiatry 2021; 12:706695. [PMID: 34366938] [PMCID: PMC8339427] [DOI: 10.3389/fpsyt.2021.706695]
Abstract
Introduction: Early detection and monitoring of mild cognitive impairment (MCI) and Alzheimer's disease (AD) are key to tackling dementia and to providing benefits to patients, caregivers, healthcare providers, and society. We developed the Integrated Cognitive Assessment (ICA), a 5-min, language-independent, computerised cognitive test that employs an artificial intelligence (AI) model to improve its accuracy in detecting cognitive impairment. In this study, we aimed to evaluate the generalisability of the ICA in detecting cognitive impairment in MCI and mild AD patients. Methods: We studied the ICA in 230 participants: 95 healthy volunteers, 80 MCI patients, and 55 mild AD patients completed the ICA, the Montreal Cognitive Assessment (MoCA), and Addenbrooke's Cognitive Examination (ACE). Results: The ICA demonstrated convergent validity with the MoCA (Pearson r = 0.58, p < 0.0001) and the ACE (r = 0.62, p < 0.0001). The ICA AI model detected cognitive impairment with an AUC of 81% for MCI patients and 88% for mild AD patients, its performance improved with more training data, and it generalised from one population to another. The ICA's correlation with years of education (r = 0.17, p = 0.01) was considerably smaller than the significant correlations of the MoCA (r = 0.34, p < 0.0001) and the ACE (r = 0.41, p < 0.0001). In a separate study, the ICA demonstrated no significant practice effect over the duration of the study. Discussion: The ICA can support clinicians by aiding accurate diagnosis of MCI and AD and is appropriate for large-scale screening of cognitive impairment. The ICA is unbiased by differences in language, culture, and education.
Affiliation(s)
- Chris Kalafatis
- Cognetivity Ltd, London, United Kingdom
- South London & Maudsley NHS Foundation Trust, London, United Kingdom
- Department of Old Age Psychiatry, King's College London, London, United Kingdom
- Haniye Marefat
- School of Cognitive Sciences, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
- Mahdiyeh Khanbagi
- Department of Stem Cells and Developmental Biology, Cell Science Research Centre, Royan Institute for Stem Cell Biology and Technology, ACECR, Tehran, Iran
- Hamed Karimi
- Department of Stem Cells and Developmental Biology, Cell Science Research Centre, Royan Institute for Stem Cell Biology and Technology, ACECR, Tehran, Iran
- Zahra Vahabi
- Tehran University of Medical Sciences, Tehran, Iran
- Dag Aarsland
- Department of Old Age Psychiatry, King's College London, London, United Kingdom
- Seyed-Mahdi Khaligh-Razavi
- Cognetivity Ltd, London, United Kingdom
- Department of Stem Cells and Developmental Biology, Cell Science Research Centre, Royan Institute for Stem Cell Biology and Technology, ACECR, Tehran, Iran

39
Gilis J, Vitting-Seerup K, Van den Berge K, Clement L. satuRn: Scalable analysis of differential transcript usage for bulk and single-cell RNA-sequencing applications. F1000Res 2021; 10:374. [PMID: 36762203] [PMCID: PMC9892655] [DOI: 10.12688/f1000research.51749.2]
Abstract
Alternative splicing produces multiple functional transcripts from a single gene. Dysregulation of splicing is associated with disease and is a hallmark of cancer. Existing tools for differential transcript usage (DTU) analysis either fall short in performance, cannot accommodate complex experimental designs, or do not scale to massive single-cell transcriptome sequencing (scRNA-seq) datasets. We introduce satuRn, a fast and flexible quasi-binomial generalized linear modelling framework that is on par with the best-performing DTU methods from the bulk RNA-seq realm while providing good false discovery rate control, accommodating complex experimental designs, and scaling to scRNA-seq applications.
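satuRn itself is an R/Bioconductor package that fits quasi-binomial GLMs; the core idea (a transcript's counts are modelled relative to its gene's total, and the usage proportion is compared between conditions) can be roughly illustrated in Python. The toy below tests for a shift in logit-scale transcript usage with a simple Welch t-test; all counts and group sizes are invented, and the test is a simplified stand-in for the package's actual quasi-binomial model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def logit_usage(tx_counts, gene_counts, pseudo=0.5):
    """Per-sample transcript usage (transcript / gene total) on the logit scale."""
    p = (tx_counts + pseudo) / (gene_counts + 2 * pseudo)
    return np.log(p / (1 - p))

# Toy data: 30 samples per group; the transcript accounts for ~30% of the
# gene's output in group A but ~60% in group B (a clear DTU signal).
gene_a = rng.poisson(200, 30)
gene_b = rng.poisson(200, 30)
tx_a = rng.binomial(gene_a, 0.3)
tx_b = rng.binomial(gene_b, 0.6)

t, pval = stats.ttest_ind(logit_usage(tx_a, gene_a),
                          logit_usage(tx_b, gene_b), equal_var=False)
print(f"t={t:.1f}, p={pval:.2g}")  # the usage shift is detected
```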
Affiliation(s)
- Jeroen Gilis
- Applied Mathematics, Computer science and Statistics, Ghent University, Ghent, 9000, Belgium
- Data Mining and Modeling for Biomedicine, VIB Flemish Institute for Biotechnology, Ghent, 9000, Belgium
- Bioinformatics Institute, Ghent University, Ghent, 9000, Belgium
- Kristoffer Vitting-Seerup
- Department of Biology, Kobenhavns Universitet, Copenhagen, 2200, Denmark
- Biotech Research and Innovation Centre (BRIC), Kobenhavns Universitet, Copenhagen, 2200, Denmark
- Danish Cancer Society Research Center, Copenhagen, 2100, Denmark
- Department of Health Technology, Danish Technical University, Kongens Lyngby, 2800, Denmark
- Koen Van den Berge
- Applied Mathematics, Computer science and Statistics, Ghent University, Ghent, 9000, Belgium
- Bioinformatics Institute, Ghent University, Ghent, 9000, Belgium
- Department of Statistics, University of California, Berkeley, Berkeley, California, USA
- Lieven Clement
- Applied Mathematics, Computer science and Statistics, Ghent University, Ghent, 9000, Belgium
- Bioinformatics Institute, Ghent University, Ghent, 9000, Belgium

40
Gilis J, Vitting-Seerup K, Van den Berge K, Clement L. satuRn: Scalable analysis of differential transcript usage for bulk and single-cell RNA-sequencing applications. F1000Res 2021; 10:374. [PMID: 36762203] [PMCID: PMC9892655] [DOI: 10.12688/f1000research.51749.1]
Abstract
Alternative splicing produces multiple functional transcripts from a single gene. Dysregulation of splicing is associated with disease and is a hallmark of cancer. Existing tools for differential transcript usage (DTU) analysis either fall short in performance, cannot accommodate complex experimental designs, or do not scale to massive single-cell RNA-sequencing (scRNA-seq) datasets. We introduce satuRn, a fast and flexible quasi-binomial generalized linear modelling framework that is on par with the best-performing DTU methods from the bulk RNA-seq realm while providing good false discovery rate control, accommodating complex experimental designs, and scaling to scRNA-seq applications.
Affiliation(s)
- Jeroen Gilis
- Applied Mathematics, Computer science and Statistics, Ghent University, Ghent, 9000, Belgium
- Data Mining and Modeling for Biomedicine, VIB Flemish Institute for Biotechnology, Ghent, 9000, Belgium
- Bioinformatics Institute, Ghent University, Ghent, 9000, Belgium
- Kristoffer Vitting-Seerup
- Department of Biology, Kobenhavns Universitet, Copenhagen, 2200, Denmark
- Biotech Research and Innovation Centre (BRIC), Kobenhavns Universitet, Copenhagen, 2200, Denmark
- Danish Cancer Society Research Center, Copenhagen, 2100, Denmark
- Department of Health Technology, Danish Technical University, Kongens Lyngby, 2800, Denmark
- Koen Van den Berge
- Applied Mathematics, Computer science and Statistics, Ghent University, Ghent, 9000, Belgium
- Bioinformatics Institute, Ghent University, Ghent, 9000, Belgium
- Department of Statistics, University of California, Berkeley, Berkeley, California, USA
- Lieven Clement
- Applied Mathematics, Computer science and Statistics, Ghent University, Ghent, 9000, Belgium
- Bioinformatics Institute, Ghent University, Ghent, 9000, Belgium

41
Moleirinho S, Whalen AJ, Fried SI, Pezaris JS. The impact of synchronous versus asynchronous electrical stimulation in artificial vision. J Neural Eng 2021; 18. [PMID: 33900206] [PMCID: PMC11565581] [DOI: 10.1088/1741-2552/abecf1]
Abstract
Visual prosthesis devices designed to restore sight to the blind have been under development in the laboratory for several decades. Clinical translation continues to be challenging, due in part to gaps in our understanding of critical parameters such as how phosphenes, the electrically generated pixels of artificial vision, can be combined to form images. In this review we explore the effects that synchronous and asynchronous electrical stimulation across multiple electrodes have on the evocation of phosphenes. Understanding how electrical patterns influence phosphene generation to control object binding and the perception of visual form is fundamental to the creation of a clinically successful prosthesis.
Affiliation(s)
- Susana Moleirinho
- Department of Neurosurgery, Massachusetts General Hospital, Boston, MA, United States of America
- Department of Neurosurgery, Harvard Medical School, Boston, MA, United States of America
- Andrew J Whalen
- Department of Neurosurgery, Massachusetts General Hospital, Boston, MA, United States of America
- Department of Neurosurgery, Harvard Medical School, Boston, MA, United States of America
- Shelley I Fried
- Department of Neurosurgery, Massachusetts General Hospital, Boston, MA, United States of America
- Department of Neurosurgery, Harvard Medical School, Boston, MA, United States of America
- Boston VA Healthcare System, Boston, MA, United States of America
- John S Pezaris
- Department of Neurosurgery, Massachusetts General Hospital, Boston, MA, United States of America
- Department of Neurosurgery, Harvard Medical School, Boston, MA, United States of America

42
Poncet M, Fabre-Thorpe M, Chakravarthi R. A simple rule to describe interactions between visual categories. Eur J Neurosci 2020; 52:4639-4666. [DOI: 10.1111/ejn.14890]
Affiliation(s)
- Marlene Poncet
- CerCo, Université de Toulouse, CNRS, UPS, Toulouse, France
- School of Psychology, University of St Andrews, St Andrews, UK

43
Vlcek K, Fajnerova I, Nekovarova T, Hejtmanek L, Janca R, Jezdik P, Kalina A, Tomasek M, Krsek P, Hammer J, Marusic P. Mapping the Scene and Object Processing Networks by Intracranial EEG. Front Hum Neurosci 2020; 14:561399. [PMID: 33192393] [PMCID: PMC7581859] [DOI: 10.3389/fnhum.2020.561399]
Abstract
Human perception and cognition are based predominantly on visual information processing. Much of what we know about the neuronal correlates of visual processing comes from functional imaging studies, which have identified a variety of brain areas contributing to the analysis, recognition, and processing of objects and scenes. However, only two of these areas, the parahippocampal place area (PPA) and the lateral occipital complex (LOC), have been verified and further characterized by intracranial electroencephalography (iEEG), a unique measurement technique that samples a local neuronal population with high temporal and anatomical resolution. In the present study, we aimed to expand on previous reports and examine brain activity for scene and object selectivity in the broadband high-gamma frequency range (50–150 Hz). We collected iEEG data from 27 epileptic patients while they watched a series of images containing objects and scenes, and we identified 375 bipolar channels responding to at least one of the two categories. Using K-means clustering, we delineated their brain localization. In addition to the two areas described previously, we detected significant responses in two other scene-selective areas not yet reported by any electrophysiological study: the occipital place area (OPA) and the retrosplenial complex. Moreover, using iEEG we revealed a much broader network underlying visual processing than described to date with specialized functional imaging designs. The scene-selective areas also include the posterior collateral sulcus and the anterior temporal region, previously linked to scene novelty and landmark naming, while object-selective responses appeared in parietal, frontal, and temporal regions connected with tool use and object recognition. The temporal analyses specified the time course of category selectivity through the dorsal and ventral visual streams. Receiver operating characteristic (ROC) analyses identified the PPA and the fusiform portion of the LOC as the most selective for scenes and objects, respectively. Our findings represent a valuable overview of visual processing selectivity for scenes and objects based on iEEG analyses and thus contribute to a better understanding of visual processing in the human brain.
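Two analysis steps named in this abstract, grouping channels by their response profiles (K-means) and quantifying per-channel category selectivity (ROC), can be sketched with scikit-learn on synthetic data. Every number below (response means, trial counts, channel counts) is a made-up stand-in for the actual iEEG measurements.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)

# Per-trial high-gamma responses for one hypothetical "scene-selective"
# channel: stronger responses on scene trials than on object trials.
scene_resp = rng.normal(1.0, 0.5, 100)
object_resp = rng.normal(0.2, 0.5, 100)
labels = np.r_[np.ones(100), np.zeros(100)]       # 1 = scene trial
auc = roc_auc_score(labels, np.r_[scene_resp, object_resp])

# Channel grouping: cluster channels by their mean (scene, object)
# response pair, mimicking the K-means step over channel tuning.
chan_features = np.vstack([rng.normal([1.0, 0.2], 0.1, (20, 2)),   # scene-preferring
                           rng.normal([0.2, 1.0], 0.1, (20, 2))])  # object-preferring
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(chan_features)

print(f"AUC={auc:.2f}, cluster sizes={np.bincount(clusters)}")
```

An AUC near 0.5 would indicate no selectivity; values approaching 1 indicate a channel that reliably separates the two categories.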
Affiliation(s)
- Kamil Vlcek
- Department of Neurophysiology of Memory, Institute of Physiology, Czech Academy of Sciences, Prague, Czechia
- Iveta Fajnerova
- Department of Neurophysiology of Memory, Institute of Physiology, Czech Academy of Sciences, Prague, Czechia; National Institute of Mental Health, Prague, Czechia
- Tereza Nekovarova
- Department of Neurophysiology of Memory, Institute of Physiology, Czech Academy of Sciences, Prague, Czechia; National Institute of Mental Health, Prague, Czechia
- Lukas Hejtmanek
- Department of Neurophysiology of Memory, Institute of Physiology, Czech Academy of Sciences, Prague, Czechia
- Radek Janca
- Department of Circuit Theory, Faculty of Electrical Engineering, Czech Technical University in Prague, Prague, Czechia
- Petr Jezdik
- Department of Circuit Theory, Faculty of Electrical Engineering, Czech Technical University in Prague, Prague, Czechia
- Adam Kalina
- Department of Neurology, Second Faculty of Medicine, Charles University and Motol University Hospital, Prague, Czechia
- Martin Tomasek
- Department of Neurosurgery, Second Faculty of Medicine, Charles University and Motol University Hospital, Prague, Czechia
- Pavel Krsek
- Department of Paediatric Neurology, Second Faculty of Medicine, Charles University and Motol University Hospital, Prague, Czechia
- Jiri Hammer
- Department of Neurology, Second Faculty of Medicine, Charles University and Motol University Hospital, Prague, Czechia
- Petr Marusic
- Department of Neurology, Second Faculty of Medicine, Charles University and Motol University Hospital, Prague, Czechia

44
García AM, Hesse E, Birba A, Adolfi F, Mikulan E, Caro MM, Petroni A, Bekinschtein TA, del Carmen García M, Silva W, Ciraolo C, Vaucheret E, Sedeño L, Ibáñez A. Time to Face Language: Embodied Mechanisms Underpin the Inception of Face-Related Meanings in the Human Brain. Cereb Cortex 2020; 30:6051-6068. [PMID: 32577713] [PMCID: PMC7673477] [DOI: 10.1093/cercor/bhaa178]
Abstract
In construing meaning, the brain recruits multimodal (conceptual) systems and embodied (modality-specific) mechanisms. Yet no consensus exists on how crucial the latter are for the inception of semantic distinctions. To address this issue, we combined scalp electroencephalography (EEG) and intracranial EEG (iEEG) to examine when nouns denoting facial body parts (FBPs) and non-FBP nouns are discriminated in face-processing and multimodal networks. First, FBP words increased N170 amplitude (a hallmark of early facial processing). Second, they triggered fast (~100 ms) activity boosts within the face-processing network, alongside later (~275 ms) effects in multimodal circuits. Third, iEEG recordings from face-processing hubs allowed decoding of ~80% of items before 200 ms, while classification based on multimodal-network activity surpassed ~70% only after 250 ms. Finally, EEG and iEEG connectivity between the two networks proved greater in early (0-200 ms) than in later (200-400 ms) windows. Collectively, our findings indicate that, at least for some lexico-semantic categories, meaning is construed through fast reenactments of modality-specific experience.
Affiliation(s)
- Adolfo M García
- Universidad de San Andrés, B1644BID Buenos Aires, Argentina
- National Scientific and Technical Research Council (CONICET), C1425FQB Buenos Aires, Argentina
- Faculty of Education, National University of Cuyo (UNCuyo), MM5502GKA Mendoza, Argentina
- Departamento de Lingüística y Literatura, Facultad de Humanidades, Universidad de Santiago de Chile, 9170020 Santiago, Chile
- Global Brain Health Institute, University of California, CA 94158 San Francisco, USA
- Eugenia Hesse
- Universidad de San Andrés, B1644BID Buenos Aires, Argentina
- National Scientific and Technical Research Council (CONICET), C1425FQB Buenos Aires, Argentina
- Agustina Birba
- Universidad de San Andrés, B1644BID Buenos Aires, Argentina
- National Scientific and Technical Research Council (CONICET), C1425FQB Buenos Aires, Argentina
- Federico Adolfi
- National Scientific and Technical Research Council (CONICET), C1425FQB Buenos Aires, Argentina
- Ezequiel Mikulan
- Department of Biomedical and Clinical Sciences “L. Sacco”, University of Milan, 20122 Milan, Italy
- Miguel Martorell Caro
- National Scientific and Technical Research Council (CONICET), C1425FQB Buenos Aires, Argentina
- Agustín Petroni
- Instituto de Ingeniería Biomédica, Facultad de Ingeniería, Universidad de Buenos Aires, C1063ACV Buenos Aires, Argentina
- Laboratorio de Inteligencia Artificial Aplicada, Departamento de Computación, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, ICC-CONICET, C1063ACV Buenos Aires, Argentina
- María del Carmen García
- Programa de Cirugía de Epilepsia, Hospital Italiano de Buenos Aires, C1181ACH, Buenos Aires, Argentina
- Walter Silva
- Programa de Cirugía de Epilepsia, Hospital Italiano de Buenos Aires, C1181ACH, Buenos Aires, Argentina
- Carlos Ciraolo
- Programa de Cirugía de Epilepsia, Hospital Italiano de Buenos Aires, C1181ACH, Buenos Aires, Argentina
- Esteban Vaucheret
- Programa de Cirugía de Epilepsia, Hospital Italiano de Buenos Aires, C1181ACH, Buenos Aires, Argentina
- Lucas Sedeño
- National Scientific and Technical Research Council (CONICET), C1425FQB Buenos Aires, Argentina
- Agustín Ibáñez
- Universidad de San Andrés, B1644BID Buenos Aires, Argentina
- National Scientific and Technical Research Council (CONICET), C1425FQB Buenos Aires, Argentina
- Global Brain Health Institute, University of California, CA 94158 San Francisco, USA
- Center for Social and Cognitive Neuroscience (CSCN), School of Psychology, Universidad Adolfo Ibáñez, 8320000, Santiago, Chile
- Universidad Autónoma del Caribe, 080003, Barranquilla, Colombia

45
Ter Wal M, Platonov A, Cardellicchio P, Pelliccia V, LoRusso G, Sartori I, Avanzini P, Orban GA, Tiesinga PHE. Human stereoEEG recordings reveal network dynamics of decision-making in a rule-switching task. Nat Commun 2020; 11:3075. [PMID: 32555174] [PMCID: PMC7300004] [DOI: 10.1038/s41467-020-16854-w]
Abstract
The processing steps that lead up to a decision, i.e., the transformation of sensory evidence into motor output, are not fully understood. Here, we combine stereoEEG recordings from the human cortex, with single-lead and time-resolved decoding, using a wide range of temporal frequencies, to characterize decision processing during a rule-switching task. Our data reveal the contribution of rostral inferior parietal lobule (IPL) regions, in particular PFt, and the parietal opercular regions in decision processing and demonstrate that the network representing the decision is common to both task rules. We reconstruct the sequence in which regions engage in decision processing on single trials, thereby providing a detailed picture of the network dynamics involved in decision-making. The reconstructed timeline suggests that the supramarginal gyrus in IPL links decision regions in prefrontal cortex with premotor regions, where the motor plan for the response is elaborated.
Affiliation(s)
- Marije Ter Wal
- Department of Neuroinformatics, Donders Institute, Radboud University, Heyendaalseweg 135, 6525 AJ, Nijmegen, The Netherlands.
- School of Psychology, University of Birmingham, Edgbaston, B15 2TT, UK.
- Artem Platonov
- Department of Medicine and Surgery, University of Parma, Via Volturno 39E, 43125, Parma, Italy
- Pasquale Cardellicchio
- Department of Medicine and Surgery, University of Parma, Via Volturno 39E, 43125, Parma, Italy
- Veronica Pelliccia
- Claudio Munari Center for Epilepsy Surgery, Niguarda Hospital, Ospedale Ca'Granda Niguarda, Piazza dell'Ospedale Maggiore, 3, 20162, Milan, Italy
- Giorgio LoRusso
- Claudio Munari Center for Epilepsy Surgery, Niguarda Hospital, Ospedale Ca'Granda Niguarda, Piazza dell'Ospedale Maggiore, 3, 20162, Milan, Italy
- Ivana Sartori
- Claudio Munari Center for Epilepsy Surgery, Niguarda Hospital, Ospedale Ca'Granda Niguarda, Piazza dell'Ospedale Maggiore, 3, 20162, Milan, Italy
- Pietro Avanzini
- Institute of Neuroscience, CNR, via Volturno 39E, 43125, Parma, Italy
- Guy A Orban
- Department of Medicine and Surgery, University of Parma, Via Volturno 39E, 43125, Parma, Italy
- Paul H E Tiesinga
- Department of Neuroinformatics, Donders Institute, Radboud University, Heyendaalseweg 135, 6525 AJ, Nijmegen, The Netherlands

46
Xiao W, Kreiman G. XDream: Finding preferred stimuli for visual neurons using generative networks and gradient-free optimization. PLoS Comput Biol 2020; 16:e1007973. [PMID: 32542056] [PMCID: PMC7316361] [DOI: 10.1371/journal.pcbi.1007973]
Abstract
A longstanding question in sensory neuroscience is what types of stimuli drive neurons to fire. The characterization of effective stimuli has traditionally been based on a combination of intuition, insights from previous studies, and luck. A new method termed XDream (EXtending DeepDream with real-time evolution for activation maximization) combined a generative neural network and a genetic algorithm in a closed loop to create strong stimuli for neurons in the macaque visual cortex. Here we extensively and systematically evaluate the performance of XDream. We used ConvNet units as in silico models of neurons, enabling experiments that would be prohibitive with biological neurons. We evaluated how the method compares to brute-force search and how well it generalizes to different neurons and processing stages, and we explored design and parameter choices. XDream can efficiently find preferred features for visual units without any prior knowledge about them. It extrapolates to different layers, architectures, and developmental regimes, performing better than brute-force search and often better than exhaustive sampling of >1 million images. Furthermore, XDream is robust to the choice of image generator, optimization algorithm, and hyperparameters, suggesting that its performance is locally near-optimal. Lastly, we found no significant advantage to problem-specific parameter tuning. These results establish expectations and provide practical recommendations for using XDream to investigate neural coding in biological preparations. Overall, XDream is an efficient, general, and robust algorithm for uncovering neuronal tuning preferences in a vast and diverse stimulus space. XDream is implemented in Python, released under the MIT License, and works on Linux, Windows, and macOS.
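The closed loop XDream implements, a gradient-free genetic search over a generator's latent codes scored only by a neuron's responses, can be caricatured in a few lines of numpy. The linear "generator" and "neuron" below are toy stand-ins (the real method uses a deep image generator and biological or ConvNet units), and all population sizes and mutation scales are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 32

# Toy stand-ins: a fixed linear "generator" mapping latent codes to
# "images", and a "neuron" with a hidden preferred pattern.
G = rng.normal(size=(dim, dim))
preferred = rng.normal(size=dim)
neuron = lambda img: preferred @ img  # scalar "firing rate"

def evolve(generations=200, pop=32, elite=8, sigma=0.3):
    """Genetic search over latent codes, using only neuron responses as scores."""
    codes = rng.normal(size=(pop, dim))
    for _ in range(generations):
        scores = np.array([neuron(G @ z) for z in codes])
        parents = codes[np.argsort(scores)[-elite:]]            # keep the best
        children = parents[rng.integers(elite, size=pop - elite)]
        children = children + sigma * rng.normal(size=(pop - elite, dim))
        codes = np.vstack([parents, children])                  # next generation
    return codes[np.argmax([neuron(G @ z) for z in codes])]

best = evolve()
print(neuron(G @ best))  # far exceeds the response to a random code
```

The key property, as in XDream, is that the optimizer never sees gradients of the neuron's response, only its scalar scores.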
Affiliation(s)
- Will Xiao
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, Massachusetts, United States of America
- Center for Brains, Minds, and Machines, Boston, Massachusetts, United States of America
- Gabriel Kreiman
- Center for Brains, Minds, and Machines, Boston, Massachusetts, United States of America
- Department of Ophthalmology, Boston Children’s Hospital, Boston, Massachusetts, United States of America

47
Retter TL, Jiang F, Webster MA, Rossion B. All-or-none face categorization in the human brain. Neuroimage 2020; 213:116685. [PMID: 32119982] [PMCID: PMC7339021] [DOI: 10.1016/j.neuroimage.2020.116685]
Abstract
Visual categorization is integral for our interaction with the natural environment. In this process, similar selective responses are produced to a class of variable visual inputs. Whether categorization is supported by partial (graded) or absolute (all-or-none) neural responses in high-level human brain regions is largely unknown. We address this issue with a novel frequency-sweep paradigm probing the evolution of face categorization responses between the minimal and optimal stimulus presentation times. In a first experiment, natural images of variable non-face objects were progressively swept from 120 to 3 Hz (8.33-333 ms duration) in rapid serial visual presentation sequences. Widely variable face exemplars appeared every 1 s, enabling an implicit frequency-tagged face-categorization electroencephalographic (EEG) response at 1 Hz. Face-categorization activity emerged with stimulus durations as brief as 17 ms (17-83 ms across individual participants) but was significant with 33 ms durations at the group level. The face categorization response amplitude increased until 83 ms stimulus duration (12 Hz), implying graded categorization responses. In a second EEG experiment, faces appeared non-periodically throughout such sequences at fixed presentation rates, while participants explicitly categorized faces. A strong correlation between response amplitude and behavioral accuracy across frequency rates suggested that dilution from missed categorizations, rather than a decreased response to each face stimulus, accounted for the graded categorization responses as found in Experiment 1. This was supported by (1) the absence of neural responses to faces that participants failed to categorize explicitly in Experiment 2 and (2) equivalent amplitudes and spatio-temporal signatures of neural responses to behaviorally categorized faces across presentation rates. Overall, these observations provide original evidence that high-level visual categorization of faces, starting at about 100 ms following stimulus onset in the human brain, is variable across observers tested under tight temporal constraints, but occurs in an all-or-none fashion.
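The frequency-tagging readout this abstract relies on can be sketched in a few lines: a periodic response embedded at a known tag frequency is recovered from the amplitude spectrum at that frequency and compared against neighbouring noise bins. The sampling rate, recording duration, and amplitudes below are illustrative assumptions on simulated data, not the study's parameters.

```python
import numpy as np

# Simulated EEG-like trace: a 12 Hz base stimulation rhythm plus a 1 Hz
# face-categorization "tag", buried in white noise. All values are assumed.
fs = 512            # sampling rate (Hz), assumed
dur = 60            # recording length (s), assumed
t = np.arange(fs * dur) / fs

rng = np.random.default_rng(0)
base = 0.5 * np.sin(2 * np.pi * 12 * t)       # base presentation rate
tag = 1.0 * np.sin(2 * np.pi * 1 * t)         # 1 Hz periodic face response
eeg = base + tag + rng.normal(0, 1, t.size)   # additive noise

# Amplitude spectrum; normalization recovers sinusoid amplitudes.
spectrum = np.abs(np.fft.rfft(eeg)) / (t.size / 2)
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amp_at(f_hz):
    """Amplitude at the FFT bin closest to f_hz."""
    return spectrum[np.argmin(np.abs(freqs - f_hz))]

# Signal-to-noise of the tag: amplitude at 1 Hz vs. neighbouring bins.
neighbours = [amp_at(1 + d) for d in (-0.2, -0.1, 0.1, 0.2)]
snr = amp_at(1.0) / np.mean(neighbours)
print(amp_at(1.0), snr)
```

Because the tag is strictly periodic, its energy concentrates in a single frequency bin, which is what makes the response separable from the base-rate stimulation and broadband noise.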
Collapse
Affiliation(s)
- Talia L Retter
- Psychological Sciences Research Institute, Institute of Neuroscience, University of Louvain, Belgium; Department of Psychology, Center for Integrative Neuroscience, University of Nevada, Reno, USA.
| | - Fang Jiang
- Department of Psychology, Center for Integrative Neuroscience, University of Nevada, Reno, USA
| | - Michael A Webster
- Department of Psychology, Center for Integrative Neuroscience, University of Nevada, Reno, USA
| | - Bruno Rossion
- Psychological Sciences Research Institute, Institute of Neuroscience, University of Louvain, Belgium; Université de Lorraine, CNRS, CRAN - UMR 7039, F-54000, Nancy, France; CHRU-Nancy, Service de Neurologie, F-54000, Nancy, France
| |
Collapse
|
48
|
Khaligh-Razavi SM, Sadeghi M, Khanbagi M, Kalafatis C, Nabavi SM. A self-administered, artificial intelligence (AI) platform for cognitive assessment in multiple sclerosis (MS). BMC Neurol 2020; 20:193. [PMID: 32423386 PMCID: PMC7236354 DOI: 10.1186/s12883-020-01736-x] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2019] [Accepted: 04/20/2020] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Cognitive impairment is common in patients with multiple sclerosis (MS). Accurate and repeatable measures of cognition have the potential to be used as markers of disease activity. METHODS We developed a 5-min computerized test to measure cognitive dysfunction in patients with MS. The proposed test - named the Integrated Cognitive Assessment (ICA) - is self-administered and language-independent. Ninety-one MS patients and 83 healthy controls (HC) took part in Substudy 1, in which each participant took the ICA test and the Brief International Cognitive Assessment for MS (BICAMS). We assessed ICA's test-retest reliability, its correlation with BICAMS, its sensitivity to discriminate patients with MS from the HC group, and its accuracy in detecting cognitive dysfunction. In Substudy 2, we recruited 48 MS patients, 38 of which had received an 8-week physical and cognitive rehabilitation programme and 10 MS patients who did not. We examined the association between the level of serum neurofilament light (NfL) in these patients and their ICA scores and Symbol Digit Modalities Test (SDMT) scores pre- and post-rehabilitation. RESULTS The ICA demonstrated excellent test-retest reliability (r = 0.94), with no learning bias, and showed a high level of convergent validity with BICAMS. The ICA was sensitive in discriminating the MS patients from the HC group, and demonstrated high accuracy (AUC = 95%) in discriminating cognitively normal from cognitively impaired participants. Additionally, we found a strong association (r = - 0.79) between ICA score and the level of NfL in MS patients before and after rehabilitation. CONCLUSIONS The ICA has the potential to be used as a digital marker of cognitive impairment and to monitor response to therapeutic interventions. In comparison to standard cognitive tools for MS, the ICA is shorter in duration, does not show a learning bias, and is independent of language.
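The two headline statistics reported for the ICA, test-retest reliability (Pearson r between sessions) and discrimination accuracy (ROC AUC), can be illustrated on simulated scores; the data and effect sizes below are made up for the sketch and are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two sessions of the same simulated cognitive score with small retest noise.
session1 = rng.normal(50, 10, 200)
session2 = session1 + rng.normal(0, 3, 200)
r = np.corrcoef(session1, session2)[0, 1]     # test-retest reliability

def auc(pos, neg):
    """ROC AUC via the rank-sum (Mann-Whitney U) identity."""
    scores = np.concatenate([pos, neg])
    ranks = scores.argsort().argsort() + 1            # 1-based ranks
    u = ranks[: len(pos)].sum() - len(pos) * (len(pos) + 1) / 2
    return u / (len(pos) * len(neg))

# Simulated groups: cognitively impaired participants score lower on average.
impaired = rng.normal(40, 8, 100)
normal = rng.normal(55, 8, 100)
discrimination = auc(normal, impaired)
print(r, discrimination)
```

The rank-sum identity avoids sweeping an explicit threshold: the AUC equals the probability that a randomly chosen participant from the higher-scoring group outranks one from the lower-scoring group.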
Collapse
Affiliation(s)
- Seyed-Mahdi Khaligh-Razavi
- Cognetivity Ltd, London, UK; Department of Brain and Cognitive Sciences, Cell Science Research Center, Royan Institute for Stem Cell Biology and Technology, ACECR, Tehran, Iran.
| | | | - Mahdiyeh Khanbagi
- Department of Brain and Cognitive Sciences, Cell Science Research Center, Royan Institute for Stem Cell Biology and Technology, ACECR, Tehran, Iran
| | - Chris Kalafatis
- Cognetivity Ltd, London, UK; South London & Maudsley NHS Foundation Trust, London, UK; Department of Old Age Psychiatry, King's College London, London, UK
| | - Seyed Massood Nabavi
- Department of Brain and Cognitive Sciences, Cell Science Research Center, Royan Institute for Stem Cell Biology and Technology, ACECR, Tehran, Iran
| |
Collapse
|
49
|
Identifying task-relevant spectral signatures of perceptual categorization in the human cortex. Sci Rep 2020; 10:7870. [PMID: 32398733 PMCID: PMC7217881 DOI: 10.1038/s41598-020-64243-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2019] [Accepted: 03/11/2020] [Indexed: 11/26/2022] Open
Abstract
The human brain has developed mechanisms to efficiently decode sensory information according to perceptual categories of high prevalence in the environment, such as faces, symbols, and objects. Neural activity produced within localized brain networks has been associated with the process that integrates both sensory bottom-up and cognitive top-down information processing. Yet how the different types and components of neural responses reflect the local networks' selectivity for categorical information processing is still unknown. In this work we train Random Forest classification models to decode eight perceptual categories from a broad spectrum of human intracranial signals (4–150 Hz, 100 subjects) obtained during a visual perception task. We then analyze which of the spectral features the algorithm deemed relevant to the perceptual decoding, and thereby gain insight into which parts of the recorded activity are actually characteristic of the visual categorization process in the human brain. We show that network selectivity for a single category or for multiple categories in sensory and non-sensory cortices is related to specific patterns of power increases and decreases in both low (4–50 Hz) and high (50–150 Hz) frequency bands. By focusing on task-relevant neural activity and separating it into dissociated anatomical and spectrotemporal groups, we uncover spectral signatures that characterize neural mechanisms of visual category perception in the human brain that have not yet been reported in the literature.
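The decoding-plus-feature-importance approach described in this abstract can be sketched with scikit-learn: band-power features per trial are fed to a Random Forest, and `feature_importances_` reports which spectral features drove the classification. The data, band names, and effect below are simulated, not the study's recordings, and a real analysis would cross-validate rather than score on the training set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n_trials = 400
bands = ["theta", "alpha", "beta", "low-gamma", "high-gamma"]  # assumed feature set

# Simulated z-scored band power per trial; high-gamma carries the category.
X = rng.normal(0, 1, (n_trials, len(bands)))
y = rng.integers(0, 2, n_trials)        # two categories, e.g. face vs. object
X[y == 1, 4] += 1.5                     # injected category signal in high-gamma

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, y)
acc = clf.score(X, y)   # training accuracy; use cross-validation in practice
top_band = bands[int(np.argmax(clf.feature_importances_))]
print(top_band, acc)
```

Inspecting the fitted forest's importances, rather than only its accuracy, is what lets this kind of analysis say *which* spectral features were task-relevant.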
Collapse
|
50
|
Abstract
Intracranial electroencephalography (iEEG) is measured from electrodes placed in or on the brain. These measurements have an excellent signal-to-noise ratio and iEEG signals have often been used to decode brain activity or drive brain-computer interfaces (BCIs). iEEG recordings are typically done for seizure monitoring in epilepsy patients who have these electrodes placed for a clinical purpose: to localize both brain regions that are essential for function and others where seizures start. Brain regions not involved in epilepsy are thought to function normally and provide a unique opportunity to learn about human neurophysiology. Intracranial electrodes measure the aggregate activity of large neuronal populations and recorded signals contain many features. Different features are extracted by analyzing these signals in the time and frequency domain. The time domain may reveal an evoked potential at a particular time after the onset of an event. Decomposition into the frequency domain may show narrowband peaks in the spectrum at specific frequencies or broadband signal changes that span a wide range of frequencies. Broadband power increases are generally observed when a brain region is active while most other features are highly specific to brain regions, inputs, and tasks. Here we describe the spatiotemporal dynamics of several iEEG signals that have often been used to decode brain activity and drive BCIs.
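The two feature families this abstract distinguishes, a time-domain evoked potential and frequency-domain (here, broadband high-frequency) power, can be sketched on a simulated iEEG-like trace. The sampling rate, epoch structure, and response shape below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch

fs = 1000                        # sampling rate (Hz), assumed
rng = np.random.default_rng(3)

# 100 one-second epochs, each containing an evoked deflection ~100 ms
# after event onset, plus noise that is not time-locked to the event.
t = np.arange(fs) / fs
erp_shape = 5 * np.exp(-((t - 0.1) ** 2) / (2 * 0.02 ** 2))
epochs = erp_shape + rng.normal(0, 2, (100, fs))

# Time domain: averaging event-locked epochs cancels non-locked activity,
# revealing the evoked potential and its latency.
evoked = epochs.mean(axis=0)
peak_ms = 1000 * t[np.argmax(evoked)]

# Frequency domain: Welch power spectral density, averaged over a broadband
# high-frequency range (a common proxy for local population activity).
freqs, psd = welch(epochs.ravel(), fs=fs, nperseg=fs)
broadband = psd[(freqs >= 70) & (freqs <= 150)].mean()
print(peak_ms, broadband)
```

The same recorded signal yields both features; they differ only in whether the analysis preserves the event-locked time course or the distribution of power across frequencies.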
Collapse
|