1. Fang W, Jiang X, Chen J, Zhang C, Wang L. Oscillatory control over representational geometry of sequence working memory in macaque frontal cortex. Curr Biol 2025; 35:1495-1507.e5. PMID: 40086442. DOI: 10.1016/j.cub.2025.02.031.
Abstract
To process sequential streams of information, e.g., language, the brain must encode multiple items in sequence working memory (SWM) according to their ordinal relationship. While the geometry of neural states could represent sequential events in the frontal cortex, the control mechanism over these neural states remains unclear. Using high-throughput electrophysiology recordings in the macaque frontal cortex, we observed widespread theta responses after each stimulus entry. Crucially, by applying targeted dimensionality reduction to extract task-relevant neural subspaces from both local field potential (LFP) and spike data, we found that theta power transiently encoded each sequentially presented stimulus regardless of its order. At the same time, theta-spike interaction was rank-selectively associated with memory subspaces, thereby potentially supporting the binding of items to appropriate ranks. Furthermore, this putative theta control can generalize to length-variable and error sequences, predicting behavior. Thus, decomposed entry/rank-WM subspaces and theta-spike interactions may underlie the control of SWM.
Affiliation(s)
- Wen Fang: Institute of Neuroscience, Key Laboratory of Brain Cognition and Brain-Inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Xi Jiang: Institute of Neuroscience, Key Laboratory of Brain Cognition and Brain-Inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Jingwen Chen: Institute of Neuroscience, Key Laboratory of Brain Cognition and Brain-Inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Cong Zhang: Institute of Neuroscience, Key Laboratory of Brain Cognition and Brain-Inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Liping Wang: Institute of Neuroscience, Key Laboratory of Brain Cognition and Brain-Inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China; Shanghai Academy of Natural Sciences (SANS), Fudan University, Shanghai 200031, China
2. Kazanina N, Tavano A. Reply to 'What oscillations can do for syntax depends on your theory of structure building'. Nat Rev Neurosci 2023; 24:724. PMID: 37696997. DOI: 10.1038/s41583-023-00735-4.
Affiliation(s)
- Nina Kazanina: University of Bristol, Bristol, UK; Higher School of Economics, Moscow, Russia
- Alessandro Tavano: Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Goethe University Frankfurt, Frankfurt am Main, Germany
3. Kazanina N, Tavano A. What neural oscillations can and cannot do for syntactic structure building. Nat Rev Neurosci 2023; 24:113-128. PMID: 36460920. DOI: 10.1038/s41583-022-00659-5.
Abstract
Understanding what someone says requires relating words in a sentence to one another as instructed by the grammatical rules of a language. In recent years, the neurophysiological basis for this process has become a prominent topic of discussion in cognitive neuroscience. Current proposals about the neural mechanisms of syntactic structure building converge on a key role for neural oscillations in this process, but they differ in terms of the exact function that is assigned to them. In this Perspective, we discuss two proposed functions for neural oscillations, chunking and multiscale information integration, and evaluate their merits and limitations, taking into account the fundamentally hierarchical nature of syntactic representations in natural languages. We highlight insights that provide a tangible starting point for a neurocognitive model of syntactic structure building.
Affiliation(s)
- Nina Kazanina: University of Bristol, Bristol, UK; Higher School of Economics, Moscow, Russia
4. Kleyko D, Davies M, Frady EP, Kanerva P, Kent SJ, Olshausen BA, Osipov E, Rabaey JM, Rachkovskij DA, Rahimi A, Sommer FT. Vector Symbolic Architectures as a Computing Framework for Emerging Hardware. Proc IEEE 2022; 110:1538-1571. PMID: 37868615. PMCID: PMC10588678.
Abstract
This article reviews recent progress in the development of the computing framework Vector Symbolic Architectures (also known as Hyperdimensional Computing). This framework is well suited for implementation in stochastic, emerging hardware, and it naturally expresses the types of cognitive operations required for Artificial Intelligence (AI). We demonstrate in this article that the field-like algebraic structure of Vector Symbolic Architectures offers simple but powerful operations on high-dimensional vectors that can support all data structures and manipulations relevant to modern computing. In addition, we illustrate the distinguishing feature of Vector Symbolic Architectures, "computing in superposition," which sets it apart from conventional computing. It also opens the door to efficient solutions to the difficult combinatorial search problems inherent in AI applications. We sketch ways of demonstrating that Vector Symbolic Architectures are computationally universal. We see them acting as a framework for computing with distributed representations that can play the role of an abstraction layer for emerging computing hardware. This article serves as a reference for computer architects by illustrating the philosophy behind Vector Symbolic Architectures, techniques of distributed computing with them, and their relevance to emerging computing hardware, such as neuromorphic computing.
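The bind/bundle algebra this abstract refers to can be sketched in a few lines of NumPy. This is a minimal illustration with random bipolar hypervectors, not code from the article; the names `rand_vec`, `bind`, `bundle`, and `sim` are ad hoc, and real VSA variants differ in their choice of vector space and operators.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # high dimensionality makes independent random vectors near-orthogonal

def rand_vec():
    """Random bipolar hypervector in {-1, +1}^D."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding: elementwise multiplication (self-inverse for bipolar vectors)."""
    return a * b

def bundle(*vs):
    """Superposition: elementwise majority vote (sign of the sum)."""
    return np.sign(np.sum(vs, axis=0))

def sim(a, b):
    """Normalized dot-product similarity in [-1, 1]."""
    return float(a @ b) / D

# Encode the record {colour: red, shape: square} as a single hypervector.
colour, shape, red, square = (rand_vec() for _ in range(4))
record = bundle(bind(colour, red), bind(shape, square))

# Unbinding the 'colour' role from the record yields a noisy vector that is
# far more similar to the bound filler 'red' than to any unrelated vector.
probe = bind(record, colour)
assert sim(probe, red) > 0.3          # high similarity to the correct filler
assert abs(sim(probe, square)) < 0.1  # near-orthogonal to everything else
```

The final assertions show "computing in superposition" in miniature: both role-filler pairs coexist in one fixed-width vector, yet each filler remains individually recoverable by unbinding its role.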
Affiliation(s)
- Denis Kleyko: Redwood Center for Theoretical Neuroscience, University of California, Berkeley, CA 94720, USA; Intelligent Systems Lab, Research Institutes of Sweden, 16440 Kista, Sweden
- Mike Davies: Neuromorphic Computing Lab, Intel Labs, Santa Clara, CA 95054, USA
- E Paxon Frady: Neuromorphic Computing Lab, Intel Labs, Santa Clara, CA 95054, USA
- Pentti Kanerva: Redwood Center for Theoretical Neuroscience, University of California, Berkeley, CA 94720, USA
- Spencer J Kent: Redwood Center for Theoretical Neuroscience, University of California, Berkeley, CA 94720, USA
- Bruno A Olshausen: Redwood Center for Theoretical Neuroscience, University of California, Berkeley, CA 94720, USA
- Evgeny Osipov: Department of Computer Science, Electrical and Space Engineering, Luleå University of Technology, 97187 Luleå, Sweden
- Jan M Rabaey: Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA 94720, USA
- Dmitri A Rachkovskij: International Research and Training Center for Information Technologies and Systems, 03680 Kyiv, Ukraine; Department of Computer Science, Electrical and Space Engineering, Luleå University of Technology, 97187 Luleå, Sweden
- Abbas Rahimi: IBM Research - Zurich, 8803 Rüschlikon, Switzerland
- Friedrich T Sommer: Neuromorphic Computing Lab, Intel Labs, Santa Clara, CA 95054, USA; Redwood Center for Theoretical Neuroscience, University of California, Berkeley, CA 94720, USA
5. Martin AE, Baggio G. Modelling meaning composition from formalism to mechanism. Philos Trans R Soc Lond B Biol Sci 2020; 375:20190298. PMID: 31840588. PMCID: PMC6939358. DOI: 10.1098/rstb.2019.0298.
Abstract
Human thought and language have extraordinary expressive power because meaningful parts can be assembled into more complex semantic structures. This partly underlies our ability to compose meanings into endlessly novel configurations, and sets us apart from other species and current computing devices. Crucially, human behaviour, including language use and linguistic data, indicates that composing parts into complex structures does not threaten the existence of constituent parts as independent units in the system: parts and wholes exist simultaneously yet independently from one another in the mind and brain. This independence is evident in human behaviour, but it seems at odds with what is known about the brain's exquisite sensitivity to statistical patterns: everyday language use is productive and expressive precisely because it can go beyond statistical regularities. Formal theories in philosophy and linguistics explain this fact by assuming that language and thought are compositional: systems of representations that separate a variable (or role) from its values (fillers), such that the meaning of a complex expression is a function of the values assigned to the variables. The debate on whether and how compositional systems could be implemented in minds, brains and machines remains vigorous. However, it has not yet resulted in mechanistic models of semantic composition: how, then, are the constituents of thoughts and sentences put and held together? We review and discuss current efforts at understanding this problem, and we chart possible routes for future research. This article is part of the theme issue 'Towards mechanistic models of meaning composition'.
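The variable/value ("role"/"filler") separation this abstract describes can be made concrete with a deliberately simple sketch. The function `compose` and the role names `agent`/`patient` below are hypothetical illustrations, not formalism from the paper; they only show what it means for the meaning of a complex expression to be a function of the values assigned to its variables, with the constituents surviving composition intact.

```python
def compose(predicate, **roles):
    """Build a structured meaning from a predicate and role -> filler bindings."""
    return {"pred": predicate, **roles}

# "the dog chased the cat" vs "the cat chased the dog":
# identical parts, different role assignments.
chase_1 = compose("chase", agent="dog", patient="cat")
chase_2 = compose("chase", agent="cat", patient="dog")

# The same fillers yield different wholes when the role assignment differs...
assert chase_1 != chase_2
# ...and composition leaves each constituent intact and recoverable.
assert chase_1["agent"] == "dog"
assert chase_2["agent"] == "cat"
```

The open question the authors pose is precisely what this toy leaves out: a dict trivially keeps parts and wholes simultaneously available, whereas a mechanistic neural account must explain how a brain does so.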
Affiliation(s)
- Andrea E. Martin: Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Donders Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, The Netherlands
- Giosuè Baggio: Language Acquisition and Language Processing Lab, Department of Language and Literature, Norwegian University of Science and Technology, Trondheim, Norway