1
Nojiri E, Takase K. Understanding Sensory-Motor Disorders in Autism Spectrum Disorders by Extending Hebbian Theory: Formation of a Rigid-Autonomous Phase Sequence. Perspect Psychol Sci 2025; 20:276-289. [PMID: 37910043] [DOI: 10.1177/17456916231202674]
Abstract
Autism spectrum disorder is a neuropsychiatric disorder characterized by persistent deficits in social communication and social interaction and by restricted, repetitive patterns of behavior, interests, or activities. The symptoms invariably appear in early childhood and cause significant impairment in social, occupational, and other important areas of functioning. Various abnormalities in the genetic, neurological, and endocrine systems of patients with autism spectrum disorder have been reported as candidate etiologies; however, no clear factor leading to the onset of the disorder has been identified. Additionally, higher-order cognitive dysfunctions, such as a lack of theory of mind, sensorimotor disorders, and memory-related disorders (e.g., flashbacks), have been reported in recent years, but no theoretical framework has been proposed to explain these behavioral abnormalities. In this study, we extended Hebb's biopsychological theory to provide a framework that comprehensively explains the various behavioral abnormalities observed in autism spectrum disorder. Specifically, we propose that a wide range of symptoms may be caused by the formation of a rigid-autonomous phase sequence (RAPS) in the brain. Using the RAPS formation theory, we propose a biopsychological mechanism that could be a target for the treatment of autism spectrum disorder.
2
Somashekar BP, Bhalla US. Discriminating neural ensemble patterns through dendritic computations in randomly connected feedforward networks. eLife 2025; 13:RP100664. [PMID: 39854248] [PMCID: PMC11759408] [DOI: 10.7554/elife.100664]
Abstract
Co-active or temporally ordered neural ensembles are a signature of salient sensory, motor, and cognitive events. Local convergence of such patterned activity as synaptic clusters on dendrites could help single neurons harness the potential of dendritic nonlinearities to decode neural activity patterns. We combined theory and simulations to assess how likely it is that projections from neural ensembles converge onto synaptic clusters even in networks with random connectivity. Using rat hippocampal and cortical network statistics, we show that clustered convergence of axons from three to four different co-active ensembles is likely even in randomly connected networks, leading to representation of arbitrary input combinations in at least 10 target neurons out of a population of 100,000. In the presence of larger ensembles, spatiotemporally ordered convergence of three to five axons from temporally ordered ensembles is also likely. These active clusters result in higher neuronal activation in the presence of strong dendritic nonlinearities and low background activity. We mathematically and computationally demonstrate a tight interplay between network connectivity, spatiotemporal scales of subcellular electrical and chemical mechanisms, dendritic nonlinearities, and uncorrelated background activity. We suggest that dendritic clustered and sequence computation is pervasive, but its expression as somatic selectivity requires a confluence of physiology, background activity, and connectomics.
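The combinatorial core of this argument can be illustrated with a back-of-the-envelope calculation. The connection probability, ensemble size, and number of dendritic zones below are illustrative assumptions, not the paper's fitted hippocampal statistics:

```python
from math import comb

def p_at_least_k(n_src, p, k=1):
    """P(a target receives >= k synapses from an ensemble of n_src neurons,
    each connecting independently with probability p)."""
    p_less = sum(comb(n_src, j) * p**j * (1 - p)**(n_src - j) for j in range(k))
    return 1.0 - p_less

p_conn = 0.05        # pairwise connection probability (assumed)
p_zone = 0.05        # chance a given synapse lands in one dendritic zone (assumed)
ens_size = 100       # neurons per co-active ensemble (assumed)
n_ensembles = 4      # ensembles that must all converge on the same zone
n_targets = 100_000

p_eff = p_conn * p_zone                   # connection landing in the chosen zone
p_one = p_at_least_k(ens_size, p_eff)     # >= 1 axon from one ensemble
p_all = p_one ** n_ensembles              # all ensembles hit the same zone
expected = n_targets * p_all
print(f"P(one ensemble reaches the zone): {p_one:.3f}")
print(f"Expected targets with full convergence in that zone: {expected:.0f}")
```

Even with these toy numbers, hundreds of neurons in a 100,000-neuron population are expected to receive zone-restricted convergent input from all four ensembles purely by chance, which is the qualitative point of the analysis.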
Affiliation(s)
- Bhanu Priya Somashekar
- National Centre for Biological Sciences, Tata Institute of Fundamental Research, Bangalore, India
- Upinder Singh Bhalla
- National Centre for Biological Sciences, Tata Institute of Fundamental Research, Bangalore, India
3
Dabagia M, Papadimitriou CH, Vempala SS. Computation With Sequences of Assemblies in a Model of the Brain. Neural Comput 2024; 37:193-233. [PMID: 39383019] [DOI: 10.1162/neco_a_01720]
Abstract
Even as machine learning exceeds human-level performance on many applications, the generality, robustness, and rapidity of the brain's learning capabilities remain unmatched. How cognition arises from neural activity is the central open question in neuroscience, inextricable from the study of intelligence itself. A simple formal model of neural activity was proposed in Papadimitriou et al. (2020) and has subsequently been shown, through both mathematical proofs and simulations, to be capable of implementing certain simple cognitive operations via the creation and manipulation of assemblies of neurons. However, many intelligent behaviors rely on the ability to recognize, store, and manipulate temporal sequences of stimuli (planning, language, navigation, to name a few). Here we show that in the same model, sequential precedence can be captured naturally through synaptic weights and plasticity, and, as a result, a range of computations on sequences of assemblies can be carried out. In particular, repeated presentation of a sequence of stimuli leads to the memorization of the sequence through corresponding neural assemblies: upon future presentation of any stimulus in the sequence, the corresponding assembly and its subsequent ones will be activated, one after the other, until the end of the sequence. If the stimulus sequence is presented to two brain areas simultaneously, a scaffolded representation is created, resulting in more efficient memorization and recall, in agreement with cognitive experiments. Finally, we show that any finite state machine can be learned in a similar way, through the presentation of appropriate patterns of sequences. Through an extension of this mechanism, the model can be shown to be capable of universal computation. Taken together, these results provide a concrete hypothesis for the basis of the brain's remarkable abilities to compute and learn, with sequences playing a vital role.
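The memorize-then-replay behavior described here can be caricatured in a few lines. The assembly sizes, the Hebbian increment, and the winner-take-all readout below are illustrative assumptions; the actual model operates on spiking assemblies with inhibition and plasticity rules this sketch does not attempt to reproduce:

```python
import numpy as np

rng = np.random.default_rng(0)
n_assemblies, n_neurons, k = 5, 200, 20   # sizes are assumptions

# Fixed groups of neurons standing in for stimulus-evoked assemblies
assemblies = [rng.choice(n_neurons, size=k, replace=False)
              for _ in range(n_assemblies)]

W = np.zeros((n_neurons, n_neurons))      # W[post, pre]

# Repeated presentation: assemblies fire in order, and a Hebbian rule
# strengthens pre -> post synapses between temporally adjacent assemblies.
for _ in range(3):
    for pre, post in zip(assemblies[:-1], assemblies[1:]):
        W[np.ix_(post, pre)] += 1.0

def recall(start):
    """Activate one assembly, then repeatedly follow the strongest drive."""
    order = [start]
    current = assemblies[start]
    for _ in range(n_assemblies - 1):
        drive = W[:, current].sum(axis=1)           # input to every neuron
        scores = [drive[a].sum() for a in assemblies]
        scores[order[-1]] = -1.0                    # forbid self-reactivation
        order.append(int(np.argmax(scores)))
        current = assemblies[order[-1]]
    return order

print(recall(0))  # → [0, 1, 2, 3, 4]
```

Presenting any assembly mid-sequence likewise chains forward through the remaining assemblies, which is the recall property the abstract describes.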
Affiliation(s)
- Max Dabagia
- School of Computer Science, Georgia Tech, Atlanta, GA 30332, U.S.A.
4
Fenton AA. Remapping revisited: how the hippocampus represents different spaces. Nat Rev Neurosci 2024; 25:428-448. [PMID: 38714834] [DOI: 10.1038/s41583-024-00817-x]
Abstract
The representation of distinct spaces by hippocampal place cells has been linked to changes in their place fields (the locations in the environment where the place cells discharge strongly), a phenomenon that has been termed 'remapping'. Remapping has been assumed to be accompanied by the reorganization of subsecond cofiring relationships among the place cells, potentially maximizing hippocampal information coding capacity. However, several observations challenge this standard view. For example, place cells exhibit mixed selectivity, encode non-positional variables, can have multiple place fields and exhibit unreliable discharge in fixed environments. Furthermore, recent evidence suggests that, when measured at subsecond timescales, the moment-to-moment cofiring of a pair of cells in one environment is remarkably similar in another environment, despite remapping. Here, I propose that remapping is a misnomer for the changes in place fields across environments and suggest instead that internally organized manifold representations of hippocampal activity are actively registered to different environments to enable navigation, promote memory and organize knowledge.
Affiliation(s)
- André A Fenton
- Center for Neural Science, New York University, New York, NY, USA.
- Neuroscience Institute at the NYU Langone Medical Center, New York, NY, USA.
5
Chang WL, Hen R. Adult Neurogenesis, Context Encoding, and Pattern Separation: A Pathway for Treating Overgeneralization. Adv Neurobiol 2024; 38:163-193. [PMID: 39008016] [DOI: 10.1007/978-3-031-62983-9_10]
Abstract
In mammals, the subgranular zone of the dentate gyrus is one of two brain regions (along with the subventricular zone, which supplies the olfactory bulb) that continue to generate new neurons throughout adulthood, a phenomenon known as adult hippocampal neurogenesis (AHN) (Eriksson et al., Nat Med 4:1313-1317, 1998; García-Verdugo et al., J Neurobiol 36:234-248, 1998). The integration of these new neurons into the dentate gyrus (DG) has implications for memory encoding, with the unique firing and wiring properties of immature neurons affecting how the hippocampal network encodes and stores attributes of memory. In this chapter, we describe the process of AHN and the properties of adult-born cells as they integrate into the hippocampal circuit and mature. After discussing some methodological considerations, we review evidence for the role of AHN in two major memory-supporting processes performed by the DG. First, we discuss the encoding of contextual information for episodic memories and how this is facilitated by AHN. Second, we discuss pattern separation, a major function of the DG that reduces interference during the formation of new memories. Finally, we review clinical and translational considerations, suggesting that stimulation of AHN may help decrease overgeneralization, a common endophenotype of mood, anxiety, trauma-related, and age-related disorders.
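Pattern separation is often illustrated as expansion recoding followed by sparsification: similar inputs are projected into a much larger layer and only the most active units are kept, which reduces their overlap. A minimal sketch follows; the layer sizes, sparsity level, and random projection are assumptions for illustration, not a model of DG physiology:

```python
import numpy as np

rng = np.random.default_rng(1)

def sparsify(x, k):
    """Keep only the k most active units (winner-take-all thresholding)."""
    out = np.zeros_like(x)
    out[np.argsort(x)[-k:]] = 1.0
    return out

def overlap(a, b):
    """Fraction of active units shared between two binary patterns."""
    return (a * b).sum() / np.maximum(a.sum(), b.sum())

n_in, n_out, k_active = 100, 1000, 20   # 10x expansion, 2% sparsity (assumed)
W = rng.normal(size=(n_out, n_in))      # random divergent projection

# Two similar input patterns: 40 of 50 active units shared (80% overlap)
x1 = np.zeros(n_in); x1[:50] = 1
x2 = np.zeros(n_in); x2[10:60] = 1

y1 = sparsify(W @ x1, k_active)
y2 = sparsify(W @ x2, k_active)

print(f"input overlap:  {overlap(x1, x2):.2f}")
print(f"output overlap: {overlap(y1, y2):.2f}")  # reliably lower than the input
```

The output patterns overlap much less than the inputs, so downstream associative storage sees more distinct codes, which is the interference-reduction role attributed to the DG above.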
Affiliation(s)
- Wei-Li Chang
- Departments of Psychiatry and Neuroscience, Columbia University, New York, NY, USA
- Division of Systems Neuroscience, New York State Psychiatric Institute, New York, NY, USA
- Rene Hen
- Departments of Psychiatry and Neuroscience, Columbia University, New York, NY, USA
- Division of Systems Neuroscience, New York State Psychiatric Institute, New York, NY, USA
6
Mok RM, Love BC. A multilevel account of hippocampal function in spatial and concept learning: Bridging models of behavior and neural assemblies. Sci Adv 2023; 9:eade6903. [PMID: 37478189] [PMCID: PMC10361583] [DOI: 10.1126/sciadv.ade6903]
Abstract
A complete neuroscience requires multilevel theories that address phenomena ranging from higher-level cognitive behaviors to activities within a cell. We propose an extension to the level of mechanism approach where a computational model of cognition sits in between behavior and brain: It explains the higher-level behavior and can be decomposed into lower-level component mechanisms to provide a richer understanding of the system than any level alone. Toward this end, we decomposed a cognitive model into neuron-like units using a neural flocking approach that parallels recurrent hippocampal activity. Neural flocking coordinates units that collectively form higher-level mental constructs. The decomposed model suggested how brain-scale neural populations coordinate to form assemblies encoding concept and spatial representations and why so many neurons are needed for robust performance at the cognitive level. This multilevel explanation provides a way to understand how cognition and symbol-like representations are supported by coordinated neural populations (assemblies) formed through learning.
Affiliation(s)
- Robert M. Mok
- MRC Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge CB2 7EF, UK
- Bradley C. Love
- UCL Department of Experimental Psychology, 26 Bedford Way, London WC1H 0AP, UK
- The Alan Turing Institute, London, UK
7
Srinivasan A, Srinivasan A, Goodman MR, Riceberg JS, Guise KG, Shapiro ML. Hippocampal and Medial Prefrontal Cortex Fractal Spiking Patterns Encode Episodes and Rules. Chaos Solitons Fractals 2023; 171:113508. [PMID: 37251275] [PMCID: PMC10217776] [DOI: 10.1016/j.chaos.2023.113508]
Abstract
A central question in neuroscience is how the brain represents and processes information to guide behavior. The principles that organize brain computations are not fully known, and could include scale-free, or fractal patterns of neuronal activity. Scale-free brain activity may be a natural consequence of the relatively small subsets of neuronal populations that respond to task features, i.e., sparse coding. The size of the active subsets constrains the possible sequences of inter-spike intervals (ISI), and selecting from this limited set may produce firing patterns across wide-ranging timescales that form fractal spiking patterns. To investigate the extent to which fractal spiking patterns corresponded with task features, we analyzed ISIs in simultaneously recorded populations of CA1 and medial prefrontal cortical (mPFC) neurons in rats performing a spatial memory task that required both structures. CA1 and mPFC ISI sequences formed fractal patterns that predicted memory performance. CA1 pattern duration, but not length or content, varied with learning speed and memory performance whereas mPFC patterns did not. The most common CA1 and mPFC patterns corresponded with each region's cognitive function: CA1 patterns encoded behavioral episodes which linked the start, choice, and goal of paths through the maze whereas mPFC patterns encoded behavioral "rules" which guided goal selection. mPFC patterns predicted changing CA1 spike patterns only as animals learned new rules. Together, the results suggest that CA1 and mPFC population activity may predict choice outcomes by using fractal ISI patterns to compute task features.
Affiliation(s)
- Aditya Srinivasan
- Department of Neuroscience and Experimental Therapeutics, Albany Medical College, 47 New Scotland Ave, Mail Code 126, Albany, NY 12208
- Arvind Srinivasan
- College of Health Sciences, California Northstate University, 2910 Prospect Park Drive, Rancho Cordova, CA 95670
- Michael R. Goodman
- Department of Neuroscience and Experimental Therapeutics, Albany Medical College, 47 New Scotland Ave, Mail Code 126, Albany, NY 12208
- Justin S. Riceberg
- Department of Neuroscience and Experimental Therapeutics, Albany Medical College, 47 New Scotland Ave, Mail Code 126, Albany, NY 12208
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai, Hess Center for Science and Medicine, 1470 Madison Avenue, New York, NY 10029
- Kevin G. Guise
- Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, Hess Center for Science and Medicine, 1470 Madison Avenue, New York, NY 10029
- Matthew L. Shapiro
- Department of Neuroscience and Experimental Therapeutics, Albany Medical College, 47 New Scotland Ave, Mail Code 126, Albany, NY 12208
8
Libedinsky C. Comparing representations and computations in single neurons versus neural networks. Trends Cogn Sci 2023; 27:517-527. [PMID: 37005114] [DOI: 10.1016/j.tics.2023.03.002]
Abstract
Single-neuron-level explanations have been the gold standard in neuroscience for decades. Recently, however, neural-network-level explanations have become increasingly popular. This increase in popularity is driven by the fact that the analysis of neural networks can solve problems that cannot be addressed by analyzing neurons independently. In this opinion article, I argue that while both frameworks employ the same general logic to link physical and mental phenomena, in many cases the neural network framework provides better explanatory objects to understand representations and computations related to mental phenomena. I discuss what constitutes a mechanistic explanation in neural systems, provide examples, and conclude by highlighting a number of the challenges and considerations associated with the use of analyses of neural networks to study brain function.
9
van der Plas TL, Tubiana J, Le Goc G, Migault G, Kunst M, Baier H, Bormuth V, Englitz B, Debrégeas G. Neural assemblies uncovered by generative modeling explain whole-brain activity statistics and reflect structural connectivity. eLife 2023; 12:83139. [PMID: 36648065] [PMCID: PMC9940913] [DOI: 10.7554/elife.83139]
Abstract
Patterns of endogenous activity in the brain reflect a stochastic exploration of the neuronal state space that is constrained by the underlying assembly organization of neurons. Yet, it remains to be shown that this interplay between neurons and their assembly dynamics indeed suffices to generate whole-brain data statistics. Here, we recorded the activity from ∼40,000 neurons simultaneously in zebrafish larvae, and show that a data-driven generative model of neuron-assembly interactions can accurately reproduce the mean activity and pairwise correlation statistics of their spontaneous activity. This model, the compositional Restricted Boltzmann Machine (cRBM), unveils ∼200 neural assemblies, which compose neurophysiological circuits and whose various combinations form successive brain states. We then performed in silico perturbation experiments to determine the interregional functional connectivity, which is conserved across individual animals and correlates well with structural connectivity. Our results showcase how cRBMs can capture the coarse-grained organization of the zebrafish brain. Notably, this generative model can readily be deployed to parse neural data obtained by other large-scale recording techniques.
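The cRBM adds a compositional regularization scheme on top of a standard restricted Boltzmann machine, but the underlying machinery, hidden units learned by contrastive divergence whose strong weights pick out groups of co-active neurons, can be sketched on toy data. All sizes, noise rates, and hyperparameters below are assumptions, and this plain Bernoulli RBM omits the compositional constraints of the paper's model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "recording": 30 neurons, 3 planted assemblies of 10; each sample
# activates one assembly, then 5% of bits are flipped as noise.
n_vis, n_hid, n_samples = 30, 3, 500
data = np.zeros((n_samples, n_vis))
for s in range(n_samples):
    data[s, 10 * (s % 3): 10 * (s % 3) + 10] = 1.0
data = np.abs(data - (rng.random(data.shape) < 0.05))

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
W = 0.01 * rng.normal(size=(n_vis, n_hid))
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)

for epoch in range(200):                  # contrastive divergence, one Gibbs step
    ph0 = sigmoid(data @ W + b_h)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    v1 = sigmoid(h0 @ W.T + b_v)          # mean-field reconstruction
    ph1 = sigmoid(v1 @ W + b_h)
    lr = 0.05 / n_samples
    W += lr * (data.T @ ph0 - v1.T @ ph1)
    b_v += lr * (data - v1).sum(axis=0)
    b_h += lr * (ph0 - ph1).sum(axis=0)

v_rec = sigmoid(sigmoid(data @ W + b_h) @ W.T + b_v)
mse = float(np.mean((data - v_rec) ** 2))
print(f"mean-field reconstruction MSE: {mse:.3f}")
for h in range(n_hid):
    top = sorted(np.argsort(W[:, h])[-10:].tolist())
    print(f"hidden unit {h}: strongest weights onto neurons {top}")
```

After training, each hidden unit tends to weight one planted assembly most strongly, which is the sense in which hidden units of the trained model act as assembly detectors.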
Affiliation(s)
- Thijs L van der Plas
- Computational Neuroscience Lab, Department of Neurophysiology, Donders Center for Neuroscience, Radboud University, Nijmegen, Netherlands
- Sorbonne Université, CNRS, Institut de Biologie Paris-Seine (IBPS), Laboratoire Jean Perrin (LJP), Paris, France
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Jérôme Tubiana
- Blavatnik School of Computer Science, Tel Aviv University, Tel Aviv, Israel
- Guillaume Le Goc
- Sorbonne Université, CNRS, Institut de Biologie Paris-Seine (IBPS), Laboratoire Jean Perrin (LJP), Paris, France
- Geoffrey Migault
- Sorbonne Université, CNRS, Institut de Biologie Paris-Seine (IBPS), Laboratoire Jean Perrin (LJP), Paris, France
- Michael Kunst
- Department Genes – Circuits – Behavior, Max Planck Institute for Biological Intelligence, Martinsried, Germany
- Allen Institute for Brain Science, Seattle, United States
- Herwig Baier
- Department Genes – Circuits – Behavior, Max Planck Institute for Biological Intelligence, Martinsried, Germany
- Volker Bormuth
- Sorbonne Université, CNRS, Institut de Biologie Paris-Seine (IBPS), Laboratoire Jean Perrin (LJP), Paris, France
- Bernhard Englitz
- Computational Neuroscience Lab, Department of Neurophysiology, Donders Center for Neuroscience, Radboud University, Nijmegen, Netherlands
- Georges Debrégeas
- Sorbonne Université, CNRS, Institut de Biologie Paris-Seine (IBPS), Laboratoire Jean Perrin (LJP), Paris, France
10
Monaco JD, Hwang GM. Neurodynamical Computing at the Information Boundaries of Intelligent Systems. Cognit Comput 2022; 16:1-13. [PMID: 39129840] [PMCID: PMC11306504] [DOI: 10.1007/s12559-022-10081-9]
Abstract
Artificial intelligence has not achieved defining features of biological intelligence despite models boasting more parameters than neurons in the human brain. In this perspective article, we synthesize historical approaches to understanding intelligent systems and argue that methodological and epistemic biases in these fields can be resolved by shifting away from cognitivist brain-as-computer theories and recognizing that brains exist within large, interdependent living systems. Integrating the dynamical systems view of cognition with the massive distributed feedback of perceptual control theory highlights a theoretical gap in our understanding of nonreductive neural mechanisms. Cell assemblies, properly conceived as reentrant dynamical flows and not merely as identified groups of neurons, may fill that gap by providing a minimal supraneuronal level of organization that establishes a neurodynamical base layer for computation. By considering information streams from physical embodiment and situational embedding, we discuss this computational base layer in terms of conserved oscillatory and structural properties of cortical-hippocampal networks. Our synthesis of embodied cognition, based in dynamical systems and perceptual control, aims to bypass the neurosymbolic stalemates that have arisen in artificial intelligence, cognitive science, and computational neuroscience.
Affiliation(s)
- Joseph D. Monaco
- Dept of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Grace M. Hwang
- Johns Hopkins University Applied Physics Laboratory, Laurel, MD, USA
11
Kaufman MT, Benna MK, Rigotti M, Stefanini F, Fusi S, Churchland AK. The implications of categorical and category-free mixed selectivity on representational geometries. Curr Opin Neurobiol 2022; 77:102644. [PMID: 36332415] [DOI: 10.1016/j.conb.2022.102644]
Abstract
The firing rates of individual neurons displaying mixed selectivity are modulated by multiple task variables. When mixed selectivity is nonlinear, it confers an advantage by generating a high-dimensional neural representation that can be flexibly decoded by linear classifiers. Although the advantages of this coding scheme are well accepted, the means of designing an experiment and analyzing the data to test for and characterize mixed selectivity remain unclear. With the growing number of large datasets collected during complex tasks, mixed selectivity is increasingly observed and is challenging to interpret correctly. We review recent approaches for analyzing and interpreting neural datasets and clarify the theoretical implications of mixed selectivity in the variety of forms reported in the literature. We also aim to provide a practical guide for determining whether a neural population has linear or nonlinear mixed selectivity and whether this mixing leads to a categorical or category-free representation.
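The decoding claim can be checked directly: with purely linear mixing, a linear readout cannot recover the XOR of two task variables from the population, while adding a multiplicative interaction term makes it linearly separable. The neuron count and the perceptron readout below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

conds = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
xor = np.array([0, 1, 1, 0])

def population(mixing, n=50):
    """Firing rates of n neurons for each of the four task conditions."""
    w = rng.normal(size=(2, n))
    rates = conds @ w                      # linear mixed selectivity
    if mixing == "nonlinear":
        w_int = rng.normal(size=n)         # multiplicative interaction term
        rates = rates + np.outer(conds[:, 0] * conds[:, 1], w_int)
    return rates

def xor_linearly_separable(rates, epochs=20000):
    """Perceptron search for a linear readout of XOR from the population."""
    y = 2 * xor - 1
    w, b = np.zeros(rates.shape[1]), 0.0
    for _ in range(epochs):
        mistakes = 0
        for i in range(4):
            if y[i] * (rates[i] @ w + b) <= 0:
                w, b = w + y[i] * rates[i], b + y[i]
                mistakes += 1
        if mistakes == 0:
            return True
    return False

lin = xor_linearly_separable(population("linear"))
nonlin = xor_linearly_separable(population("nonlinear"))
print(f"XOR decodable with linear mixing:    {lin}")
print(f"XOR decodable with nonlinear mixing: {nonlin}")
```

The linear case fails for a structural reason: if rates are linear in the task variables, any linear readout is affine in those variables and so cannot express XOR, which is exactly the dimensionality advantage of nonlinear mixing described above.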
Affiliation(s)
- Matthew T Kaufman
- Department of Organismal Biology and Anatomy, Neuroscience Institute, University of Chicago, Chicago, IL, USA
- Marcus K Benna
- Department of Neurobiology, School of Biological Sciences, University of California, San Diego, CA, USA
- Fabio Stefanini
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY, USA
- Stefano Fusi
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY, USA
- Anne K Churchland
- David Geffen School of Medicine, University of California, Los Angeles, CA, USA
12
Lehr AB, Luboeinski J, Tetzlaff C. Neuromodulator-dependent synaptic tagging and capture retroactively controls neural coding in spiking neural networks. Sci Rep 2022; 12:17772. [PMID: 36273097] [PMCID: PMC9588040] [DOI: 10.1038/s41598-022-22430-7]
Abstract
Events that are important to an individual's life trigger neuromodulator release in brain areas responsible for cognitive and behavioral function. While it is well known that the presence of neuromodulators such as dopamine and norepinephrine is required for memory consolidation, the impact of neuromodulator concentration is, however, less understood. In a recurrent spiking neural network model featuring neuromodulator-dependent synaptic tagging and capture, we study how synaptic memory consolidation depends on the amount of neuromodulator present in the minutes to hours after learning. We find that the storage of rate-based and spike timing-based information is controlled by the level of neuromodulation. Specifically, we find better recall of temporal information for high levels of neuromodulation, while we find better recall of rate-coded spatial patterns for lower neuromodulation, mediated by the selection of different groups of synapses for consolidation. Hence, our results indicate that in minutes to hours after learning, the level of neuromodulation may alter the process of synaptic consolidation to ultimately control which type of information becomes consolidated in the recurrent neural network.
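The gating idea behind synaptic tagging and capture can be caricatured as tagged synapses capturing plasticity-related products with a probability set by the neuromodulator level. The tag threshold and linear capture gain below are assumptions; the paper's model uses calcium-based early-phase plasticity and protein-dependent late-phase consolidation in a spiking recurrent network, which this sketch does not reproduce:

```python
import numpy as np

rng = np.random.default_rng(4)

n_syn = 100
early = rng.uniform(0, 1, n_syn)   # early-phase potentiation per synapse
tagged = early > 0.5               # strong early changes set a synaptic tag

def consolidated_fraction(neuromod_level):
    """Tagged synapses capture plasticity-related products with a
    probability that scales with the neuromodulator level (assumption)."""
    capture_p = min(max(neuromod_level, 0.0), 1.0)
    captured = tagged & (rng.uniform(0, 1, n_syn) < capture_p)
    return float(captured.mean())

low = consolidated_fraction(0.1)
high = consolidated_fraction(1.0)
print(f"low neuromodulation:  {low:.2f} of synapses consolidated")
print(f"high neuromodulation: {high:.2f} of synapses consolidated")
```

In the paper's richer model, the level does not merely scale the amount of consolidation but selects different synapse groups, favoring timing-coded versus rate-coded information; this sketch only illustrates the gating step.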
Affiliation(s)
- Andrew B. Lehr
- Department of Computational Neuroscience, University of Göttingen, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, University of Göttingen, Göttingen, Germany
- Department of Computational Synaptic Physiology, University of Göttingen, Göttingen, Germany
- Jannik Luboeinski
- Department of Computational Neuroscience, University of Göttingen, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, University of Göttingen, Göttingen, Germany
- Department of Computational Synaptic Physiology, University of Göttingen, Göttingen, Germany
- Christian Tetzlaff
- Department of Computational Neuroscience, University of Göttingen, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, University of Göttingen, Göttingen, Germany
- Department of Computational Synaptic Physiology, University of Göttingen, Göttingen, Germany
13
Miehl C, Onasch S, Festa D, Gjorgjieva J. Formation and computational implications of assemblies in neural circuits. J Physiol 2022. [PMID: 36068723] [DOI: 10.1113/jp282750]
Abstract
In the brain, patterns of neural activity represent sensory information and store it in non-random synaptic connectivity. A prominent theoretical hypothesis states that assemblies, groups of neurons that are strongly connected to each other, are the key computational units underlying perception and memory formation. Compatible with these hypothesised assemblies, experiments have revealed groups of neurons that display synchronous activity, either spontaneously or upon stimulus presentation, and exhibit behavioural relevance. While it remains unclear how assemblies form in the brain, theoretical work has vastly contributed to the understanding of the various interacting mechanisms in this process. Here, we review the recent theoretical literature on assembly formation by categorising the involved mechanisms into four components: synaptic plasticity, symmetry breaking, competition and stability. We highlight different approaches and assumptions behind assembly formation and discuss recent ideas of assemblies as the key computational unit in the brain.
Abstract figure legend: Assemblies are groups of strongly connected neurons formed by the interaction of multiple mechanisms and with vast computational implications. Four interacting components are thought to drive assembly formation: synaptic plasticity, symmetry breaking, competition and stability.
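Two of the four components, synaptic plasticity (Hebbian potentiation) and competition (weight normalization), already suffice for a toy demonstration of assemblies emerging from correlated activity. The group structure, firing rates, and learning parameters below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 40
W = 1.0 + 0.1 * rng.random((n, n))     # near-uniform initial weights
np.fill_diagonal(W, 0.0)

# Two groups of neurons receive correlated input (an assumption of the toy)
groups = [np.arange(0, 20), np.arange(20, 40)]

for step in range(500):
    g = groups[step % 2]
    x = np.zeros(n)
    x[g] = (rng.random(g.size) < 0.8).astype(float)  # one group fires together
    W += 0.5 * np.outer(x, x)                        # Hebbian potentiation
    np.fill_diagonal(W, 0.0)
    W /= W.sum(axis=1, keepdims=True)                # competition: normalize
    W *= n                                           # each row's total weight

within = float(np.mean([W[np.ix_(g, g)].mean() for g in groups]))
between = float(W[np.ix_(groups[0], groups[1])].mean())
print(f"mean within-assembly weight:  {within:.3f}")
print(f"mean between-assembly weight: {between:.3f}")
```

Potentiation strengthens only co-active pairs while normalization drains the never-co-active cross-group weights, so the initially uniform matrix breaks into two strongly connected blocks, a minimal instance of the symmetry breaking the review describes.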
Affiliation(s)
- Christoph Miehl
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438 Frankfurt, Germany
- School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
- Sebastian Onasch
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438 Frankfurt, Germany
- School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
- Dylan Festa
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438 Frankfurt, Germany
- School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
- Julijana Gjorgjieva
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438 Frankfurt, Germany
- School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
14
Mill RD, Hamilton JL, Winfield EC, Lalta N, Chen RH, Cole MW. Network modeling of dynamic brain interactions predicts emergence of neural information that supports human cognitive behavior. PLoS Biol 2022; 20:e3001686. [PMID: 35980898] [PMCID: PMC9387855] [DOI: 10.1371/journal.pbio.3001686]
Abstract
How cognitive task behavior is generated by brain network interactions is a central question in neuroscience. Answering this question calls for the development of novel analysis tools that can firstly capture neural signatures of task information with high spatial and temporal precision (the "where and when") and then allow for empirical testing of alternative network models of brain function that link information to behavior (the "how"). We outline a novel network modeling approach suited to this purpose that is applied to noninvasive functional neuroimaging data in humans. We first dynamically decoded the spatiotemporal signatures of task information in the human brain by combining MRI-individualized source electroencephalography (EEG) with multivariate pattern analysis (MVPA). A newly developed network modeling approach, dynamic activity flow modeling, then simulated the flow of task-evoked activity over more causally interpretable (relative to standard functional connectivity [FC] approaches) resting-state functional connections (dynamic, lagged, direct, and directional). We demonstrate the utility of this modeling approach by applying it to elucidate network processes underlying sensory-motor information flow in the brain, revealing accurate predictions of empirical response information dynamics underlying behavior. Extending the model toward simulating network lesions suggested a role for the cognitive control networks (CCNs) as primary drivers of response information flow, transitioning from early dorsal attention network-dominated sensory-to-response transformation to later collaborative CCN engagement during response selection. These results demonstrate the utility of the dynamic activity flow modeling approach in identifying the generative network processes underlying neurocognitive phenomena.
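The core prediction step of activity flow modeling, predicting one region's activity as a lagged, connectivity-weighted sum of the other regions' activity, can be sketched as follows. The toy connectivity and noise level are assumptions, and for simplicity the sketch predicts with the same weights used to generate the data, whereas the paper estimates connections from resting-state data and tests predictions on held-out task data:

```python
import numpy as np

rng = np.random.default_rng(6)

n_regions, T = 5, 200
fc = rng.normal(size=n_regions)   # directed "resting-state" weights into region 0
fc[0] = 0.0                       # no self-connection

act = rng.normal(size=(n_regions, T))
for t in range(1, T):
    # Ground truth: region 0 is driven by the lagged activity of the others
    act[0, t] = fc @ act[:, t - 1] + 0.1 * rng.normal()

# Activity flow prediction: region 0 at time t from the others at t - 1
pred = np.array([fc @ act[:, t - 1] for t in range(1, T)])
r = float(np.corrcoef(pred, act[0, 1:])[0, 1])
print(f"activity-flow prediction accuracy (Pearson r): {r:.3f}")
```

Comparing predicted and observed timecourses with a correlation, as here, is also how prediction accuracy is typically summarized; the lesioning analyses in the paper amount to zeroing subsets of the incoming weights and measuring the drop in this accuracy.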
Collapse
Affiliation(s)
- Ravi D. Mill
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, New Jersey, United States of America
| | - Julia L. Hamilton
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, New Jersey, United States of America
| | - Emily C. Winfield
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, New Jersey, United States of America
| | - Nicole Lalta
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, New Jersey, United States of America
| | - Richard H. Chen
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, New Jersey, United States of America
- Behavioral and Neural Sciences Graduate Program, Rutgers University, Newark, New Jersey, United States of America
| | - Michael W. Cole
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, New Jersey, United States of America
| |
Collapse
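The activity flow idea summarized above can be illustrated with a toy sketch. This is not the authors' pipeline; the region count, weights, noise level, and the least-squares connectivity estimate are all illustrative assumptions. A held-out target region's activity is predicted by flowing the other regions' lagged activity through estimated connectivity weights.

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_timepoints = 8, 200

# Toy setup: one target region is driven, with a one-step lag, by the
# other regions' activity through fixed "resting-state" weights.
true_weights = rng.normal(0.0, 0.5, n_regions - 1)
sources = rng.normal(0.0, 1.0, (n_timepoints, n_regions - 1))
target = sources[:-1] @ true_weights + 0.1 * rng.normal(size=n_timepoints - 1)

# Activity flow step: estimate lagged connectivity by least squares,
# then flow source activity through it to predict the target region.
est_weights, *_ = np.linalg.lstsq(sources[:-1], target, rcond=None)
predicted = sources[:-1] @ est_weights

r = np.corrcoef(predicted, target)[0, 1]
print(f"in-sample prediction of the held-out region, r = {r:.3f}")
```

Because the toy target really is a lagged linear mixture of the sources, the flowed prediction correlates strongly with it; the paper's contribution is doing this with empirically estimated, causally constrained connectivity.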
|
15
|
Herry C, Jercog D. Decoding defensive systems. Curr Opin Neurobiol 2022; 76:102600. [PMID: 35809501 DOI: 10.1016/j.conb.2022.102600] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2022] [Revised: 05/21/2022] [Accepted: 05/30/2022] [Indexed: 11/26/2022]
Abstract
Our understanding of the neuronal circuits and mechanisms of defensive systems has been primarily dominated by studies focusing on the contribution of individual cells to the processing of threat-predictive cues, defensive responses, the extinction of such responses, and the contextual modulation of threat-related behavior. These studies have been key in establishing threat-related circuits and mechanisms. Yet, they fall short of answering long-standing questions related to the integrative processing of distinct threatening cues, behavioral states induced by threat-related events, or the bridging from sensory processing of threat-related cues to specific defensive responses. Recent conceptual and technical developments have allowed the monitoring of large populations of neurons, which, in addition to advanced analytic tools, has improved our understanding of how collective neuronal activity supports threat-related behaviors. In this review, we discuss the current knowledge of neuronal population codes within threat-related networks, in the context of aversively motivated behavior and the study of defensive systems.
Collapse
Affiliation(s)
- Cyril Herry
- INSERM, Neurocentre Magendie, U1215, 146 Rue Léo-Saignat, 33077 Bordeaux, France; Univ. Bordeaux, Neurocentre Magendie, U1215, 146 Rue Léo-Saignat, 33077 Bordeaux, France.
| | - Daniel Jercog
- INSERM, Neurocentre Magendie, U1215, 146 Rue Léo-Saignat, 33077 Bordeaux, France; Univ. Bordeaux, Neurocentre Magendie, U1215, 146 Rue Léo-Saignat, 33077 Bordeaux, France.
| |
Collapse
|
16
|
Organization and Priming of Long-term Memory Representations with Two-phase Plasticity. Cognit Comput 2022. [DOI: 10.1007/s12559-022-10021-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
Background / Introduction
In recurrent neural networks in the brain, memories are represented by so-called Hebbian cell assemblies. Such assemblies are groups of neurons with particularly strong synaptic connections formed by synaptic plasticity and consolidated by synaptic tagging and capture (STC). To link these synaptic mechanisms to long-term memory on the level of cognition and behavior, their functional implications on the level of neural networks have to be understood.
Methods
We employ a biologically detailed recurrent network of spiking neurons featuring synaptic plasticity and STC to model the learning and consolidation of long-term memory representations. Using this, we investigate the effects of different organizational paradigms, and of priming stimulation, on the functionality of multiple memory representations. We quantify these effects by the spontaneous activation of memory representations driven by background noise.
Results
We find that the learning order of the memory representations significantly biases the likelihood of activation towards more recently learned representations, and that hub-like overlap structure counters this effect. We identify long-term depression as the mechanism underlying these findings. Finally, we demonstrate that STC has functional consequences for the interaction of long-term memory representations: (1) intermediate consolidation in between learning the individual representations strongly alters the previously described effects, and (2) STC enables the priming of a long-term memory representation on a timescale of minutes to hours.
Conclusion
Our findings show how synaptic and neuronal mechanisms can provide an explanatory basis for known cognitive effects.
Collapse
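The Hebbian cell-assembly mechanism this entry builds on can be sketched in a few lines. This is a deliberately minimal rate-based toy, not the paper's spiking model with STC: co-activation strengthens within-assembly weights until a partial cue recalls the full pattern.

```python
import numpy as np

n = 50
assembly = np.arange(10)              # indices of the stored cell assembly
W = np.zeros((n, n))

# Hebbian learning: repeated co-activation strengthens the synapses
# between simultaneously active neurons ("fire together, wire together").
pattern = np.zeros(n)
pattern[assembly] = 1.0
for _ in range(20):
    W += 0.05 * np.outer(pattern, pattern)
np.fill_diagonal(W, 0.0)              # no self-connections

# Pattern completion: cueing half the assembly reactivates all of it.
cue = np.zeros(n)
cue[assembly[:5]] = 1.0
recalled = np.flatnonzero(W @ cue > 0.5)
print("recalled neurons:", recalled)
```

The strengthened within-assembly weights let five cued neurons drive the remaining five above threshold, which is the network-level function that the paper's plasticity and consolidation mechanisms regulate.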
|
17
|
Higgins I, Racanière S, Rezende D. Symmetry-Based Representations for Artificial and Biological General Intelligence. Front Comput Neurosci 2022; 16:836498. [PMID: 35493854 PMCID: PMC9049963 DOI: 10.3389/fncom.2022.836498] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2021] [Accepted: 03/08/2022] [Indexed: 11/13/2022] Open
Abstract
Biological intelligence is remarkable in its ability to produce complex behavior in many diverse situations through data-efficient, generalizable, and transferable skill acquisition. It is believed that learning "good" sensory representations is important for enabling this; however, there is little agreement as to what a good representation should look like. In this review article we argue that symmetry transformations are a fundamental principle that can guide our search for what makes a good representation. The idea that there exist transformations (symmetries) that affect some aspects of the system but not others, and their relationship to conserved quantities, has become central in modern physics, resulting in a more unified theoretical framework and even the ability to predict the existence of new particles. Recently, symmetries have started to gain prominence in machine learning too, resulting in more data-efficient and generalizable algorithms that can mimic some of the complex behaviors produced by biological intelligence. Finally, first demonstrations of the importance of symmetry transformations for representation learning in the brain are starting to arise in neuroscience. Taken together, the overwhelmingly positive effect that symmetries bring to these disciplines suggests that they may be an important general framework that determines the structure of the universe, constrains the nature of natural tasks, and consequently shapes both biological and artificial intelligence.
Collapse
|
18
|
Benna MK, Fusi S. Place cells may simply be memory cells: Memory compression leads to spatial tuning and history dependence. Proc Natl Acad Sci U S A 2021; 118:e2018422118. [PMID: 34916282 PMCID: PMC8713479 DOI: 10.1073/pnas.2018422118] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/02/2021] [Indexed: 11/18/2022] Open
Abstract
The observation of place cells has suggested that the hippocampus plays a special role in encoding spatial information. However, place cell responses are modulated by several nonspatial variables and reported to be rather unstable. Here, we propose a memory model of the hippocampus that provides an interpretation of place cells consistent with these observations. We hypothesize that the hippocampus is a memory device that takes advantage of the correlations between sensory experiences to generate compressed representations of the episodes that are stored in memory. A simple neural network model that can efficiently compress information naturally produces place cells that are similar to those observed in experiments. It predicts that the activity of these cells is variable and that the fluctuations of the place fields encode information about the recent history of sensory experiences. Place cells may simply be a consequence of a memory compression process implemented in the hippocampus.
Collapse
Affiliation(s)
- Marcus K Benna
- Center for Theoretical Neuroscience, Columbia University, New York, NY 10027;
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027
- Neurobiology Section, Division of Biological Sciences, University of California San Diego, La Jolla, CA 92093
| | - Stefano Fusi
- Center for Theoretical Neuroscience, Columbia University, New York, NY 10027;
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027
- Kavli Institute for Brain Sciences, Columbia University, New York, NY 10027
| |
Collapse
|
19
|
Liu R, Azabou M, Dabagia M, Lin CH, Azar MG, Hengen KB, Valko M, Dyer EL. Drop, Swap, and Generate: A Self-Supervised Approach for Generating Neural Activity. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 2021; 34:10587-10599. [PMID: 36467015 PMCID: PMC9713686] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Subscribe] [Scholar Register] [Indexed: 06/17/2023]
Abstract
Meaningful and simplified representations of neural activity can yield insights into how and what information is being processed within a neural circuit. However, without labels, finding representations that reveal the link between the brain and behavior can be challenging. Here, we introduce a novel unsupervised approach for learning disentangled representations of neural activity called Swap-VAE. Our approach combines a generative modeling framework with an instance-specific alignment loss that tries to maximize the representational similarity between transformed views of the input (brain state). These transformed (or augmented) views are created by dropping out neurons and jittering samples in time, which intuitively should lead the network to a representation that maintains both temporal consistency and invariance to the specific neurons used to represent the neural state. Through evaluations on both synthetic data and neural recordings from hundreds of neurons in different primate brains, we show that it is possible to build representations that disentangle neural datasets along relevant latent dimensions linked to behavior.
Collapse
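The two augmentations that define Swap-VAE's transformed views, neuron dropout and temporal jitter, are straightforward to sketch. This is a simplified stand-in, not the released implementation; the function and parameter names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def augment(spikes, drop_prob=0.2, max_jitter=2):
    """Return an augmented view of a (time, neurons) spike-count matrix
    by dropping out random neurons and jittering samples in time."""
    t, n = spikes.shape
    view = spikes.copy()
    view *= rng.random(n) >= drop_prob                # neuron dropout
    offsets = rng.integers(-max_jitter, max_jitter + 1, t)
    return view[np.clip(np.arange(t) + offsets, 0, t - 1)]  # temporal jitter

spikes = rng.poisson(2.0, (100, 30)).astype(float)
view_a, view_b = augment(spikes), augment(spikes)     # two views of one input
print(view_a.shape)
```

A contrastive or alignment loss that pulls the representations of `view_a` and `view_b` together then encourages invariance to which neurons were recorded and to small timing differences, which is the intuition described in the abstract.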
|
20
|
Papadimitriou CH, Friederici AD. Bridging the Gap Between Neurons and Cognition Through Assemblies of Neurons. Neural Comput 2021; 34:291-306. [PMID: 34915560 DOI: 10.1162/neco_a_01463] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2021] [Accepted: 09/02/2021] [Indexed: 11/04/2022]
Abstract
During recent decades, our understanding of the brain has advanced dramatically at both the cellular and molecular levels and at the cognitive neurofunctional level; however, a huge gap remains between the microlevel of physiology and the macrolevel of cognition. We propose that computational models based on assemblies of neurons can serve as a blueprint for bridging these two scales. We discuss recently developed computational models of assemblies that have been demonstrated to mediate higher cognitive functions such as the processing of simple sentences, to be realistically realizable by neural activity, and to possess general computational power.
Collapse
Affiliation(s)
| | - Angela D Friederici
- Max Planck Institute for Human Cognitive and Brain Sciences, Department of Neuropsychology, D-04303 Leipzig, Germany
| |
Collapse
|
21
|
Brown RE, Bligh TWB, Garden JF. The Hebb Synapse Before Hebb: Theories of Synaptic Function in Learning and Memory Before Hebb, With a Discussion of the Long-Lost Synaptic Theory of William McDougall. Front Behav Neurosci 2021; 15:732195. [PMID: 34744652 PMCID: PMC8566713 DOI: 10.3389/fnbeh.2021.732195] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2021] [Accepted: 09/20/2021] [Indexed: 11/30/2022] Open
Abstract
Since the work of Semon was rediscovered by Schacter in 1978, there has been a renewed interest in searching for the "engram" as the locus of memory in the brain, and Hebb's cell assembly has been equated with Semon's engram. There have been many theories of memory involving some concept of synaptic change, culminating in the "Hebb Synapse" theory in 1949. However, Hebb said that the idea that any two cells or systems of cells that are repeatedly active at the same time will tend to become "associated" was not his idea but an old one. In this manuscript we give an overview of some of the theories of the neural basis of learning and memory before Hebb and describe the synaptic theory of William McDougall, which appears to have been an idea ahead of its time; so far ahead of its time that it was completely ignored by his contemporaries. We conclude by examining some critiques of McDougall's theory of inhibition and with a short discussion of the fate of neuroscientists whose ideas were neglected when first presented but were accepted as important many decades later.
Collapse
Affiliation(s)
- Richard E. Brown
- Department of Psychology and Neuroscience, Dalhousie University, Halifax, NS, Canada
| | | | | |
Collapse
|
22
|
Chung S, Abbott LF. Neural population geometry: An approach for understanding biological and artificial neural networks. Curr Opin Neurobiol 2021; 70:137-144. [PMID: 34801787 PMCID: PMC10695674 DOI: 10.1016/j.conb.2021.10.010] [Citation(s) in RCA: 110] [Impact Index Per Article: 27.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2021] [Revised: 10/07/2021] [Accepted: 10/27/2021] [Indexed: 12/27/2022]
Abstract
Advances in experimental neuroscience have transformed our ability to explore the structure and function of neural circuits. At the same time, advances in machine learning have unleashed the remarkable computational power of artificial neural networks (ANNs). While these two fields have different tools and applications, they present a similar challenge: namely, understanding how information is embedded and processed through high-dimensional representations to solve complex tasks. One approach to addressing this challenge is to utilize mathematical and computational tools to analyze the geometry of these high-dimensional representations, i.e., neural population geometry. We review examples of geometrical approaches providing insight into the function of biological and artificial neural networks: representation untangling in perception, a geometric theory of classification capacity, disentanglement, and abstraction in cognitive systems, topological representations underlying cognitive maps, dynamic untangling in motor systems, and a dynamical approach to cognition. Together, these findings illustrate an exciting trend at the intersection of machine learning, neuroscience, and geometry, in which neural population geometry provides a useful population-level mechanistic descriptor underlying task implementation. Importantly, geometric descriptions are applicable across sensory modalities, brain regions, network architectures, and timescales. Thus, neural population geometry has the potential to unify our understanding of structure and function in biological and artificial neural networks, bridging the gap between single neurons, population activities, and behavior.
Collapse
Affiliation(s)
- SueYeon Chung
- Center for Theoretical Neuroscience, Columbia University, New York City, United States.
| | - L F Abbott
- Center for Theoretical Neuroscience, Columbia University, New York City, United States
| |
Collapse
|
23
|
Parde CJ, Colón YI, Hill MQ, Castillo CD, Dhar P, O'Toole AJ. Closing the gap between single-unit and neural population codes: Insights from deep learning in face recognition. J Vis 2021; 21:15. [PMID: 34379084 PMCID: PMC8363775 DOI: 10.1167/jov.21.8.15] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2020] [Accepted: 06/19/2021] [Indexed: 12/03/2022] Open
Abstract
Single-unit responses and population codes differ in the "read-out" information they provide about high-level visual representations. Diverging local and global read-outs can be difficult to reconcile with in vivo methods. To bridge this gap, we studied the relationship between single-unit and ensemble codes for identity, gender, and viewpoint, using a deep convolutional neural network (DCNN) trained for face recognition. Analogous to the primate visual system, DCNNs develop representations that generalize over image variation, while retaining subject (e.g., gender) and image (e.g., viewpoint) information. At the unit level, we measured the number of single units needed to predict attributes (identity, gender, viewpoint) and the predictive value of individual units for each attribute. Identification was remarkably accurate using random samples of only 3% of the network's output units, and all units had substantial identity-predicting power. Cross-unit responses were minimally correlated, indicating that single units code non-redundant identity cues. Gender and viewpoint classification required large-scale pooling of units; individual units had weak predictive power. At the ensemble level, principal component analysis of face representations showed that identity, gender, and viewpoint separated into high-dimensional subspaces, ordered by explained variance. Unit-based directions in the representational space were compared with the directions associated with the attributes. Identity, gender, and viewpoint contributed to all individual unit responses, undercutting a neural tuning analogy. Instead, single-unit responses carry superimposed, distributed codes for face identity, gender, and viewpoint. This undermines confidence in the interpretation of neural representations from unit response profiles for both DCNNs and, by analogy, high-level vision.
Collapse
Affiliation(s)
- Connor J Parde
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, USA
| | - Y Ivette Colón
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, USA
| | - Matthew Q Hill
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, USA
| | - Carlos D Castillo
- University of Maryland Institute of Advanced Computer Studies, University of Maryland, College Park, MD, USA
| | - Prithviraj Dhar
- University of Maryland Institute of Advanced Computer Studies, University of Maryland, College Park, MD, USA
| | - Alice J O'Toole
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, USA
| |
Collapse
|
24
|
Herzog R, Morales A, Mora S, Araya J, Escobar MJ, Palacios AG, Cofré R. Scalable and accurate method for neuronal ensemble detection in spiking neural networks. PLoS One 2021; 16:e0251647. [PMID: 34329314 PMCID: PMC8323916 DOI: 10.1371/journal.pone.0251647] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2020] [Accepted: 04/29/2021] [Indexed: 11/19/2022] Open
Abstract
We propose a novel, scalable, and accurate method for detecting neuronal ensembles from a population of spiking neurons. Our approach offers a simple yet powerful tool to study ensemble activity. It relies on clustering synchronous population activity (population vectors), allows the participation of neurons in different ensembles, has few parameters to tune and is computationally efficient. To validate the performance and generality of our method, we generated synthetic data, where we found that our method accurately detects neuronal ensembles for a wide range of simulation parameters. We found that our method outperforms current alternative methodologies. We used spike trains of retinal ganglion cells obtained from multi-electrode array recordings under a simple ON-OFF light stimulus to test our method. We found a consistent stimulus-evoked ensemble activity intermingled with spontaneously active ensembles and irregular activity. Our results suggest that the early visual system activity could be organized in distinguishable functional ensembles. We provide a graphical user interface, which facilitates the use of our method by the scientific community.
Collapse
Affiliation(s)
- Rubén Herzog
- Centro Interdisciplinario de Neurociencia de Valparaíso, Universidad de Valparaíso, Valparaíso, Chile
| | - Arturo Morales
- Departamento de Electrónica, Universidad Técnica Federico Santa María, Valparaíso, Chile
| | - Soraya Mora
- Facultad de Medicina y Ciencia, Universidad San Sebastián, Santiago, Chile
- Laboratorio de Biología Computacional, Fundación Ciencia y Vida, Santiago, Chile
| | - Joaquín Araya
- Centro Interdisciplinario de Neurociencia de Valparaíso, Universidad de Valparaíso, Valparaíso, Chile
- Escuela de Tecnología Médica, Facultad de Salud, Universidad Santo Tomás, Santiago, Chile
| | - María-José Escobar
- Departamento de Electrónica, Universidad Técnica Federico Santa María, Valparaíso, Chile
| | - Adrian G. Palacios
- Centro Interdisciplinario de Neurociencia de Valparaíso, Universidad de Valparaíso, Valparaíso, Chile
| | - Rodrigo Cofré
- CIMFAV Ingemat, Facultad de Ingeniería, Universidad de Valparaíso, Valparaíso, Chile
| |
Collapse
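The core of the method, clustering synchronous population vectors so that frames dominated by the same ensemble group together, can be sketched as follows. This is a toy with a minimal k-means, not the published code; the ensemble sizes, noise level, and seeded initialization are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_frames = 20, 200
ens_a, ens_b = np.arange(0, 8), np.arange(8, 16)

# Synthetic raster (frames x neurons): in each frame one ensemble fires
# synchronously on top of sparse background noise.
raster = (rng.random((n_frames, n_neurons)) < 0.05).astype(float)
labels_true = rng.integers(0, 2, n_frames)
for f, lab in enumerate(labels_true):
    raster[f, ens_a if lab == 0 else ens_b] = 1.0

# Cluster the synchronous population vectors with a minimal k-means
# (two clusters, seeded with one frame of each type for the demo).
k = 2
centroids = raster[[np.flatnonzero(labels_true == 0)[0],
                    np.flatnonzero(labels_true == 1)[0]]].copy()
for _ in range(10):
    dists = ((raster[:, None, :] - centroids[None]) ** 2).sum(-1)
    assign = dists.argmin(1)
    centroids = np.array([raster[assign == c].mean(0) for c in range(k)])

# Each centroid's strongly active neurons recover one ensemble.
detected = [set(np.flatnonzero(c > 0.5).tolist()) for c in centroids]
print(detected)
```

On this synthetic raster the cluster centroids recover the two planted ensembles; the paper's method additionally handles neurons shared between ensembles and scales to large recordings.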
|
25
|
Abstract
Cognition can be defined as computation over meaningful representations in the brain to produce adaptive behaviour. There are two views on the relationship between cognition and the brain that are largely implicit in the literature. The Sherringtonian view seeks to explain cognition as the result of operations on signals that are performed at nodes in a network, passed between them, and implemented by specific neurons and their connections in circuits in the brain. The contrasting Hopfieldian view explains cognition as the result of transformations between, or movement within, representational spaces that are implemented by neural populations. Thus, the Hopfieldian view relegates details regarding the identity of and connections between specific neurons to the status of secondary explainers. Only the Hopfieldian approach has the representational and computational resources needed to develop novel neurofunctional objects that can serve as primary explainers of cognition.
Collapse
Affiliation(s)
- David L Barack
- Department of Philosophy, University of Pennsylvania, Philadelphia, PA, USA; Department of Neuroscience, University of Pennsylvania, Philadelphia, PA, USA.
| | - John W Krakauer
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA; Department of Neuroscience, Johns Hopkins University School of Medicine, Baltimore, MD, USA; Department of Physical Medicine and Rehabilitation, Johns Hopkins University School of Medicine, Baltimore, MD, USA; The Santa Fe Institute, Santa Fe, NM, USA.
| |
Collapse
|
26
|
Abstract
In 2005, the Moser group identified a new type of cell in the entorhinal cortex (ERC): the grid cell (Hafting, Nature, 436, 2005, pp. 801-806). A landmark series of studies from these investigators showed that grid cells support spatial navigation by encoding position, direction, and distance information, and they subsequently found grid cells in pre- and para-subiculum areas adjacent to the ERC (Boccara, Nature Neuroscience, 13, 2010, pp. 987-994). Fast forward to 2010, when some clever investigators developed fMRI analysis methods to document grid-like responses in the human ERC (Doeller, Nature, 463, 2010, pp. 657-661). What was not at all expected was the co-identification of grid-like fMRI responses outside of the ERC, in particular in the orbitofrontal cortex (OFC) and the ventromedial prefrontal cortex (vmPFC). Here we provide a compact overview of the burgeoning literature on grid cells in both rodent and human species, while considering the intriguing question: what are grid-like responses doing in the OFC and vmPFC?
Collapse
Affiliation(s)
- Clara U. Raithel
- Department of Neurology, Perelman School of Medicine, University of Pennsylvania, 3400 Hamilton Walk, Stemmler Hall, Room G10, Philadelphia, PA 19104, USA
- Department of Psychology, School of Arts and Sciences, University of Pennsylvania, 425 S. University Avenue, Stephen A. Levin Building, Philadelphia, PA, 19104, USA
| | - Jay A. Gottfried
- Department of Neurology, Perelman School of Medicine, University of Pennsylvania, 3400 Hamilton Walk, Stemmler Hall, Room G10, Philadelphia, PA 19104, USA
- Department of Psychology, School of Arts and Sciences, University of Pennsylvania, 425 S. University Avenue, Stephen A. Levin Building, Philadelphia, PA, 19104, USA
| |
Collapse
|
27
|
Luboeinski J, Tetzlaff C. Memory consolidation and improvement by synaptic tagging and capture in recurrent neural networks. Commun Biol 2021; 4:275. [PMID: 33658641 PMCID: PMC7977149 DOI: 10.1038/s42003-021-01778-y] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2020] [Accepted: 01/21/2021] [Indexed: 11/09/2022] Open
Abstract
The synaptic-tagging-and-capture (STC) hypothesis formulates that at each synapse the concurrence of a tag with protein synthesis yields the maintenance of changes induced by synaptic plasticity. This hypothesis provides a biological principle underlying the synaptic consolidation of memories that is not verified for recurrent neural circuits. We developed a theoretical model integrating the mechanisms underlying the STC hypothesis with calcium-based synaptic plasticity in a recurrent spiking neural network. In the model, calcium-based synaptic plasticity yields the formation of strongly interconnected cell assemblies encoding memories, followed by consolidation through the STC mechanisms. Furthermore, we show for the first time that STC mechanisms modify the storage of memories such that after several hours memory recall is significantly improved. We identify two contributing processes: a merely time-dependent passive improvement, and an active improvement during recall. The described characteristics can provide a new principle for storing information in biological and artificial neural circuits.
Collapse
Affiliation(s)
- Jannik Luboeinski
- Department of Computational Neuroscience, III. Institute of Physics-Biophysics, University of Göttingen, Göttingen, Germany.
- Bernstein Center for Computational Neuroscience, Göttingen, Germany.
| | - Christian Tetzlaff
- Department of Computational Neuroscience, III. Institute of Physics-Biophysics, University of Göttingen, Göttingen, Germany.
- Bernstein Center for Computational Neuroscience, Göttingen, Germany.
| |
Collapse
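The STC logic the model builds on can be caricatured in a few lines. This is an illustrative toy, not the authors' calcium-based recurrent network; all time constants, thresholds, and rates are assumptions. An early-phase weight change decays unless a tag coincides with protein availability, in which case it is captured into a persistent late-phase weight.

```python
import numpy as np  # imported for consistency with the other sketches

def simulate(induction, protein, steps=200, dt=0.1,
             tag_threshold=0.5, tau_early=5.0):
    """Toy synapse: returns the late-phase weight after one induction event."""
    early, late, tag = 0.0, 0.0, False
    for t in range(steps):
        if t == 10:
            early += induction                # plasticity induction event
            tag = abs(early) > tag_threshold  # strong changes set a tag
        early -= dt / tau_early * early       # early phase decays
        if tag and protein:
            late += 0.1 * dt * early          # capture into the late phase
    return late

weak = simulate(0.3, protein=True)        # sub-threshold: no tag, no capture
strong = simulate(1.0, protein=True)      # tag + proteins: consolidated
no_protein = simulate(1.0, protein=False) # tag without proteins: decays away
print(weak, strong, no_protein)
```

Only the strong induction paired with protein synthesis leaves a lasting late-phase weight, which is the maintenance principle the paper scales up to whole recurrent networks of memory representations.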
|
28
|
Why Have Two When One Will Do? Comparing Task Representations across Amygdala and Prefrontal Cortex in Single Neurons and Neuronal Populations. Neuron 2020; 107:597-599. [PMID: 32818473 DOI: 10.1016/j.neuron.2020.07.038] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
Abstract
Many brain areas represent aspects of learned behavior. How do representations differ between regions? In this issue of Neuron, Kyriazi et al. (2020) show how the amygdala and prefrontal cortex use distinct strategies to code features of a complex task.
Collapse
|
29
|
Stefanini F, Kushnir L, Jimenez JC, Jennings JH, Woods NI, Stuber GD, Kheirbek MA, Hen R, Fusi S. A Distributed Neural Code in the Dentate Gyrus and in CA1. Neuron 2020; 107:703-716.e4. [PMID: 32521223 PMCID: PMC7442694 DOI: 10.1016/j.neuron.2020.05.022] [Citation(s) in RCA: 85] [Impact Index Per Article: 17.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2019] [Revised: 04/01/2020] [Accepted: 05/15/2020] [Indexed: 01/28/2023]
Abstract
Neurons are often considered specialized functional units that encode a single variable. However, many neurons are observed to respond to a mix of disparate sensory, cognitive, and behavioral variables. For such representations, information is distributed across multiple neurons. Here we find this distributed code in the dentate gyrus and CA1 subregions of the hippocampus. Using calcium imaging in freely moving mice, we decoded an animal's position, direction of motion, and speed from the activity of hundreds of cells. The response properties of individual neurons were only partially predictive of their importance for encoding position. Non-place cells encoded position and contributed to position encoding when combined with other cells. Indeed, disrupting the correlations between neural activities decreased decoding performance, mostly in CA1. Our analysis indicates that population methods rather than classical analyses based on single-cell response properties may more accurately characterize the neural code in the hippocampus.
Collapse
Affiliation(s)
- Fabio Stefanini
- Center for Theoretical Neuroscience, College of Physicians and Surgeons, Columbia University, New York, NY, USA; Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
| | - Lyudmila Kushnir
- GNT-LNC, Départment d'Études Cognitives, École Normale Supérieure, INSERM, PSL Research University, 75005 Paris, France
| | - Jessica C Jimenez
- Departments of Neuroscience, Psychiatry, & Pharmacology, Columbia University, New York, NY, USA; Division of Integrative Neuroscience, Department of Psychiatry, New York State Psychiatric Institute, New York, NY, USA
| | - Joshua H Jennings
- Department of Bioengineering, Stanford University, Stanford, CA 94305, USA
| | - Nicholas I Woods
- Neuroscience Graduate Program, University of California, San Francisco, San Francisco, CA, USA; Medical Scientist Training Program, University of California, San Francisco, San Francisco, CA, USA
| | - Garret D Stuber
- Center for the Neurobiology of Addiction, Pain, and Emotion, Department of Anesthesiology and Pain Medicine, Department of Pharmacology, University of Washington, Seattle, WA 98195, USA
| | - Mazen A Kheirbek
- Neuroscience Graduate Program, University of California, San Francisco, San Francisco, CA, USA; Department of Psychiatry, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA; Kavli Institute for Fundamental Neuroscience, University of California, San Francisco, San Francisco, CA, USA.
| | - René Hen
- Departments of Neuroscience, Psychiatry, & Pharmacology, Columbia University, New York, NY, USA; Division of Integrative Neuroscience, Department of Psychiatry, New York State Psychiatric Institute, New York, NY, USA; Kavli Institute for Brain Sciences, Columbia University, New York, NY, USA.
| | - Stefano Fusi
- Center for Theoretical Neuroscience, College of Physicians and Surgeons, Columbia University, New York, NY, USA; Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Department of Bioengineering, Stanford University, Stanford, CA 94305, USA; Kavli Institute for Brain Sciences, Columbia University, New York, NY, USA.
| |
|
30
|
Abstract
Our expanding understanding of the brain at the level of neurons and synapses, and the level of cognitive phenomena such as language, leaves a formidable gap between these two scales. Here we introduce a computational system which promises to bridge this gap: the Assembly Calculus. It encompasses operations on assemblies of neurons, such as project, associate, and merge, which appear to be implicated in cognitive phenomena, and can be shown, analytically as well as through simulations, to be plausibly realizable at the level of neurons and synapses. We demonstrate the reach of this system by proposing a brain architecture for syntactic processing in the production of language, compatible with recent experimental results. Assemblies are large populations of neurons believed to imprint memories, concepts, words, and other cognitive information. We identify a repertoire of operations on assemblies. These operations correspond to properties of assemblies observed in experiments, and can be shown, analytically and through simulations, to be realizable by generic, randomly connected populations of neurons with Hebbian plasticity and inhibition. Assemblies and their operations constitute a computational model of the brain which we call the Assembly Calculus, occupying a level of detail intermediate between the level of spiking neurons and synapses and that of the whole brain. The resulting computational system can be shown, under assumptions, to be, in principle, capable of carrying out arbitrary computations. We hypothesize that something like it may underlie higher human cognitive functions such as reasoning, planning, and language. In particular, we propose a plausible brain architecture based on assemblies for implementing the syntactic processing of language in cortex, which is consistent with recent experimental results.
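The "project" operation described here can be sketched as a toy simulation in the spirit of the Assembly Calculus: sparse random connectivity, top-k winner-take-all inhibition, and multiplicative Hebbian potentiation of synapses onto winners. The population size, sparsity, and plasticity rate below are illustrative choices, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "project": a fixed stimulus assembly drives a target area of n
# neurons through sparse random synapses; each round keeps the k most
# active neurons (inhibition as winner-take-all) and multiplicatively
# potentiates synapses onto the winners (Hebbian plasticity).
n, k, beta, rounds = 1000, 50, 0.2, 15
stim = np.zeros(n)
stim[:k] = 1.0                                     # active stimulus neurons
W_sa = (rng.random((n, n)) < 0.05).astype(float)   # stimulus -> area
W_aa = (rng.random((n, n)) < 0.05).astype(float)   # recurrent within area

prev = np.array([], dtype=int)
overlap = 0.0
for _ in range(rounds):
    drive = W_sa @ stim
    if prev.size:                                  # recurrent input from last winners
        act = np.zeros(n)
        act[prev] = 1.0
        drive += W_aa @ act
    winners = np.argsort(drive)[-k:]               # top-k survive inhibition
    W_sa[winners] *= 1 + beta                      # potentiate inputs to winners
    W_aa[winners] *= 1 + beta
    overlap = np.intersect1d(winners, prev).size / k if prev.size else 0.0
    prev = winners
print(f"overlap of winner sets in the final round: {overlap:.2f}")
```

With these settings the winner set typically stabilizes within a handful of rounds; that convergence to a stable "assembly" in the target area is the property the calculus's operations rely on.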
|
31
|
Josselyn SA, Tonegawa S. Memory engrams: Recalling the past and imagining the future. Science 2020; 367:367/6473/eaaw4325. [PMID: 31896692 DOI: 10.1126/science.aaw4325] [Citation(s) in RCA: 547] [Impact Index Per Article: 109.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
In 1904, Richard Semon introduced the term "engram" to describe the neural substrate for storing memories. An experience, Semon proposed, activates a subset of cells that undergo off-line, persistent chemical and/or physical changes to become an engram. Subsequent reactivation of this engram induces memory retrieval. Although Semon's contributions were largely ignored in his lifetime, new technologies that allow researchers to image and manipulate the brain at the level of individual neurons have reinvigorated engram research. We review recent progress in studying engrams, including an evaluation of evidence for the existence of engrams, the importance of intrinsic excitability and synaptic plasticity in engrams, and the lifetime of an engram. Together, these findings are beginning to define an engram as the basic unit of memory.
Affiliation(s)
- Sheena A Josselyn
- Program in Neurosciences & Mental Health, Hospital for Sick Children, Toronto, Ontario M5G 1X8, Canada; Department of Psychology, University of Toronto, Toronto, Ontario M5S 3G3, Canada; Department of Physiology, University of Toronto, Toronto, Ontario M5G 1X8, Canada; Institute of Medical Sciences, University of Toronto, Toronto, Ontario M5S 1A8, Canada; Brain, Mind & Consciousness Program, Canadian Institute for Advanced Research (CIFAR), Toronto, Ontario M5G 1M1, Canada
| | - Susumu Tonegawa
- RIKEN-MIT Laboratory for Neural Circuit Genetics at the Picower Institute for Learning and Memory, Department of Biology and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Howard Hughes Medical Institute, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
| |
|
32
|
Brown RE. Donald O. Hebb and the Organization of Behavior: 17 years in the writing. Mol Brain 2020; 13:55. [PMID: 32252813 PMCID: PMC7137474 DOI: 10.1186/s13041-020-00567-8] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2019] [Accepted: 02/18/2020] [Indexed: 02/06/2023] Open
Abstract
The Organization of Behavior has played a significant part in the development of behavioural neuroscience for the last 70 years. This book introduced the concepts of the "Hebb synapse", the "Hebbian cell assembly" and the "Phase sequence". The most frequently cited of these is the Hebb synapse, but the cell assembly may be Hebb's most important contribution. Even after 70 years, Hebb's theory is still relevant because it is a general framework for relating behavior to synaptic organization through the development of neural networks. The Organization of Behavior was Hebb's 40th publication. His first published papers in 1937 were on the innate organization of the visual system and he first used the phrase "the organization of behavior" in 1938. However, Hebb wrote a number of unpublished papers between 1932 and 1945 in which he developed the ideas published in The Organization of Behavior. Thus, the concept of the neural organization of behavior was central to Hebb's thinking from the beginning of his academic career. But his thinking about the organization of behavior in 1949 was different from what it was between 1932 and 1937. This paper examines Hebb's early ideas on the neural basis of behavior and attempts to trace the rather arduous series of steps through which he developed these ideas into the book that was published as The Organization of Behavior. Using the 1946 typescript and Hebb's correspondence we can see a number of changes made in the book before it was published. Finally, a number of issues arising from the book, and the importance of the book today are discussed.
Affiliation(s)
- Richard E Brown
- Department of Psychology and Neuroscience, Dalhousie University, Halifax, Nova Scotia, B3H 4R2, Canada.
| |
|
33
|
Monaco JD, Hwang GM, Schultz KM, Zhang K. Cognitive swarming in complex environments with attractor dynamics and oscillatory computing. BIOLOGICAL CYBERNETICS 2020; 114:269-284. [PMID: 32236692 PMCID: PMC7183509 DOI: 10.1007/s00422-020-00823-z] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/11/2019] [Accepted: 02/22/2020] [Indexed: 06/11/2023]
Abstract
Neurobiological theories of spatial cognition were developed from recordings in environments that are relatively small and simple compared with animals' natural habitats, and it has been unclear how to extend theoretical models to large or complex spaces. Complementarily, in autonomous systems technology, applications have been growing for distributed control methods that scale to large numbers of low-footprint mobile platforms. Animals and many-robot groups must solve common problems of navigating complex and uncertain environments. Here, we introduce the NeuroSwarms control framework to investigate whether adaptive, autonomous swarm control of minimal artificial agents can be achieved by direct analogy to the neural circuits of rodent spatial cognition. NeuroSwarms analogizes agents to neurons and swarming groups to recurrent networks. We implemented neuron-like agent interactions in which mutually visible agents operate as if they were reciprocally connected place cells in an attractor network. We attributed a phase state to agents to enable patterns of oscillatory synchronization similar to hippocampal models of theta-rhythmic (5-12 Hz) sequence generation. We demonstrate that multi-agent swarming and reward-approach dynamics can be expressed as a mobile form of Hebbian learning and that NeuroSwarms supports a single-entity paradigm that directly informs theoretical models of animal cognition. We present emergent behaviors, including phase-organized rings and trajectory sequences, that interact with environmental cues and geometry in large, fragmented mazes. Thus, NeuroSwarms is a model artificial spatial system that integrates autonomous control and theoretical neuroscience to potentially uncover common principles to advance both domains.
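The agent phase-state idea can be caricatured as Kuramoto-style coupling among mutually visible agents: each agent carries an oscillator phase near the theta rhythm and is pulled toward its visible neighbors' phases. This toy omits the spatial and reward dynamics that NeuroSwarms couples to phase; all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Kuramoto-style phase coupling over a random mutual-visibility graph.
n, steps, dt, K = 30, 2000, 0.01, 4.0
omega = 2 * np.pi * rng.normal(6.0, 0.05, n)    # preferred rates near 6 Hz (rad/s)
theta = rng.uniform(0, 2 * np.pi, n)
visible = rng.random((n, n)) < 0.5              # random visibility graph
visible = visible | visible.T                   # visibility is mutual
np.fill_diagonal(visible, False)

for _ in range(steps):
    # attractive pull toward the phases of visible neighbors
    pull = (np.sin(theta[None, :] - theta[:, None]) * visible).sum(axis=1)
    theta += dt * (omega + K * pull / (n - 1))

R = abs(np.exp(1j * theta).mean())              # Kuramoto order parameter
print(f"phase synchrony R = {R:.2f}")
```

With coupling well above the critical strength for this frequency spread, the group phase-locks and the order parameter R approaches 1, the kind of oscillatory synchronization the framework attributes to agent ensembles.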
Affiliation(s)
- Joseph D Monaco
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, 21205, USA.
| | - Grace M Hwang
- The Johns Hopkins University/Applied Physics Laboratory, Laurel, MD, 20723, USA
| | - Kevin M Schultz
- The Johns Hopkins University/Applied Physics Laboratory, Laurel, MD, 20723, USA
| | - Kechen Zhang
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, 21205, USA
| |
|
34
|
Traub RD, Whittington MA, Maier N, Schmitz D, Nagy JI. Could electrical coupling contribute to the formation of cell assemblies? Rev Neurosci 2019; 31:121-141. [DOI: 10.1515/revneuro-2019-0059] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2019] [Accepted: 07/07/2019] [Indexed: 12/20/2022]
Abstract
Cell assemblies and central pattern generators (CPGs) are related types of neuronal networks: both consist of interacting groups of neurons whose collective activities lead to defined functional outputs. In the case of a cell assembly, the functional output may be interpreted as a representation of something in the world, external or internal; for a CPG, the output ‘drives’ an observable (i.e. motor) behavior. Electrical coupling, via gap junctions, is critical for the development of CPGs, as well as for their actual operation in the adult animal. Electrical coupling is also known to be important in the development of hippocampal and neocortical principal cell networks. We here argue that electrical coupling – in addition to chemical synapses – may therefore contribute to the formation of at least some cell assemblies in adult animals.
Affiliation(s)
- Roger D. Traub
- AI Foundations, IBM T.J. Watson Research Center, Yorktown Heights, NY 10598, USA
| | | | - Nikolaus Maier
- Charité-Universitätsmedizin Berlin, Neuroscience Research Center, Charitéplatz 1, D-10117 Berlin, Germany
| | - Dietmar Schmitz
- Charité-Universitätsmedizin Berlin, Neuroscience Research Center, Charitéplatz 1, D-10117 Berlin, Germany
| | - James I. Nagy
- Department of Physiology and Pathophysiology, University of Manitoba, Winnipeg R3E OJ9, MB, Canada
| |
|
35
|
Beyeler M, Rounds EL, Carlson KD, Dutt N, Krichmar JL. Neural correlates of sparse coding and dimensionality reduction. PLoS Comput Biol 2019; 15:e1006908. [PMID: 31246948 PMCID: PMC6597036 DOI: 10.1371/journal.pcbi.1006908] [Citation(s) in RCA: 52] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/05/2023] Open
Abstract
Supported by recent computational studies, there is increasing evidence that a wide range of neuronal responses can be understood as an emergent property of nonnegative sparse coding (NSC), an efficient population coding scheme based on dimensionality reduction and sparsity constraints. We review evidence that NSC might be employed by sensory areas to efficiently encode external stimulus spaces, by some associative areas to conjunctively represent multiple behaviorally relevant variables, and possibly by the basal ganglia to coordinate movement. In addition, NSC might provide a useful theoretical framework under which to understand the often complex and nonintuitive response properties of neurons in other brain areas. Although NSC might not apply to all brain areas (for example, motor or executive function areas), the success of NSC-based models, especially in sensory areas, warrants further investigation for neural correlates in other regions.
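A minimal stand-in for the NSC idea is nonnegative matrix factorization: synthesize "responses" as nonnegative combinations of sparse parts, then recover a low-rank nonnegative factorization with the classic Lee-Seung multiplicative updates. Dimensionality reduction under nonnegativity is the core of the scheme, though NSC models as reviewed also add explicit sparsity penalties; matrix sizes and iteration counts below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Build nonnegative data V (stimuli x channels) from sparse parts, then
# factor V ~= H @ W with H, W >= 0 via multiplicative updates (NMF).
parts = rng.random((5, 40)) * (rng.random((5, 40)) < 0.3)   # sparse basis
coeff = rng.random((200, 5))
V = coeff @ parts + 1e-6                                    # keep strictly >= 0

r = 5                                                       # latent dimension
H = rng.random((200, r))
W = rng.random((r, 40))
err0 = np.linalg.norm(V - H @ W)
for _ in range(300):
    H *= (V @ W.T) / (H @ W @ W.T + 1e-9)   # multiplicative update for H
    W *= (H.T @ V) / (H.T @ H @ W + 1e-9)   # multiplicative update for W
err1 = np.linalg.norm(V - H @ W)
print(f"reconstruction error: {err0:.2f} -> {err1:.2f}")
```

Because the data were built from exactly five nonnegative parts, the factorization drives reconstruction error far below its starting value while every latent "response" stays nonnegative, mirroring the parts-based, low-dimensional encodings the review describes.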
Affiliation(s)
- Michael Beyeler
- Department of Psychology, University of Washington, Seattle, Washington, United States of America
- Institute for Neuroengineering, University of Washington, Seattle, Washington, United States of America
- eScience Institute, University of Washington, Seattle, Washington, United States of America
- Department of Computer Science, University of California, Irvine, California, United States of America
| | - Emily L. Rounds
- Department of Cognitive Sciences, University of California, Irvine, California, United States of America
| | - Kristofor D. Carlson
- Department of Cognitive Sciences, University of California, Irvine, California, United States of America
- Sandia National Laboratories, Albuquerque, New Mexico, United States of America
| | - Nikil Dutt
- Department of Computer Science, University of California, Irvine, California, United States of America
- Department of Cognitive Sciences, University of California, Irvine, California, United States of America
| | - Jeffrey L. Krichmar
- Department of Computer Science, University of California, Irvine, California, United States of America
- Department of Cognitive Sciences, University of California, Irvine, California, United States of America
| |
|
36
|
Recollection in the human hippocampal-entorhinal cell circuitry. Nat Commun 2019; 10:1503. [PMID: 30944325 PMCID: PMC6447634 DOI: 10.1038/s41467-019-09558-3] [Citation(s) in RCA: 37] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2018] [Accepted: 03/18/2019] [Indexed: 01/23/2023] Open
Abstract
Imagine how flicking through your photo album and seeing a picture of a beach sunset brings back fond memories of a tasty cocktail you had that night. Computational models suggest that upon receiving a partial memory cue (‘beach’), neurons in the hippocampus coordinate reinstatement of associated memories (‘cocktail’) in cortical target sites. Here, using human single-neuron recordings, we show that hippocampal firing rates are elevated from ~500–1500 ms after cue onset during successful associative retrieval. Concurrently, the retrieved target object can be decoded from population spike patterns in adjacent entorhinal cortex (EC), with hippocampal firing preceding EC spikes and predicting the fidelity of EC object reinstatement. Prior to orchestrating reinstatement, a separate population of hippocampal neurons distinguishes different scene cues (buildings vs. landscapes). These results elucidate the hippocampal-entorhinal circuit dynamics for memory recall and reconcile disparate views on the role of the hippocampus in scene processing vs. associative memory. The hippocampus is involved both in episodic memory recall and scene processing. Here, the authors show that hippocampal neurons first process scene cues before coordinating memory-guided pattern completion in adjacent entorhinal cortex.
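The pattern-completion mechanism invoked here ('beach' cueing 'cocktail') is classically modeled with a Hopfield-style autoassociative network: store patterns with a Hebbian outer-product rule, cue with a fragment, and let recurrent dynamics reinstate the rest. The sketch below is that textbook model, not the paper's circuit analysis.

```python
import numpy as np

rng = np.random.default_rng(4)

# Store binary patterns with a Hebbian outer-product rule, then complete
# a half-pattern cue through recurrent updates.
n, n_patterns = 200, 3
patterns = rng.choice([-1, 1], size=(n_patterns, n))
W = patterns.T @ patterns / n
np.fill_diagonal(W, 0)                       # no self-connections

state = patterns[0].astype(float)
state[n // 2:] = 0.0                         # only half the cue is given
for _ in range(5):                           # synchronous recall updates
    state = np.sign(W @ state)
    state[state == 0] = 1.0
recovered = np.mean(state == patterns[0])
print(f"fraction of the stored pattern reinstated: {recovered:.2f}")
```

Well below the network's storage capacity, the half cue is completed to the full stored pattern, the same computational motif the abstract attributes to hippocampal-entorhinal dynamics.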
|
37
|
Takamiya S, Yuki S, Hirokawa J, Manabe H, Sakurai Y. Dynamics of memory engrams. Neurosci Res 2019; 153:22-26. [PMID: 30940458 DOI: 10.1016/j.neures.2019.03.005] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2019] [Revised: 03/18/2019] [Accepted: 03/27/2019] [Indexed: 12/18/2022]
Abstract
In this update article, we focus on "memory engrams", which are traces of long-term memory in the brain, and emphasize that they are not static but dynamic. We first introduce the major findings in neuroscience and psychology reporting that memory engrams are sometimes diffuse and unstable, indicating that they are dynamically modified through processes of consolidation and reconsolidation. Second, we introduce and discuss the concepts of the cell assembly and the engram cell: the former has been investigated by psychological experiments and behavioral electrophysiology, while the latter is defined by the recent combination of activity-dependent cell labelling with optogenetics to show causal relationships between cell-population activity and behavioral changes. Third, we discuss the similarities and differences between the cell-assembly and engram-cell concepts to reveal the dynamics of memory engrams. We also discuss the advantages and problems of live-cell imaging, which has recently been developed to visualize multineuronal activities. The last section suggests an experimental strategy and background assumptions for future research on memory engrams. The former encourages recording of cell assemblies from different brain regions during memory consolidation-reconsolidation processes, while the latter emphasizes the multipotentiality of neurons and regions that contribute to the dynamics of memory engrams in the working brain.
Affiliation(s)
- Shogo Takamiya
- Laboratory of Neural Information, Graduate School of Brain Science, Doshisha University, Kyotanabe 610-0394, Kyoto, Japan
| | - Shoko Yuki
- Laboratory of Neural Information, Graduate School of Brain Science, Doshisha University, Kyotanabe 610-0394, Kyoto, Japan
| | - Junya Hirokawa
- Laboratory of Neural Information, Graduate School of Brain Science, Doshisha University, Kyotanabe 610-0394, Kyoto, Japan
| | - Hiroyuki Manabe
- Laboratory of Neural Information, Graduate School of Brain Science, Doshisha University, Kyotanabe 610-0394, Kyoto, Japan
| | - Yoshio Sakurai
- Laboratory of Neural Information, Graduate School of Brain Science, Doshisha University, Kyotanabe 610-0394, Kyoto, Japan.
| |
|
38
|
Berlucchi G, Marzi CA. Neuropsychology of Consciousness: Some History and a Few New Trends. Front Psychol 2019; 10:50. [PMID: 30761035 PMCID: PMC6364520 DOI: 10.3389/fpsyg.2019.00050] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2018] [Accepted: 01/09/2019] [Indexed: 01/24/2023] Open
Abstract
Consciousness is a global activity of the nervous system. Its physiological and pathological mechanisms have been studied in relation to the natural sleep-wake cycle and various forms of normal or morbid unconsciousness, mainly in neurophysiology and clinical neurology. Neuropsychology has been more interested in specific higher brain functions, such as perception and memory and their disorders, rather than in consciousness per se. However, neuropsychology has been at the forefront in the identification of conscious and unconscious components in the processing of sensory and mnestic information. The present review describes some historical steps in the formulation of consciousness as a global brain function with arousal and content as principal ingredients, respectively, instantiated in the subcortex and the neocortex. It then reports a few fresh developments in neuropsychology and cognitive neuroscience which emphasize the importance of the hippocampus for thinking and dreaming. Non-neocortical structures may contribute to the contents of consciousness more than previously believed.
Affiliation(s)
- Giovanni Berlucchi
- Department of Neurosciences, Biomedicine and Movement, University of Verona, Verona, Italy
| | | |
|
39
|
Helfrich RF, Breska A, Knight RT. Neural entrainment and network resonance in support of top-down guided attention. Curr Opin Psychol 2019; 29:82-89. [PMID: 30690228 DOI: 10.1016/j.copsyc.2018.12.016] [Citation(s) in RCA: 81] [Impact Index Per Article: 13.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2018] [Revised: 12/17/2018] [Accepted: 12/20/2018] [Indexed: 01/17/2023]
Abstract
Which neural mechanisms provide the functional basis of top-down guided cognitive control? Here, we review recent evidence suggesting that the neural basis of attention is inherently rhythmic. In particular, we discuss two physical properties of self-sustained networks, namely entrainment and resonance, and how these shape the timescale of attentional control. Several recent findings have revealed theta-band (3-8 Hz) dynamics in top-down guided behavior. These reports were paralleled by intracranial recordings, which implicated theta oscillations in the organization of functional attention networks. We discuss how the intrinsic network architecture shapes covert attentional sampling as well as overt behavior. Taken together, we posit that theta rhythmicity is an inherent feature of the attention network in support of top-down guided, goal-directed behavior.
Affiliation(s)
- Randolph F Helfrich
- Helen Wills Neuroscience Institute, UC Berkeley, 132 Barker Hall, Berkeley, CA 94720, USA.
| | - Assaf Breska
- Helen Wills Neuroscience Institute, UC Berkeley, 132 Barker Hall, Berkeley, CA 94720, USA; Dept. of Psychology, UC Berkeley, 2121 Berkeley Way, Berkeley, CA 94720, USA
| | - Robert T Knight
- Helen Wills Neuroscience Institute, UC Berkeley, 132 Barker Hall, Berkeley, CA 94720, USA; Dept. of Psychology, UC Berkeley, 2121 Berkeley Way, Berkeley, CA 94720, USA
| |
|
40
|
Helfrich RF, Knight RT. Cognitive neurophysiology of the prefrontal cortex. HANDBOOK OF CLINICAL NEUROLOGY 2019; 163:35-59. [DOI: 10.1016/b978-0-12-804281-6.00003-3] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
|
41
|
Flow stimuli reveal ecologically appropriate responses in mouse visual cortex. Proc Natl Acad Sci U S A 2018; 115:11304-11309. [PMID: 30327345 DOI: 10.1073/pnas.1811265115] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/18/2023] Open
Abstract
Assessments of the mouse visual system based on spatial-frequency analysis imply that its visual capacity is low, with few neurons responding to spatial frequencies greater than 0.5 cycles per degree. However, visually mediated behaviors, such as prey capture, suggest that the mouse visual system is more precise. We introduce a stimulus class, visual flow patterns, that is more like what the mouse would encounter in the natural world than are sine-wave gratings but is more tractable for analysis than are natural images. We used 128-site silicon microelectrodes to measure the simultaneous responses of single neurons in the primary visual cortex (V1) of alert mice. While holding temporal-frequency content fixed, we explored a class of drifting patterns of black or white dots that have energy only at higher spatial frequencies. These flow stimuli evoke strong visually mediated responses well beyond those predicted by spatial-frequency analysis. Flow responses predominate in higher spatial-frequency ranges (0.15-1.6 cycles per degree), many are orientation or direction selective, and flow responses of many neurons depend strongly on sign of contrast. Many cells exhibit distributed responses across our stimulus ensemble. Together, these results challenge conventional linear approaches to visual processing and expand our understanding of the mouse's visual capacity to behaviorally relevant ranges.
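Generating a drifting dot pattern of this general kind is straightforward; the frame size, dot count, and drift rate below are placeholder values for illustration, not the stimulus parameters used in the study.

```python
import numpy as np

rng = np.random.default_rng(5)

# Sketch of a dot-flow stimulus: sparse black-or-white dots on a mid-gray
# background, drifting coherently across frames.
size, n_dots, n_frames, drift = 64, 40, 8, 2      # drift in pixels/frame
xs = rng.integers(0, size, n_dots)
ys = rng.integers(0, size, n_dots)
polarity = rng.choice([-1.0, 1.0], n_dots)        # black (-1) or white (+1)

frames = np.full((n_frames, size, size), 0.5)     # mid-gray background
for t in range(n_frames):
    frames[t, ys, (xs + t * drift) % size] = 0.5 + 0.5 * polarity
print(frames.shape, float(frames.min()), float(frames.max()))
```

Small dots concentrate stimulus energy at high spatial frequencies, and coherent drift fixes the temporal-frequency content, the two properties the abstract highlights for this stimulus class.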
|
42
|
Helfrich RF, Fiebelkorn IC, Szczepanski SM, Lin JJ, Parvizi J, Knight RT, Kastner S. Neural Mechanisms of Sustained Attention Are Rhythmic. Neuron 2018; 99:854-865.e5. [PMID: 30138591 PMCID: PMC6286091 DOI: 10.1016/j.neuron.2018.07.032] [Citation(s) in RCA: 256] [Impact Index Per Article: 36.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2018] [Revised: 05/30/2018] [Accepted: 07/19/2018] [Indexed: 11/18/2022]
Abstract
Classic models of attention suggest that sustained neural firing constitutes a neural correlate of sustained attention. However, recent evidence indicates that behavioral performance fluctuates over time, exhibiting temporal dynamics that closely resemble the spectral features of ongoing, oscillatory brain activity. Therefore, it has been proposed that periodic neuronal excitability fluctuations might shape attentional allocation and overt behavior. However, empirical evidence to support this notion is sparse. Here, we address this issue by examining data from large-scale subdural recordings, using two different attention tasks that track perceptual ability at high temporal resolution. Our results reveal that perceptual outcome varies as a function of the theta phase even in states of sustained spatial attention. These effects were robust at the single-subject level, suggesting that rhythmic perceptual sampling is an inherent property of the frontoparietal attention network. Collectively, these findings support the notion that the functional architecture of top-down attention is intrinsically rhythmic.
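The phase-dependence result can be illustrated by simulating trials whose hit probability depends on theta phase at stimulus onset and then binning accuracy by phase, a simplified caricature of the analysis described; the modulation depth, theta frequency, and trial count are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulate detection trials whose hit probability is modulated by the
# theta (~6 Hz) phase at stimulus onset, then recover the modulation.
n_trials = 5000
onsets = rng.uniform(0, 10, n_trials)               # onset times (s)
phase = (2 * np.pi * 6.0 * onsets) % (2 * np.pi)    # theta phase at onset
p_hit = 0.5 + 0.2 * np.cos(phase)                   # excitability modulation
hits = rng.random(n_trials) < p_hit

edges = np.linspace(0, 2 * np.pi, 9)                # 8 phase bins
which = np.digitize(phase, edges) - 1
rate = np.array([hits[which == b].mean() for b in range(8)])
print("hit rate per phase bin:", np.round(rate, 2))
print(f"modulation depth: {rate.max() - rate.min():.2f}")
```

Binning accuracy by oscillatory phase recovers the built-in excitability rhythm, the same logic by which perceptual outcome was shown to vary with theta phase during sustained attention.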
Affiliation(s)
- Randolph F Helfrich
- Helen Wills Neuroscience Institute, UC Berkeley, 132 Barker Hall, Berkeley, CA 94720, USA; Department of Psychology, University of Oslo, Forskningsveien 3A, 0373 Oslo, Norway.
| | - Ian C Fiebelkorn
- Princeton Neuroscience Institute, Washington Rd., Princeton, NJ 08544, USA
| | - Sara M Szczepanski
- Helen Wills Neuroscience Institute, UC Berkeley, 132 Barker Hall, Berkeley, CA 94720, USA
| | - Jack J Lin
- Department of Neurology, UC Irvine, 101 The City Dr., Orange, CA 92868, USA; Department of Biomedical Engineering, Henry Samueli School of Engineering, 402 E. Peltason Dr., Irvine, CA 92617, USA
| | - Josef Parvizi
- Department of Neurology and Neurological Sciences, Stanford University, 300 Pasteur Dr., Stanford, CA 94305, USA
| | - Robert T Knight
- Helen Wills Neuroscience Institute, UC Berkeley, 132 Barker Hall, Berkeley, CA 94720, USA; Department of Psychology, UC Berkeley, 130 Barker Hall, Berkeley, CA 94720, USA
| | - Sabine Kastner
- Princeton Neuroscience Institute, Washington Rd., Princeton, NJ 08544, USA; Department of Psychology, Princeton University, South Drive, Princeton, NJ 08540, USA
| |
|
43
|
|
44
|
Sakurai Y, Osako Y, Tanisumi Y, Ishihara E, Hirokawa J, Manabe H. Multiple Approaches to the Investigation of Cell Assembly in Memory Research-Present and Future. Front Syst Neurosci 2018; 12:21. [PMID: 29887797 PMCID: PMC5980992 DOI: 10.3389/fnsys.2018.00021] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2017] [Accepted: 05/02/2018] [Indexed: 11/13/2022] Open
Abstract
In this review article, we focus on research methodologies for detecting the actual activity of cell assemblies, which are populations of functionally connected neurons that encode information in the brain. We introduce traditional and novel experimental methods, as well as those currently in development, and briefly discuss their advantages and disadvantages for detecting cell-assembly activity. First, we introduce the electrophysiological method, i.e., multineuronal recording, and review former and recent examples of studies showing models of dynamic coding by cell assemblies in behaving rodents and monkeys. We also discuss how the firing correlation of two neurons reflects the firing synchrony among the numerous surrounding neurons that constitute cell assemblies. Second, we review recent outstanding studies that used the novel method of optogenetics to show causal relationships between cell-assembly activity and behavioral change. Third, we review the most recently developed method of live-cell imaging, which facilitates the simultaneous observation of the firing of a large number of neurons in behaving rodents. Currently, all these methods have both advantages and disadvantages, and no single measurement method can directly and precisely detect the actual activity of cell assemblies. The best strategy is to combine the available methods, utilizing each of their advantages, with the technique of operant conditioning of multiple-task behaviors in animals and, if necessary, with brain-machine interface technology to verify the accuracy of neural information detected as cell-assembly activity.
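A minimal version of the multineuronal-recording approach is a pairwise correlation screen on binned spike trains; the firing rates and shared-drive construction below are illustrative, and real analyses would add shuffle controls and significance testing.

```python
import numpy as np

rng = np.random.default_rng(7)

# Binned spike trains for three units: two share a common drive (a
# putative assembly), one is independent with a matched overall rate.
n_bins = 10000
drive = rng.random(n_bins) < 0.05          # shared assembly events
a = drive | (rng.random(n_bins) < 0.02)    # assembly member 1
b = drive | (rng.random(n_bins) < 0.02)    # assembly member 2
c = rng.random(n_bins) < 0.07              # independent unit, matched rate

def corr(x, y):
    """Pearson correlation of two binned spike trains."""
    return float(np.corrcoef(x.astype(float), y.astype(float))[0, 1])

r_ab, r_ac = corr(a, b), corr(a, c)
print(f"assembly pair r = {r_ab:.2f}; control pair r = {r_ac:.2f}")
```

The shared-drive pair shows strong count correlation while the rate-matched control pair does not, which is why pairwise firing correlation serves as a practical signature of the broader synchrony among assembly members.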
Affiliation(s)
- Yoshio Sakurai
- Laboratory of Neural Information, Graduate School of Brain Science, Doshisha University, Kyoto, Japan
| | | | | | | | | | | |
|
45
|
Flexible weighting of diverse inputs makes hippocampal function malleable. Neurosci Lett 2017; 680:13-22. [PMID: 28587901 DOI: 10.1016/j.neulet.2017.05.063] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2017] [Revised: 05/29/2017] [Accepted: 05/29/2017] [Indexed: 12/17/2022]
Abstract
Classic theories of hippocampal function have emphasized its role as a dedicated memory system, but recent research has shown that it contributes broadly to many aspects of cognition, including attention and perception. We propose that the reason the hippocampus plays such a broad role in cognition is that its function is particularly malleable. We argue that this malleability arises because the hippocampus receives diverse anatomical inputs and these inputs are flexibly weighted based on behavioral goals. We discuss examples of how hippocampal representations can be flexibly weighted, focusing on hippocampal modulation by attention. Finally, we suggest some general neural mechanisms and core hippocampal computations that may enable the hippocampus to support diverse cognitive functions, including attention, perception, and memory. Together, this work suggests that great progress can and has been made in understanding the hippocampus by considering how the domain-general computations it performs allow it to dynamically contribute to many different behaviors.
|