1
Chiossi HSC, Nardin M, Tkačik G, Csicsvari J. Learning reshapes the hippocampal representation hierarchy. Proc Natl Acad Sci U S A 2025;122:e2417025122. PMID: 40063792. PMCID: PMC11929462. DOI: 10.1073/pnas.2417025122.
Abstract
A key feature of biological and artificial neural networks is the progressive refinement of their neural representations with experience. In neuroscience, this fact has inspired several recent studies in sensory and motor systems. However, less is known about how higher associational cortical areas, such as the hippocampus, modify representations throughout the learning of complex tasks. Here, we focus on associative learning, a process that requires forming a connection between the representations of different variables for appropriate behavioral response. We trained rats in a space-context associative task and monitored hippocampal neural activity throughout the entire learning period, over several days. This allowed us to assess changes in the representations of context, movement direction, and position, as well as their relationship to behavior. We identified a hierarchical representational structure in the encoding of these three task variables that was preserved throughout learning. Nevertheless, we also observed changes at the lower levels of the hierarchy where context was encoded. These changes were local in neural activity space and restricted to physical positions where context identification was necessary for correct decision-making, supporting better context decoding and contextual code compression. Our results demonstrate that the hippocampal code not only accommodates hierarchical relationships between different variables but also enables efficient learning through minimal changes in neural activity space. Beyond the hippocampus, our work reveals a representation learning mechanism that might be implemented in other biological and artificial networks performing similar tasks.
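The local improvement in context decoding described here can be illustrated with a minimal, self-contained sketch: a cross-validated linear decoder of a binary context label, trained separately at each position bin. The synthetic data, dimensions, and the choice of decoder below are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch: position-resolved decoding of a binary context label from
# population firing-rate vectors. Synthetic data; not the authors' analysis.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_cells, n_pos_bins = 200, 50, 8

# rates[trial, cell, position_bin]: trial-by-cell firing rates per spatial bin
rates = rng.poisson(5.0, size=(n_trials, n_cells, n_pos_bins)).astype(float)
context = rng.integers(0, 2, size=n_trials)      # binary context label per trial

# Inject a weak context signal only at "decision-relevant" bins, mimicking the
# locally restricted change in context coding described in the abstract.
decision_bins = [5, 6]
rates[np.ix_(context == 1, np.arange(10), decision_bins)] += 1.0

# Cross-validated context decoding, one decoder per position bin.
for b in range(n_pos_bins):
    acc = cross_val_score(LogisticRegression(max_iter=1000),
                          rates[:, :, b], context, cv=5).mean()
    print(f"position bin {b}: context decoding accuracy = {acc:.2f}")
```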
Affiliation(s)
- Gašper Tkačik
- Institute of Science and Technology Austria, Klosterneuburg AT-3400, Austria
- Jozsef Csicsvari
- Institute of Science and Technology Austria, Klosterneuburg AT-3400, Austria
2
Mishra P, Narayanan R. The enigmatic HCN channels: A cellular neurophysiology perspective. Proteins 2025;93:72-92. PMID: 37982354. PMCID: PMC7616572. DOI: 10.1002/prot.26643.
Abstract
What physiological role does a slow, hyperpolarization-activated ion channel with mixed cation selectivity play in the fast world of neuronal action potentials, which are driven by depolarization? That puzzling question has piqued the curiosity of physiology enthusiasts about the hyperpolarization-activated cyclic nucleotide-gated (HCN) channels, which are widely expressed across the body and especially in neurons. In this review, we emphasize the need to assess HCN channels from the perspective of how they respond to time-varying signals, while also accounting for their interactions with other co-expressed channels and receptors. First, we illustrate how the unique structural and functional characteristics of HCN channels allow them to mediate a slow negative feedback loop in the neurons that express them. We present several physiological implications of this negative feedback loop for neuronal response characteristics, including neuronal gain, voltage sag and rebound, temporal summation, membrane potential resonance, inductive phase lead, the spike-triggered average, and coincidence detection. Next, we argue that the overall impact of HCN channels on neuronal physiology critically relies on their interactions with other co-expressed channels and receptors. Interactions with other channels allow HCN channels to mediate intrinsic oscillations, earning them the "pacemaker channel" moniker, and to regulate spike-frequency adaptation, plateau potentials, neurotransmitter release from presynaptic terminals, and spike initiation at the axonal initial segment. We also explore the impact of spatially non-homogeneous subcellular distributions of HCN channels in different neuronal subtypes and their interactions with other channels and receptors. Finally, we discuss how plasticity in HCN channels is widespread and can mediate different encoding, homeostatic, and neuroprotective functions in a neuron. In summary, we argue that HCN channels form an important class of channels that mediate a diversity of neuronal functions owing to the unique gating kinetics that made them a puzzle in the first place.
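The slow negative feedback loop and the voltage sag mentioned above can be illustrated with a minimal single-compartment sketch: a leaky membrane plus an HCN-like current whose activation grows with hyperpolarization. All parameter values are illustrative assumptions, not fits to any dataset or to the review's models.

```python
# Minimal sketch of the slow negative feedback: a single-compartment leaky membrane
# with an HCN-like (h) current responding to a hyperpolarizing current step.
import numpy as np

dt, T = 0.1, 1000.0                       # ms
t = np.arange(0.0, T, dt)
C, g_L, E_L = 1.0, 0.05, -65.0            # uF/cm^2, mS/cm^2, mV
g_h, E_h, tau_h = 0.05, -30.0, 150.0      # HCN-like: slow, reversal near -30 mV

def h_inf(v):
    # Activation increases with hyperpolarization (opposite to most channels).
    return 1.0 / (1.0 + np.exp((v + 80.0) / 8.0))

I_ext = np.where((t > 200) & (t < 700), -0.3, 0.0)   # hyperpolarizing step, uA/cm^2

v, h = E_L, h_inf(E_L)
V = np.empty_like(t)
for i in range(t.size):
    I_h = g_h * h * (v - E_h)                        # inward when v < E_h
    dv = (-g_L * (v - E_L) - I_h + I_ext[i]) / C
    dh = (h_inf(v) - h) / tau_h                      # slow gating -> negative feedback
    v += dt * dv
    h += dt * dh
    V[i] = v

# Characteristic "sag": hyperpolarization slowly opens h channels, whose inward
# current pulls the membrane back toward rest during the step.
print(f"trough: {V.min():.1f} mV, late-step value: {V[int(650 / dt)]:.1f} mV")
```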
Affiliation(s)
- Poonam Mishra
- Department of Neuroscience, Yale School of Medicine, Yale University, New Haven, Connecticut, USA
- Rishikesh Narayanan
- Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore, India
3
Chen J, Zhang C, Hu P, Min B, Wang L. Flexible control of sequence working memory in the macaque frontal cortex. Neuron 2024;112:3502-3514.e6. PMID: 39178858. DOI: 10.1016/j.neuron.2024.07.024.
Abstract
To memorize a sequence, one must serially bind each item to its rank order. How the brain controls a given input so that it is bound to its associated order in sequence working memory (SWM) remains unexplored. Here, we investigated the neural representations underlying SWM control using electrophysiological recordings in the frontal cortex of macaque monkeys performing forward and backward SWM tasks. Separate and generalizable low-dimensional subspaces for sensory and memory information were found within the same frontal circuitry, and SWM control was reflected in the organized dynamics of these neural subspaces. Each item at each rank was sequentially entered into a common sensory subspace and, depending on the forward or backward task requirement, flexibly routed at the appropriate time into rank-selective SWM subspaces. Neural activity in these SWM subspaces faithfully predicted the recalled item and order information in single error trials. Thus, compositional neural population codes with well-orchestrated dynamics in the frontal cortex support the flexible control of SWM.
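The idea of separate, generalizable subspaces within one circuit can be illustrated with a small sketch: fit a PCA subspace to activity from each task epoch and compare how much variance each subspace captures within versus across epochs. The synthetic data and the use of PCA here are assumptions for illustration, not the paper's analysis.

```python
# Minimal sketch: "sensory" vs. "memory" subspaces estimated with PCA from two
# epochs of synthetic population activity, plus a cross-projection check.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_trials, n_neurons, k = 300, 80, 3

# Latents live in different (random, orthonormal) neural subspaces per epoch.
B_sens = np.linalg.qr(rng.normal(size=(n_neurons, k)))[0]
B_mem = np.linalg.qr(rng.normal(size=(n_neurons, k)))[0]
X_sens = rng.normal(size=(n_trials, k)) @ B_sens.T + 0.1 * rng.normal(size=(n_trials, n_neurons))
X_mem = rng.normal(size=(n_trials, k)) @ B_mem.T + 0.1 * rng.normal(size=(n_trials, n_neurons))

def subspace(X, k):
    return PCA(n_components=k).fit(X).components_      # (k, n_neurons), orthonormal rows

def var_captured(X, W):
    Xc = X - X.mean(0)
    return np.sum((Xc @ W.T) ** 2) / np.sum(Xc ** 2)    # fraction of variance in subspace

W_sens, W_mem = subspace(X_sens, k), subspace(X_mem, k)
print("sensory epoch in sensory subspace:", round(var_captured(X_sens, W_sens), 2))
print("sensory epoch in memory subspace: ", round(var_captured(X_sens, W_mem), 2))
```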
Affiliation(s)
- Jingwen Chen
- Institute of Neuroscience, Key Laboratory of Brain Cognition and Brain-Inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Cong Zhang
- Institute of Neuroscience, Key Laboratory of Brain Cognition and Brain-Inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Peiyao Hu
- Institute of Neuroscience, Key Laboratory of Brain Cognition and Brain-Inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Bin Min
- Lingang Laboratory, Shanghai 200031, China.
- Liping Wang
- Institute of Neuroscience, Key Laboratory of Brain Cognition and Brain-Inspired Intelligence Technology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China.
4
Pellegrino A, Stein H, Cayco-Gajic NA. Dimensionality reduction beyond neural subspaces with slice tensor component analysis. Nat Neurosci 2024;27:1199-1210. PMID: 38710876. PMCID: PMC11537991. DOI: 10.1038/s41593-024-01626-2.
Abstract
Recent work has argued that large-scale neural recordings are often well described by patterns of coactivation across neurons. Yet the view that neural variability is constrained to a fixed, low-dimensional subspace may overlook higher-dimensional structure, including stereotyped neural sequences or slowly evolving latent spaces. Here we argue that task-relevant variability in neural data can also cofluctuate over trials or time, defining distinct 'covariability classes' that may co-occur within the same dataset. To demix these covariability classes, we develop sliceTCA (slice tensor component analysis), a new unsupervised dimensionality reduction method for neural data tensors. In three example datasets, including motor cortical activity during a classic reaching task in primates and recent multiregion recordings in mice, we show that sliceTCA can capture more task-relevant structure in neural data using fewer components than traditional methods. Overall, our theoretical framework extends the classic view of low-dimensional population activity by incorporating additional classes of latent variables capturing higher-dimensional structure.
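A minimal sketch of one of the "covariability classes" that sliceTCA demixes: components in which a single weight per trial scales a shared neuron-by-time slice. For a single slicing this reduces to an SVD of the unfolded tensor; the published method fits several slicings jointly with a dedicated optimization, so the code below (synthetic data, NumPy only) illustrates the component structure rather than the released sliceTCA package.

```python
# Minimal sketch: trial-slicing decomposition of a (trials x neurons x time) tensor
# into trial weights times shared neuron-by-time slices, via SVD of the unfolding.
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_neurons, n_time, R = 100, 60, 40, 3

# Synthetic tensor: R trial-weighted neuron-by-time slices plus noise.
true_w = rng.normal(size=(R, n_trials))
true_slices = rng.normal(size=(R, n_neurons, n_time))
X = (np.einsum('rk,rnt->knt', true_w, true_slices)
     + 0.5 * rng.normal(size=(n_trials, n_neurons, n_time)))

# Truncated SVD of the (trials x neurons*time) unfolding gives the best rank-R fit.
M = X.reshape(n_trials, -1)
U, s, Vt = np.linalg.svd(M, full_matrices=False)
weights = U[:, :R] * s[:R]                       # one coefficient per trial per component
slices = Vt[:R].reshape(R, n_neurons, n_time)    # shared neuron-by-time patterns

X_hat = np.einsum('kr,rnt->knt', weights, slices)
print("fraction of variance explained:",
      round(1 - np.sum((X - X_hat) ** 2) / np.sum(X ** 2), 3))
```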
Affiliation(s)
- Arthur Pellegrino
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Département d'Études Cognitives, École Normale Supérieure, PSL University, Paris, France.
- Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh, Edinburgh, UK.
- Heike Stein
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Département d'Études Cognitives, École Normale Supérieure, PSL University, Paris, France.
- N Alex Cayco-Gajic
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Département d'Études Cognitives, École Normale Supérieure, PSL University, Paris, France.
5
Tafazoli S, Bouchacourt FM, Ardalan A, Markov NT, Uchimura M, Mattar MG, Daw ND, Buschman TJ. Building compositional tasks with shared neural subspaces. bioRxiv [Preprint] 2024:2024.01.31.578263. PMID: 38352540. PMCID: PMC10862921. DOI: 10.1101/2024.01.31.578263.
Abstract
Cognition is remarkably flexible; we are able to rapidly learn and perform many different tasks [1]. Theoretical modeling has shown that artificial neural networks trained to perform multiple tasks will re-use representations [2] and computational components [3] across tasks. By composing tasks from these sub-components, an agent can flexibly switch between tasks and rapidly learn new tasks [4]. Yet whether such compositionality is found in the brain is unknown. Here, we show that the same subspaces of neural activity represent task-relevant information across multiple tasks, with each task compositionally combining these subspaces in a task-specific manner. We trained monkeys to switch between three compositionally related tasks. Neural recordings showed that task-relevant information about stimulus features and motor actions was represented in subspaces of neural activity that were shared across tasks. When monkeys performed a task, neural representations in the relevant shared sensory subspace were transformed to the relevant shared motor subspace. Subspaces were flexibly engaged as monkeys discovered which task was in effect; their internal belief about the current task predicted the strength of representations in task-relevant subspaces. In sum, our findings suggest that the brain can flexibly perform multiple tasks by compositionally combining task-relevant neural representations across tasks.
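Whether two tasks engage a shared coding subspace can be illustrated with a small sketch that fits a PCA subspace to population activity from each task and measures their alignment via principal angles. The synthetic data, the PCA-based subspace estimate, and the alignment metric below are assumptions for illustration, not the study's analysis.

```python
# Minimal sketch: subspace alignment between two tasks via cosines of principal angles.
# Values near 1 indicate that the tasks share coding directions.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
n_trials, n_neurons, k = 200, 100, 4

shared = np.linalg.qr(rng.normal(size=(n_neurons, k)))[0]     # common coding directions

def simulate(task_gain):
    latents = task_gain * rng.normal(size=(n_trials, k))
    return latents @ shared.T + 0.2 * rng.normal(size=(n_trials, n_neurons))

X_taskA, X_taskB = simulate(1.0), simulate(1.5)

W_A = PCA(n_components=k).fit(X_taskA).components_            # (k, n_neurons)
W_B = PCA(n_components=k).fit(X_taskB).components_

# Singular values of W_A @ W_B.T are the cosines of the principal angles
# between the two k-dimensional subspaces.
cosines = np.linalg.svd(W_A @ W_B.T, compute_uv=False)
print("subspace alignment (mean cosine of principal angles):", round(float(cosines.mean()), 2))
```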
Affiliation(s)
- Sina Tafazoli
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Adel Ardalan
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Nikola T. Markov
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Motoaki Uchimura
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Nathaniel D. Daw
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Department of Psychology, Princeton University, Princeton, NJ, USA
- Timothy J. Buschman
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Department of Psychology, Princeton University, Princeton, NJ, USA
6
Verzelli P, Tchumatchenko T, Kotaleski JH. Editorial overview: Computational neuroscience as a bridge between artificial intelligence, modeling and data. Curr Opin Neurobiol 2024;84:102835. PMID: 38183889. DOI: 10.1016/j.conb.2023.102835.
Affiliation(s)
- Pietro Verzelli
- Institute of Experimental Epileptology and Cognition Research, University of Bonn Medical Center, Bonn, Germany.
- Tatjana Tchumatchenko
- Institute of Experimental Epileptology and Cognition Research, University of Bonn Medical Center, Bonn, Germany.
- Jeanette Hellgren Kotaleski
- Department of Computer Science, Science for Life Laboratory, KTH Royal Institute of Technology, Stockholm, Sweden; Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden