1. Harris JA. Modelling the acquisition of Pavlovian conditioning. Neurobiol Learn Mem 2025; 219:108059. PMID: 40300748. DOI: 10.1016/j.nlm.2025.108059. Received 12/18/2024; revised 04/17/2025; accepted 04/26/2025; indexed 05/01/2025.
Abstract
Pavlovian conditioning is a fundamental learning process that allows animals to anticipate and respond to significant environmental events. This review examines the key properties of the relationship between the conditioned stimulus (CS) and unconditioned stimulus (US) that influence learning, focussing on the temporal proximity of the CS and US, the spacing of trials (pairings of the CS and US), and the contingency between the CS and US. These properties have been touchstones for models of associative learning. Two primary theoretical approaches are contrasted here. Connection strength models, exemplified by the Rescorla-Wagner model (Rescorla & Wagner, 1972), describe learning as trial-by-trial changes in the strength of an associative bond based on prediction errors. In time-based models of learning (e.g., Gallistel & Gibbon, 2000) animals encode and remember temporal intervals and rates of reinforcement. The integration of Information Theory into time-based models (Balsam & Gallistel, 2009) provides a mathematical framework for quantifying the effects of proximity, trial spacing, and contingency in terms of how much the CS reduces uncertainty about the US. The present paper incorporates a trial-by-trial Bayesian updating process into the information theoretic account to describe how uncertainty about the CS-US interval changes across conditioning. This Bayesian process is shown to account for empirical evidence about the way that responding changes continuously over conditioning trials.
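For readers unfamiliar with the connection strength account this review contrasts with time-based models, the Rescorla-Wagner error-correction rule is compact enough to sketch. This is a minimal toy illustration, not the review's implementation; the learning rate and asymptote values are arbitrary:

```python
# Minimal sketch of the Rescorla-Wagner delta rule for a single CS.
# alpha (learning rate) and lam (US-supported asymptote) are illustrative
# values chosen here, not parameters taken from the review.
def rescorla_wagner(trials, alpha=0.3, lam=1.0, v=0.0):
    """Return associative strength V after each trial.

    trials: iterable of 1 (CS paired with US) or 0 (CS presented alone).
    v: initial associative strength.
    """
    history = []
    for us in trials:
        v += alpha * (lam * us - v)  # change is proportional to prediction error
        history.append(v)
    return history

acquisition = rescorla_wagner([1] * 10)                    # V climbs toward lam
extinction = rescorla_wagner([0] * 10, v=acquisition[-1])  # V decays without the US
```

The same prediction-error term drives both curves, which is why this family of models describes learning as trial-by-trial changes in bond strength rather than as encoding of temporal intervals.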
2. Qian L, Burrell M, Hennig JA, Matias S, Murthy VN, Gershman SJ, Uchida N. Prospective contingency explains behavior and dopamine signals during associative learning. Nat Neurosci 2025:10.1038/s41593-025-01915-4. PMID: 40102680. DOI: 10.1038/s41593-025-01915-4. Received 02/07/2024; accepted 02/06/2025; indexed 03/20/2025.
Abstract
Associative learning depends on contingency, the degree to which a stimulus predicts an outcome. Despite its importance, the neural mechanisms linking contingency to behavior remain elusive. In the present study, we examined the dopamine activity in the ventral striatum-a signal implicated in associative learning-in a Pavlovian contingency degradation task in mice. We show that both anticipatory licking and dopamine responses to a conditioned stimulus decreased when additional rewards were delivered uncued, but remained unchanged if additional rewards were cued. These results conflict with contingency-based accounts using a traditional definition of contingency or a new causal learning model (ANCCR), but can be explained by temporal difference (TD) learning models equipped with an appropriate intertrial interval state representation. Recurrent neural networks trained within a TD framework develop state representations akin to our best 'handcrafted' model. Our findings suggest that the TD error can be a measure that describes both contingency and dopaminergic activity.
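The temporal difference account this abstract favors can be sketched in a few lines. The state names, transition structure, and parameters below are deliberate simplifications (the intertrial interval is collapsed to a zero-value baseline state), not the paper's actual state representation:

```python
# Toy tabular TD(0) sketch of cue-reward learning. Each trial is a short chain:
# ITI -> cue -> US (terminal). All names and parameters are illustrative.
def td_step(V, s, s_next, r, alpha=0.1, gamma=0.9):
    delta = r + gamma * V.get(s_next, 0.0) - V[s]  # TD error (dopamine-like signal)
    V[s] += alpha * delta
    return delta

V = {"ITI": 0.0, "cue": 0.0}
for _ in range(500):
    cue_delta = td_step(V, "ITI", "cue", 0.0)  # response when the cue appears
    us_delta = td_step(V, "cue", "US", 1.0)    # response at reward delivery
    V["ITI"] = 0.0  # crude stand-in for a long, effectively unpredictable ITI
```

Over training the TD error migrates from reward delivery to cue onset; the paper's argument is that with an appropriately structured ITI state, the same error signal also tracks contingency degradation.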
Affiliation(s)
- Lechen Qian
  - Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, USA
  - Center for Brain Science, Harvard University, Cambridge, MA, USA
- Mark Burrell
  - Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, USA
  - Center for Brain Science, Harvard University, Cambridge, MA, USA
- Jay A Hennig
  - Center for Brain Science, Harvard University, Cambridge, MA, USA
  - Department of Psychology, Harvard University, Cambridge, MA, USA
- Sara Matias
  - Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, USA
  - Center for Brain Science, Harvard University, Cambridge, MA, USA
- Venkatesh N Murthy
  - Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, USA
  - Center for Brain Science, Harvard University, Cambridge, MA, USA
- Samuel J Gershman
  - Center for Brain Science, Harvard University, Cambridge, MA, USA
  - Department of Psychology, Harvard University, Cambridge, MA, USA
- Naoshige Uchida
  - Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, USA
  - Center for Brain Science, Harvard University, Cambridge, MA, USA
3. Lee H. Noise Resilience of Successor and Predecessor Feature Algorithms in One- and Two-Dimensional Environments. Sensors (Basel) 2025; 25:979. PMID: 39943618. PMCID: PMC11820235. DOI: 10.3390/s25030979. Received 01/02/2025; revised 01/25/2025; accepted 02/04/2025; indexed 02/16/2025.
Abstract
Noisy inputs pose significant challenges for reinforcement learning (RL) agents navigating real-world environments. While animals demonstrate robust spatial learning under dynamic conditions, the mechanisms underlying this resilience remain understudied in RL frameworks. This paper introduces a comparative analysis of predecessor feature (PF) and successor feature (SF) algorithms under controlled noise conditions, revealing several insights. Our key finding is that SF algorithms achieve superior noise resilience compared to traditional approaches, with cumulative rewards of 2216.88 ± 3.83 (mean ± SEM) even under high noise conditions (σ = 0.5) in one-dimensional environments, whereas Q-learning achieves only 19.22 ± 0.57. In two-dimensional environments, we find a nonlinear relationship between noise level and algorithm performance, with SF showing optimal performance at moderate noise levels (σ = 0.25), achieving cumulative rewards of 2886.03 ± 1.63 compared to 2798.16 ± 3.54 for Q-learning. The λ parameter in PF learning is a significant factor, with λ = 0.7 consistently performing best under most noise conditions. These findings bridge computational neuroscience and RL, offering practical insights for developing noise-resistant learning systems. Our results have direct applications in robotics, autonomous navigation, and sensor-based AI systems, particularly in environments with inherent observational uncertainty.
Affiliation(s)
- Hyunsu Lee
  - Department of Physiology, School of Medicine, Pusan National University, Busandaehak-ro, Yangsan 50612, Republic of Korea
  - Research Institute for Convergence of Biomedical Science and Technology, Pusan National University Yangsan Hospital, Yangsan 50612, Republic of Korea
4. Tan L, Qiu Y, Qiu L, Lin S, Li J, Liao J, Zhang Y, Zou W, Huang R. The medial and lateral orbitofrontal cortex jointly represent the cognitive map of task space. Commun Biol 2025; 8:163. PMID: 39900714. PMCID: PMC11791032. DOI: 10.1038/s42003-025-07588-w. Received 07/28/2024; accepted 01/21/2025; indexed 02/05/2025. Open access.
Abstract
A cognitive map is an internal model of the world's causal structure, crucial for adaptive behavior. The orbitofrontal cortex (OFC) is a central node in decision-making and cognitive map representation. However, it remains unclear how the medial OFC (mOFC) and lateral OFC (lOFC) contribute to the formation of cognitive maps in humans. Using a multi-step sequential task and multivariate analyses of functional magnetic resonance imaging (fMRI) data, we found that the mOFC and lOFC play complementary but dissociable roles in this process. Specifically, the mOFC represents all hidden task-state components. The lOFC and dorsolateral prefrontal cortex (dlPFC) encode abstract rules governing structure knowledge across task states. Furthermore, the two orbitofrontal subregions are functionally connected, sharing hidden-state information to construct a representation of the task structure. Collectively, these findings deepen our understanding of how the brain constructs abstract cognitive maps in a task-relevant space.
Affiliation(s)
- Liwei Tan
  - School of Psychology; Center for Studies of Psychological Application; Guangdong Key Laboratory of Mental Health and Cognitive Science, Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education; South China Normal University, Guangzhou, China
- Yidan Qiu
  - School of Psychology; Center for Studies of Psychological Application; Guangdong Key Laboratory of Mental Health and Cognitive Science, Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education; South China Normal University, Guangzhou, China
- Lixin Qiu
  - School of Psychology; Center for Studies of Psychological Application; Guangdong Key Laboratory of Mental Health and Cognitive Science, Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education; South China Normal University, Guangzhou, China
- Shuting Lin
  - School of Psychology; Center for Studies of Psychological Application; Guangdong Key Laboratory of Mental Health and Cognitive Science, Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education; South China Normal University, Guangzhou, China
- Jinhui Li
  - School of Psychology; Center for Studies of Psychological Application; Guangdong Key Laboratory of Mental Health and Cognitive Science, Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education; South China Normal University, Guangzhou, China
- Jiajun Liao
  - School of Psychology; Center for Studies of Psychological Application; Guangdong Key Laboratory of Mental Health and Cognitive Science, Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education; South China Normal University, Guangzhou, China
- Yuting Zhang
  - School of Psychology; Center for Studies of Psychological Application; Guangdong Key Laboratory of Mental Health and Cognitive Science, Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education; South China Normal University, Guangzhou, China
- Wei Zou
  - Institute for Brain Research and Rehabilitation, South China Normal University, Guangzhou, China
- Ruiwang Huang
  - School of Psychology; Center for Studies of Psychological Application; Guangdong Key Laboratory of Mental Health and Cognitive Science, Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education; South China Normal University, Guangzhou, China
5. Zaki Y, Pennington ZT, Morales-Rodriguez D, Bacon ME, Ko B, Francisco TR, LaBanca AR, Sompolpong P, Dong Z, Lamsifer S, Chen HT, Carrillo Segura S, Christenson Wick Z, Silva AJ, Rajan K, van der Meer M, Fenton A, Shuman T, Cai DJ. Offline ensemble co-reactivation links memories across days. Nature 2025; 637:145-155. PMID: 39506117. PMCID: PMC11666460. DOI: 10.1038/s41586-024-08168-4. Received 08/18/2023; accepted 10/08/2024; indexed 11/08/2024.
Abstract
Memories are encoded in neural ensembles during learning [1-6] and are stabilized by post-learning reactivation [7-17]. Integrating recent experiences into existing memories ensures that memories contain the most recently available information, but how the brain accomplishes this critical process remains unclear. Here we show that in mice, a strong aversive experience drives offline ensemble reactivation of not only the recent aversive memory but also a neutral memory formed 2 days before, linking fear of the recent aversive memory to the previous neutral memory. Fear specifically links retrospectively, but not prospectively, to neutral memories across days. Consistent with previous studies, we find that the recent aversive memory ensemble is reactivated during the offline period after learning. However, a strong aversive experience also increases co-reactivation of the aversive and neutral memory ensembles during the offline period. Ensemble co-reactivation occurs more during wake than during sleep. Finally, the expression of fear in the neutral context is associated with reactivation of the shared ensemble between the aversive and neutral memories. Collectively, these results demonstrate that offline ensemble co-reactivation is a neural mechanism by which memories are integrated across days.
Affiliation(s)
- Yosif Zaki
  - Nash Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Zachary T Pennington
  - Nash Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Madeline E Bacon
  - Nash Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- BumJin Ko
  - Nash Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Taylor R Francisco
  - Nash Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Alexa R LaBanca
  - Nash Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Patlapa Sompolpong
  - Nash Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Zhe Dong
  - Nash Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Sophia Lamsifer
  - Nash Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Hung-Tu Chen
  - Department of Psychological & Brain Sciences, Dartmouth College, Hanover, NH, USA
- Simón Carrillo Segura
  - Graduate Program in Mechanical and Aerospace Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, USA
- Zoé Christenson Wick
  - Nash Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Alcino J Silva
  - Department of Neurobiology, Psychiatry & Biobehavioral Sciences and Psychology, Integrative Center for Learning and Memory, Brain Research Institute, University of California, Los Angeles, Los Angeles, CA, USA
- Kanaka Rajan
  - Nash Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- André Fenton
  - Center for Neural Science, New York University, New York, NY, USA
  - Neuroscience Institute at the NYU Langone Medical Center, New York, NY, USA
- Tristan Shuman
  - Nash Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Denise J Cai
  - Nash Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA
6. Carvalho W, Tomov MS, de Cothi W, Barry C, Gershman SJ. Predictive Representations: Building Blocks of Intelligence. Neural Comput 2024; 36:2225-2298. PMID: 39212963. DOI: 10.1162/neco_a_01705. Received 02/09/2024; accepted 06/10/2024; indexed 09/04/2024.
Abstract
Adaptive behavior often requires predicting future events. The theory of reinforcement learning prescribes what kinds of predictive representations are useful and how to compute them. This review integrates these theoretical ideas with work on cognition and neuroscience. We pay special attention to the successor representation and its generalizations, which have been widely applied as both engineering tools and models of brain function. This convergence suggests that particular kinds of predictive representations may function as versatile building blocks of intelligence.
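The successor representation (SR) at the heart of this review is compact enough to sketch. The toy example below (a three-state ring, with all parameters chosen for illustration rather than taken from the review) shows the TD-style update of the SR and how values factor into SR times reward:

```python
import numpy as np

# Tabular successor representation on a 3-state deterministic ring (illustrative).
# M[s, s2] estimates the expected discounted future occupancy of s2 starting from s.
alpha, gamma, n = 0.1, 0.9, 3
M = np.zeros((n, n))

s = 0
for _ in range(3000):
    s_next = (s + 1) % n  # ring dynamics: 0 -> 1 -> 2 -> 0 -> ...
    onehot = np.eye(n)[s]
    M[s] += alpha * (onehot + gamma * M[s_next] - M[s])  # TD update of the SR row
    s = s_next

# Values factor as V = M @ r, so re-evaluating a changed reward vector
# requires no further learning of the environment's dynamics.
r = np.array([0.0, 0.0, 1.0])  # reward placed in state 2
V = M @ r
```

This factorization (learned dynamics in M, reward in r) is what makes the SR a versatile building block: M transfers across reward changes, which is one of the generalizations the review surveys.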
Affiliation(s)
- Wilka Carvalho
  - Kempner Institute for the Study of Natural and Artificial Intelligence, Harvard University, Cambridge, MA 02134, U.S.A.
- Momchil S Tomov
  - Department of Psychology and Center for Brain Science, Harvard University, Cambridge, MA 02134, U.S.A.
  - Motional AD LLC, Boston, MA 02210, U.S.A.
- William de Cothi
  - Department of Cell and Developmental Biology, University College London, London WC1E 7JE, U.K.
- Caswell Barry
  - Department of Cell and Developmental Biology, University College London, London WC1E 7JE, U.K.
- Samuel J Gershman
  - Kempner Institute for the Study of Natural and Artificial Intelligence, and Department of Psychology and Center for Brain Science, Harvard University, Cambridge, MA 02134, U.S.A.
  - Center for Brains, Minds, and Machines, MIT, Cambridge, MA 02139, U.S.A.
7. Seo I, Lee H. Investigating Transfer Learning in Noisy Environments: A Study of Predecessor and Successor Features in Spatial Learning Using a T-Maze. Sensors (Basel) 2024; 24:6419. PMID: 39409459. PMCID: PMC11479366. DOI: 10.3390/s24196419. Received 08/29/2024; revised 09/27/2024; accepted 10/02/2024; indexed 10/20/2024.
Abstract
In this study, we investigate the adaptability of artificial agents within a noisy T-maze, using Markov decision processes (MDPs) and successor feature (SF) and predecessor feature (PF) learning algorithms. Our focus is on quantifying how varying the hyperparameters, specifically the reward learning rate (αr) and the eligibility trace decay rate (λ), can enhance their adaptability. Adaptation is evaluated by analyzing cumulative reward, step length, adaptation rate, and adaptation step length, and the relationships between these metrics, using Spearman's correlation tests and linear regression. Our findings reveal that an αr of 0.9 consistently yields superior adaptation across all metrics at a noise level of 0.05. However, the optimal setting for λ varies by metric and context. In discussing these results, we emphasize the critical role of hyperparameter optimization in refining the performance and transfer-learning efficacy of learning algorithms. This research advances our understanding of the functionality of PF and SF algorithms, particularly in navigating the inherent uncertainty of transfer learning tasks. By offering insights into optimal hyperparameter configurations, this study contributes to the development of more adaptive and robust learning algorithms, paving the way for future explorations in artificial intelligence and neuroscience.
Affiliation(s)
- Incheol Seo
  - Department of Immunology, Kyungpook National University School of Medicine, Daegu 41944, Republic of Korea
- Hyunsu Lee
  - Department of Physiology, Pusan National University School of Medicine, Yangsan 50612, Republic of Korea
  - Research Institute for Convergence of Biomedical Science and Technology, Pusan National University Yangsan Hospital, Yangsan 50612, Republic of Korea
8. Sharp PB, Eldar E. Humans adaptively deploy forward and backward prediction. Nat Hum Behav 2024; 8:1726-1737. PMID: 39014069. PMCID: PMC11878374. DOI: 10.1038/s41562-024-01930-8. Received 04/26/2023; accepted 06/17/2024; indexed 07/18/2024.
Abstract
The formation of predictions is essential to our ability to build models of the world and use them for intelligent decision-making. Here we challenge the dominant assumption that humans form only forward predictions, which specify what future events are likely to follow a given present event. We demonstrate that in some environments it is more efficient to use backward prediction, which specifies what present events are likely to precede a given future event. This is particularly the case in diverging environments, where possible future events outnumber possible present events. Correspondingly, in six preregistered experiments (n = 1,299) involving both simple decision-making and more challenging planning tasks, we find that humans engage in backward prediction in diverging environments and use forward prediction in converging environments. We thus establish that humans adaptively deploy forward and backward prediction in the service of efficient decision-making.
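In probabilistic terms, a backward prediction is the Bayesian inversion of a forward model. The sketch below is a toy illustration of that relationship only; the states, probabilities, and variable names are invented for the example and are not from the paper:

```python
# Forward model: P(future | present). Backward prediction inverts it by
# Bayes' rule: P(present | future) is proportional to
# P(future | present) * P(present). All numbers here are illustrative.
present_prior = {"A": 0.5, "B": 0.5}
forward = {
    "A": {"x": 0.8, "y": 0.2},
    "B": {"x": 0.3, "y": 0.7},
}

def backward(future):
    """Infer which present state likely preceded an observed future event."""
    joint = {s: present_prior[s] * forward[s][future] for s in present_prior}
    z = sum(joint.values())
    return {s: p / z for s, p in joint.items()}

posterior = backward("x")  # A is the more likely predecessor of x
```

When futures outnumber present states (a diverging environment), representing and updating the backward model over the smaller set of present states can be cheaper, which is the efficiency argument the experiments test.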
Affiliation(s)
- Paul B Sharp
  - Department of Psychology, Hebrew University of Jerusalem, Jerusalem, Israel
  - Department of Cognitive and Brain Sciences, Hebrew University of Jerusalem, Jerusalem, Israel
  - Department of Psychology, Yale University, New Haven, CT, USA
- Eran Eldar
  - Department of Psychology, Hebrew University of Jerusalem, Jerusalem, Israel
  - Department of Cognitive and Brain Sciences, Hebrew University of Jerusalem, Jerusalem, Israel
9. Cone I, Clopath C, Shouval HZ. Learning to express reward prediction error-like dopaminergic activity requires plastic representations of time. Nat Commun 2024; 15:5856. PMID: 38997276. PMCID: PMC11245539. DOI: 10.1038/s41467-024-50205-3. Received 08/23/2023; accepted 07/02/2024; indexed 07/14/2024. Open access.
Abstract
The dominant theoretical framework to account for reinforcement learning in the brain is temporal difference (TD) learning, whereby certain units signal reward prediction errors (RPEs). The TD algorithm has traditionally been mapped onto the dopaminergic system, as the firing properties of dopamine neurons can resemble RPEs. However, certain predictions of TD learning are inconsistent with experimental results, and previous implementations of the algorithm have made unscalable assumptions regarding stimulus-specific fixed temporal bases. We propose an alternative framework to describe dopamine signaling in the brain, FLEX (Flexibly Learned Errors in Expected Reward). In FLEX, dopamine release is similar, but not identical, to RPE, leading to predictions that contrast with those of TD. While FLEX itself is a general theoretical framework, we describe a specific, biophysically plausible implementation, the results of which are consistent with a preponderance of both existing and reanalyzed experimental data.
Affiliation(s)
- Ian Cone
  - Department of Bioengineering, Imperial College London, London, UK
  - Department of Neurobiology and Anatomy, University of Texas Medical School at Houston, Houston, TX, USA
  - Applied Physics Program, Rice University, Houston, TX, USA
- Claudia Clopath
  - Department of Bioengineering, Imperial College London, London, UK
- Harel Z Shouval
  - Department of Neurobiology and Anatomy, University of Texas Medical School at Houston, Houston, TX, USA
  - Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA
10. Gabriel DB, Havugimana F, Liley AE, Aguilar I, Yeasin M, Simon NW. Lateral Orbitofrontal Cortex Encodes Presence of Risk and Subjective Risk Preference During Decision-Making. bioRxiv [Preprint] 2024:2024.04.08.588332. PMID: 38645204. PMCID: PMC11030364. DOI: 10.1101/2024.04.08.588332. Indexed 04/23/2024.
Abstract
Adaptive decision-making requires consideration of the objective risks and rewards associated with each option, as well as subjective preference for risky/safe alternatives. Inaccurate risk/reward estimations can engender excessive risk-taking, a central trait in many psychiatric disorders. The lateral orbitofrontal cortex (lOFC) has been linked to many disorders associated with excessively risky behavior and is ideally situated to mediate risky decision-making. Here, we used single-unit electrophysiology to measure neuronal activity in the lOFC of freely moving rats performing a punishment-based risky decision-making task. Subjects chose between a small, safe reward and a large reward associated with either 0% or 50% risk of concurrent punishment. lOFC activity repeatedly encoded current risk in the environment throughout the decision-making sequence, signaling risk before, during, and after a choice. In addition, lOFC encoded reward magnitude, although this information was only evident during action selection. A Random Forest classifier used neural data to accurately predict the risk of punishment in any given trial, and the ability to predict choice via lOFC activity differentiated between risk-preferring and risk-averse rats. Finally, risk-preferring subjects demonstrated reduced lOFC encoding of risk and increased encoding of reward magnitude. These findings suggest the lOFC may serve as a central decision-making hub in which external, environmental information converges with internal, subjective information to guide decision-making in the face of punishment risk.
Affiliation(s)
- Daniel B.K. Gabriel
  - Department of Psychiatry, Indiana University School of Medicine, Indianapolis, IN 46202
- Felix Havugimana
  - Department of Computer Engineering, University of Memphis, Memphis, TN, 38152
- Anna E. Liley
  - Institut du Cerveau/Paris Brain Institute, Paris, France, 75013
- Ivan Aguilar
  - Department of Psychology, University of Memphis, Memphis, TN, 38152
- Mohammed Yeasin
  - Department of Computer Engineering, University of Memphis, Memphis, TN, 38152
11. Zhou M, Wu B, Jeong H, Burke DA, Namboodiri VMK. An open-source behavior controller for associative learning and memory (B-CALM). Behav Res Methods 2024; 56:2695-2710. PMID: 37464151. PMCID: PMC10898869. DOI: 10.3758/s13428-023-02182-6. Accepted 06/23/2023; indexed 07/20/2023.
Abstract
Associative learning and memory, i.e., learning and remembering the associations between environmental stimuli, self-generated actions, and outcomes such as rewards or punishments, are critical for the well-being of animals. Hence, the neural mechanisms underlying these processes are extensively studied using behavioral tasks in laboratory animals. Traditionally, these tasks have been controlled using commercial hardware and software, which limits scalability and accessibility due to their cost. More recently, owing to the revolution in microcontrollers and microcomputers, several general-purpose, open-source solutions have been advanced for controlling neuroscientific behavioral tasks. While these solutions are powerful because of their flexibility and general-purpose nature, for the same reasons they suffer from some disadvantages, including the need for considerable programming expertise, limited online visualization, and slower-than-optimal response latencies for any specific task. Here, to mitigate these concerns, we present an open-source behavior controller for associative learning and memory (B-CALM). B-CALM provides an integrated suite that can control a host of associative learning and memory behaviors. As proof of principle for its applicability, we show data from head-fixed mice learning Pavlovian conditioning, operant conditioning, and discrimination learning, as well as a timing task and a choice task. These can be run directly from a user-friendly graphical user interface (GUI) written in MATLAB that controls many independently running Arduino Mega microcontrollers in parallel (one per behavior box). In sum, B-CALM will enable researchers to execute a wide variety of associative learning and memory tasks in a scalable, accurate, and user-friendly manner.
Affiliation(s)
- Mingkang Zhou
  - Department of Neurology, University of California, San Francisco, CA, USA
  - Neuroscience Graduate Program, University of California, San Francisco, CA, USA
- Brenda Wu
  - Department of Neurology, University of California, San Francisco, CA, USA
- Huijeong Jeong
  - Department of Neurology, University of California, San Francisco, CA, USA
- Dennis A Burke
  - Department of Neurology, University of California, San Francisco, CA, USA
- Vijay Mohan K Namboodiri
  - Department of Neurology, University of California, San Francisco, CA, USA
  - Neuroscience Graduate Program, University of California, San Francisco, CA, USA
  - Weill Institute for Neuroscience, Kavli Institute for Fundamental Neuroscience, Center for Integrative Neuroscience, University of California, San Francisco, CA, USA
12. Qian L, Burrell M, Hennig JA, Matias S, Murthy VN, Gershman SJ, Uchida N. The role of prospective contingency in the control of behavior and dopamine signals during associative learning. bioRxiv [Preprint] 2024:2024.02.05.578961. PMID: 38370735. PMCID: PMC10871210. DOI: 10.1101/2024.02.05.578961. Indexed 02/20/2024.
Abstract
Associative learning depends on contingency, the degree to which a stimulus predicts an outcome. Despite its importance, the neural mechanisms linking contingency to behavior remain elusive. Here we examined the dopamine activity in the ventral striatum - a signal implicated in associative learning - in a Pavlovian contingency degradation task in mice. We show that both anticipatory licking and dopamine responses to a conditioned stimulus decreased when additional rewards were delivered uncued, but remained unchanged if additional rewards were cued. These results conflict with contingency-based accounts using a traditional definition of contingency or a novel causal learning model (ANCCR), but can be explained by temporal difference (TD) learning models equipped with an appropriate inter-trial-interval (ITI) state representation. Recurrent neural networks trained within a TD framework develop state representations like our best 'handcrafted' model. Our findings suggest that the TD error can be a measure that describes both contingency and dopaminergic activity.
Affiliation(s)
- Lechen Qian
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, USA
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- These authors contributed equally
| | - Mark Burrell
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, USA
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- These authors contributed equally
| | - Jay A. Hennig
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Department of Psychology, Harvard University, Cambridge, MA, USA
| | - Sara Matias
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, USA
- Center for Brain Science, Harvard University, Cambridge, MA, USA
| | - Venkatesh N. Murthy
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, USA
- Center for Brain Science, Harvard University, Cambridge, MA, USA
| | - Samuel J. Gershman
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Department of Psychology, Harvard University, Cambridge, MA, USA
| | - Naoshige Uchida
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, USA
- Center for Brain Science, Harvard University, Cambridge, MA, USA
| |
Collapse
|
13
|
Jeong H, Namboodiri VMK, Jung MW, Andermann ML. Sensory cortical ensembles exhibit differential coupling to ripples in distinct hippocampal subregions. Curr Biol 2023; 33:5185-5198.e4. [PMID: 37995696 PMCID: PMC10842729 DOI: 10.1016/j.cub.2023.10.073] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2023] [Revised: 08/29/2023] [Accepted: 10/31/2023] [Indexed: 11/25/2023]
Abstract
Cortical neurons activated during recent experiences often reactivate with dorsal hippocampal CA1 ripples during subsequent rest. Less is known about cortical interactions with intermediate hippocampal CA1, whose connectivity, functions, and ripple events differ from dorsal CA1. We identified three clusters of putative excitatory neurons in mouse visual cortex that are preferentially excited together with either dorsal or intermediate CA1 ripples or suppressed before both ripples. Neurons in each cluster were evenly distributed across primary and higher visual cortices and co-active even in the absence of ripples. These ensembles exhibited similar visual responses but different coupling to thalamus and pupil-indexed arousal. We observed a consistent activity sequence preceding and predicting ripples: (1) suppression of ripple-suppressed cortical neurons, (2) thalamic silence, and (3) activation of intermediate CA1-ripple-activated cortical neurons. We propose that coordinated dynamics of these ensembles relay visual experiences to distinct hippocampal subregions for incorporation into different cognitive maps.
Collapse
Affiliation(s)
- Huijeong Jeong
- Department of Neurology, University of California, San Francisco, 1651 4th Street, San Francisco, CA 94158, USA; Center for Synaptic Brain Dysfunctions, Institute for Basic Science, 291 Daehak-ro, Daejeon 34141, Republic of Korea; Department of Biological Sciences, Korea Advanced Institute of Science and Technology, 291 Daehak-ro, Daejeon 34141, Republic of Korea
| | - Vijay Mohan K Namboodiri
- Department of Neurology, University of California, San Francisco, 1651 4th Street, San Francisco, CA 94158, USA; Neuroscience Graduate Program, University of California, San Francisco, 1651 4th Street, San Francisco, CA 94158, USA; Weill Institute for Neuroscience, Kavli Institute for Fundamental Neuroscience, Center for Integrative Neuroscience, University of California, San Francisco, 1651 4th Street, San Francisco, CA 94158, USA.
| | - Min Whan Jung
- Center for Synaptic Brain Dysfunctions, Institute for Basic Science, 291 Daehak-ro, Daejeon 34141, Republic of Korea; Department of Biological Sciences, Korea Advanced Institute of Science and Technology, 291 Daehak-ro, Daejeon 34141, Republic of Korea.
| | - Mark L Andermann
- Division of Endocrinology, Metabolism, and Diabetes, Beth Israel Deaconess Medical Center, Harvard Medical School, 330 Brookline Avenue, Boston, MA 02215, USA; Department of Neurobiology, Harvard Medical School, 220 Longwood Avenue, Boston, MA 02115, USA.
| |
Collapse
|
14
|
Abstract
The nervous system coordinates various motivated behaviors such as feeding, drinking, and escape to promote survival and evolutionary fitness. Although the precise behavioral repertoires required for distinct motivated behaviors are diverse, common features such as approach or avoidance suggest that common brain substrates are required for a wide range of motivated behaviors. In this Review, I describe a framework by which neural circuits specified for some innate drives regulate the activity of ventral tegmental area (VTA) dopamine neurons to reinforce ongoing or planned actions to fulfill motivational demands. This framework may explain why signaling from VTA dopamine neurons is ubiquitously involved in many types of diverse volitional motivated actions, as well as how sensory and interoceptive cues can initiate specific goal-directed actions.
Collapse
Affiliation(s)
- Garret D Stuber
- Center for the Neurobiology of Addiction, Pain, and Emotion, University of Washington, Seattle, WA 98195, USA
- Department of Anesthesiology and Pain Medicine, University of Washington, Seattle, WA 98195, USA
- Department of Pharmacology, University of Washington, Seattle, WA 98195, USA
| |
Collapse
|
15
|
Cone I, Clopath C, Shouval HZ. Learning to Express Reward Prediction Error-like Dopaminergic Activity Requires Plastic Representations of Time. RESEARCH SQUARE 2023:rs.3.rs-3289985. [PMID: 37790466 PMCID: PMC10543312 DOI: 10.21203/rs.3.rs-3289985/v1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/05/2023]
Abstract
The dominant theoretical framework to account for reinforcement learning in the brain is temporal difference (TD) reinforcement learning. The TD framework predicts that some neuronal elements should represent the reward prediction error (RPE), which means they signal the difference between the expected future rewards and the actual rewards. The prominence of the TD theory arises from the observation that firing properties of dopaminergic neurons in the ventral tegmental area appear similar to those of RPE model-neurons in TD learning. Previous implementations of TD learning assume a fixed temporal basis for each stimulus that might eventually predict a reward. Here we show that such a fixed temporal basis is implausible and that certain predictions of TD learning are inconsistent with experiments. We propose instead an alternative theoretical framework, termed FLEX (Flexibly Learned Errors in Expected Reward). In FLEX, feature-specific representations of time are learned, allowing for neural representations of stimuli to adjust their timing and relation to rewards in an online manner. In FLEX, dopamine acts as an instructive signal which helps build temporal models of the environment. FLEX is a general theoretical framework that has many possible biophysical implementations. To show that FLEX is a feasible approach, we present a specific biophysically plausible model which implements the principles of FLEX. We show that this implementation can account for various reinforcement learning paradigms, and that its results and predictions are consistent with a preponderance of both existing and reanalyzed experimental data.
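The "fixed temporal basis" the abstract critiques is typically a complete serial compound: one value weight per time step after stimulus onset. A minimal sketch of that baseline (illustrative parameters; this is the representation FLEX argues against, not the FLEX model itself):

```python
# Complete serial compound (CSC) TD learning: the stimulus activates a
# distinct basis unit at each time step after onset, and each unit
# carries its own value weight.
T = 10                      # reward arrives T steps after stimulus onset
w = [0.0] * T               # one value weight per post-stimulus time step
alpha, gamma = 0.1, 1.0

for trial in range(500):
    for t in range(T):
        r = 1.0 if t == T - 1 else 0.0            # reward at the final step
        v_next = w[t + 1] if t + 1 < T else 0.0   # terminal after reward
        delta = r + gamma * v_next - w[t]         # RPE at step t
        w[t] += alpha * delta
```

With training, value backs up step by step until every post-stimulus weight predicts the reward; FLEX instead lets the temporal representation itself adapt rather than fixing one unit per clock tick.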
Collapse
Affiliation(s)
- Ian Cone
- Department of Bioengineering, Imperial College London, London, United Kingdom
- Department of Neurobiology and Anatomy, University of Texas Medical School at Houston, Houston, TX
- Applied Physics Program, Rice University, Houston, TX
| | - Claudia Clopath
- Department of Bioengineering, Imperial College London, London, United Kingdom
| | - Harel Z Shouval
- Department of Neurobiology and Anatomy, University of Texas Medical School at Houston, Houston, TX
- Department of Electrical and Computer Engineering, Rice University, Houston, TX
| |
Collapse
|
16
|
Zaki Y, Pennington ZT, Morales-Rodriguez D, Francisco TR, LaBanca AR, Dong Z, Lamsifer S, Segura SC, Chen HT, Wick ZC, Silva AJ, van der Meer M, Shuman T, Fenton A, Rajan K, Cai DJ. Aversive experience drives offline ensemble reactivation to link memories across days. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.03.13.532469. [PMID: 36993254 PMCID: PMC10054942 DOI: 10.1101/2023.03.13.532469] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
Memories are encoded in neural ensembles during learning and stabilized by post-learning reactivation. Integrating recent experiences into existing memories ensures that memories contain the most recently available information, but how the brain accomplishes this critical process remains unknown. Here we show that in mice, a strong aversive experience drives the offline ensemble reactivation of not only the recent aversive memory but also a neutral memory formed two days prior, linking the fear from the recent aversive memory to the previous neutral memory. We find that fear specifically links retrospectively, but not prospectively, to neutral memories across days. Consistent with prior studies, we find reactivation of the recent aversive memory ensemble during the offline period following learning. However, a strong aversive experience also increases co-reactivation of the aversive and neutral memory ensembles during the offline period. Finally, the expression of fear in the neutral context is associated with reactivation of the shared ensemble between the aversive and neutral memories. Taken together, these results demonstrate that strong aversive experience can drive retrospective memory-linking through the offline co-reactivation of recent memory ensembles with memory ensembles formed days prior, providing a neural mechanism by which memories can be integrated across days.
Collapse
Affiliation(s)
- Yosif Zaki
- Nash Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, 10029
| | - Zachary T. Pennington
- Nash Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, 10029
| | | | - Taylor R. Francisco
- Nash Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, 10029
| | - Alexa R. LaBanca
- Nash Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, 10029
| | - Zhe Dong
- Nash Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, 10029
| | - Sophia Lamsifer
- Nash Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, 10029
| | - Simón Carrillo Segura
- Graduate Program in Mechanical and Aerospace Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, 11201
| | - Hung-Tu Chen
- Department of Psychological & Brain Sciences, Dartmouth College, Hanover, NH, 03755
| | - Zoé Christenson Wick
- Nash Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, 10029
| | - Alcino J. Silva
- Department of Neurobiology, Psychiatry & Biobehavioral Sciences, and Psychology, Integrative Center for Learning and Memory, Brain Research Institute, UCLA, Los Angeles, CA 90095
| | | | - Tristan Shuman
- Nash Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, 10029
| | - André Fenton
- Center for Neural Science, New York University, New York, NY, 10003
- Neuroscience Institute at the NYU Langone Medical Center, New York, NY, 10016
| | - Kanaka Rajan
- Nash Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, 10029
| | - Denise J. Cai
- Nash Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, 10029
| |
Collapse
|
17
|
Jeong H, Namboodiri VMK, Jung MW, Andermann ML. Sensory cortical ensembles exhibit differential coupling to ripples in distinct hippocampal subregions. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.03.17.533028. [PMID: 36993665 PMCID: PMC10055189 DOI: 10.1101/2023.03.17.533028] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
Cortical neurons activated during recent experiences often reactivate with dorsal hippocampal CA1 sharp-wave ripples (SWRs) during subsequent rest. Less is known about cortical interactions with intermediate hippocampal CA1, whose connectivity, functions, and SWRs differ from those of dorsal CA1. We identified three clusters of visual cortical excitatory neurons that are excited together with either dorsal or intermediate CA1 SWRs, or suppressed before both SWRs. Neurons in each cluster were distributed across primary and higher visual cortices and co-active even in the absence of SWRs. These ensembles exhibited similar visual responses but different coupling to thalamus and pupil-indexed arousal. We observed a consistent activity sequence: (i) suppression of SWR-suppressed cortical neurons, (ii) thalamic silence, and (iii) activation of the cortical ensemble preceding and predicting intermediate CA1 SWRs. We propose that the coordinated dynamics of these ensembles relay visual experiences to distinct hippocampal subregions for incorporation into different cognitive maps.
Collapse
Affiliation(s)
- Huijeong Jeong
- Department of Neurology, University of California, San Francisco, CA 94158, USA
- Center for Synaptic Brain Dysfunctions, Institute for Basic Science, Daejeon 34141, Korea
- Department of Biological Sciences, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
| | - Vijay Mohan K Namboodiri
- Department of Neurology, University of California, San Francisco, CA 94158, USA
- Neuroscience Graduate Program, University of California, San Francisco, CA 94158, USA
- Weill Institute for Neuroscience, Kavli Institute for Fundamental Neuroscience, Center for Integrative Neuroscience, University of California, San Francisco 94158, CA, USA
| | - Min Whan Jung
- Center for Synaptic Brain Dysfunctions, Institute for Basic Science, Daejeon 34141, Korea
- Department of Biological Sciences, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
| | - Mark L. Andermann
- Division of Endocrinology, Metabolism, and Diabetes, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA 02115 USA
- Lead contact
| |
Collapse
|
18
|
Gallistel CR, Latham PE. Bringing Bayes and Shannon to the Study of Behavioural and Neurobiological Timing and Associative Learning. TIMING & TIME PERCEPTION 2022. [DOI: 10.1163/22134468-bja10069] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
Abstract
Bayesian parameter estimation and Shannon’s theory of information provide tools for analysing and understanding data from behavioural and neurobiological experiments on interval timing—and from experiments on Pavlovian and operant conditioning, because timing plays a fundamental role in associative learning. In this tutorial, we explain basic concepts behind these tools and show how to apply them to estimating, on a trial-by-trial, reinforcement-by-reinforcement and response-by-response basis, important parameters of timing behaviour and of the neurobiological manifestations of timing in the brain. These tools enable quantification of relevant variables in the trade-off between acting as an ideal observer should act and acting as an ideal agent should act, which is also known as the trade-off between exploration (information gathering) and exploitation (information utilization) in reinforcement learning. They enable comparing the strength of the evidence for a measurable association to the strength of the behavioural evidence that the association has been perceived. A GitHub site and an OSF site give public access to well-documented Matlab and Python code and to raw data to which these tools have been applied.
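Two of the tutorial's core tools can be sketched in a few lines: conjugate Bayesian updating of a reinforcement rate, and the Balsam-Gallistel informativeness of a CS in bits. The priors and intervals below are illustrative, not drawn from the paper's datasets:

```python
import math

# Reinforcement-by-reinforcement Bayesian updating: a Gamma(shape, rate)
# prior on the rate of a Poisson reinforcement process is conjugate, so
# each observed inter-reinforcement interval updates it in closed form.
a, b = 1.0, 1.0                            # weak illustrative prior
intervals = [12.0, 8.0, 11.0, 9.0, 10.0]   # seconds between reinforcements
for t in intervals:
    a += 1.0                               # one more reinforcement observed
    b += t                                 # more exposure time accumulated
posterior_mean_rate = a / b                # -> n / total time for weak priors

# Informativeness of a CS (Balsam & Gallistel): the bits by which the CS
# reduces uncertainty about US timing, log2 of the ratio of the
# background US-US interval to the CS-US interval.
cs_us, us_us = 10.0, 40.0                  # illustrative intervals (s)
informativeness_bits = math.log2(us_us / cs_us)
```

Here a CS that compresses the expected wait from 40 s to 10 s conveys 2 bits; the tutorial develops trial-by-trial versions of both quantities.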
Collapse
Affiliation(s)
- C. Randy Gallistel
- Professor Emeritus, Rutgers University, 252 7th Ave 10D, New York, NY 10001, USA
| | - Peter E. Latham
- Gatsby Computational Neuroscience Unit, Sainsbury Wellcome Centre for Neural Circuits and Behaviour, 25 Howland St., London W1T 4JG, UK
| |
Collapse
|
19
|
Jeong H, Taylor A, Floeder JR, Lohmann M, Mihalas S, Wu B, Zhou M, Burke DA, Namboodiri VMK. Mesolimbic dopamine release conveys causal associations. Science 2022; 378:eabq6740. [PMID: 36480599 PMCID: PMC9910357 DOI: 10.1126/science.abq6740] [Citation(s) in RCA: 76] [Impact Index Per Article: 25.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
Learning to predict rewards based on environmental cues is essential for survival. It is believed that animals learn to predict rewards by updating predictions whenever the outcome deviates from expectations, and that such reward prediction errors (RPEs) are signaled by the mesolimbic dopamine system, a key controller of learning. However, instead of learning prospective predictions from RPEs, animals can infer predictions by learning the retrospective cause of rewards. Hence, whether mesolimbic dopamine instead conveys a causal associative signal that sometimes resembles RPE remains unknown. We developed an algorithm for retrospective causal learning and found that mesolimbic dopamine release conveys causal associations but not RPE, thereby challenging the dominant theory of reward learning. Our results reshape the conceptual and biological framework for associative learning.
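The contrast this abstract draws, prospective prediction versus retrospective causal inference, can be made concrete with toy trial counts. This is not the paper's ANCCR algorithm, only the simplest version of the two directions of contingency, on made-up data:

```python
# Each trial: (cue present?, reward delivered?); illustrative data only.
trials = [(1, 1), (1, 1), (0, 0), (1, 0), (0, 1), (1, 1), (0, 0), (1, 1)]
n = len(trials)

def frac(pred):
    """Fraction of trials satisfying a predicate."""
    return sum(1 for tr in trials if pred(tr)) / n

# Prospective contingency (the traditional Delta-P): does the cue
# change the probability of reward?
p_r_given_c = frac(lambda tr: tr == (1, 1)) / frac(lambda tr: tr[0] == 1)
p_r_given_no_c = frac(lambda tr: tr == (0, 1)) / frac(lambda tr: tr[0] == 0)
delta_p = p_r_given_c - p_r_given_no_c

# Retrospective contingency: looking back from rewards, was the cue
# over-represented relative to its base rate?
p_c_given_r = frac(lambda tr: tr == (1, 1)) / frac(lambda tr: tr[1] == 1)
retro = p_c_given_r - frac(lambda tr: tr[0] == 1)
```

The paper's point is that uncued rewards shift these two quantities differently, and that dopamine follows neither raw measure but a TD error under the right state representation.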
Collapse
Affiliation(s)
- Huijeong Jeong
- Department of Neurology, University of California, San Francisco, CA, USA
| | - Annie Taylor
- Neuroscience Graduate Program, University of California, San Francisco, CA, USA
| | - Joseph R Floeder
- Neuroscience Graduate Program, University of California, San Francisco, CA, USA
| | | | - Stefan Mihalas
- Allen Institute for Brain Science, Seattle, WA, USA
- Department of Applied Mathematics, University of Washington, Seattle, WA, USA
| | - Brenda Wu
- Department of Neurology, University of California, San Francisco, CA, USA
| | - Mingkang Zhou
- Department of Neurology, University of California, San Francisco, CA, USA
- Neuroscience Graduate Program, University of California, San Francisco, CA, USA
| | - Dennis A Burke
- Department of Neurology, University of California, San Francisco, CA, USA
| | - Vijay Mohan K Namboodiri
- Department of Neurology, University of California, San Francisco, CA, USA
- Neuroscience Graduate Program, University of California, San Francisco, CA, USA
- Weill Institute for Neuroscience, Kavli Institute for Fundamental Neuroscience, Center for Integrative Neuroscience, University of California, San Francisco, CA, USA
| |
Collapse
|
20
|
Gallistel CR, Johansson F, Jirenhed DA, Rasmussen A, Ricci M, Hesslow G. Quantitative properties of the creation and activation of a cell-intrinsic duration-encoding engram. Front Comput Neurosci 2022; 16:1019812. [PMID: 36405788 PMCID: PMC9669310 DOI: 10.3389/fncom.2022.1019812] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2022] [Accepted: 09/21/2022] [Indexed: 11/06/2022] Open
Abstract
The engram encoding the interval between the conditional stimulus (CS) and the unconditional stimulus (US) in eyeblink conditioning resides within a small population of cerebellar Purkinje cells. CSs activate this engram to produce a pause in the spontaneous firing rate of the cell, which times the CS-conditional blink. We developed a Bayesian algorithm that finds pause onsets and offsets in the records from individual CS-alone trials. We find that the pause consists of a single unusually long interspike interval. Its onset and offset latencies and their trial-to-trial variability are proportional to the CS-US interval. The coefficients of variation (CoV = σ/μ) are comparable to the CoVs for the conditional eye blink. The average trial-to-trial correlation between the onset latencies and the offset latencies is close to 0, implying that the onsets and offsets are mediated by two stochastically independent readings of the engram. The onset of the pause is step-like; there is no decline in firing rate between the onset of the CS and the onset of the pause. A single presynaptic spike volley suffices to trigger the reading of the engram, and the pause parameters are unaffected by subsequent volleys. The Fano factors for trial-to-trial variations in the distribution of interspike intervals within the intertrial intervals indicate pronounced non-stationarity in the endogenous spontaneous spiking rate, on which the CS-triggered firing pause supervenes. These properties of the spontaneous firing and of the engram readout may prove useful in finding the cell-intrinsic, molecular-level structure that encodes the CS-US interval.
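The scalar-timing signature in this abstract, trial-to-trial spread proportional to the interval, is exactly what a constant coefficient of variation expresses. A hedged sketch with made-up latencies (not the paper's recordings):

```python
import statistics

def cov(xs):
    """Coefficient of variation: sigma / mu (population SD over mean)."""
    return statistics.pstdev(xs) / statistics.mean(xs)

# Illustrative pause-onset latencies (ms) for two trained CS-US
# intervals; under scalar timing the spread scales with the mean,
# so the CoV is the same at both intervals.
onsets_200ms = [90.0, 100.0, 110.0]
onsets_400ms = [180.0, 200.0, 220.0]
```

The constancy of sigma/mu across intervals is the sense in which the abstract calls the Purkinje-cell pause CoVs "comparable" to those of the conditioned blink.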
Collapse
Affiliation(s)
| | - Fredrik Johansson
- Department of Experimental Medical Science, Faculty of Medicine, Lund University, Lund, Sweden
| | - Dan-Anders Jirenhed
- Department of Experimental Medical Science, Faculty of Medicine, Lund University, Lund, Sweden
| | - Anders Rasmussen
- Department of Experimental Medical Science, Faculty of Medicine, Lund University, Lund, Sweden
| | - Matthew Ricci
- Carney Institute for Brain Sciences, Brown University, Providence, RI, United States
| | - Germund Hesslow
- Department of Experimental Medical Science, Faculty of Medicine, Lund University, Lund, Sweden
| |
Collapse
|