1. Levcik D, Sugi AH, Aguilar-Rivera M, Pochapski JA, Baltazar G, Pulido LN, Villas-Boas CA, Fuentes-Flores R, Nicola SM, Da Cunha C. Nucleus Accumbens Shell Neurons Encode the Kinematics of Reward Approach Locomotion. Neuroscience 2023; 524:181-196. PMID: 37330195; PMCID: PMC10527230; DOI: 10.1016/j.neuroscience.2023.06.002.
Abstract
The nucleus accumbens (NAc) is considered an interface between motivation and action, with NAc neurons playing an important role in promoting reward approach. However, the encoding by NAc neurons that contributes to this role remains unknown. We recorded 62 NAc neurons in male Wistar rats (n = 5) running towards rewarded locations in an 8-arm radial maze. Variables related to locomotor approach kinematics were the best predictors of the firing rate for most NAc neurons. Nearly 18% of the recorded neurons were inhibited during the entire approach run (locomotion-off cells), suggesting that reduction in firing of these neurons promotes initiation of locomotor approach. 27% of the neurons presented a peak of activity during acceleration followed by a valley during deceleration (acceleration-on cells). Together, these neurons accounted for most of the speed and acceleration encoding identified in our analysis. In contrast, a further 16% of neurons presented a valley during acceleration followed by a peak just prior to or after reaching reward (deceleration-on cells). These findings suggest that these three classes of NAc neurons influence the time course of speed changes during locomotor approach to reward.
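To make the kind of kinematic encoding analysis summarized here concrete, the following minimal sketch (Python with scikit-learn, simulated data; not the authors' analysis code, and all variable names and coefficients are assumptions) regresses a neuron's binned spike counts on running speed and acceleration with a Poisson GLM:

import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)
n_bins = 2000                                          # simulated time bins
speed = np.abs(rng.normal(20.0, 8.0, n_bins))          # running speed (cm/s), simulated
accel = np.gradient(speed)                             # change in speed per bin
true_rate = np.exp(0.2 + 0.04 * speed - 0.15 * accel)  # assumed ground-truth tuning
spikes = rng.poisson(true_rate)                        # spike counts per bin

X = np.column_stack([speed, accel])
glm = PoissonRegressor(alpha=1e-6, max_iter=1000).fit(X, spikes)
print("fitted speed and acceleration coefficients:", glm.coef_)

In such a fit, a reliably negative acceleration coefficient would loosely correspond to the deceleration-on profile described above, while a positive one would resemble the acceleration-on profile.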
Affiliation(s)
- David Levcik
- Laboratório de Fisiologia e Farmacologia do Sistema Nervoso Central, Universidade Federal do Paraná, 81531-980 Curitiba, Brazil; Institute of Physiology of the Czech Academy of Sciences, Videnska 1083, 142 20 Prague, Czech Republic
- Adam H Sugi
- Laboratório de Fisiologia e Farmacologia do Sistema Nervoso Central, Universidade Federal do Paraná, 81531-980 Curitiba, Brazil; Department of Pharmacology, Universidade Federal do Paraná, Curitiba, Brazil; Department of Biochemistry, Universidade Federal do Paraná, Curitiba, Brazil
- Marcelo Aguilar-Rivera
- Department of Bioengineering, University of California, 9500 Gilman Drive MC 0412, La Jolla, San Diego 92093, USA
- José A Pochapski
- Laboratório de Fisiologia e Farmacologia do Sistema Nervoso Central, Universidade Federal do Paraná, 81531-980 Curitiba, Brazil; Department of Pharmacology, Universidade Federal do Paraná, Curitiba, Brazil
- Gabriel Baltazar
- Laboratório de Fisiologia e Farmacologia do Sistema Nervoso Central, Universidade Federal do Paraná, 81531-980 Curitiba, Brazil; Department of Pharmacology, Universidade Federal do Paraná, Curitiba, Brazil; Department of Biochemistry, Universidade Federal do Paraná, Curitiba, Brazil
- Laura N Pulido
- Laboratório de Fisiologia e Farmacologia do Sistema Nervoso Central, Universidade Federal do Paraná, 81531-980 Curitiba, Brazil; Department of Pharmacology, Universidade Federal do Paraná, Curitiba, Brazil
- Cyrus A Villas-Boas
- Laboratório de Fisiologia e Farmacologia do Sistema Nervoso Central, Universidade Federal do Paraná, 81531-980 Curitiba, Brazil
- Romulo Fuentes-Flores
- Departamento de Neurociencia, Facultad de Medicina, Universidad de Chile, Av. Independencia 1027, Independencia 8380453, Santiago, Chile
- Saleem M Nicola
- Department of Neuroscience, Albert Einstein College of Medicine, 1300 Morris Park Ave, Bronx, NY 10461, USA; Department of Psychiatry, Albert Einstein College of Medicine, New York, USA
- Claudio Da Cunha
- Laboratório de Fisiologia e Farmacologia do Sistema Nervoso Central, Universidade Federal do Paraná, 81531-980 Curitiba, Brazil; Department of Pharmacology, Universidade Federal do Paraná, Curitiba, Brazil; Department of Biochemistry, Universidade Federal do Paraná, Curitiba, Brazil
2. Time as the fourth dimension in the hippocampus. Prog Neurobiol 2020; 199:101920. PMID: 33053416; DOI: 10.1016/j.pneurobio.2020.101920.
Abstract
Experiences of animal and human beings are structured by the continuity of space and time coupled with the uni-directionality of time. In addition to its pivotal position in spatial processing and navigation, the hippocampal system also plays a central, multiform role in several types of temporal processing. These include timing and sequence learning, at scales ranging from meso-scales of seconds to macro-scales of minutes, hours, days and beyond, encompassing the classical functions of short term memory, working memory, long term memory, and episodic memories (comprised of information about when, what, and where). This review article highlights the principal findings and behavioral contexts of experiments in rats showing: 1) timing: tracking time during delays by hippocampal 'time cells' and during free behavior by hippocampal-afferent lateral entorhinal cortex ramping cells; 2) 'online' sequence processing: activity coding sequences of events during active behavior; 3) 'off-line' sequence replay: during quiescence or sleep, orderly reactivation of neuronal assemblies coding awake sequences. Studies in humans show neurophysiological correlates of episodic memory comparable to awake replay. Neural mechanisms are discussed, including ion channel properties, plateau and ramping potentials, oscillations of excitation and inhibition of population activity, bursts of high amplitude discharges (sharp wave ripples), as well as short and long term synaptic modifications among and within cell assemblies. Specifically conceived neural network models will suggest processes supporting the emergence of scalar properties (Weber's law), and include different classes of feedforward and recurrent network models, with intrinsic hippocampal coding for 'transitions' (sequencing of events or places).
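For reference, the scalar property (Weber's law) mentioned at the end of this abstract is conventionally stated as a constant coefficient of variation for timed intervals (a standard formulation, not notation taken from the review itself):

\frac{\sigma_T}{\mu_T} \approx k,

where \mu_T and \sigma_T are the mean and standard deviation of timed estimates of an interval and k is roughly constant across durations.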
3. Tallot L, Doyère V. Neural encoding of time in the animal brain. Neurosci Biobehav Rev 2020; 115:146-163. DOI: 10.1016/j.neubiorev.2019.12.033.
4. Burton AC, Bissonette GB, Vazquez D, Blume EM, Donnelly M, Heatley KC, Hinduja A, Roesch MR. Previous cocaine self-administration disrupts reward expectancy encoding in ventral striatum. Neuropsychopharmacology 2018; 43:2350-2360. PMID: 29728645; PMCID: PMC6180050; DOI: 10.1038/s41386-018-0058-0.
Abstract
The nucleus accumbens core (NAc) is important for integrating and providing information to downstream areas about the timing and value of anticipated reward. Although NAc is one of the first brain regions to be affected by drugs of abuse, we still do not know how neural correlates related to reward expectancy are affected by previous cocaine self-administration. To address this issue, we recorded from single neurons in the NAc of rats that had previously self-administered cocaine or sucrose (control). Neural recordings were then taken while rats performed an odor-guided decision-making task in which we independently manipulated value of expected reward by changing the delay to or size of reward across a series of trial blocks. We found that previous cocaine self-administration made rats more impulsive, biasing choice behavior toward more immediate reward. Further, compared to controls, cocaine-exposed rats showed significantly fewer neurons in the NAc that were responsive during odor cues and reward delivery, and in the reward-responsive neurons that remained, diminished directional and value encoding was observed. Lastly, we found that after cocaine exposure, reward-related firing during longer delays was reduced compared to controls. These results demonstrate that prior cocaine self-administration alters reward-expectancy encoding in NAc, which could contribute to poor decision making observed after chronic cocaine use.
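Impulsive shifts toward more immediate reward of the kind reported here are commonly quantified with a hyperbolic discounting model (a standard behavioral formulation, not an analysis taken from this paper):

V = \frac{A}{1 + kD},

where V is the subjective value of a reward of magnitude A delayed by D, and a larger fitted k corresponds to steeper discounting, i.e. greater impulsivity.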
Affiliation(s)
- Amanda C Burton
- Department of Psychology, 1147 Biology-Psychology Building, University of Maryland, College Park, MD 20742, USA
- Program in Neuroscience and Cognitive Science, 1147 Biology-Psychology Building, University of Maryland, College Park, MD 20742, USA
- Gregory B Bissonette
- Department of Psychology, 1147 Biology-Psychology Building, University of Maryland, College Park, MD 20742, USA
- Program in Neuroscience and Cognitive Science, 1147 Biology-Psychology Building, University of Maryland, College Park, MD 20742, USA
- Daniela Vazquez
- Department of Psychology, 1147 Biology-Psychology Building, University of Maryland, College Park, MD 20742, USA
- Elyse M Blume
- Department of Psychology, 1147 Biology-Psychology Building, University of Maryland, College Park, MD 20742, USA
- Maria Donnelly
- Department of Psychology, 1147 Biology-Psychology Building, University of Maryland, College Park, MD 20742, USA
- Kendall C Heatley
- Department of Psychology, 1147 Biology-Psychology Building, University of Maryland, College Park, MD 20742, USA
- Abhishek Hinduja
- Department of Psychology, 1147 Biology-Psychology Building, University of Maryland, College Park, MD 20742, USA
- Matthew R Roesch
- Department of Psychology, 1147 Biology-Psychology Building, University of Maryland, College Park, MD 20742, USA
- Program in Neuroscience and Cognitive Science, 1147 Biology-Psychology Building, University of Maryland, College Park, MD 20742, USA
5. Gmaz JM, Carmichael JE, van der Meer MA. Persistent coding of outcome-predictive cue features in the rat nucleus accumbens. eLife 2018; 7:e37275. PMID: 30234485; PMCID: PMC6195350; DOI: 10.7554/elife.37275.
Abstract
The nucleus accumbens (NAc) is important for learning from feedback, and for biasing and invigorating behaviour in response to cues that predict motivationally relevant outcomes. NAc encodes outcome-related cue features such as the magnitude and identity of reward. However, little is known about how features of cues themselves are encoded. We designed a decision making task where rats learned multiple sets of outcome-predictive cues, and recorded single-unit activity in the NAc during performance. We found that coding of cue identity and location occurred alongside coding of expected outcome. Furthermore, this coding persisted both during a delay period, after the rat made a decision and was waiting for an outcome, and after the outcome was revealed. Encoding of cue features in the NAc may enable contextual modulation of on-going behaviour, and provide an eligibility trace of outcome-predictive stimuli for updating stimulus-outcome associations to inform future behaviour.
Affiliation(s)
- Jimmie M Gmaz
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, United States
- James E Carmichael
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, United States
6.
Abstract
The striatum controls food-related actions and consumption and is linked to feeding disorders, including obesity and anorexia nervosa. Two populations of neurons project from the striatum: direct pathway medium spiny neurons and indirect pathway medium spiny neurons. The selective contribution of direct pathway medium spiny neurons and indirect pathway medium spiny neurons to food-related actions and consumption remains unknown. Here, we used in vivo electrophysiology and fiber photometry in mice (of both sexes) to record both spiking activity and pathway-specific calcium activity of dorsal striatal neurons during approach to and consumption of food pellets. While electrophysiology revealed complex task-related dynamics across neurons, population calcium was enhanced during approach and inhibited during consumption in both pathways. We also observed ramping changes in activity that preceded both pellet-directed actions and spontaneous movements. These signals were heterogeneous in the spiking units, with neurons exhibiting either increasing or decreasing ramps. In contrast, the population calcium signals were homogeneous, with both pathways having increasing ramps of activity for several seconds before actions were initiated. An analysis comparing population firing rates to population calcium signals also revealed stronger ramping dynamics in the calcium signals than in the spiking data. In a second experiment, we trained the mice to perform an action sequence to evaluate when the ramping signals terminated. We found that the ramping signals terminated at the beginning of the action sequence, suggesting they may reflect upcoming actions and not preconsumption activity. Plasticity of such mechanisms may underlie disorders that alter action selection, such as drug addiction or obesity.

SIGNIFICANCE STATEMENT: Alterations in striatal function have been linked to pathological consumption in disorders, such as obesity and drug addiction. We recorded spiking and population calcium activity from the dorsal striatum during ad libitum feeding and an operant task that resulted in mice obtaining food pellets. Dorsal striatal neurons exhibited long ramps in activity that preceded actions by several seconds, and may reflect upcoming actions. Understanding how the striatum controls the preparation and generation of actions may lead to improved therapies for disorders, such as drug addiction or obesity.
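As a hedged illustration of how a pre-action ramp of the kind described here can be classified as increasing or decreasing (Python, simulated data; the 4 s window, bin count, and significance threshold are assumptions rather than the authors' criteria):

import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)
t = np.linspace(-4.0, 0.0, 40)                            # seconds before action onset
activity = 5.0 + 0.8 * t + rng.normal(0.0, 0.4, t.size)   # simulated ramping signal
fit = linregress(t, activity)                             # slope of activity against time
if fit.pvalue < 0.05:
    ramp = "increasing" if fit.slope > 0 else "decreasing"
else:
    ramp = "no significant ramp"
print(ramp, round(fit.slope, 2))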
7. Dejean C, Sitko M, Girardeau P, Bennabi A, Caillé S, Cador M, Boraud T, Le Moine C. Memories of Opiate Withdrawal Emotional States Correlate with Specific Gamma Oscillations in the Nucleus Accumbens. Neuropsychopharmacology 2017; 42:1157-1168. PMID: 27922595; PMCID: PMC5506790; DOI: 10.1038/npp.2016.272.
Abstract
Affective memories associated with the negative emotional state experienced during opiate withdrawal are central in maintaining drug taking, seeking, and relapse. The nucleus accumbens (NAC) is a key structure for both acute withdrawal and the reactivation of withdrawal memories, but the NAC neuron coding properties underpinning the expression of these memories remain largely unknown. Here we aimed at deciphering the role of NAC neurons in the encoding and retrieval of opiate withdrawal memory. Chronic single-neuron and local field potential recordings were performed in morphine-dependent rats and placebo controls. Animals were subjected to an unbiased conditioned place aversion protocol with one compartment (CS+) paired with naloxone-precipitated withdrawal, a second compartment paired with saline injection (CS-), and a third being neutral (no pairing). After conditioning, animals displayed a typical place aversion for CS+ and developed a preference for CS- characteristic of safety learning. We found that distinct NAC neurons code for CS+ or CS-. Both populations also displayed highly specific oscillatory dynamics, with CS+ and CS- neurons following 80 Hz (G80) and 60 Hz (G60) local field potential gamma rhythms, respectively. Finally, we found that the balance between G60 and G80 rhythms strongly correlated both with the ongoing behavior of the animal and with the strength of the conditioning. We demonstrate here that the aversive and preferred environments are underpinned by distinct groups of NAC neurons as well as by specific oscillatory dynamics. This suggests that the G60/G80 interplay, established through the conditioning process, serves as a robust and versatile mechanism for fine coding of the emotional weight of the environment.
Affiliation(s)
- Cyril Dejean
- Université de Bordeaux, INCIA, UMR 5287, Bordeaux, France; CNRS, INCIA, UMR 5287, Bordeaux, France
- Mathieu Sitko
- Université de Bordeaux, INCIA, UMR 5287, Bordeaux, France; CNRS, INCIA, UMR 5287, Bordeaux, France
- Paul Girardeau
- Université de Bordeaux, INCIA, UMR 5287, Bordeaux, France; CNRS, INCIA, UMR 5287, Bordeaux, France
- Amine Bennabi
- Université de Bordeaux, I2M, UMR 5295, Bordeaux, France; CNRS, I2M, UMR 5295, Bordeaux, France
- Stéphanie Caillé
- Université de Bordeaux, INCIA, UMR 5287, Bordeaux, France; CNRS, INCIA, UMR 5287, Bordeaux, France
- Martine Cador
- Université de Bordeaux, INCIA, UMR 5287, Bordeaux, France; CNRS, INCIA, UMR 5287, Bordeaux, France
- Thomas Boraud
- Université de Bordeaux, IMN, UMR 5293, Bordeaux, France; CNRS, IMN, UMR 5293, Bordeaux, France
- Catherine Le Moine
- Université de Bordeaux, INCIA, UMR 5287, Bordeaux, France; CNRS, INCIA, UMR 5287, Bordeaux, France; Université de Bordeaux, INCIA ‘Institut de Neurosciences Cognitives et Intégratives d'Aquitaine’, CNRS UMR 5287, Equipe ‘Neuropsychopharmacologie de l'Addiction’, BP31, 146 rue Léo Saignat, Bordeaux, Cedex 33076, France. Tel: +33 5 57 57 15 44; Fax: +33 5 56 90 02 78
8. Neuronal activity in dorsomedial and dorsolateral striatum under the requirement for temporal credit assignment. Sci Rep 2016; 6:27056. PMID: 27245401; PMCID: PMC4887996; DOI: 10.1038/srep27056.
Abstract
To investigate neural processes underlying temporal credit assignment in the striatum, we recorded neuronal activity in the dorsomedial and dorsolateral striatum (DMS and DLS, respectively) of rats performing a dynamic foraging task in which a choice has to be remembered until its outcome is revealed for correct credit assignment. Choice signals appeared sequentially, initially in the DMS and then in the DLS, and they were combined with action value and reward signals in the DLS when choice outcome was revealed. Unlike in conventional dynamic foraging tasks, neural signals for chosen value were elevated in neither brain structure. These results suggest that dynamics of striatal neural signals related to evaluating choice outcome might differ drastically depending on the requirement for temporal credit assignment. In a behavioral context requiring temporal credit assignment, the DLS, but not the DMS, might be in charge of updating the value of chosen action by integrating choice, action value, and reward signals together.
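A minimal sketch of the temporal credit assignment requirement described above (illustrative Python only, not the authors' task or model): the chosen action must be held in memory across the delay and credited only once the outcome is revealed.

import numpy as np

rng = np.random.default_rng(2)
alpha = 0.1
values = np.zeros(2)                    # action values for two targets
p_reward = [0.8, 0.2]                   # assumed reward probabilities

for trial in range(300):
    choice = int(rng.random() < 0.5)    # random exploration, for simplicity
    # delay period: the identity of the choice must be remembered until the outcome
    reward = float(rng.random() < p_reward[choice])
    values[choice] += alpha * (reward - values[choice])   # credit the remembered choice

print(np.round(values, 2))              # approaches the underlying reward probabilities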
9. Lloyd K, Dayan P. Tamping Ramping: Algorithmic, Implementational, and Computational Explanations of Phasic Dopamine Signals in the Accumbens. PLoS Comput Biol 2015; 11:e1004622. PMID: 26699940; PMCID: PMC4689534; DOI: 10.1371/journal.pcbi.1004622.
Abstract
Substantial evidence suggests that the phasic activity of dopamine neurons represents reinforcement learning’s temporal difference prediction error. However, recent reports of ramp-like increases in dopamine concentration in the striatum when animals are about to act, or are about to reach rewards, appear to pose a challenge to established thinking. This is because the implied activity is persistently predictable by preceding stimuli, and so cannot arise as this sort of prediction error. Here, we explore three possible accounts of such ramping signals: (a) the resolution of uncertainty about the timing of action; (b) the direct influence of dopamine over mechanisms associated with making choices; and (c) a new model of discounted vigour. Collectively, these suggest that dopamine ramps may be explained, with only minor disturbance, by standard theoretical ideas, though urgent questions remain regarding their proximal cause. We suggest experimental approaches to disentangling which of the proposed mechanisms are responsible for dopamine ramps. Dopamine has long been implicated in reward-motivated behaviour. Theory and experiments suggest that activity of dopamine-containing neurons resembles a temporally-sophisticated prediction error used to learn expectations of future reward. This account would appear to be inconsistent with recent observations of ‘ramps’, i.e., gradual increases in extracellular dopamine concentration prior to the execution of actions or the acquisition of rewards. We explore three different possible explanations of such ramping signals as arising: (a) when subjects experience uncertainty about when actions will be executed; (b) when dopamine itself influences the timecourse of choice; and (c) under a new model in which ‘quasi-tonic’ dopamine signals arise through a form of temporal discounting. We thereby show that dopamine ramps can be integrated with current theories, and also suggest experiments to clarify which mechanisms are involved.
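For reference, the temporal difference prediction error referred to here is conventionally written as (standard textbook notation, not taken from this paper):

\delta_t = r_t + \gamma V(s_{t+1}) - V(s_t).

Because a ramp that is fully predictable from preceding stimuli would be absorbed into V(s_t) and drive \delta_t toward zero, ramping dopamine is difficult to read directly as this error term, which is the puzzle the paper addresses.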
Affiliation(s)
- Kevin Lloyd
- Gatsby Computational Neuroscience Unit, London, United Kingdom
- Peter Dayan
- Gatsby Computational Neuroscience Unit, London, United Kingdom
10.
Abstract
Both animals and humans often prefer rewarding options that are nearby over those that are distant, but the neural mechanisms underlying this bias are unclear. Here we present evidence that a proximity signal encoded by neurons in the nucleus accumbens drives proximate reward bias by promoting impulsive approach to nearby reward-associated objects. On a novel decision-making task, rats chose the nearer option even when it resulted in greater effort expenditure and delay to reward; therefore, proximate reward bias was unlikely to be caused by effort or delay discounting. The activity of individual neurons in the nucleus accumbens did not consistently encode the reward or effort associated with specific alternatives, suggesting that it does not participate in weighing the values of options. In contrast, proximity encoding was consistent and did not depend on the subsequent choice, implying that accumbens activity drives approach to the nearest rewarding option regardless of its specific associated reward size or effort level.
11. Kaveri S, Nakahara H. Dual reward prediction components yield Pavlovian sign- and goal-tracking. PLoS One 2014; 9:e108142. PMID: 25310184; PMCID: PMC4195585; DOI: 10.1371/journal.pone.0108142.
Abstract
Reinforcement learning (RL) has become a dominant paradigm for understanding animal behaviors and neural correlates of decision-making, in part because of its ability to explain Pavlovian conditioned behaviors and the role of midbrain dopamine activity as reward prediction error (RPE). However, recent experimental findings indicate that dopamine activity, contrary to the RL hypothesis, may not signal RPE and differs based on the type of Pavlovian response (e.g. sign- and goal-tracking responses). In this study, we address this discrepancy by introducing a new neural correlate for learning reward predictions, called "cue-evoked reward". It refers to a recall of reward evoked by the cue that is learned through simple cue-reward associations. We introduce a temporal difference learning model in which neural correlates of the cue itself and of the cue-evoked reward underlie learning of reward predictions. The animal's reward prediction supported by these two correlates is divided into sign and goal components, respectively. We relate the sign and goal components to approach responses towards the cue (i.e. sign-tracking) and the food-tray (i.e. goal-tracking), respectively. We found a number of correspondences between the simulated models and the experimental findings (i.e. behavior and neural responses). First, the development of modeled responses is consistent with that observed in the experimental task. Second, the model's RPEs were similar to dopamine activity in the respective response groups. Finally, goal-tracking, but not sign-tracking, responses rapidly emerged when RPE was restored in the simulated models, similar to experiments on recovery from a dopamine antagonist. These results suggest that two complementary neural correlates, corresponding to the cue and its evoked reward, form the basis for learning reward predictions in sign- and goal-tracking rats.
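A minimal sketch of the class of model described here (an assumed simplification, not the authors' implementation): a TD(0) learner whose total reward prediction is the sum of a cue-driven 'sign' component and a cue-evoked-reward 'goal' component, both trained by the same prediction error; the 50/50 learning split and all constants are assumptions.

import numpy as np

alpha, gamma, n_steps = 0.1, 0.95, 10       # steps from cue onset to reward delivery
w_sign = np.zeros(n_steps)                  # weights on the cue representation
w_goal = np.zeros(n_steps)                  # weights on the cue-evoked reward representation

for episode in range(500):
    for t in range(n_steps):
        reward = 1.0 if t == n_steps - 1 else 0.0
        v_now = w_sign[t] + w_goal[t]                        # total prediction = sign + goal
        v_next = 0.0 if t == n_steps - 1 else w_sign[t + 1] + w_goal[t + 1]
        delta = reward + gamma * v_next - v_now              # reward prediction error (RPE)
        w_sign[t] += alpha * 0.5 * delta                     # both components learn from
        w_goal[t] += alpha * 0.5 * delta                     # the same RPE (illustrative split)

print(np.round(w_sign + w_goal, 2))         # learned prediction grows toward reward time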
Affiliation(s)
- Sivaramakrishnan Kaveri
- Lab for Integrated Theoretical Neuroscience, RIKEN BSI, Wako, Japan
- Dept. of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Yokohama, Japan
12. Amita H, Matsushima T. Competitor suppresses neuronal representation of food reward in the nucleus accumbens/medial striatum of domestic chicks. Behav Brain Res 2014; 268:139-149. PMID: 24726841; DOI: 10.1016/j.bbr.2014.04.004.
Abstract
To investigate the role of social contexts in controlling the neuronal representation of food reward, we recorded single-neuron activity in the medial striatum/nucleus accumbens of domestic chicks and examined whether activities differed between two blocks with different contexts. Chicks were trained in an operant task to associate light-emitting diode color cues with three trial types that differed in the type of food reward: no reward (S-), a small reward/short-delay option (SS), and a large reward/long-delay alternative (LL). Amount and duration of reward were set such that both SS and LL were chosen roughly equally. Neurons showing distinct cue-period activity in rewarding trials (SS and LL) were identified during an isolation block, and activity patterns were compared with those recorded from the same neuron during a subsequent pseudo-competition block in which another chick was allowed to forage in the same area, but was separated by a transparent window. In some neurons, cue-period activity was lower in the pseudo-competition block, and the difference could not be ascribed to the number of repeated trials. Comparison at the neuronal population level revealed statistically significant suppression in the pseudo-competition block in both SS and LL trials, suggesting that perceived competition generally suppressed the representation of cue-associated food reward. The delay- and reward-period activities, however, did not differ significantly between blocks. These results demonstrate that visual perception of a competitive forager per se weakens the neuronal representation of predicted food reward. Possible functional links to impulse control are discussed.
Affiliation(s)
- Hidetoshi Amita
- Graduate School of Life Science, Hokkaido University, N10-W8, Kita-ku, Sapporo 060-0810, Japan; JSPS Fellow (Japan Society for Promotion of Sciences), Ichiban-cho 8, Chiyoda-ku, Tokyo 102-8471, Japan
- Toshiya Matsushima
- Department of Biology, Faculty of Science, Hokkaido University, N10-W8, Kita-ku, Sapporo 060-0810, Japan
13. Understanding decision neuroscience: a multidisciplinary perspective and neural substrates. Prog Brain Res 2013; 202:239-266. PMID: 23317836; DOI: 10.1016/b978-0-444-62604-2.00014-9.
Abstract
The neuroscience of decision making is a rapidly evolving multidisciplinary research area that employs neuroscientific techniques to explain various parameters associated with decision making behavior. In this chapter, we emphasize the role of multiple disciplines such as psychology, economics, neuroscience, and computational approaches in understanding the phenomenon of decision making. Further, we present a theoretical approach that suggests understanding the building blocks of decision making as bottom-up processes and integrate these with top-down modulatory factors. Relevant neurophysiological and neuroimaging findings that have used the building-block approach are reviewed. A unifying framework emphasizing multidisciplinary views would bring further insights into the active research area of decision making. Pointing to future directions for research, we focus on the role of computational approaches in such a unifying framework.
14. Khamassi M, Enel P, Dominey PF, Procyk E. Medial prefrontal cortex and the adaptive regulation of reinforcement learning parameters. Prog Brain Res 2013; 202:441-464. PMID: 23317844; DOI: 10.1016/b978-0-444-62604-2.00022-8.
Abstract
Converging evidence suggests that the medial prefrontal cortex (MPFC) is involved in feedback categorization, performance monitoring, and task monitoring, and may contribute to the online regulation of reinforcement learning (RL) parameters that would affect decision-making processes in the lateral prefrontal cortex (LPFC). Previous neurophysiological experiments have shown MPFC activities encoding error likelihood, uncertainty, and reward volatility, as well as neural responses categorizing different types of feedback, for instance, distinguishing between choice errors and execution errors. Rushworth and colleagues have proposed that the involvement of MPFC in tracking the volatility of the task could contribute to the regulation of one of the RL parameters, the learning rate. We extend this hypothesis by proposing that MPFC could contribute to the regulation of other RL parameters such as the exploration rate and default action values in the case of task shifts. Here, we analyze the sensitivity to RL parameters of behavioral performance in two monkey decision-making tasks, one with a deterministic reward schedule and the other with a stochastic one. We show that there exist optimal parameter values specific to each of these tasks, which need to be found for optimal performance and which are usually hand-tuned in computational models. In contrast, automatic online regulation of these parameters using simple heuristics can help produce good, although non-optimal, behavioral performance in each task. We finally describe our computational model of MPFC-LPFC interaction used for online regulation of the exploration rate and its application to a human-robot interaction scenario. There, unexpected uncertainties are produced by the human introducing cued task changes or by cheating. The model enables the robot to autonomously learn to reset exploration in response to such uncertain cues and events. The combined results provide concrete evidence specifying how prefrontal cortical subregions may cooperate to regulate RL parameters, and they also show how such neurophysiologically inspired mechanisms can control advanced robots in the real world. Finally, the model's learning mechanisms that were challenged in the last robotic scenario provide testable predictions on how monkeys may learn the structure of the task during the pretraining phase of the previous laboratory experiments.
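As a hedged sketch of the kind of online parameter regulation discussed here (not the authors' MPFC-LPFC model; all constants are arbitrary assumptions): a Q-learner whose softmax inverse temperature is lowered, yielding more exploration, whenever a running estimate of performance drops, for example after an unsignaled task shift.

import numpy as np

rng = np.random.default_rng(3)
alpha, n_actions = 0.1, 2
Q = np.zeros(n_actions)
reward_trace = 0.5                          # running estimate of recent reward rate

def softmax(q, beta):
    z = beta * q - np.max(beta * q)         # numerically stable softmax
    p = np.exp(z)
    return p / p.sum()

for trial in range(600):
    correct = 0 if trial < 300 else 1                       # unsignaled task shift
    beta = 1.0 + 9.0 * reward_trace                         # poor performance -> more exploration
    action = rng.choice(n_actions, p=softmax(Q, beta))
    reward = 1.0 if action == correct else 0.0
    Q[action] += alpha * (reward - Q[action])               # standard value update
    reward_trace += 0.1 * (reward - reward_trace)           # meta-level performance monitor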
Affiliation(s)
- Mehdi Khamassi
- INSERM U846, Stem Cell and Brain Research Institute, Bron, France
15. Khamassi M, Humphries MD. Integrating cortico-limbic-basal ganglia architectures for learning model-based and model-free navigation strategies. Front Behav Neurosci 2012. PMID: 23205006; PMCID: PMC3506961; DOI: 10.3389/fnbeh.2012.00079.
Abstract
Behavior in spatial navigation is often organized into map-based (place-driven) vs. map-free (cue-driven) strategies; behavior in operant conditioning research is often organized into goal-directed vs. habitual strategies. Here we attempt to unify the two. We review one powerful theory for distinct forms of learning during instrumental conditioning, namely model-based (maintaining a representation of the world) and model-free (reacting to immediate stimuli) learning algorithms. We extend these lines of argument to propose an alternative taxonomy for spatial navigation, showing how various previously identified strategies can be distinguished as “model-based” or “model-free” depending on the usage of information and not on the type of information (e.g., cue vs. place). We argue that identifying “model-free” learning with dorsolateral striatum and “model-based” learning with dorsomedial striatum could reconcile numerous conflicting results in the spatial navigation literature. From this perspective, we further propose that the ventral striatum plays key roles in the model-building process. We propose that the core of the ventral striatum is positioned to learn the probability of action selection for every transition between states of the world. We further review suggestions that the ventral striatal core and shell are positioned to act as “critics” contributing to the computation of a reward prediction error for model-free and model-based systems, respectively.
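To make the model-free versus model-based distinction concrete, here is a compact illustrative contrast (textbook-style Python, not the architecture proposed in the paper): the model-free system caches action values directly from prediction errors, while the model-based system plans by value iteration over a learned transition model.

import numpy as np

n_states, n_actions, gamma, alpha = 4, 2, 0.9, 0.1
Q_mf = np.zeros((n_states, n_actions))                      # model-free cached action values
T = np.ones((n_states, n_actions, n_states)) / n_states     # learned transition model P(s'|s,a)
R = np.zeros(n_states)                                      # learned reward for each state

def model_free_update(s, a, r, s_next):
    # cache values using a temporal difference prediction error
    Q_mf[s, a] += alpha * (r + gamma * Q_mf[s_next].max() - Q_mf[s, a])

def model_based_values():
    # plan by value iteration over the learned world model
    V = np.zeros(n_states)
    for _ in range(100):
        V = np.array([max(T[s, a] @ (R + gamma * V) for a in range(n_actions))
                      for s in range(n_states)])
    return V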
Affiliation(s)
- Mehdi Khamassi
- Institut des Systèmes Intelligents et de Robotique, Université Pierre et Marie Curie, Paris, France; Centre National de la Recherche Scientifique, UMR 7222, Paris, France
16. Catanese J, Cerasti E, Zugaro M, Viggiano A, Wiener SI. Dynamics of decision-related activity in hippocampus. Hippocampus 2012; 22:1901-1911. PMID: 22535656; DOI: 10.1002/hipo.22025.
Abstract
Place-selective activity in hippocampal neurons can be modulated by the trajectory that will be taken in the immediate future ("prospective coding"), information that could be useful in neural processes elaborating choices in route planning. To determine if and how hippocampal prospective neurons participate in decision making, we measured the time course of the evolution of prospective activity by recording place responses in rats performing a T-maze alternation task. After five or seven alternation trials, the routine was unpredictably interrupted by a photodetector-triggered visual cue as the rat crossed the middle of the central arm, signaling it to suddenly change its intended choice. Comparison of the delays between light cue presentation and the onset of prospective activity for neurons with firing fields at various locations after the trigger point revealed a 420 ms processing delay. This surprisingly long delay indicates that prospective activity in the hippocampus appears much too late to generate planning or decision signals. This provides yet another example of a prominent brain activity that is unlikely to play a functional role in the cognitive function that it appears to represent (planning future trajectories). Nonetheless, the hippocampus may provide other contextual information to areas active at the earliest stages of selecting future paths, which would then return signals that help establish hippocampal prospective activity.
Affiliation(s)
- Julien Catanese
- Collège de France, Laboratoire de Physiologie de la Perception et de l'Action, Paris, France
17. The role of serotonin in the regulation of patience and impulsivity. Mol Neurobiol 2012; 45:213-224. PMID: 22262065; PMCID: PMC3311865; DOI: 10.1007/s12035-012-8232-6.
Abstract
Classic theories suggest that central serotonergic neurons are involved in the behavioral inhibition that is associated with the prediction of negative rewards or punishment. Failed behavioral inhibition can cause impulsive behaviors. However, the behavioral inhibition that results from predicting punishment is not sufficient to explain some forms of impulsive behavior. In this article, we propose that the forebrain serotonergic system is involved in “waiting to avoid punishment” for future punishments and “waiting to obtain reward” for future rewards. Recently, we have found that serotonergic neurons increase their tonic firing rate when rats await food and water rewards and conditioned reinforcer tones. The rate of tonic firing during the delay period was significantly higher when rats were waiting for rewards than for tones, and rats were unable to wait as long for tones as for rewards. These results suggest that increased serotonergic neuronal firing facilitates waiting behavior when there is the prospect of a forthcoming reward and that serotonergic activation contributes to the patience that allows rats to wait longer. We propose a working hypothesis to explain how the serotonergic system regulates patience while waiting for future rewards.
18. Roesch MR, Bryden DW. Impact of size and delay on neural activity in the rat limbic corticostriatal system. Front Neurosci 2011; 5:130. PMID: 22363252; PMCID: PMC3277262; DOI: 10.3389/fnins.2011.00130.
Abstract
A number of factors influence an animal's economic decisions. The two most commonly studied are the magnitude of and the delay to reward. To investigate how these factors are represented in the firing rates of single neurons, we devised a behavioral task that independently manipulated the expected delay to and size of reward. Rats perceived the differently delayed and sized rewards as having different values and were more motivated under short-delay and big-reward conditions than under long-delay and small-reward conditions, as measured by percent choice, accuracy, and reaction time. Since the creation of this task, we have recorded from several different brain areas, including orbitofrontal cortex, striatum, amygdala, substantia nigra pars reticulata, and midbrain dopamine neurons. Here, we review and compare those data with a substantial focus on the areas that have been shown to be critical for performance on classic time discounting procedures, and provide a potential mechanism by which they might interact when animals are deciding between differently delayed rewards. We found that most brain areas in the cortico-limbic circuit encode both the magnitude of and delay to reward delivery in one form or another, but only a few encode them together at the single-neuron level.
Affiliation(s)
- Matthew R Roesch
- Department of Psychology and Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD, USA
19
|
Penner MR, Mizumori SJY. Neural systems analysis of decision making during goal-directed navigation. Prog Neurobiol 2011; 96:96-135. [PMID: 21964237 DOI: 10.1016/j.pneurobio.2011.08.010] [Citation(s) in RCA: 54] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2011] [Revised: 08/06/2011] [Accepted: 08/29/2011] [Indexed: 10/17/2022]
Abstract
The ability to make adaptive decisions during goal-directed navigation is a fundamental and highly evolved behavior that requires continual coordination of perceptions, learning and memory processes, and the planning of behaviors. Here, a neurobiological account for such coordination is provided by integrating current literatures on spatial context analysis and decision-making. This integration includes discussions of our current understanding of the role of the hippocampal system in experience-dependent navigation, how hippocampal information comes to impact midbrain and striatal decision making systems, and finally the role of the striatum in the implementation of behaviors based on recent decisions. These discussions extend across cellular to neural systems levels of analysis. Not only are key findings described, but also fundamental organizing principles within and across neural systems, as well as between neural systems functions and behavior, are emphasized. It is suggested that studying decision making during goal-directed navigation is a powerful model for studying interactive brain systems and their mediation of complex behaviors.
Affiliation(s)
- Marsha R Penner
- Department of Psychology, University of Washington, Seattle, WA 98195-1525, United States
20. The hippocampus: hub of brain network communication for memory. Trends Cogn Sci 2011; 15:310-318. PMID: 21696996; DOI: 10.1016/j.tics.2011.05.008.
Abstract
A complex brain network, centered on the hippocampus, supports episodic memories throughout their lifetimes. Classically, upon memory encoding during active behavior, hippocampal activity is dominated by theta oscillations (6-10Hz). During inactivity, hippocampal neurons burst synchronously, constituting sharp waves, which can propagate to other structures, theoretically supporting memory consolidation. This 'two-stage' model has been updated by new data from high-density electrophysiological recordings in animals that shed light on how information is encoded and exchanged between hippocampus, neocortex and subcortical structures such as the striatum. Cell assemblies (tightly related groups of cells) discharge together and synchronize across brain structures orchestrated by theta, sharp waves and slow oscillations, to encode information. This evolving dynamical schema is key to extending our understanding of memory processes.
21. van der Meer MAA, Redish AD. Ventral striatum: a critical look at models of learning and evaluation. Curr Opin Neurobiol 2011; 21:387-392. PMID: 21420853; DOI: 10.1016/j.conb.2011.02.011.
Abstract
Extensive evidence implicates the ventral striatum in multiple distinct facets of action selection. Early work established a role in modulating ongoing behavior, as engaged by the energizing and directing influences of motivationally relevant cues and the willingness to expend effort in order to obtain reward. More recently, reinforcement learning models have suggested a view of the ventral striatum primarily as an evaluation step during learning, which serves as a critic to update a separate actor. Recent computational and experimental work may provide a resolution to the differences between these two theories through a careful parsing of behavior and of the intrinsic heterogeneity that characterizes this complex structure.
Affiliation(s)
- Matthijs A A van der Meer
- Department of Biology and Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
22. Tam DN. Computation in Emotional Processing: Quantitative Confirmation of Proportionality Hypothesis for Angry Unhappy Emotional Intensity to Perceived Loss. Cognit Comput 2011. DOI: 10.1007/s12559-011-9095-2.
23. van der Meer MAA, Redish AD. Theta phase precession in rat ventral striatum links place and reward information. J Neurosci 2011; 31:2843-2854. PMID: 21414906; PMCID: PMC3758553; DOI: 10.1523/jneurosci.4869-10.2011.
Abstract
A functional interaction between the hippocampal formation and the ventral striatum is thought to contribute to the learning and expression of associations between places and rewards. However, the mechanism of how such associations may be learned and used is currently unknown. We recorded neural ensembles and local field potentials from the ventral striatum and CA1 simultaneously as rats ran a modified T-maze. Theta-modulated cells in ventral striatum almost invariably showed firing phase precession relative to the hippocampal theta rhythm. Across the population of ventral striatal cells, phase precession was preferentially associated with an anticipatory ramping of activity up to the reward sites. In contrast, CA1 population activity and phase precession were distributed more uniformly. Ventral striatal phase precession was stronger to hippocampal than ventral striatal theta and was accompanied by increased theta coherence with hippocampus, suggesting that this effect is hippocampally derived. These results suggest that the firing phase of ventral striatal neurons contains motivationally relevant information and that phase precession serves to bind hippocampal place representations to ventral striatal representations of reward.
24. Doñamayor N, Marco-Pallarés J, Heldmann M, Schoenfeld MA, Münte TF. Temporal dynamics of reward processing revealed by magnetoencephalography. Hum Brain Mapp 2011; 32:2228-2240. PMID: 21305665; DOI: 10.1002/hbm.21184.
Abstract
Monetary gains and losses in gambling situations are associated with a distinct electroencephalographic signature: in the event-related potentials (ERPs), a mediofrontal feedback-related negativity (FRN) is seen for losses, whereas oscillatory activity shows a burst in the θ-range for losses and in the β-range for gains. We used whole-head magnetoencephalography to pinpoint the magnetic counterparts of these effects in young healthy adults and explore their evolution over time. On each trial, participants bet on one of two visually presented numbers (25 or 5) by button-press. Both numbers changed color: if the chosen number turned green (red), it indicated a gain (loss) of the corresponding sum in Euro cent. For losses, we found the magnetic correlate of the FRN extending between 230 and 465 ms. Source localization with low-resolution electromagnetic tomography indicated a first generator in posterior cingulate cortex with subsequent activity in the anterior cingulate cortex. Importantly, this effect was sensitive to the magnitude of the monetary loss (25 cent > 5 cent). Later activation was also found in the right insula. Time-frequency analysis revealed a number of oscillatory components in the theta, alpha, and high-beta/low-gamma bands associated with gains, and in the high-beta band associated with the magnitude of the loss. Altogether, these effects provide a more fine-grained picture of the temporal dynamics of the processing of monetary rewards and losses in the brain.
Affiliation(s)
- Nuria Doñamayor
- Department of Neuropsychology, Otto-von-Guericke-Universität Magdeburg, Germany
25. van der Meer MAA, Kalenscher T, Lansink CS, Pennartz CMA, Berke JD, Redish AD. Integrating early results on ventral striatal gamma oscillations in the rat. Front Neurosci 2010; 4:300. PMID: 21350600; PMCID: PMC3039412; DOI: 10.3389/fnins.2010.00300.
Abstract
A vast literature implicates the ventral striatum in the processing of reward-related information and in mediating the impact of such information on behavior. It is characterized by heterogeneity at the local circuit, connectivity, and functional levels. A tool for dissecting this complex structure that has received relatively little attention until recently is the analysis of ventral striatal local field potential oscillations, which are more prominent in the gamma band compared to the dorsal striatum. Here we review recent results on gamma oscillations recorded from freely moving rats. Ventral striatal gamma separates into distinct frequency bands (gamma-50 and gamma-80) with distinct behavioral correlates, relationships to different inputs, and separate populations of phase-locked putative fast-spiking interneurons. Fast switching between gamma-50 and gamma-80 occurs spontaneously but is influenced by reward delivery as well as the application of dopaminergic drugs. These results provide novel insights into ventral striatal processing and highlight the importance of considering fast-timescale dynamics of ventral striatal activity.
26. van der Meer MAA, Redish AD. Expectancies in decision making, reinforcement learning, and ventral striatum. Front Neurosci 2010; 4:6. PMID: 21221409; PMCID: PMC2891485; DOI: 10.3389/neuro.01.006.2010.
Abstract
Decisions can arise in different ways, such as from a gut feeling, doing what worked last time, or planful deliberation. Different decision-making systems are dissociable behaviorally, map onto distinct brain systems, and have different computational demands. For instance, “model-free” decision strategies use prediction errors to estimate scalar action values from previous experience, while “model-based” strategies leverage internal forward models to generate and evaluate potentially rich outcome expectancies. Animal learning studies indicate that expectancies may arise from different sources, including not only forward models but also Pavlovian associations, and the flexibility with which such representations impact behavior may depend on how they are generated. In the light of these considerations, we review the results of van der Meer and Redish (2009a), who found that ventral striatal neurons that respond to reward delivery can also be activated at other points, notably at a decision point where hippocampal forward representations were also observed. These data suggest the possibility that ventral striatal reward representations contribute to model-based expectancies used in deliberative decision making.
27. Kolb EM, Kelly SA, Middleton KM, Sermsakdi LS, Chappell MA, Garland T. Erythropoietin elevates VO2,max but not voluntary wheel running in mice. J Exp Biol 2010; 213:510-519. PMID: 20086137; DOI: 10.1242/jeb.029074.
Abstract
Voluntary activity is a complex trait, comprising both behavioral (motivation, reward) and anatomical/physiological (ability) elements. In the present study, oxygen transport was investigated as a possible limitation to further increases in running by four replicate lines of mice that have been selectively bred for high voluntary wheel running and have reached an apparent selection limit. To increase oxygen transport capacity, erythrocyte density was elevated by the administration of an erythropoietin (EPO) analogue. Mice were given two EPO injections, two days apart, at one of two dose levels (100 or 300 μg kg⁻¹). Hemoglobin concentration ([Hb]), maximal aerobic capacity during forced treadmill exercise (VO2,max) and voluntary wheel running were measured. [Hb] did not differ between high runner (HR) and non-selected control (C) lines without EPO treatment. Both doses of EPO significantly (P<0.0001) increased [Hb] as compared with sham-injected animals, with no difference in [Hb] between the 100 μg kg⁻¹ and 300 μg kg⁻¹ dose levels (overall mean of 4.5 g dl⁻¹ increase). EPO treatment significantly increased VO2,max by approximately 5% in both the HR and C lines, with no dose × line type interaction. However, wheel running (revolutions per day) did not increase with EPO treatment in either the HR or C lines, and in fact significantly decreased at the higher dose in both line types. These results suggest that neither [Hb] per se nor VO2,max is limiting voluntary wheel running in the HR lines. Moreover, we hypothesize that the decrease in wheel running at the higher dose of EPO may reflect direct action on the reward pathway of the brain.
Affiliation(s)
- E M Kolb
- Department of Biology, University of California, Riverside, CA 92521, USA
28. Kalenscher T, Lansink CS, Lankelma JV, Pennartz CMA. Reward-associated gamma oscillations in ventral striatum are regionally differentiated and modulate local firing activity. J Neurophysiol 2010; 103:1658-1672. PMID: 20089824; DOI: 10.1152/jn.00432.2009.
Abstract
Oscillations of local field potentials (LFPs) in the gamma range are found in many brain regions and are thought to support the temporal organization of cognitive, perceptual, and motor functions. Even though gamma oscillations have also been observed in ventral striatum, one of the brain's most important structures for motivated behavior and reward processing, their specific function during ongoing behavior is unknown. Using a movable tetrode array, we recorded LFPs and activity of neural ensembles in the ventral striatum of rats performing a reward-collection task. Rats ran along a triangular track and in each round collected one of three different types of reward. The gamma power of LFPs on subsets of tetrodes was modulated by reward-site visits, discriminated between reward types and between the baitedness of reward locations, and differed before versus after arrival at a reward site. Many single units in ventral striatum phase-locked their discharge pattern to the gamma oscillations of the LFPs. Phase-locking occurred more often in reward-related than in reward-unrelated neurons and LFPs. A substantial number of simultaneously recorded LFPs correlated poorly with each other in terms of gamma rhythmicity, indicating that the expression of gamma activity was heterogeneous and regionally differentiated. The orchestration of LFPs and single-unit activity by way of gamma rhythmicity sheds light on the functional architecture of the ventral striatum and the temporal coordination of ventral striatal activity for modulating downstream areas and regulating synaptic plasticity.
Affiliation(s)
- Tobias Kalenscher
- Department of Cognitive and Systems Neuroscience, Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, 1098 XH, Amsterdam, The Netherlands
29. Humphries MD, Prescott TJ. The ventral basal ganglia, a selection mechanism at the crossroads of space, strategy, and reward. Prog Neurobiol 2009; 90:385-417. PMID: 19941931; DOI: 10.1016/j.pneurobio.2009.11.003.
Abstract
The basal ganglia are often conceptualised as three parallel domains that include all the constituent nuclei. The 'ventral domain' appears to be critical for learning flexible behaviours for exploration and foraging, as it is the recipient of converging inputs from amygdala, hippocampal formation and prefrontal cortex, putatively centres for stimulus evaluation, spatial navigation, and planning/contingency, respectively. However, compared to work on the dorsal domains, the rich potential for quantitative theories and models of the ventral domain remains largely untapped, and the purpose of this review is to provide the stimulus for this work. We systematically review the ventral domain's structures and internal organisation, and propose a functional architecture as the basis for computational models. Using a full schematic of the structure of inputs to the ventral striatum (nucleus accumbens core and shell), we argue for the existence of many identifiable processing channels on the basis of unique combinations of afferent inputs. We then identify the potential information represented in these channels by reconciling a broad range of studies from the hippocampal, amygdala and prefrontal cortex literatures with known properties of the ventral striatum from lesion, pharmacological, and electrophysiological studies. Dopamine's key role in learning is reviewed within the three current major computational frameworks; we also show that the shell-based basal ganglia sub-circuits are well placed to generate the phasic burst and dip responses of dopaminergic neurons. We detail dopamine's modulation of ventral basal ganglia's inputs by its actions on pre-synaptic terminals and post-synaptic membranes in the striatum, arguing that the complexity of these effects hint at computational roles for dopamine beyond current ideas. The ventral basal ganglia are revealed as a constellation of multiple functional systems for the learning and selection of flexible behaviours and of behavioural strategies, sharing the common operations of selection-by-disinhibition and of dopaminergic modulation.
Affiliation(s)
- Mark D Humphries
- Adaptive Behaviour Research Group, Department of Psychology, University of Sheffield, S10 2TN, UK