151. Rabinovich MI, Afraimovich VS, Bick C, Varona P. Information flow dynamics in the brain. Phys Life Rev 2012;9:51-73. DOI: 10.1016/j.plrev.2011.11.002.
152. Chen Z, Kloosterman F, Brown EN, Wilson MA. Uncovering spatial topology represented by rat hippocampal population neuronal codes. J Comput Neurosci 2012;33:227-55. PMID: 22307459. DOI: 10.1007/s10827-012-0384-x.
Abstract
Hippocampal population codes play an important role in the representation of spatial environments and in spatial navigation. Uncovering the internal representation of hippocampal population codes will help explain the neural mechanisms of the hippocampus. For instance, uncovering the patterns represented by rat hippocampal (CA1) pyramidal cells during periods of navigation or sleep has been an active research topic over the past decades. However, previous approaches to analyzing or decoding the firing patterns of population neurons all assume knowledge of the place fields, which are estimated from training data a priori. It remains unclear how information can be extracted from population neuronal responses either without such a priori knowledge or in the presence of finite-sampling constraints. Answering this question would strengthen our ability to examine population neuronal codes under different experimental conditions. Using the rat hippocampus as a model system, we attempt to uncover the hidden "spatial topology" represented by the hippocampal population codes. We develop a hidden Markov model (HMM) and a variational Bayesian (VB) inference algorithm to achieve this computational goal, and we apply the analysis to extensive simulation and experimental data. Our empirical results show a promising direction for discovering structural patterns of ensemble spike activity during periods of active navigation. The study also provides useful insights for future exploratory data analysis of population neuronal codes during periods of sleep.
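As an illustration of the class of analysis this abstract describes (the authors use a variational-Bayes treatment; the sketch below uses plain EM for brevity, and all variable names, bin counts, and thresholds are assumptions), a Poisson hidden Markov model can be fit to binned ensemble spike counts and its transition matrix thresholded into a state-transition graph whose connectivity stands in for the hidden spatial topology:

```python
# Illustrative sketch only (not the authors' code): a maximum-likelihood
# Poisson hidden Markov model fit by EM on binned ensemble spike counts.
import numpy as np

def fit_poisson_hmm(counts, n_states, n_iter=50, seed=0):
    """counts: (T, C) array of spike counts per time bin and cell."""
    rng = np.random.default_rng(seed)
    T, C = counts.shape
    A = np.full((n_states, n_states), 1.0 / n_states)             # transitions
    pi = np.full(n_states, 1.0 / n_states)                        # initial state
    lam = counts.mean(0) * rng.uniform(0.5, 1.5, (n_states, C))   # state rates
    for _ in range(n_iter):
        # per-bin, per-state Poisson log-likelihood (up to a constant)
        logB = counts @ np.log(lam + 1e-12).T - lam.sum(1)
        B = np.exp(logB - logB.max(1, keepdims=True))
        # forward-backward with per-step normalization
        alpha = np.zeros((T, n_states)); beta = np.zeros((T, n_states))
        alpha[0] = pi * B[0]; alpha[0] /= alpha[0].sum()
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[t]
            alpha[t] /= alpha[t].sum()
        beta[-1] = 1.0
        for t in range(T - 2, -1, -1):
            beta[t] = A @ (B[t + 1] * beta[t + 1])
            beta[t] /= beta[t].sum()
        gamma = alpha * beta; gamma /= gamma.sum(1, keepdims=True)
        xi = alpha[:-1, :, None] * A[None] * (B[1:] * beta[1:])[:, None, :]
        xi /= xi.sum((1, 2), keepdims=True)
        # M-step
        pi = gamma[0]
        A = xi.sum(0); A /= A.sum(1, keepdims=True)
        lam = np.maximum((gamma.T @ counts) / gamma.sum(0)[:, None], 1e-6)
    return pi, A, lam, gamma

counts = np.random.default_rng(1).poisson(2.0, size=(500, 30))  # placeholder data
pi, A, lam, gamma = fit_poisson_hmm(counts, n_states=8)
# Thresholding the transition matrix gives a graph whose connectivity can be
# compared with the spatial topology of the environment.
topology = (A > 0.05).astype(int)
```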
Affiliation(s)
- Zhe Chen
- Neuroscience Statistics Research Lab, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, USA.
153. Nekorkin VI, Dmitrichev AS, Kasatkin DV, Afraimovich VS. Relating the sequential dynamics of excitatory neural networks to synaptic cellular automata. Chaos 2011;21:043124. PMID: 22225361. DOI: 10.1063/1.3657384.
Abstract
We have developed a new approach to describing the sequential dynamics of excitatory neural networks. Our approach is based on the dynamics of synapses possessing the short-term plasticity property. We suggest a model of such synapses in the form of a second-order system of nonlinear ODEs. In the framework of the model, two types of responses are realized: a fast one and a slow one. Under certain relations between their timescales, a cellular automaton (CA) on the graph of connections is constructed. Such a CA has only a finite number of attractors, all of which are periodic orbits. The attractors of the CA determine the regimes of sequential dynamics of the original neural network, i.e., the itineraries along the network and the times of successive firing of neurons in the form of bunches of spikes. We illustrate our approach with the example of a Morris-Lecar neural network.
Affiliation(s)
- V I Nekorkin
- Institute of Applied Physics of RAS, 46 Ul'yanov Street, 603950, Nizhny Novgorod, Russia
154. Long JD, Carmena JM. A statistical description of neural ensemble dynamics. Front Comput Neurosci 2011;5:52. PMID: 22319486. PMCID: PMC3226070. DOI: 10.3389/fncom.2011.00052.
Abstract
The growing use of multi-channel neural recording techniques in behaving animals has produced rich datasets that hold immense potential for advancing our understanding of how the brain mediates behavior. One limitation of these techniques is they do not provide important information about the underlying anatomical connections among the recorded neurons within an ensemble. Inferring these connections is often intractable because the set of possible interactions grows exponentially with ensemble size. This is a fundamental challenge one confronts when interpreting these data. Unfortunately, the combination of expert knowledge and ensemble data is often insufficient for selecting a unique model of these interactions. Our approach shifts away from modeling the network diagram of the ensemble toward analyzing changes in the dynamics of the ensemble as they relate to behavior. Our contribution consists of adapting techniques from signal processing and Bayesian statistics to track the dynamics of ensemble data on time-scales comparable with behavior. We employ a Bayesian estimator to weigh prior information against the available ensemble data, and use an adaptive quantization technique to aggregate poorly estimated regions of the ensemble data space. Importantly, our method is capable of detecting changes in both the magnitude and structure of correlations among neurons missed by firing rate metrics. We show that this method is scalable across a wide range of time-scales and ensemble sizes. Lastly, the performance of this method on both simulated and real ensemble data is used to demonstrate its utility.
Affiliation(s)
- John D. Long
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA
- Jose M. Carmena
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA, USA
- Program in Cognitive Science, University of California, Berkeley, CA, USA
155.
Abstract
In this issue, Doucette and colleagues demonstrate that information related to whether an odor is currently linked to reward can be observed uniquely in population activity in the olfactory bulb, changing our understanding both of what is coded by the first olfactory relay in the CNS and of how this coding is instantiated.
Affiliation(s)
- Donald B Katz
- Volen National Center for Complex Systems, Brandeis University, Waltham, MA 02454, USA.
156. Rabinovich MI, Varona P. Robust transient dynamics and brain functions. Front Comput Neurosci 2011;5:24. PMID: 21716642. PMCID: PMC3116137. DOI: 10.3389/fncom.2011.00024.
Abstract
In the last few decades, several concepts of dynamical systems theory (DST) have guided psychologists, cognitive scientists, and neuroscientists to rethink sensory-motor behavior and embodied cognition. A critical step in the progress of DST application to the brain (supported by modern methods of brain imaging and multi-electrode recording techniques) has been the transfer of its initial success in motor behavior to mental function, i.e., perception, emotion, and cognition. Open questions from research in genetics, ecology, brain sciences, etc., have changed DST itself and led to the discovery of a new dynamical phenomenon: reproducible and robust transients that are at the same time sensitive to informational signals. The goal of this review is to describe a new mathematical framework - heteroclinic sequential dynamics - for understanding self-organized activity in the brain that can explain certain aspects of robust itinerant behavior. Specifically, we discuss a hierarchy of coarse-grained models of mental dynamics in the form of kinetic equations of modes. These modes compete for resources at three levels: (i) within the same modality, (ii) among different modalities from the same family (like perception), and (iii) among modalities from different families (like emotion and cognition). The analysis of the conditions for robustness, i.e., the structural stability of transient (sequential) dynamics, makes it possible to explain phenomena like the finite capacity of our sequential working memory - a vital cognitive function - and to find specific dynamical signatures - different kinds of instabilities - of several brain functions and mental diseases.
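The "kinetic equations of modes" referred to here are usually written as generalized Lotka-Volterra competition among mode amplitudes; a schematic form (notation chosen for illustration rather than copied from the review) is:

```latex
% Schematic generalized Lotka-Volterra form of the mode kinetics (illustrative notation)
\dot{x}_i \;=\; x_i\!\left(\sigma_i(S) \;-\; \sum_{j=1}^{N}\rho_{ij}\,x_j\right) \;+\; \eta_i(t),
\qquad i = 1,\dots,N
```

where x_i >= 0 is the amplitude of mode i, sigma_i(S) its stimulus-dependent growth rate, rho_ij the competition matrix, and eta_i(t) a weak noise term; suitably asymmetric rho_ij yield a stable heteroclinic channel, i.e., a reproducible sequence of metastable states.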
157. Xydas D, Downes JH, Spencer MC, Hammond MW, Nasuto SJ, Whalley BJ, Becerra VM, Warwick K. Revealing ensemble state transition patterns in multi-electrode neuronal recordings using hidden Markov models. IEEE Trans Neural Syst Rehabil Eng 2011;19:345-55. PMID: 21622081. DOI: 10.1109/tnsre.2011.2157360.
Abstract
In order to harness the computational capacity of dissociated cultured neuronal networks, it is necessary to understand neuronal dynamics and connectivity on a mesoscopic scale. To this end, this paper uncovers dynamic spatiotemporal patterns emerging from electrically stimulated neuronal cultures by using hidden Markov models (HMMs) to characterize multi-channel spike trains as a progression of patterns of underlying states of neuronal activity. Experimentation aimed at an optimal choice of parameters for such models is essential, and its results are reported in detail. Results derived from ensemble neuronal data revealed highly repeatable patterns of state transitions on the order of milliseconds in response to probing stimuli.
Affiliation(s)
- Dimitris Xydas
- Cybernetics Research Group, School of Systems Engineering, University of Reading, RG6 6AY Reading, UK.
158. Balaguer-Ballester E, Lapish CC, Seamans JK, Durstewitz D. Attracting dynamics of frontal cortex ensembles during memory-guided decision-making. PLoS Comput Biol 2011;7:e1002057. PMID: 21625577. PMCID: PMC3098221. DOI: 10.1371/journal.pcbi.1002057.
Abstract
A common theoretical view is that attractor-like properties of neuronal dynamics underlie cognitive processing. However, although often proposed theoretically, direct experimental support for the convergence of neural activity to stable population patterns as a signature of attracting states has been sparse so far, especially in higher cortical areas. Combining state space reconstruction theorems and statistical learning techniques, we were able to resolve details of anterior cingulate cortex (ACC) multiple single-unit activity (MSUA) ensemble dynamics during a higher cognitive task which were not accessible previously. The approach worked by constructing high-dimensional state spaces from delays of the original single-unit firing rate variables and the interactions among them, which were then statistically analyzed using kernel methods. We observed cognitive-epoch-specific neural ensemble states in ACC which were stable across many trials (in the sense of being predictive) and depended on behavioral performance. More interestingly, attracting properties of these cognitively defined ensemble states became apparent in high-dimensional expansions of the MSUA spaces due to a proper unfolding of the neural activity flow, with properties common across different animals. These results therefore suggest that ACC networks may process different subcomponents of higher cognitive tasks by transiting among different attracting states.
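A minimal sketch of the delay-embedding step described above, not the authors' pipeline: each unit's smoothed firing rate is expanded with time-lagged copies plus pairwise products as a crude stand-in for the interaction terms, after which a kernel classifier can be applied. Lag counts, data shapes, and function names are assumptions.

```python
# Illustrative sketch only: delay-embedded, polynomially expanded state space
# built from smoothed single-unit firing rates.
import numpy as np
from itertools import combinations_with_replacement

def delay_embed(rates, n_lags):
    """rates: (T, N) smoothed firing rates -> (T - n_lags, N * (n_lags + 1))."""
    T, N = rates.shape
    cols = [rates[lag:T - n_lags + lag] for lag in range(n_lags, -1, -1)]
    return np.hstack(cols)

def pairwise_expand(X):
    """Append products x_i * x_j (i <= j), approximating interaction terms."""
    prods = [X[:, i] * X[:, j]
             for i, j in combinations_with_replacement(range(X.shape[1]), 2)]
    return np.hstack([X, np.stack(prods, axis=1)])

rates = np.random.rand(1000, 8)          # placeholder (T bins, 8 units)
X = pairwise_expand(delay_embed(rates, n_lags=3))
# X can now be fed to a kernel classifier (e.g., an SVM) with cognitive-epoch
# labels to test whether epoch-specific ensemble states are separable.
```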
Affiliation(s)
- Emili Balaguer-Ballester
- Bernstein-Center for Computational Neuroscience Heidelberg-Mannheim, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Christopher C. Lapish
- Department of Psychology, Indiana University-Purdue University, Indianapolis, Indiana, United States of America
- Jeremy K. Seamans
- Brain Research Center & Department of Psychiatry, University of British Columbia, Vancouver, Canada
- Daniel Durstewitz
- Bernstein-Center for Computational Neuroscience Heidelberg-Mannheim, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
159. Stone ME, Maffei A, Fontanini A. Amygdala stimulation evokes time-varying synaptic responses in the gustatory cortex of anesthetized rats. Front Integr Neurosci 2011;5:3. PMID: 21503144. PMCID: PMC3071977. DOI: 10.3389/fnint.2011.00003.
Abstract
Gustatory stimuli are characterized by a specific hedonic value; they are either palatable or aversive. Hedonic value, along with other psychological dimensions of tastes, is coded in the time-course of gustatory cortex (GC) neural responses and appears to emerge via top-down modulation by the basolateral amygdala (BLA). While the importance of BLA in modulating gustatory cortical function has been well established, the nature of its input onto GC neurons is largely unknown. Somewhat conflicting results from extracellular recordings point to either excitatory or inhibitory effects. Here, we directly test the hypothesis that BLA can evoke time-varying - excitatory and inhibitory - synaptic responses in GC using in vivo intracellular recording techniques in urethane anesthetized rats. Electrical stimulation of BLA evoked a post-synaptic potential (PSP) in GC neurons that resulted from a combination of short and long latency components: an initial monosynaptic, glutamatergic potential followed by a multisynaptic, GABAergic hyperpolarization. As predicted by the dynamic nature of amygdala evoked potentials, trains of five BLA stimuli at rates that mimic physiological firing rates (5-40 Hz) evoke a combination of excitation and inhibition in GC cells. The magnitude of the different components varies depending on the frequency of stimulation, with summation of excitatory and inhibitory inputs reaching its maximum at higher frequencies. These experiments provide the first description of BLA synaptic inputs to GC and reveal that amygdalar afferents can modulate gustatory cortical network activity and its processing of sensory information via time-varying synaptic dynamics.
Affiliation(s)
- Martha E Stone
- Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, NY, USA
160. Yoshida T, Katz DB. Control of prestimulus activity related to improved sensory coding within a discrimination task. J Neurosci 2011;31:4101-12. PMID: 21411651. PMCID: PMC3089821. DOI: 10.1523/jneurosci.4380-10.2011.
Abstract
Network state influences the processing of incoming stimuli. It is reasonable to expect, therefore, that animals might adjust cortical activity to improve sensory coding of behaviorally relevant stimuli. We tested this hypothesis, recording single-neuron activity from gustatory cortex (GC) in rats engaged in a two-alternative forced-choice taste discrimination task, and assaying the responses of these same neurons when the rats received the stimuli passively. We found that the task context affected the GC network state (reducing beta- and gamma-band field potential activity) and changed prestimulus and taste-induced single-neuron activity: before the stimulus, the activity of already low-firing neurons was further reduced, a change that was followed by comparable reductions of taste responses themselves. These changes served to sharpen taste selectivity, mainly by reducing responses to suboptimal stimuli. This sharpening of taste selectivity was specifically attributable to neurons with decreased prestimulus activities. Our results suggest the importance of prestimulus activity control for improving sensory coding within the task context.
Affiliation(s)
- Takashi Yoshida
- Department of Psychology
- Volen Center for Complex Systems, Brandeis University, Waltham, Massachusetts 02454
- Donald B. Katz
- Department of Psychology
- Program of Neuroscience
- Volen Center for Complex Systems, Brandeis University, Waltham, Massachusetts 02454
161. Mishchenko Y. Reconstruction of complete connectivity matrix for connectomics by sampling neural connectivity with fluorescent synaptic markers. J Neurosci Methods 2011;196:289-302. DOI: 10.1016/j.jneumeth.2011.01.021.
162. Rosen AM, Victor JD, Di Lorenzo PM. Temporal coding of taste in the parabrachial nucleus of the pons of the rat. J Neurophysiol 2011;105:1889-96. PMID: 21307316. DOI: 10.1152/jn.00836.2010.
Abstract
Recent studies have provided evidence that temporal coding contributes significantly to encoding taste stimuli at the first central relay for taste, the nucleus of the solitary tract (NTS). However, it is not known whether this coding mechanism is also used at the next synapse in the central taste pathway, the parabrachial nucleus of the pons (PbN). In the present study, electrophysiological responses to taste stimuli (sucrose, NaCl, HCl, and quinine) were recorded from 44 cells in the PbN of anesthetized rats. In 29 cells, the contribution of the temporal characteristics of the response to the discrimination of various taste qualities was assessed. A family of metrics that quantifies the similarity of two spike trains in terms of spike count and spike timing was used. Results showed that spike timing in 14 PbN cells (48%) conveyed a significant amount of information about taste quality, beyond what could be conveyed by spike count alone. In another 14 cells (48%), the rate envelope (time course) of the response contributed significantly more information than spike count alone. Across cells there was a significant correlation (r = 0.51; P < 0.01) between breadth of tuning and the proportion of information conveyed by temporal dynamics. Comparison with previous data from the NTS (Di Lorenzo PM and Victor JD. J Neurophysiol 90: 1418-31, 2003 and J Neurophysiol 97: 1857-1861, 2007) showed that temporal coding in the NTS occurred in a similar proportion of cells and contributed a similar fraction of the total information at the same average level of temporal precision, even though trial-to-trial variability was higher in the PbN than in the NTS. These data suggest that information about taste quality conveyed by the temporal characteristics of evoked responses is transmitted with high fidelity from the NTS to the PbN.
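The metric family referred to here is, given the authorship, presumably the Victor-Purpura cost-based spike-train distance; a compact reference implementation (parameter names assumed, not the authors' code) is:

```python
# Illustrative implementation of the Victor-Purpura spike-time distance.
# q is the cost per second of shifting a spike; q = 0 reduces to a pure
# spike-count distance.
import numpy as np

def victor_purpura(t1, t2, q):
    """t1, t2: sorted spike times (s); q: temporal precision cost (1/s)."""
    n, m = len(t1), len(t2)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1)          # deleting all spikes of t1
    D[0, :] = np.arange(m + 1)          # inserting all spikes of t2
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = min(D[i - 1, j] + 1,                                   # delete
                          D[i, j - 1] + 1,                                   # insert
                          D[i - 1, j - 1] + q * abs(t1[i - 1] - t2[j - 1]))  # shift
    return D[n, m]

# Example: with q = 0 the distance is |n - m|; large q penalizes timing errors.
a = np.array([0.10, 0.25, 0.40])
b = np.array([0.12, 0.41])
print(victor_purpura(a, b, q=0.0), victor_purpura(a, b, q=20.0))
```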
Affiliation(s)
- Andrew M Rosen
- Department of Psychology, Binghamton University, Binghamton, NY 13902-6000, USA
163. Escola S, Fontanini A, Katz D, Paninski L. Hidden Markov models for the stimulus-response relationships of multistate neural systems. Neural Comput 2011;23:1071-132. PMID: 21299424. DOI: 10.1162/neco_a_00118.
Abstract
Given recent experimental results suggesting that neural circuits may evolve through multiple firing states, we develop a framework for estimating state-dependent neural response properties from spike train data. We modify the traditional hidden Markov model (HMM) framework to incorporate stimulus-driven, non-Poisson point-process observations. For maximal flexibility, we allow external, time-varying stimuli and the neurons' own spike histories to drive both the spiking behavior in each state and the transitioning behavior between states. We employ an appropriately modified expectation-maximization algorithm to estimate the model parameters. The expectation step is solved by the standard forward-backward algorithm for HMMs. The maximization step reduces to a set of separable concave optimization problems if the model is restricted slightly. We first test our algorithm on simulated data and are able to fully recover the parameters used to generate the data and accurately recapitulate the sequence of hidden states. We then apply our algorithm to a recently published data set in which the observed neuronal ensembles displayed multistate behavior and show that inclusion of spike history information significantly improves the fit of the model. Additionally, we show that a simple reformulation of the state space of the underlying Markov chain allows us to implement a hybrid half-multistate, half-histogram model that may be more appropriate for capturing the complexity of certain data sets than either a simple HMM or a simple peristimulus time histogram model alone.
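As a simplified stand-in for the stimulus-driven multistate observation model described above (not the authors' implementation; the full model uses point-process likelihoods with spike-history terms), the sketch below decodes the most likely state sequence when each state's Poisson rate depends log-linearly on an external stimulus. All parameter values are assumptions.

```python
# Illustrative Viterbi decoding for a multistate model with stimulus-driven
# Poisson observations (a simplification of the point-process HMM above).
import numpy as np

def viterbi_poisson_glm(spikes, stim, logA, logpi, w, b, dt=0.01):
    """spikes: (T,) counts; stim: (T,) covariate; w, b: (K,) per-state GLM weights."""
    T, K = len(spikes), len(w)
    lam = np.exp(w[None, :] * stim[:, None] + b[None, :]) * dt       # (T, K) expected counts
    loglik = spikes[:, None] * np.log(lam + 1e-12) - lam             # Poisson log-lik (up to const)
    V = np.zeros((T, K)); ptr = np.zeros((T, K), dtype=int)
    V[0] = logpi + loglik[0]
    for t in range(1, T):
        scores = V[t - 1][:, None] + logA                            # (from, to)
        ptr[t] = scores.argmax(0)
        V[t] = scores.max(0) + loglik[t]
    path = np.zeros(T, dtype=int)
    path[-1] = V[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = ptr[t + 1, path[t + 1]]
    return path

K = 3
logA = np.log(np.full((K, K), 0.05) + np.eye(K) * 0.85)   # sticky transitions
path = viterbi_poisson_glm(spikes=np.random.poisson(1, 500),
                           stim=np.sin(np.linspace(0, 20, 500)),
                           logA=logA, logpi=np.log(np.ones(K) / K),
                           w=np.array([0.0, 1.0, -1.0]), b=np.log([5.0, 10.0, 20.0]))
```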
Affiliation(s)
- Sean Escola
- Center for Theoretical Neuroscience and Department of Psychiatry, Columbia University, New York, NY 10032, USA.
164. Takiyama K, Okada M. Detection of hidden structures in nonstationary spike trains. Neural Comput 2011;23:1205-33. PMID: 21299427. DOI: 10.1162/neco_a_00109.
Abstract
We propose an algorithm for simultaneously estimating state transitions among neural states and nonstationary firing rates using a switching state-space model (SSSM). This algorithm enables us to detect state transitions on the basis of not only discontinuous changes in mean firing rates but also discontinuous changes in the temporal profiles of firing rates (e.g., temporal correlation). We construct estimation and learning algorithms for a nongaussian SSSM, whose nongaussian property is caused by binary spike events. Local variational methods can transform the binary observation process into a quadratic form. The transformed observation process enables us to construct a variational Bayes algorithm that can determine the number of neural states based on automatic relevance determination. Additionally, our algorithm can estimate model parameters from single-trial data using a priori knowledge about state transitions and firing rates. Synthetic data analysis reveals that our algorithm has higher performance for estimating nonstationary firing rates than previous methods. The analysis also confirms that our algorithm can detect state transitions on the basis of discontinuous changes in temporal correlation, which are transitions that previous hidden Markov models could not detect. We also analyze neural data recorded from the medial temporal area. The statistically detected neural states probably coincide with transient and sustained states that have been detected heuristically. Estimated parameters suggest that our algorithm detects the state transitions on the basis of discontinuous changes in the temporal correlation of firing rates. These results suggest that our algorithm is advantageous in real-data analysis.
Affiliation(s)
- Ken Takiyama
- The University of Tokyo, Kashiwanoha 5-1-5, Kashiwa-shi, Chiba 277-8561, Japan.
165. Afraimovich V, Young T, Muezzinoglu MK, Rabinovich MI. Nonlinear dynamics of emotion-cognition interaction: when emotion does not destroy cognition? Bull Math Biol 2011;73:266-84. PMID: 20821062. PMCID: PMC3208426. DOI: 10.1007/s11538-010-9572-x.
Abstract
Emotion (i.e., spontaneous motivation and subsequent implementation of a behavior) and cognition (i.e., problem solving by information processing) are essential to how we, as humans, respond to changes in our environment. Recent studies in cognitive science suggest that emotion and cognition are subserved by different, although heavily integrated, neural systems. Understanding the time-varying relationship of emotion and cognition is a challenging goal with important implications for neuroscience. We formulate here the dynamical model of emotion-cognition interaction that is based on the following principles: (1) the temporal evolution of cognitive and emotion modes are captured by the incoming stimuli and competition within and among themselves (competition principle); (2) metastable states exist in the unified emotion-cognition phase space; and (3) the brain processes information with robust and reproducible transients through the sequence of metastable states. Such a model can take advantage of the often ignored temporal structure of the emotion-cognition interaction to provide a robust and generalizable method for understanding the relationship between brain activation and complex human behavior. The mathematical image of the robust and reproducible transient dynamics is a Stable Heteroclinic Sequence (SHS), and the Stable Heteroclinic Channels (SHCs). These have been hypothesized to be possible mechanisms that lead to the sequential transient behavior observed in networks. We investigate the modularity of SHCs, i.e., given a SHS and a SHC that is supported in one part of a network, we study conditions under which the SHC pertaining to the cognition will continue to function in the presence of interfering activity with other parts of the network, i.e., emotion.
Affiliation(s)
- Todd Young
- Department of Mathematics, Ohio University, Athens, OH, USA
166. de Franciscis S, Torres JJ, Marro J. Unstable dynamics, nonequilibrium phases, and criticality in networked excitable media. Phys Rev E Stat Nonlin Soft Matter Phys 2010;82:041105. PMID: 21230236. DOI: 10.1103/physreve.82.041105.
Abstract
Excitable systems are of great theoretical and practical interest in mathematics, physics, chemistry, and biology. Here, we numerically study models of excitable media, namely, networks whose nodes may occasionally be dormant and the connection weights are allowed to vary with the system activity on a short-time scale, which is a convenient and realistic representation. The resulting global activity is quite sensitive to stimuli and eventually becomes unstable also in the absence of any stimuli. Outstanding consequences of such unstable dynamics are the spontaneous occurrence of various nonequilibrium phases--including associative-memory phases and one in which the global activity wanders irregularly, e.g., chaotically among all or part of the dynamic attractors--and 1/f noise as the system is driven into the phase region corresponding to the most irregular behavior. A net result is resilience which results in an efficient search in the model attractor space that can explain the origin of some observed behavior in neural, genetic, and ill-condensed matter systems. By extensive computer simulation we also address a previously conjectured relation between observed power-law distributions and the possible occurrence of a "critical state" during functionality of, e.g., cortical networks, and describe the precise nature of such criticality in the model which may serve to guide future experiments.
Affiliation(s)
- S de Franciscis
- Departamento de Electromagnetismo y Física de la Materia, Institute Carlos I for Theoretical and Computational Physics, University of Granada, Granada, Spain
167. Klampfl S, Maass W. A theoretical basis for emergent pattern discrimination in neural systems through slow feature extraction. Neural Comput 2010;22:2979-3035. PMID: 20858129. DOI: 10.1162/neco_a_00050.
Abstract
Neurons in the brain are able to detect and discriminate salient spatiotemporal patterns in the firing activity of presynaptic neurons. It remains an open question how they can learn to achieve this, especially without the help of a supervisor. We show that a well-known unsupervised learning algorithm for linear neurons, slow feature analysis (SFA), is able to acquire the discrimination capability of one of the best algorithms for supervised linear discrimination learning, the Fisher linear discriminant (FLD), given suitable input statistics. We demonstrate the power of this principle by showing that it enables readout neurons from simulated cortical microcircuits to learn without any supervision to discriminate between spoken digits and to detect repeated firing patterns that are embedded into a stream of noise spike trains with the same firing statistics. Both these computer simulations and our theoretical analysis show that slow feature extraction enables neurons to extract and collect information that is spread out over a trajectory of firing states that lasts several hundred ms. In addition, it enables neurons to learn without supervision to keep track of time (relative to a stimulus onset, or the initiation of a motor response). Hence, these results elucidate how the brain could compute with trajectories of firing states rather than only with fixed point attractors. It also provides a theoretical basis for understanding recent experimental results on the emergence of view- and position-invariant classification of visual objects in inferior temporal cortex.
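A minimal linear slow feature analysis, included to make the principle concrete; this is not the authors' code, and the two-channel input is a synthetic placeholder:

```python
# Minimal linear slow feature analysis (SFA): whiten the input, then keep the
# directions along which the temporal derivative has the smallest variance.
import numpy as np

def linear_sfa(X, n_features):
    """X: (T, D) time series. Returns (T, n_features) slowest output signals."""
    X = X - X.mean(0)
    cov = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    keep = evals > 1e-10
    W_white = evecs[:, keep] / np.sqrt(evals[keep])     # whitening matrix
    Z = X @ W_white
    dZ = np.diff(Z, axis=0)                             # temporal derivative
    d_evals, d_evecs = np.linalg.eigh(np.cov(dZ, rowvar=False))
    W_slow = d_evecs[:, :n_features]                    # smallest eigenvalues = slowest
    return Z @ W_slow

T = 2000
t = np.linspace(0, 10, T)
X = np.column_stack([np.sin(t) + 0.1 * np.random.randn(T),   # slow component
                     np.random.randn(T)])                     # fast noise
slow = linear_sfa(X, n_features=1)   # recovers the sinusoid up to sign/scale
```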
Affiliation(s)
- Stefan Klampfl
- Institute for Theoretical Computer Science, Graz University of Technology, A-8010 Graz, Austria.
168. Dynamical principles of emotion-cognition interaction: mathematical images of mental disorders. PLoS One 2010;5:e12547. PMID: 20877723. PMCID: PMC2943469. DOI: 10.1371/journal.pone.0012547.
Abstract
The key contribution of this work is to introduce a mathematical framework to understand self-organized dynamics in the brain that can explain certain aspects of itinerant behavior. Specifically, we introduce a model based upon the coupling of generalized Lotka-Volterra systems. This coupling is based upon competition for common resources. The system can be regarded as a normal or canonical form for any distributed system that shows self-organized dynamics that entail winnerless competition. Crucially, we will show that some of the fundamental instabilities that arise in these coupled systems are remarkably similar to endogenous activity seen in the brain (using EEG and fMRI). Furthermore, by changing a small subset of the system's parameters we can produce bifurcations and changes in metastable sequential dynamics, which bear a remarkable similarity to pathological brain states seen in psychiatry. In what follows, we will consider the coupling of two macroscopic modes of brain activity, which, in a purely descriptive fashion, we will label as cognitive and emotional modes. Our aim is to examine the dynamical structures that emerge when coupling these two modes and relate them tentatively to brain activity in normal and non-normal states.
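A numerical sketch of the kind of coupled generalized Lotka-Volterra system described above (parameter values, coupling scheme, and names are assumptions, not the paper's):

```python
# Illustrative simulation of two coupled generalized Lotka-Volterra families
# ("cognitive" and "emotional" modes) competing for a shared resource.
import numpy as np

def coupled_glv(sigma_c, sigma_e, rho_c, rho_e, coupling,
                T=20000, dt=1e-3, noise=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    Nc, Ne = len(sigma_c), len(sigma_e)
    x = rng.uniform(0.01, 0.1, Nc)     # cognitive mode amplitudes
    y = rng.uniform(0.01, 0.1, Ne)     # emotional mode amplitudes
    traj = np.zeros((T, Nc + Ne))
    for t in range(T):
        dx = x * (sigma_c - rho_c @ x - coupling * y.sum())
        dy = y * (sigma_e - rho_e @ y - coupling * x.sum())
        x = np.clip(x + dt * dx + noise * rng.standard_normal(Nc), 0, None)
        y = np.clip(y + dt * dy + noise * rng.standard_normal(Ne), 0, None)
        traj[t] = np.concatenate([x, y])
    return traj

# Asymmetric competition matrices produce sequential switching among modes;
# increasing `coupling` lets one family suppress or destabilize the other.
rho = np.array([[1.0, 1.5, 0.5],
                [0.5, 1.0, 1.5],
                [1.5, 0.5, 1.0]])
traj = coupled_glv(sigma_c=np.ones(3), sigma_e=np.ones(3),
                   rho_c=rho, rho_e=rho, coupling=0.3)
```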
169. Sensory input drives multiple intracellular information streams in somatosensory cortex. J Neurosci 2010;30:10872-84. PMID: 20702716. DOI: 10.1523/jneurosci.6174-09.2010.
Abstract
Stable perception arises from the interaction between sensory inputs and internal activity fluctuations in cortex. Here we analyzed how different types of activity contribute to cortical sensory processing at the cellular scale. We performed whole-cell recordings in the barrel cortex of anesthetized rats while applying ongoing whisker stimulation and measured the information conveyed about the time-varying stimulus by different types of input (membrane potential) and output (spiking) signals. We found that substantial, comparable amounts of incoming information are carried by two types of membrane potential signal: slow, large (up-down state) fluctuations, and faster (>20 Hz), smaller-amplitude synaptic activity. Both types of activity fluctuation are therefore significantly driven by the stimulus on an ongoing basis. Each stream conveys essentially independent information. Output (spiking) information is contained in spike timing not just relative to the stimulus but also relative to membrane potential fluctuations. Information transfer is favored in up states relative to down states. Thus, slow, ongoing activity fluctuations and finer-scale synaptic activity generate multiple channels for incoming and outgoing information within barrel cortex neurons during ongoing stimulation.
170. Katahira K, Nishikawa J, Okanoya K, Okada M. Extracting state transition dynamics from multiple spike trains using hidden Markov models with correlated Poisson distribution. Neural Comput 2010;22:2369-89. PMID: 20337539. DOI: 10.1162/neco.2010.08-08-838.
Abstract
Neural activity is nonstationary and varies across time. Hidden Markov models (HMMs) have been used to track the state transition among quasi-stationary discrete neural states. Within this context, an independent Poisson model has been used for the output distribution of HMMs; hence, the model is incapable of tracking the change in correlation without modulating the firing rate. To achieve this, we applied a multivariate Poisson distribution with correlation terms for the output distribution of HMMs. We formulated a variational Bayes (VB) inference for the model. The VB could automatically determine the appropriate number of hidden states and correlation types while avoiding the overlearning problem. We developed an efficient algorithm for computing posteriors using the recursive relationship of a multivariate Poisson distribution. We demonstrated the performance of our method on synthetic data and real spike trains recorded from a songbird.
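The correlated output distribution described above can be illustrated with the standard trivariate-reduction construction of a bivariate Poisson (a shared latent count); the sketch below is not the authors' code, and the rates are arbitrary:

```python
# Illustrative bivariate Poisson with a positive correlation term, built from
# a shared latent count: same marginal rates, adjustable covariance.
import numpy as np

def correlated_poisson(rate1, rate2, shared, size, seed=0):
    """X1 = Y1 + Y12, X2 = Y2 + Y12 with independent Poisson Y's.
    Marginals: Poisson(rate1), Poisson(rate2); Cov(X1, X2) = shared."""
    rng = np.random.default_rng(seed)
    y12 = rng.poisson(shared, size)
    x1 = rng.poisson(rate1 - shared, size) + y12
    x2 = rng.poisson(rate2 - shared, size) + y12
    return x1, x2

# Same mean rates, different correlation: an independent-Poisson HMM cannot
# separate these two conditions, while a correlated-Poisson HMM can.
a1, a2 = correlated_poisson(5.0, 5.0, shared=0.0, size=10000)
b1, b2 = correlated_poisson(5.0, 5.0, shared=3.0, size=10000, seed=1)
print(np.corrcoef(a1, a2)[0, 1], np.corrcoef(b1, b2)[0, 1])
```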
Affiliation(s)
- Kentaro Katahira
- Graduate School of Frontier Sciences, University of Tokyo, 277-8561 Chiba, Japan.
171. Paninski L, Ahmadian Y, Ferreira DG, Koyama S, Rahnama Rad K, Vidne M, Vogelstein J, Wu W. A new look at state-space models for neural data. J Comput Neurosci 2010;29:107-26. PMID: 19649698. PMCID: PMC3712521. DOI: 10.1007/s10827-009-0179-x.
Abstract
State space methods have proven indispensable in neural data analysis. However, common methods for performing inference in state-space models with non-Gaussian observations rely on certain approximations which are not always accurate. Here we review direct optimization methods that avoid these approximations, but that nonetheless retain the computational efficiency of the approximate methods. We discuss a variety of examples, applying these direct optimization techniques to problems in spike train smoothing, stimulus decoding, parameter estimation, and inference of synaptic properties. Along the way, we point out connections to some related standard statistical methods, including spline smoothing and isotonic regression. Finally, we note that the computational methods reviewed here do not in fact depend on the state-space setting at all; instead, the key property we are exploiting involves the bandedness of certain matrices. We close by discussing some applications of this more general point of view, including Markov chain Monte Carlo methods for neural decoding and efficient estimation of spatially-varying firing rates.
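To make the "bandedness" point concrete, the sketch below (not the authors' code; variances are assumptions) computes a MAP smooth of noisy observations under a Gaussian random-walk prior by solving a tridiagonal system, which scales linearly in the number of time bins:

```python
# Illustrative MAP smoothing under a Gaussian random-walk prior: the Hessian
# of the log-posterior is tridiagonal, so the solve costs O(T), not O(T^3).
import numpy as np
from scipy.linalg import solveh_banded

def map_smooth(y, obs_var=1.0, walk_var=0.01):
    """MAP estimate of x where y_t = x_t + N(0, obs_var), x_t a random walk."""
    T = len(y)
    # Hessian = I/obs_var + D^T D / walk_var, with D the first-difference operator
    diag = np.full(T, 1.0 / obs_var) + 2.0 / walk_var
    diag[0] -= 1.0 / walk_var
    diag[-1] -= 1.0 / walk_var
    off = np.full(T - 1, -1.0 / walk_var)
    ab = np.zeros((2, T))                 # upper banded storage for solveh_banded
    ab[0, 1:] = off
    ab[1, :] = diag
    return solveh_banded(ab, y / obs_var)

t = np.linspace(0, 4 * np.pi, 2000)
rate = np.exp(np.sin(t))
y = rate + 0.5 * np.random.randn(len(t))   # noisy observations of a slow rate
x_hat = map_smooth(y, obs_var=0.25, walk_var=0.001)
```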
Affiliation(s)
- Liam Paninski
- Department of Statistics and Center for Theoretical Neuroscience, Columbia University, New York, NY, USA.
- Yashar Ahmadian
- Department of Statistics and Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Daniel Gil Ferreira
- Department of Statistics and Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Shinsuke Koyama
- Department of Statistics, Carnegie Mellon University, Pittsburgh, PA, USA
- Kamiar Rahnama Rad
- Department of Statistics and Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Michael Vidne
- Department of Statistics and Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Joshua Vogelstein
- Department of Neuroscience, Johns Hopkins University, Baltimore, MD, USA
- Wei Wu
- Department of Statistics, Florida State University, Tallahassee, FL, USA
172. Carleton A, Accolla R, Simon SA. Coding in the mammalian gustatory system. Trends Neurosci 2010;33:326-34. PMID: 20493563. PMCID: PMC2902637. DOI: 10.1016/j.tins.2010.04.002.
Abstract
To understand gustatory physiology and associated dysfunctions, it is important to know how oral taste stimuli are encoded both in the periphery and in taste-related brain centres. The identification of distinct taste receptors, together with electrophysiological recordings and behavioral assessments in response to taste stimuli, suggest that information about distinct taste modalities (e.g. sweet versus bitter) are transmitted from the periphery to the brain via segregated pathways. By contrast, gustatory neurons throughout the brain are more broadly tuned, indicating that ensembles of neurons encode taste qualities. Recent evidence reviewed here suggests that the coding of gustatory stimuli is not immutable, but is dependent on a variety of factors including appetite-regulating molecules and associative learning.
Affiliation(s)
- Alan Carleton
- Department of Neurosciences, Medical Faculty, University of Geneva, 1 rue Michel-Servet, 1211 Genève 4, Switzerland.
173. Santos GS, Gireesh ED, Plenz D, Nakahara H. Hierarchical interaction structure of neural activities in cortical slice cultures. J Neurosci 2010;30:8720-33. PMID: 20592194. PMCID: PMC3042275. DOI: 10.1523/jneurosci.6141-09.2010.
Abstract
Recent advances in the analysis of neuronal activities suggest that the instantaneous activity patterns can be mostly explained by considering only first-order and pairwise interactions between recorded elements, i.e., action potentials or local field potentials (LFP), and do not require higher-than-pairwise-order interactions. If generally applicable, this pairwise approach greatly simplifies the description of network interactions. However, an important question remains: are the recorded elements the units of interaction that best describe neuronal activity patterns? To explore this, we recorded spontaneous LFP peak activities in cortical organotypic cultures using planar, integrated 60-microelectrode arrays. We compared predictions obtained using a pairwise approach with those using a hierarchical approach that uses two different spatial units for describing the activity interactions: single electrodes and electrode clusters. In this hierarchical model, short-range interactions within each cluster were modeled by pairwise interactions of electrode activities and long-range interactions were modeled by pairwise interactions of cluster activities. Despite the relatively low number of parameters used, the hierarchical model provided a more accurate description of the activity patterns than the pairwise model when applied to ensembles of 10 electrodes. Furthermore, the hierarchical model was successfully applied to a larger-scale data of approximately 60 electrodes. Electrode activities within clusters were highly correlated and spatially contiguous. In contrast, long-range interactions were diffuse, suggesting the presence of higher-than-pairwise-order interactions involved in the LFP peak activities. Thus, the identification of appropriate units of interaction may allow for the successful characterization of neuronal activities in large-scale networks.
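For reference, the pairwise (maximum-entropy, Ising-like) pattern model that the hierarchical model is compared against can be written down explicitly for a small group of binary units; the sketch below uses random placeholder couplings rather than fitted values:

```python
# Illustrative pairwise maximum-entropy model over binary activity patterns:
# P(x) is proportional to exp(h.x + x^T J x / 2).
import numpy as np
from itertools import product

def pairwise_pattern_probs(h, J):
    """Enumerate all binary patterns x in {0,1}^n and return their probabilities."""
    n = len(h)
    patterns = np.array(list(product([0, 1], repeat=n)))
    energies = patterns @ h + 0.5 * np.einsum('ki,ij,kj->k', patterns, J, patterns)
    p = np.exp(energies - energies.max())
    return patterns, p / p.sum()

rng = np.random.default_rng(0)
n = 6
J = rng.normal(0, 0.3, (n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0.0)
patterns, probs = pairwise_pattern_probs(h=rng.normal(-1.0, 0.5, n), J=J)
# Fitting h and J so that the model means and pairwise correlations match the
# data (e.g., by gradient ascent on the likelihood) gives the pairwise model
# whose predictions the hierarchical model is compared against.
```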
Affiliation(s)
- Gustavo S. Santos
- Laboratory for Integrated Theoretical Neuroscience, RIKEN Brain Science Institute, Wako, Saitama 351-0198, Japan
- Elakkat D. Gireesh
- Section on Critical Brain Dynamics, Laboratory of Systems Neuroscience, National Institute of Mental Health, Bethesda, Maryland 20892
- Dietmar Plenz
- Section on Critical Brain Dynamics, Laboratory of Systems Neuroscience, National Institute of Mental Health, Bethesda, Maryland 20892
- Hiroyuki Nakahara
- Laboratory for Integrated Theoretical Neuroscience, RIKEN Brain Science Institute, Wako, Saitama 351-0198, Japan
- Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Yokohama 226-8501, Japan
174. Durstewitz D, Vittoz NM, Floresco SB, Seamans JK. Abrupt transitions between prefrontal neural ensemble states accompany behavioral transitions during rule learning. Neuron 2010;66:438-48. PMID: 20471356. DOI: 10.1016/j.neuron.2010.03.029.
Abstract
One of the most intriguing aspects of adaptive behavior involves the inference of regularities and rules in ever-changing environments. Rules are often deduced through evidence-based learning which relies on the prefrontal cortex (PFC). This is a highly dynamic process, evolving trial by trial and therefore may not be adequately captured by averaging single-unit responses over numerous repetitions. Here, we employed advanced statistical techniques to visualize the trajectories of ensembles of simultaneously recorded medial PFC neurons on a trial-by-trial basis as rats deduced a novel rule in a set-shifting task. Neural populations formed clearly distinct and lasting representations of familiar and novel rules by entering unique network states. During rule acquisition, the recorded ensembles often exhibited abrupt transitions, rather than evolving continuously, in tight temporal relation to behavioral performance shifts. These results support the idea that rule learning is an evidence-based decision process, perhaps accompanied by moments of sudden insight.
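A simplified stand-in for the trial-by-trial trajectory visualization described above (the paper uses more elaborate multivariate statistics; bin size, smoothing width, and names here are assumptions):

```python
# Illustrative sketch: smooth single-trial ensemble spike counts and project
# them onto their leading principal components to follow the population state.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def ensemble_trajectory(counts, smooth_bins=5, n_components=2):
    """counts: (T, N) binned spike counts for one trial."""
    rates = gaussian_filter1d(counts.astype(float), smooth_bins, axis=0)
    X = rates - rates.mean(0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T      # (T, n_components) low-dimensional path

trial = np.random.poisson(2.0, size=(200, 40))   # placeholder trial
path = ensemble_trajectory(trial)
# Plotting `path` for successive trials lets abrupt jumps between regions of
# state space (e.g., familiar-rule versus novel-rule representations) stand out.
```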
Affiliation(s)
- Daniel Durstewitz
- RG Computational Neuroscience, Central Institute of Mental Health and Interdisciplinary Center for Neurosciences, University of Heidelberg, J 5, 68159 Mannheim, Germany.
175. Sequentially switching cell assemblies in random inhibitory networks of spiking neurons in the striatum. J Neurosci 2010;30:5894-911. PMID: 20427650. DOI: 10.1523/jneurosci.5540-09.2010.
Abstract
The striatum is composed of GABAergic medium spiny neurons with inhibitory collaterals forming a sparse random asymmetric network and receiving an excitatory glutamatergic cortical projection. Because the inhibitory collaterals are sparse and weak, their role in striatal network dynamics is puzzling. However, here we show by simulation of a striatal inhibitory network model composed of spiking neurons that cells form assemblies that fire in sequential coherent episodes and display complex identity-temporal spiking patterns even when cortical excitation is simply constant or fluctuating noisily. Strongly correlated large-scale firing rate fluctuations on slow behaviorally relevant timescales of hundreds of milliseconds are shown by members of the same assembly whereas members of different assemblies show strong negative correlation, and we show how randomly connected spiking networks can generate this activity. Cells display highly irregular spiking with high coefficients of variation, broadly distributed low firing rates, and interspike interval distributions that are consistent with exponentially tailed power laws. Although firing rates vary coherently on slow timescales, precise spiking synchronization is absent in general. Our model only requires the minimal but striatally realistic assumptions of sparse to intermediate random connectivity, weak inhibitory synapses, and sufficient cortical excitation so that some cells are depolarized above the firing threshold during up states. Our results are in good qualitative agreement with experimental studies, consistent with recently determined striatal anatomy and physiology, and support a new view of endogenously generated metastable state switching dynamics of the striatal network underlying its information processing operations.
176. Daelli V, Treves A. Neural attractor dynamics in object recognition. Exp Brain Res 2010;203:241-8. PMID: 20437171. DOI: 10.1007/s00221-010-2243-1.
Abstract
A widely held theory dating back to Donald Hebb posits neuronal attractor dynamics to underlie the retrieval of objects from long-term memory and the categorization of ambiguous stimuli, but empirical support for this notion had so far pointed more at self-sustained activity than at attractor dynamics per se. Complex perceptual effects modulating memory retrieval, including priming effects, are compatible with both attractor dynamics and alternative hypotheses, which seem to result in opposite predictions at the neuronal level. Recent recordings in monkeys indicate that attractor dynamics may indeed be observed, as it unfolds in time over a few hundred milliseconds, if neurons are probed in infero-temporal cortex during the categorization of ambiguous visual stimuli. Extending the analysis of such phenomena promises to take us beyond the perceptual periphery, where neuronal responses are still largely determined by sensory stimuli. Understanding the nature of transitions between attractor states opens the door to higher-level thought processes.
177. Tort ABL, Fontanini A, Kramer MA, Jones-Lush LM, Kopell NJ, Katz DB. Cortical networks produce three distinct 7-12 Hz rhythms during single sensory responses in the awake rat. J Neurosci 2010;30:4315-24. PMID: 20335467. PMCID: PMC3318968. DOI: 10.1523/jneurosci.6051-09.2010.
Abstract
Cortical rhythms in the alpha/mu frequency range (7-12 Hz) have been variously related to "idling," anticipation, seizure, and short-term or working memory. This overabundance of interpretations suggests that sensory cortex may be able to produce more than one (and even more than two) distinct alpha/mu rhythms. Here we describe simultaneous local field potential and single-neuron recordings made from primary sensory (gustatory) cortex of awake rats and reveal three distinct 7-12 Hz de novo network rhythms within single sessions: an "early," taste-induced approximately 11 Hz rhythm, the first peak of which was a short-latency gustatory evoked potential; a "late," significantly lower-frequency (approximately 7 Hz) rhythm that replaced this first rhythm at approximately 750-850 ms after stimulus onset (consistently timed with a previously described shift in taste temporal codes); and a "spontaneous" spike-and-wave rhythm of intermediate peak frequency (approximately 9 Hz) that appeared late in the session, as part of an oft-described reduction in arousal/attention. These rhythms proved dissociable on many grounds: in addition to having different peak frequencies, amplitudes, and shapes and appearing at different time points (although often within single 3 s snippets of activity), the early and late rhythms proved to have completely uncorrelated session-to-session variability, and the spontaneous rhythm affected the early rhythm only (having no impact on the late rhythm). Analysis of spike-to-wave coupling suggested that the early and late rhythms are a unified part of the discriminative taste process: the identity of phase-coupled single-neuron ensembles differed from taste to taste, and coupling typically lasted across the change in frequency. These data reveal that even rhythms confined to a narrow frequency band may still have distinct properties.
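A minimal sketch of the spike-to-wave coupling analysis mentioned above, assuming a band-pass filter plus Hilbert-transform phase; this is not the authors' analysis code, and the sampling rate, filter order, and data are placeholders:

```python
# Illustrative spike-to-wave coupling: band-pass the LFP in the 7-12 Hz range,
# take the instantaneous phase via the Hilbert transform, and collect the
# phase at each spike time.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def spike_phases(lfp, spike_idx, fs, band=(7.0, 12.0)):
    b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype='band')
    phase = np.angle(hilbert(filtfilt(b, a, lfp)))
    return phase[spike_idx]

fs = 1000.0
t = np.arange(0, 3.0, 1 / fs)
lfp = np.sin(2 * np.pi * 9 * t) + 0.5 * np.random.randn(len(t))  # placeholder LFP
spikes = np.sort(np.random.choice(len(t), 120, replace=False))   # spike sample indices
phases = spike_phases(lfp, spikes, fs)
# The mean resultant length of the spike phases quantifies how strongly a
# neuron locks to each of the three rhythms described above.
plv = np.abs(np.mean(np.exp(1j * phases)))
```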
Affiliation(s)
- Adriano B. L. Tort
- Edmond and Lily Safra International Institute of Neuroscience of Natal
- Federal University of Rio Grande do Norte, Natal, RN 59066, Brazil
- Alfredo Fontanini
- Department of Neurobiology and Behavior, State University of New York at Stony Brook, Stony Brook, New York 11794
- Mark A. Kramer
- Department of Mathematics and Statistics, Boston University, Boston, Massachusetts 02215
- Lauren M. Jones-Lush
- Department of Physical Therapy and Rehabilitation Science/Department of Anatomy and Neurobiology, University of Maryland School of Medicine, Baltimore, Maryland 21201
- Nancy J. Kopell
- Department of Mathematics and Statistics, Boston University, Boston, Massachusetts 02215
- Donald B. Katz
- Department of Psychology/Program of Neuroscience/Volen National Center for Complex Systems, Brandeis University, Waltham, Massachusetts 02454
178. Miller P, Katz DB. Stochastic transitions between neural states in taste processing and decision-making. J Neurosci 2010;30:2559-70. PMID: 20164341. PMCID: PMC2851230. DOI: 10.1523/jneurosci.3047-09.2010.
Abstract
Noise, which is ubiquitous in the nervous system, causes trial-to-trial variability in the neural responses to stimuli. This neural variability is in turn a likely source of behavioral variability. Using Hidden Markov modeling, a method of analysis that can make use of such trial-to-trial response variability, we have uncovered sequences of discrete states of neural activity in gustatory cortex during taste processing. Here, we advance our understanding of these patterns in two ways. First, we reproduce the experimental findings in a formal model, describing a network that evinces sharp transitions between discrete states that are deterministically stable given sufficient noise in the network; as in the empirical data, the transitions occur at variable times across trials, but the stimulus-specific sequence is itself reliable. Second, we demonstrate that such noise-induced transitions between discrete states can be computationally advantageous in a reduced, decision-making network. The reduced network produces binary outputs, which represent classification of ingested substances as palatable or nonpalatable, and the corresponding behavioral responses of "spit" or "swallow". We evaluate the performance of the network by measuring how reliably its outputs follow small biases in the strengths of its inputs. We compare two modes of operation: deterministic integration ("ramping") versus stochastic decision-making ("jumping"), the latter of which relies on state-to-state transitions. We find that the stochastic mode of operation can be optimal under typical levels of internal noise and that, within this mode, addition of random noise to each input can improve optimal performance when decisions must be made in limited time.
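A toy illustration of the mechanics of the two decision modes compared above: drift-diffusion integration to a threshold ("ramping") versus a race between noise-driven escape processes ("jumping"). It is not the authors' network model, and all rates, noise levels, and thresholds are assumptions; it is meant only to show how a small input bias propagates to choice in each mode.

```python
# Illustrative comparison of "ramping" (deterministic integration plus noise)
# and "jumping" (noise-driven escape from a discrete state) decision modes.
import numpy as np

def ramping_choice(bias, noise=0.5, thresh=0.5, dt=1e-3, rng=None):
    rng = rng or np.random.default_rng()
    x = 0.0
    while abs(x) < thresh:                       # integrate input + noise to a bound
        x += bias * dt + noise * np.sqrt(dt) * rng.standard_normal()
    return x > 0

def jumping_choice(bias, base_rate=2.0, rng=None):
    """Two competing escape processes; the one with the higher rate usually wins."""
    rng = rng or np.random.default_rng()
    t_pos = rng.exponential(1.0 / (base_rate * (1 + bias)))
    t_neg = rng.exponential(1.0 / (base_rate * (1 - bias)))
    return t_pos < t_neg

rng = np.random.default_rng(0)
bias = 0.05   # small input asymmetry to be detected
n = 500
acc_ramp = np.mean([ramping_choice(bias, rng=rng) for _ in range(n)])
acc_jump = np.mean([jumping_choice(bias, rng=rng) for _ in range(n)])
print(acc_ramp, acc_jump)   # fraction of trials following the input bias
```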
Affiliation(s)
- Paul Miller
- Department of Biology, Volen Center for Complex Systems, Brandeis University, Waltham, Massachusetts 02453, USA.
179. Licking-induced synchrony in the taste-reward circuit improves cue discrimination during learning. J Neurosci 2010;30:287-303. PMID: 20053910. DOI: 10.1523/jneurosci.0855-09.2010.
Abstract
Animals learn which foods to ingest and which to avoid. Despite many studies, the electrophysiological correlates underlying this behavior at the gustatory-reward circuit level remain poorly understood. For this reason, we measured the simultaneous electrical activity of neuronal ensembles in the orbitofrontal cortex, insular cortex, amygdala, and nucleus accumbens while rats licked for taste cues and learned to perform a taste discrimination go/no-go task. This study revealed that rhythmic licking entrains the activity in all these brain regions, suggesting that the animal's licking acts as an "internal clock signal" against which single spikes can be synchronized. That is, as animals learned a go/no-go task, there were increases in the number of licking coherent neurons as well as synchronous spiking between neuron pairs from different brain regions. Moreover, a subpopulation of gustatory cue-selective neurons that fired in synchrony with licking exhibited a greater ability to discriminate among tastants than nonsynchronized neurons. This effect was seen in all four recorded areas and increased markedly after learning, particularly after the cue was delivered and before the animals made a movement to obtain an appetitive or aversive tastant. Overall, these results show that, throughout a large segment of the taste-reward circuit, appetitive and aversive associative learning improves spike-timing precision, suggesting that proficiency in solving a taste discrimination go/no-go task requires licking-induced neural ensemble synchronous activity.
|
180
|
Mishchenko Y. On optical detection of densely labeled synapses in neuropil and mapping connectivity with combinatorially multiplexed fluorescent synaptic markers. PLoS One 2010; 5:e8853. [PMID: 20107507 PMCID: PMC2809746 DOI: 10.1371/journal.pone.0008853] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2009] [Accepted: 12/24/2009] [Indexed: 11/19/2022] Open
Abstract
We propose a new method for mapping neural connectivity optically, by utilizing the Cre/Lox system Brainbow to tag synapses of different neurons with random mixtures of different fluorophores, such as GFP, YFP, etc., and then detecting patterns of fluorophores at different synapses using light microscopy (LM). Such patterns will immediately report the pre- and post-synaptic cells at each synaptic connection, without tracing neural projections from individual synapses to corresponding cell bodies. We simulate fluorescence from a population of densely labeled synapses in a block of hippocampal neuropil, completely reconstructed from electron microscopy data, and show that high-end LM is able to detect such patterns with over 95% accuracy. We conclude, therefore, that with the described approach neural connectivity in macroscopically large neural circuits can be mapped with great accuracy, in a scalable manner, using fast optical tools and straightforward image processing. Relying on an electron microscopy dataset, we also derive and explicitly enumerate the conditions that should be met to allow synaptic connectivity studies with high-resolution optical tools.
Affiliation(s)
- Yuriy Mishchenko
- Department of Statistics and Center for Theoretical Neuroscience, Columbia University, New York, New York, USA.
|
181
|
Licking-induced synchrony in the taste-reward circuit improves cue discrimination during learning. J Neurosci 2010. [PMID: 20053910 DOI: 10.1523/jneurosci.0855-09.2010] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
|
182
|
Abstract
Neurons perform computations, and convey the results of those computations through the statistical structure of their output spike trains. Here we present a practical method, grounded in the information-theoretic analysis of prediction, for inferring a minimal representation of that structure and for characterizing its complexity. Starting from spike trains, our approach finds their causal state models (CSMs), the minimal hidden Markov models or stochastic automata capable of generating statistically identical time series. We then use these CSMs to objectively quantify both the generalizable structure and the idiosyncratic randomness of the spike train. Specifically, we show that the expected algorithmic information content (the information needed to describe the spike train exactly) can be split into three parts describing (1) the time-invariant structure (complexity) of the minimal spike-generating process, which describes the spike train statistically; (2) the randomness (internal entropy rate) of the minimal spike-generating process; and (3) a residual pure noise term not described by the minimal spike-generating process. We use CSMs to approximate each of these quantities. The CSMs are inferred nonparametrically from the data, making only mild regularity assumptions, via the causal state splitting reconstruction algorithm. The methods presented here complement more traditional spike train analyses by describing not only spiking probability and spike train entropy, but also the complexity of a spike train's structure. We demonstrate our approach using both simulated spike trains and experimental data recorded in rat barrel cortex during vibrissa stimulation.
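Schematically, the three-part split described above can be written as follows (the symbols C_mu, h_mu, and r are introduced here for illustration and are not taken from the paper): for a spike train of T bins,

```latex
\mathbb{E}\!\left[ K(x_{1:T}) \right] \;\approx\;
\underbrace{C_{\mu}}_{\text{structure of the minimal process}}
\;+\;
\underbrace{T\, h_{\mu}}_{\text{internal entropy rate}}
\;+\;
\underbrace{T\, r}_{\text{residual pure noise}} ,
```

where K denotes algorithmic information content and the CSM inferred by causal state splitting reconstruction supplies an estimate of each term.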
Affiliation(s)
- Robert Haslinger
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA 02129, USA.
|
183
|
Chersi F, Rigotti M, Fusi S. Power-law autocorrelation of neural activity in models of mental states that are hierarchically organized. BMC Neurosci 2009. [DOI: 10.1186/1471-2202-10-s1-p292] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022] Open
|
184
|
Chen Z, Vijayan S, Barbieri R, Wilson MA, Brown EN. Discrete- and continuous-time probabilistic models and algorithms for inferring neuronal UP and DOWN states. Neural Comput 2009; 21:1797-862. [PMID: 19323637 PMCID: PMC2799196 DOI: 10.1162/neco.2009.06-08-799] [Citation(s) in RCA: 27] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
UP and DOWN states, the periodic fluctuations between increased and decreased spiking activity of a neuronal population, are a fundamental feature of cortical circuits. Understanding UP-DOWN state dynamics is important for understanding how these circuits represent and transmit information in the brain. To date, limited work has been done on characterizing the stochastic properties of UP-DOWN state dynamics. We present a set of Markov and semi-Markov discrete- and continuous-time probability models for estimating UP and DOWN states from multiunit neural spiking activity. We model multiunit neural spiking activity as a stochastic point process, modulated by the hidden (UP and DOWN) states and the ensemble spiking history. We estimate jointly the hidden states and the model parameters by maximum likelihood using an expectation-maximization (EM) algorithm and a Monte Carlo EM algorithm that uses reversible-jump Markov chain Monte Carlo sampling in the E-step. We apply our models and algorithms in the analysis of both simulated multiunit spiking activity and actual multiunit spiking activity recorded from primary somatosensory cortex in a behaving rat during slow-wave sleep. Our approach provides a statistical characterization of UP-DOWN state dynamics that can serve as a basis for verifying and refining mechanistic descriptions of this process.
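As a much-reduced illustration of the discrete-time case (not the paper's point-process model, which includes spiking-history dependence and Monte Carlo EM; here just a two-state Poisson HMM on binned multiunit counts, fit by a standard EM with scaled forward-backward recursions, with made-up rates and transition probabilities):

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(1)

# --- simulate binned multiunit counts from a two-state (DOWN/UP) process ---
T, rates_true = 2000, np.array([1.0, 8.0])          # mean counts per bin
A_true = np.array([[0.98, 0.02], [0.03, 0.97]])     # state transition matrix
z = np.zeros(T, dtype=int)
for t in range(1, T):
    z[t] = rng.choice(2, p=A_true[z[t - 1]])
y = rng.poisson(rates_true[z])

def poisson_logpmf(y, lam):
    return y * np.log(lam) - lam - gammaln(y + 1)

# --- EM for a two-state Poisson HMM ---
lam = np.array([y.mean() * 0.5, y.mean() * 1.5])     # crude initial rates
A = np.full((2, 2), 0.5)
pi = np.full(2, 0.5)

for _ in range(50):
    logB = np.stack([poisson_logpmf(y, l) for l in lam], axis=1)   # T x 2
    B = np.exp(logB - logB.max(axis=1, keepdims=True))  # per-bin rescaling, for stability only
    # scaled forward-backward
    alpha = np.zeros((T, 2)); beta = np.zeros((T, 2)); c = np.zeros(T)
    alpha[0] = pi * B[0]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[t]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[t + 1] * beta[t + 1])) / c[t + 1]
    gamma = alpha * beta; gamma /= gamma.sum(axis=1, keepdims=True)
    xi = alpha[:-1, :, None] * A[None] * (B[1:] * beta[1:])[:, None, :]
    xi /= xi.sum(axis=(1, 2), keepdims=True)
    # M-step
    pi = gamma[0]
    A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    lam = (gamma * y[:, None]).sum(axis=0) / gamma.sum(axis=0)

print("estimated rates:", np.round(np.sort(lam), 2))    # should be near [1, 8]
est = gamma.argmax(axis=1)
acc = max(np.mean(est == z), np.mean(est == 1 - z))     # allow for label swap
print(f"bins labeled consistently with the true states: {acc:.2f}")
```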
Affiliation(s)
- Zhe Chen
- Neuroscience Statistics Research Laboratory, Department of Anesthesia and Critical Care, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, U.S.A., and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, U.S.A
- Sujith Vijayan
- Program in Neuroscience, Harvard University, Cambridge, MA 02139, U.S.A., and Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, MA 02139, U.S.A
- Riccardo Barbieri
- Neuroscience Statistics Research Laboratory, Department of Anesthesia and Critical Care, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, U.S.A
- Matthew A. Wilson
- Picower Institute for Learning and Memory, RIKEN-MIT Neuroscience Research Center, Department of Brain and Cognitive Sciences and Department of Biology, Massachusetts Institute of Technology, Cambridge, MA 02139, U.S.A
- Emery N. Brown
- Neuroscience Statistics Research Laboratory, Department of Anesthesia and Critical Care, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, U.S.A., Harvard-MIT Division of Health Sciences and Technology, Cambridge, MA 02139, U.S.A., and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, U.S.A
|
185
|
Yu BM, Cunningham JP, Santhanam G, Ryu SI, Shenoy KV, Sahani M. Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity. J Neurophysiol 2009; 102:614-35. [PMID: 19357332 PMCID: PMC2712272 DOI: 10.1152/jn.90941.2008] [Citation(s) in RCA: 338] [Impact Index Per Article: 21.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2008] [Accepted: 03/24/2009] [Indexed: 11/22/2022] Open
Abstract
We consider the problem of extracting smooth, low-dimensional neural trajectories that summarize the activity recorded simultaneously from many neurons on individual experimental trials. Beyond the benefit of visualizing the high-dimensional, noisy spiking activity in a compact form, such trajectories can offer insight into the dynamics of the neural circuitry underlying the recorded activity. Current methods for extracting neural trajectories involve a two-stage process: the spike trains are first smoothed over time, then a static dimensionality-reduction technique is applied. We first describe extensions of the two-stage methods that allow the degree of smoothing to be chosen in a principled way and that account for spiking variability, which may vary both across neurons and across time. We then present a novel method for extracting neural trajectories-Gaussian-process factor analysis (GPFA)-which unifies the smoothing and dimensionality-reduction operations in a common probabilistic framework. We applied these methods to the activity of 61 neurons recorded simultaneously in macaque premotor and motor cortices during reach planning and execution. By adopting a goodness-of-fit metric that measures how well the activity of each neuron can be predicted by all other recorded neurons, we found that the proposed extensions improved the predictive ability of the two-stage methods. The predictive ability was further improved by going to GPFA. From the extracted trajectories, we directly observed a convergence in neural state during motor planning, an effect that was shown indirectly by previous studies. We then show how such methods can be a powerful tool for relating the spiking activity across a neural population to the subject's behavior on a single-trial basis. Finally, to assess how well the proposed methods characterize neural population activity when the underlying time course is known, we performed simulations that revealed that GPFA performed tens of percent better than the best two-stage method.
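The generative model at the heart of GPFA can be sketched in a few lines (an illustrative sampler only; the dimensions, timescale, and parameter values are made up, and the EM fitting of C, d, R and the GP timescales described in the paper is omitted):

```python
import numpy as np

rng = np.random.default_rng(2)
q, p, T = 3, 61, 100               # latent dimensions, neurons, time bins (illustrative)
binwidth, tau = 0.02, 0.1          # 20 ms bins, 100 ms GP timescale

# Latent neural trajectory: each latent dimension is a smooth Gaussian process.
times = np.arange(T) * binwidth
K = np.exp(-0.5 * (times[:, None] - times[None, :]) ** 2 / tau ** 2) \
    + 1e-6 * np.eye(T)                               # squared-exponential kernel
x = rng.multivariate_normal(np.zeros(T), K, size=q)  # q x T latent trajectories

# Linear-Gaussian observation model: y_t = C x_t + d + noise, noise ~ N(0, diagonal R).
C = rng.standard_normal((p, q)) * 0.5                # factor loadings
d = rng.uniform(1.0, 3.0, size=p)                    # per-neuron baseline
R = np.diag(rng.uniform(0.1, 0.5, size=p))           # independent noise variances
y = C @ x + d[:, None] + np.linalg.cholesky(R) @ rng.standard_normal((p, T))

print(y.shape)   # (61, 100): one simulated trial of population activity
```

Fitting reverses this sampler: the smoothing implied by the GP prior and the dimensionality reduction implied by the loading matrix C are estimated jointly, rather than in the two separate stages described above.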
Affiliation(s)
- Byron M Yu
- Department of Electrical Engineering, Neurosciences Program, Stanford University, Stanford, CA, USA
|
186
|
Abstract
The manner in which hippocampus processes neural signals is thought to be central to the memory encoding process. A theoretically oriented literature has suggested that this is carried out via "attractors" or distinctive spatio-temporal patterns of activity. However, these ideas have not been thoroughly investigated using computational models featuring both realistic single-cell physiology and detailed cell-to-cell connectivity. Here we present a 452-cell simulation based on Traub et al.'s pyramidal cell [Traub RD, Jefferys JG, Miles R, Whittington MA, Toth K. A branching dendritic model of a rodent CA3 pyramidal neurone. J Physiol (Lond) 1994;481:79-95] and interneuron [Traub RD, Miles R. Pyramidal cell-to-inhibitory cell spike transduction explicable by active dendritic conductances in inhibitory cell. J Comput Neurosci 1995;2:291-8] models, incorporating patterns of synaptic connectivity based on an extensive review of the neuroanatomic literature. When stimulated with a one-second physiologically realistic input, our simulated tissue shows the ability to hold activity on-line for several seconds; furthermore, its spiking activity, as measured by frequency and interspike interval (ISI) distributions, resembles that of in vivo hippocampus. An interesting emergent property of the system is its tendency to transition from stable state to stable state, a behavior consistent with recent experimental findings [Sasaki T, Matsuki N, Ikegaya Y. Metastability of active CA3 networks. J Neurosci 2007;27:517-28]. Inspection of spike trains and simulated blockade of K(AHP) channels suggest that this is mediated by spike frequency adaptation. This finding, in conjunction with studies showing that apamin, a K(AHP) channel blocker, enhances the memory consolidation process in laboratory animals, suggests the formation of stable attractor states is central to the process by which memories are encoded. Ways that this methodology could shed light on the etiology of mental illness, such as schizophrenia, are discussed.
Affiliation(s)
- Peter J Siekmeier
- Harvard Medical School and McLean Hospital, 115 Mill Street, Belmont, MA 02478, USA.
|
187
|
Komarov MA, Osipov GV, Suykens JAK, Rabinovich MI. Numerical studies of slow rhythms emergence in neural microcircuits: bifurcations and stability. CHAOS (WOODBURY, N.Y.) 2009; 19:015107. [PMID: 19335011 DOI: 10.1063/1.3096412] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
Abstract
There is a growing body of evidence that slow brain rhythms are generated by simple inhibitory neural networks. Sequential switching of tonic spiking activity is a widespread phenomenon underlying such rhythms. A realistic generative model explaining such reproducible switching is a dynamical system that employs a closed stable heteroclinic channel (SHC) in its phase space. Despite strong evidence on the existence of SHC, the conditions on its emergence in a spiking network are unclear. In this paper, we analyze a minimal, reciprocally connected circuit of three spiking units and explore all possible dynamical regimes and transitions between them. We show that the SHC arises due to a Neimark-Sacker bifurcation of an unstable cycle.
Affiliation(s)
- M A Komarov
- Department of Control Theory, Nizhny Novgorod University, Nizhny Novgorod, Russia
|
188
|
Fontanini A, Grossman SE, Figueroa JA, Katz DB. Distinct subtypes of basolateral amygdala taste neurons reflect palatability and reward. J Neurosci 2009; 29:2486-95. [PMID: 19244523 PMCID: PMC2668607 DOI: 10.1523/jneurosci.3898-08.2009] [Citation(s) in RCA: 101] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2008] [Revised: 01/05/2009] [Accepted: 01/14/2009] [Indexed: 11/21/2022] Open
Abstract
The amygdala processes multiple, dissociable properties of sensory stimuli. Given its central location within a dense network of reciprocally connected regions, it is reasonable to expect that basolateral amygdala (BLA) neurons should produce a rich repertoire of dynamical responses to taste stimuli. Here, we examined single BLA neuron taste responses in awake rats and report the existence of two distinct subgroups of BLA taste neurons operating simultaneously during perceptual processing. One neuron type produced long, protracted responses with dynamics that were strikingly similar to those previously observed in gustatory cortex. These responses reflect cooperation between amygdala and cortex for the purposes of processing palatability. A second type of BLA taste neuron may be part of the system often described as being responsible for reward learning: these neurons produced very brief, short-latency responses to rewarding stimuli; when the rat participated in procuring the taste by pressing a lever in response to a tone, however, those phasic taste responses vanished, phasic responses to the tone appearing instead. Our data provide strong evidence that the neural handling of taste is actually a distributed set of processes and that BLA is a nexus of these multiple processes. These results offer new insights into how amygdala imbues naturalistic sensory stimuli with value.
Affiliation(s)
- Alfredo Fontanini
- Department of Neurobiology and Behavior, State University of New York at Stony Brook, Stony Brook, New York 11794
- Stephen E. Grossman
- Volen National Center for Complex Systems
- Program in Neuroscience, Brandeis University, Waltham, Massachusetts 02454
- Donald B. Katz
- Volen National Center for Complex Systems
- Department of Psychology
- Program in Neuroscience, Brandeis University, Waltham, Massachusetts 02454
|
189
|
Polyakov F, Stark E, Drori R, Abeles M, Flash T. Parabolic movement primitives and cortical states: merging optimality with geometric invariance. BIOLOGICAL CYBERNETICS 2009; 100:159-184. [PMID: 19152065 DOI: 10.1007/s00422-008-0287-0] [Citation(s) in RCA: 27] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/04/2008] [Accepted: 12/15/2008] [Indexed: 05/27/2023]
Abstract
Previous studies have suggested that several types of rules govern the generation of complex arm movements. One class of rules consists of optimizing an objective function (e.g., maximizing motion smoothness). Another class consists of geometric and kinematic constraints, for instance the coupling between speed and curvature during drawing movements as expressed by the two-thirds power law. It has also been suggested that complex movements are composed of simpler elements or primitives. However, the ability to unify the different rules has remained an open problem. We address this issue by identifying movement paths whose generation according to the two-thirds power law yields maximally smooth trajectories. Using equi-affine differential geometry we derive a mathematical condition which these paths must obey. Among all possible solutions only parabolic paths minimize hand jerk, obey the two-thirds power law and are invariant under equi-affine transformations (which preserve the fit to the two-thirds power law). Affine transformations can be used to generate any parabolic stroke from an arbitrary parabolic template, and a few parabolic strokes may be concatenated to compactly form a complex path. To test the possibility that parabolic elements are used to generate planar movements, we analyze monkeys' scribbling trajectories. Practiced scribbles are well approximated by long parabolic strokes. Of the motor cortical neurons recorded during scribbling, more were related to equi-affine than to Euclidean speed. Unsupervised segmentation of simultaneously recorded multiple neuron activity yields states related to distinct parabolic elements. We thus suggest that the cortical representation of movements is state-dependent and that parabolic elements are building blocks used by the motor system to generate complex movements.
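For reference, the two-thirds power law invoked above couples angular speed A(t) to curvature C(t), or equivalently tangential speed v(t) to curvature kappa(t):

```latex
A(t) = k\, C(t)^{2/3}
\qquad\Longleftrightarrow\qquad
v(t) = \gamma\, \kappa(t)^{-1/3},
```

so movements obeying the law are traversed at constant equi-affine speed, which is why equi-affine invariants provide the natural description used in the study above.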
Affiliation(s)
- Felix Polyakov
- Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, 76100 Rehovot, Israel.
|
190
|
Successful choice behavior is associated with distinct and coherent network states in anterior cingulate cortex. Proc Natl Acad Sci U S A 2008; 105:11963-8. [PMID: 18708525 DOI: 10.1073/pnas.0804045105] [Citation(s) in RCA: 100] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Successful decision making requires an ability to monitor contexts, actions, and outcomes. The anterior cingulate cortex (ACC) is thought to be critical for these functions, monitoring and guiding decisions especially in challenging situations involving conflict and errors. A number of different single-unit correlates have been observed in the ACC that reflect the diverse cognitive components involved. Yet how ACC neurons function as an integrated network is poorly understood. Here we show, using advanced population analysis of multiple single-unit recordings from the rat ACC during performance of an ecologically valid decision-making task, that ensembles of neurons move through different coherent and dissociable states as the cognitive requirements of the task change. This organization into distinct network patterns with respect to both firing-rate changes and correlations among units broke down during trials with numerous behavioral errors, especially at choice points of the task. These results point to an underlying functional organization into cell assemblies in the ACC that may monitor choices, outcomes, and task contexts, thus tracking the animal's progression through "task space."
|
191
|
Rabinovich M, Huerta R, Laurent G. Neuroscience. Transient dynamics for neural processing. Science 2008; 321:48-50. [PMID: 18599763 DOI: 10.1126/science.1155564] [Citation(s) in RCA: 258] [Impact Index Per Article: 15.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
Affiliation(s)
- Misha Rabinovich
- Institute for Nonlinear Science, University of California at San Diego, La Jolla, CA 92093, USA
|
192
|
Fontanini A, Katz DB. Behavioral states, network states, and sensory response variability. J Neurophysiol 2008; 100:1160-8. [PMID: 18614753 DOI: 10.1152/jn.90592.2008] [Citation(s) in RCA: 140] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
We review data demonstrating that single-neuron sensory responses change with the states of the neural networks (indexed in terms of spectral properties of local field potentials) in which those neurons are embedded. We start with broad network changes--different levels of anesthesia and sleep--and then move to studies demonstrating that the sensory response plasticity associated with attention and experience can also be conceptualized as functions of network state changes. This leads naturally to the recent data that can be interpreted to suggest that even brief experience can change sensory responses via changes in network states and that trial-to-trial variability in sensory responses is a nonrandom function of network fluctuations, as well. We suggest that the CNS may have evolved specifically to deal with stimulus variability and that the coupling with network states may be central to sensory processing.
Affiliation(s)
- Alfredo Fontanini
- Department of Psychology and Volen National Center for Complex Systems, Brandeis University, Waltham, Massachusetts, USA.
|
193
|
Balduzzi D, Tononi G. Integrated information in discrete dynamical systems: motivation and theoretical framework. PLoS Comput Biol 2008; 4:e1000091. [PMID: 18551165 PMCID: PMC2386970 DOI: 10.1371/journal.pcbi.1000091] [Citation(s) in RCA: 181] [Impact Index Per Article: 10.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/26/2007] [Accepted: 04/29/2008] [Indexed: 11/19/2022] Open
Abstract
This paper introduces a time- and state-dependent measure of integrated information, phi, which captures the repertoire of causal states available to a system as a whole. Specifically, phi quantifies how much information is generated (uncertainty is reduced) when a system enters a particular state through causal interactions among its elements, above and beyond the information generated independently by its parts. Such mathematical characterization is motivated by the observation that integrated information captures two key phenomenological properties of consciousness: (i) there is a large repertoire of conscious experiences so that, when one particular experience occurs, it generates a large amount of information by ruling out all the others; and (ii) this information is integrated, in that each experience appears as a whole that cannot be decomposed into independent parts. This paper extends previous work on stationary systems and applies integrated information to discrete networks as a function of their dynamics and causal architecture. An analysis of basic examples indicates the following: (i) phi varies depending on the state entered by a network, being higher if active and inactive elements are balanced and lower if the network is inactive or hyperactive. (ii) phi varies for systems with identical or similar surface dynamics depending on the underlying causal architecture, being low for systems that merely copy or replay activity states. (iii) phi varies as a function of network architecture. High phi values can be obtained by architectures that conjoin functional specialization with functional integration. Strictly modular and homogeneous systems cannot generate high phi because the former lack integration, whereas the latter lack information. Feedforward and lattice architectures are capable of generating high phi but are inefficient. (iv) In Hopfield networks, phi is low for attractor states and neutral states, but increases if the networks are optimized to achieve tension between local and global interactions. These basic examples appear to match well against neurobiological evidence concerning the neural substrates of consciousness. More generally, phi appears to be a useful metric to characterize the capacity of any physical system to integrate information.
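In simplified notation (a schematic rendering only, which compresses the paper's definitions), the effective information generated when the system enters state x_1, and the integrated information phi measured across the minimum information partition (MIP) into parts M^k, take the form of relative entropies between repertoires:

```latex
\mathrm{ei}(X_0 \to x_1) = H\!\left[\, p(X_0 \mid X_1 = x_1) \,\middle\|\, p^{\max}(X_0) \,\right],
\qquad
\phi(x_1) = H\!\left[\, p(X_0 \mid X_1 = x_1) \,\middle\|\, \prod_{k} p\!\left(M_0^{k} \mid \mu_1^{k}\right) \right],
```

where p^max is the maximum-entropy a priori repertoire, mu_1^k is the state of part k, and the MIP is the partition for which this quantity, suitably normalized, is smallest.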
Affiliation(s)
- David Balduzzi
- Department of Psychiatry, University of Wisconsin, Madison, Wisconsin, United States of America
- Giulio Tononi
- Department of Psychiatry, University of Wisconsin, Madison, Wisconsin, United States of America
|
194
|
Rabinovich MI, Huerta R, Varona P, Afraimovich VS. Transient cognitive dynamics, metastability, and decision making. PLoS Comput Biol 2008; 4:e1000072. [PMID: 18452000 PMCID: PMC2358972 DOI: 10.1371/journal.pcbi.1000072] [Citation(s) in RCA: 184] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2007] [Accepted: 03/27/2008] [Indexed: 12/11/2022] Open
Abstract
The idea that cognitive activity can be understood using nonlinear dynamics has been intensively discussed for the last 15 years. One of the popular points of view is that metastable states play a key role in the execution of cognitive functions. Experimental and modeling studies suggest that most of these functions are the result of transient activity of large-scale brain networks in the presence of noise. Such transients may consist of a sequential switching between different metastable cognitive states. The main problem faced when using dynamical theory to describe transient cognitive processes is the fundamental contradiction between reproducibility and flexibility of transient behavior. In this paper, we propose a theoretical description of transient cognitive dynamics based on the interaction of functionally dependent metastable cognitive states. The mathematical image of such transient activity is a stable heteroclinic channel, i.e., a set of trajectories in the vicinity of a heteroclinic skeleton that consists of saddles and unstable separatrices that connect their surroundings. We suggest a basic mathematical model, a strongly dissipative dynamical system, and formulate the conditions for the robustness and reproducibility of cognitive transients that satisfy the competing requirements for stability and flexibility. Based on this approach, we describe here an effective solution for the problem of sequential decision making, represented as a fixed time game: a player takes sequential actions in a changing noisy environment so as to maximize a cumulative reward. As we predict and verify in computer simulations, noise plays an important role in optimizing the gain.
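A stable heteroclinic channel of the kind described above is commonly realized with generalized Lotka-Volterra rate equations, da_i/dt = a_i (sigma_i - sum_j rho_ij a_j) plus weak noise. The sketch below (illustrative May-Leonard-type parameters, not taken from the paper) shows three competing modes visited in a reproducible sequence, with noise setting the switching times:

```python
import numpy as np

rng = np.random.default_rng(3)

N, dt, steps = 3, 0.01, 60000
sigma = np.ones(N)                         # growth rates of the competing modes
# Asymmetric inhibition: rho[i, j] is how strongly mode j suppresses mode i.
rho = np.array([[1.0, 0.5, 2.0],
                [2.0, 1.0, 0.5],
                [0.5, 2.0, 1.0]])          # May-Leonard-type competition matrix
noise = 1e-6

a = np.full(N, 0.1) + 0.01 * rng.random(N)
winners = []
for step in range(steps):
    da = a * (sigma - rho @ a)
    a = np.clip(a + dt * da + np.sqrt(dt) * noise * rng.standard_normal(N),
                1e-12, None)
    if step % 100 == 0:
        winners.append(int(np.argmax(a)))

# Collapse consecutive repeats to see the itinerary of metastable states.
itinerary = [winners[0]] + [w for prev, w in zip(winners, winners[1:]) if w != prev]
print(itinerary[:12])    # e.g. a repeating cycle such as [0, 2, 1, 0, 2, 1, ...]
```

In this toy model, lowering the noise lengthens the dwell times near each saddle without changing the itinerary, which is one way to see the reproducibility-with-flexibility property argued for above.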
Affiliation(s)
- Mikhail I Rabinovich
- Institute for Nonlinear Science, University of California San Diego, La Jolla, California, United States of America.
|
195
|
Verhagen JV, Katz DB. More Time to Taste. Focus on “Variability in Responses and Temporal Coding of Tastants of Similar Quality in the Nucleus of the Solitary Tract of the Rat”. J Neurophysiol 2008; 99:413-4. [DOI: 10.1152/jn.01285.2007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
|