201. Heinzle J, Allefeld C, Haynes JD. Information flow, dynamical systems theory and the human brain: Comment on "Information flow dynamics in the brain" by M.I. Rabinovich et al. Phys Life Rev 2011; 9:78-9; discussion 80-3. [PMID: 22237168] [DOI: 10.1016/j.plrev.2011.12.007]
Affiliation(s)
- Jakob Heinzle, Bernstein Center for Computational Neuroscience, Charité - Universitätsmedizin Berlin, Germany.
202. Free energy, value, and attractors. Comput Math Methods Med 2012; 2012:937860. [PMID: 22229042] [PMCID: PMC3249597] [DOI: 10.1155/2012/937860]
Abstract
It has been suggested recently that action and perception can be understood as minimising the free energy of sensory samples. This ensures that agents sample the environment to maximise the evidence for their model of the world, such that exchanges with the environment are predictable and adaptive. However, the free energy account does not invoke the reward or cost functions of reinforcement learning and optimal control theory. We therefore ask whether reward is necessary to explain adaptive behaviour. The free energy formulation uses ideas from statistical physics to explain action in terms of minimising sensory surprise. Conversely, reinforcement learning has its roots in behaviourism and engineering and assumes that agents optimise a policy to maximise future reward. This paper tries to connect the two formulations and concludes that optimal policies correspond to empirical priors on the trajectories of hidden environmental states, which compel agents to seek out the (valuable) states they expect to encounter.
203. Friston K. Competitive dynamics in the brain: Comment on "Information flow dynamics in the brain" by M.I. Rabinovich et al. Phys Life Rev 2011; 9:76-7; discussion 80-3. [PMID: 22197528] [DOI: 10.1016/j.plrev.2011.12.006]
Affiliation(s)
- Karl Friston, Wellcome Trust Centre for Neuroimaging, University College London, Queen Square, London WC1N 3BG, United Kingdom.
204. Quan A, Osorio I, Ohira T, Milton J. Vulnerability to paroxysmal oscillations in delayed neural networks: a basis for nocturnal frontal lobe epilepsy? Chaos 2011; 21:047512. [PMID: 22225386] [PMCID: PMC3258285] [DOI: 10.1063/1.3664409]
Abstract
Resonance can occur in bistable dynamical systems due to the interplay between noise and delay (τ), even in the absence of a periodic input. We investigate resonance in a two-neuron model with mutual time-delayed inhibitory feedback. For appropriate choices of the parameters and inputs, three fixed points co-exist: two are stable attractors and one is unstable. In the absence of noise, delay-induced transient oscillations (referred to herein as DITOs) arise whenever the initial function is tuned sufficiently close to the unstable fixed point. In the presence of noisy perturbations, DITOs arise spontaneously. Since the correlation time for the stationary dynamics is ∼τ, we approximated a higher-order Markov process by a three-state Markov chain model: rescaling time as t → 2sτ, identifying the states based on whether the sub-intervals were completely confined to one basin of attraction (the two stable attractors) or straddled the separatrix, and then determining the transition probability matrix empirically. The resulting Markov chain model captured the switching behavior, including the statistical properties of the DITOs. Our observations indicate that time-delayed and noisy bistable dynamical systems are prone to generating DITOs as switches between the two attractors occur. Bistable systems arise transiently in situations where one attractor is gradually replaced by another. This may explain, for example, why seizures in certain epileptic syndromes tend to occur as sleep stages change.
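The three-state reduction described in this abstract can be illustrated with a small sketch. The transition matrix below is invented purely for illustration (the authors estimate theirs empirically from sub-intervals of the delayed system); the code shows how a transition-probability matrix is determined empirically from a symbol sequence and how its stationary distribution summarizes the switching statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

# States: 0 and 2 = sub-interval confined to one of the two basins of attraction,
# 1 = sub-interval straddling the separatrix. P_true is an invented chain used
# only to generate an illustrative symbol sequence.
P_true = np.array([[0.90, 0.08, 0.02],
                   [0.45, 0.10, 0.45],
                   [0.02, 0.08, 0.90]])
states = [0]
for _ in range(20000):
    states.append(int(rng.choice(3, p=P_true[states[-1]])))

# Empirical transition-probability matrix from pair counts.
counts = np.zeros((3, 3))
for a, b in zip(states[:-1], states[1:]):
    counts[a, b] += 1
P_hat = counts / counts.sum(axis=1, keepdims=True)

# Stationary distribution: left eigenvector of P_hat with eigenvalue 1.
w, v = np.linalg.eig(P_hat.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()
```

With enough symbols, P_hat recovers the generating chain and pi gives the long-run fraction of time spent in each basin or on the separatrix.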
Affiliation(s)
- Austin Quan, Department of Mathematics, Harvey Mudd College, Claremont, California 91711, USA
205. Buckley CL, Nowotny T. Transient dynamics between displaced fixed points: an alternate nonlinear dynamical framework for olfaction. BMC Neurosci 2011. [PMCID: PMC3240342] [DOI: 10.1186/1471-2202-12-s1-p237]
206.
Abstract
This paper summarizes our recent attempts to integrate action and perception within a single optimization framework. We start with a statistical formulation of Helmholtz's ideas about neural energy to furnish a model of perceptual inference and learning that can explain a remarkable range of neurobiological facts. Using constructs from statistical physics, it can be shown that the problems of inferring the causes of our sensory inputs and learning regularities in the sensorium can be resolved using exactly the same principles. Furthermore, inference and learning can proceed in a biologically plausible fashion. The ensuing scheme rests on empirical Bayes and hierarchical models of how sensory information is generated. The use of hierarchical models enables the brain to construct prior expectations in a dynamic and context-sensitive fashion. This scheme provides a principled way to understand many aspects of the brain's organization and responses. We demonstrate the brain-like dynamics that this scheme entails using models of birdsong based on chaotic attractors with autonomous dynamics. This provides a nice example of how nonlinear dynamics can be exploited by the brain to represent and predict dynamics in the environment.
Affiliation(s)
- Karl Friston, The Wellcome Trust Centre for Neuroimaging, University College London, Queen Square, London WC1N 3BG, UK
- Stefan Kiebel, The Wellcome Trust Centre for Neuroimaging, University College London, Queen Square, London WC1N 3BG, UK
207. Horikawa Y. Exponential transient propagating oscillations in a ring of spiking neurons with unidirectional slow inhibitory synaptic coupling. J Theor Biol 2011; 289:151-9. [PMID: 21893072] [DOI: 10.1016/j.jtbi.2011.08.025]
Abstract
Transient oscillations in a ring of spiking neuron models unidirectionally coupled with slow inhibitory synapses are studied. There are stable, spatially fixed steady firing-resting states and unstable, symmetric propagating firing-resting states. In transients, firing-resting patterns rotate in the direction of coupling (propagating oscillations), and their duration increases exponentially with the number of neurons (exponential transients). Further, the duration of randomly generated transient propagating oscillations is distributed in a power-law form, and spatiotemporal noise of intermediate strength sustains the propagating oscillations. These properties agree with those of transient propagating waves in a ring of sigmoidal neuron models.
Affiliation(s)
- Yo Horikawa, Faculty of Engineering, Kagawa University, Takamatsu 761-0396, Japan.
208. Neuronal filtering of multiplexed odour representations. Nature 2011; 479:493-8. [PMID: 22080956] [DOI: 10.1038/nature10633]
Abstract
Neuronal activity patterns contain information in their temporal structure, indicating that information transfer between neurons may be optimized by temporal filtering. In the zebrafish olfactory bulb, subsets of output neurons (mitral cells) engage in synchronized oscillations during odour responses, but information about odour identity is contained mostly in non-oscillatory firing rate patterns. Using optogenetic manipulations and odour stimulation, we found that firing rate responses of neurons in the posterior zone of the dorsal telencephalon (Dp), a target area homologous to olfactory cortex, were largely insensitive to oscillatory synchrony of mitral cells because passive membrane properties and synaptic currents act as low-pass filters. Nevertheless, synchrony influenced spike timing. Moreover, Dp neurons responded primarily during the decorrelated steady state of mitral cell activity patterns. Temporal filtering therefore tunes Dp neurons to components of mitral cell activity patterns that are particularly informative about precise odour identity. These results demonstrate how temporal filtering can extract specific information from multiplexed neuronal codes.
209. Gupta N, Stopfer M. Insect olfactory coding and memory at multiple timescales. Curr Opin Neurobiol 2011; 21:768-73. [PMID: 21632235] [PMCID: PMC3182293] [DOI: 10.1016/j.conb.2011.05.005]
Abstract
Insects can learn, allowing them great flexibility for locating seasonal food sources and avoiding wily predators. Because insects are relatively simple and accessible to manipulation, they provide good experimental preparations for exploring mechanisms underlying sensory coding and memory. Here we review how the intertwining of memory with computation enables the coding, decoding, and storage of sensory experience at various stages of the insect olfactory system. Individual parts of this system are capable of multiplexing memories at different timescales, and conversely, memory on a given timescale can be distributed across different parts of the circuit. Our sampling of the olfactory system emphasizes the diversity of memories, and the importance of understanding these memories in the context of computations performed by different parts of a sensory system.
210. Bernroider G, Panksepp J. Mirrors and feelings: Have you seen the actors outside? Neurosci Biobehav Rev 2011; 35:2009-16. [DOI: 10.1016/j.neubiorev.2011.02.014]
211. Information processing using a single dynamical node as complex system. Nat Commun 2011; 2:468. [PMID: 21915110] [PMCID: PMC3195233] [DOI: 10.1038/ncomms1476]
Abstract
Novel methods for information processing are highly desired in our information-driven society. Inspired by the brain's ability to process information, the recently introduced paradigm known as 'reservoir computing' shows that complex networks can efficiently perform computation. Here we introduce a novel architecture that reduces the usually required large number of elements to a single nonlinear node with delayed feedback. Through an electronic implementation, we experimentally and numerically demonstrate excellent performance in a speech recognition benchmark. Complementary numerical studies also show excellent performance for a time series prediction benchmark. These results prove that delay-dynamical systems, even in their simplest manifestation, can perform efficient information processing. This finding paves the way to feasible and resource-efficient technological implementations of reservoir computing.
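The time-multiplexing idea behind this single-node reservoir can be sketched in a few lines. All parameters here are invented, not those of the paper's hardware: one shared tanh nonlinearity is sampled at N points ("virtual nodes") along the delay line, the input is scaled by a fixed random mask, and only a linear readout is trained, here by ridge regression on a toy one-step memory task.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 50                         # virtual nodes per delay period (illustrative)
alpha, eps, beta = 0.7, 0.2, 0.5
mask = rng.uniform(-1, 1, N)   # fixed random input mask

def reservoir_states(u):
    """States of one nonlinear node, time-multiplexed into N virtual nodes."""
    x = np.zeros((len(u), N))
    prev = np.zeros(N)                      # node values one delay period ago
    for t, ut in enumerate(u):
        left = prev[-1]                     # coupling along the delay line
        for j in range(N):
            x[t, j] = np.tanh(alpha * prev[j] + eps * left + beta * mask[j] * ut)
            left = x[t, j]
        prev = x[t]
    return x

# Toy task: reproduce the input delayed by one step (a pure memory task).
u = rng.uniform(-1, 1, 2000)
target = np.roll(u, 1)
target[0] = 0.0
X = reservoir_states(u)

# Train only the linear readout, via ridge regression.
lam = 1e-6
W = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ target)
nmse = np.mean((X @ W - target) ** 2) / np.var(target)
```

The point of the architecture survives even this crude discretization: the recurrence lives entirely in the delayed feedback of a single node, and training touches only the readout weights W.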
212. Kazantsev VB, Asatryan SY. Bistability induces episodic spike communication by inhibitory neurons in neuronal networks. Phys Rev E 2011; 84:031913. [PMID: 22060409] [DOI: 10.1103/physreve.84.031913]
Abstract
Bistability is one of the important features of nonlinear dynamical systems. In neurodynamics, bistability has been found in the basic Hodgkin-Huxley equations describing the cell membrane dynamics. When the neuron is clamped near its threshold, the stable rest potential may coexist with the stable limit cycle describing periodic spiking. However, this effect is often neglected in network computations, where the neurons are typically reduced to threshold firing units (e.g., integrate-and-fire models). We found that bistability may induce spike communication between inhibitorily coupled neurons in a spiking network. The communication is realized in the form of episodic discharges with synchronous (correlated) spikes during the episodes. A spiking phase map is constructed to describe the synchronization and to estimate the basic spike phase-locking modes.
Affiliation(s)
- V B Kazantsev, Institute of Applied Physics of RAS, 46 Uljanov Street, 603950 Nizhny Novgorod, Russia
213. Park C, Rubchinsky LL. Intermittent synchronization in a network of bursting neurons. Chaos 2011; 21:033125. [PMID: 21974660] [PMCID: PMC3194790] [DOI: 10.1063/1.3633078]
Abstract
Synchronized oscillations in networks of inhibitory and excitatory coupled bursting neurons are common in a variety of neural systems, from central pattern generators to human brain circuits. One example of the latter is the subcortical network of the basal ganglia, formed by excitatory and inhibitory bursters of the subthalamic nucleus and globus pallidus, involved in motor control and affected in Parkinson's disease. Recent experiments have demonstrated the intermittent nature of the phase-locking of neural activity in this network. Here, we explore one potential mechanism to explain this intermittent phase-locking. We simplify the network to a model of two inhibitory coupled elements and explore its dynamics, using geometric analysis and singular perturbation methods to reduce the full model to a simpler set of equations. The mathematical analysis uses three slow variables with two different time scales. Intermittent synchronous oscillations are generated by overlapped spiking, which crucially depends on the geometry of the slow phase plane and the interplay between the slow variables, as well as on the strength of synapses. Two slow variables are responsible for the generation of activity patterns with overlapped spiking, and the third, slower variable enhances the robustness of an irregular and intermittent activity pattern. While the analyzed network and the explored mechanism of intermittent synchrony appear to be quite generic, the results of this analysis can be used to trace particular values of biophysical parameters (synaptic strength and parameters of calcium dynamics) that are known to be impacted in Parkinson's disease.
Affiliation(s)
- Choongseok Park, Department of Mathematical Sciences and Center for Mathematical Biosciences, Indiana University Purdue University Indianapolis, Indianapolis, Indiana 46202, USA.
214. Buckley CL, Nowotny T. Transient dynamics between displaced fixed points: an alternate nonlinear dynamical framework for olfaction. Brain Res 2011; 1434:62-72. [PMID: 21840510] [DOI: 10.1016/j.brainres.2011.07.032]
Abstract
Significant insights into the dynamics of neuronal populations have been gained in the olfactory system, where rich spatio-temporal dynamics is observed during, and following, exposure to odours. It is now widely accepted that odour identity is represented in terms of the stimulus-specific rate patterning observed in the cells of the antennal lobe (AL). Here we describe a nonlinear dynamical framework, inspired by recent experimental findings, which provides a compelling account of both the origin and the function of these dynamics. We start by analytically reducing a biologically plausible conductance-based model of the AL to a quantitatively equivalent rate model and construct conditions such that the rate dynamics are well described by a single globally stable fixed point (FP). We then describe the AL's response to an odour stimulus as rich transient trajectories between this stable baseline state (the single FP in the absence of odour stimulation) and the odour-specific position of the single FP during odour stimulation. We show how this framework can account for three experimentally observed phenomena: first, the inhibitory period often observed immediately after an odour stimulus is removed; second, the qualitative differences between the dynamics in the presence and in the absence of odour; and lastly, the invariance of a representation of odour identity to both the duration and the intensity of an odour stimulus. We compare and contrast this framework with the currently prevalent nonlinear dynamical framework of 'winnerless competition', which describes AL dynamics in terms of heteroclinic orbits. This article is part of a Special Issue entitled "Neural Coding".
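The "displaced fixed point" picture can be caricatured in a few lines. This is not the authors' reduced AL model; it is a generic rate network dx/dt = -x + W·tanh(x) + I with coupling weak enough that a single globally stable fixed point exists, so that odour onset displaces that fixed point and the response is the transient trajectory between the baseline and displaced states (all values invented).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
W = 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)   # weak coupling: single stable FP

def step(x, I, dt=0.01):
    """One Euler step of the rate dynamics dx/dt = -x + W tanh(x) + I."""
    return x + dt * (-x + W @ np.tanh(x) + I)

I_odour = rng.uniform(1.0, 2.0, n)    # invented odour-specific input

x = np.zeros(n)         # baseline: the fixed point of the odour-free system
traj = []
for t in range(4000):   # odour on for the first 2000 steps, then removed
    I = I_odour if t < 2000 else np.zeros(n)
    x = step(x, I)
    traj.append(x.copy())
traj = np.asarray(traj)

on_state = traj[1999]   # near the odour-displaced fixed point
final = traj[-1]        # relaxed back to the baseline fixed point
```

Odour identity lives in the direction of the displacement, which is invariant to stimulus duration and intensity scaling, while the transients to and from it account for the on- and off-response dynamics.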
Affiliation(s)
- Christopher L Buckley, Centre for Computational Neuroscience and Robotics, University of Sussex, Falmer, Brighton BN1 9QJ, UK.
215. Yamanobe T. Stochastic phase transition operator. Phys Rev E 2011; 84:011924. [PMID: 21867230] [DOI: 10.1103/physreve.84.011924]
Abstract
In this study a Markov operator is introduced that represents the density evolution of an impulse-driven stochastic biological oscillator. The operator's stochastic kernel is constructed using the asymptotic expansion of stochastic processes instead of solving the Fokker-Planck equation. The Markov operator is shown to successfully approximate the density evolution of the biological oscillator considered. The response of the oscillator to both periodic and time-varying impulses can be analyzed using the operator's transient and stationary properties. Furthermore, a previously unreported stochastic dynamic bifurcation for the biological oscillator is obtained using the eigenvalues of the product of the Markov operators.
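The operator construction can be illustrated with a discretized caricature (not the paper's asymptotic-expansion kernel): between impulses the phase advances by ω·T, each impulse shifts the phase through an invented phase transition curve, and noise smears the result with a wrapped Gaussian, giving a column-stochastic Markov matrix whose iteration evolves the phase density.

```python
import numpy as np

M = 200                                     # phase bins on the circle [0, 1)
theta = np.arange(M) / M
omega_T = 0.27                              # phase advance between impulses (invented)
sigma = 0.05                                # noise strength (invented)

def ptc(th):
    """Illustrative phase transition curve for a weak impulse."""
    return th + 0.1 * np.sin(2 * np.pi * th)

# Column-stochastic kernel: column j = density after one impulse, given phase theta[j].
K = np.zeros((M, M))
for j, th in enumerate(theta):
    mean = (ptc(th) + omega_T) % 1.0
    d = (theta - mean + 0.5) % 1.0 - 0.5    # wrapped distance on the circle
    col = np.exp(-d ** 2 / (2 * sigma ** 2))
    K[:, j] = col / col.sum()

# Iterate the operator: the density converges to the stationary phase distribution.
rho = np.full(M, 1.0 / M)
for _ in range(500):
    rho = K @ rho
```

Transient responses correspond to the first few iterations, the stationary phase distribution to the leading eigenvector, and (as in the paper) spectral properties of products of such matrices can be examined when the impulses are time-varying.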
Affiliation(s)
- Takanobu Yamanobe, Hokkaido University School of Medicine, North 15, West 7, Kita-ku, Sapporo 060-8638, Japan.
216. Rabinovich MI, Varona P. Robust transient dynamics and brain functions. Front Comput Neurosci 2011; 5:24. [PMID: 21716642] [PMCID: PMC3116137] [DOI: 10.3389/fncom.2011.00024]
Abstract
In the last few decades several concepts of dynamical systems theory (DST) have guided psychologists, cognitive scientists, and neuroscientists to rethink sensory-motor behavior and embodied cognition. A critical step in the progress of DST applications to the brain (supported by modern methods of brain imaging and multi-electrode recording techniques) has been the transfer of its initial success in motor behavior to mental function, i.e., perception, emotion, and cognition. Open questions from research in genetics, ecology, brain sciences, etc., have changed DST itself and led to the discovery of a new dynamical phenomenon: reproducible and robust transients that are at the same time sensitive to informational signals. The goal of this review is to describe a new mathematical framework - heteroclinic sequential dynamics - for understanding self-organized activity in the brain that can explain certain aspects of robust itinerant behavior. Specifically, we discuss a hierarchy of coarse-grained models of mental dynamics in the form of kinetic equations of modes. These modes compete for resources at three levels: (i) within the same modality, (ii) among different modalities from the same family (like perception), and (iii) among modalities from different families (like emotion and cognition). The analysis of the conditions for robustness, i.e., the structural stability of transient (sequential) dynamics, makes it possible to explain phenomena like the finite capacity of our sequential working memory - a vital cognitive function - and to find specific dynamical signatures - different kinds of instabilities - of several brain functions and mental diseases.
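The kinetic "equations of modes" discussed in this review are generalized Lotka-Volterra equations, da_i/dt = a_i(σ_i − Σ_j ρ_ij a_j); with asymmetric competition ρ they produce a stable heteroclinic sequence in which modes transiently win in a reproducible order. The sketch below uses invented May-Leonard-style parameter values.

```python
import numpy as np

# Generalized Lotka-Volterra modes: da_i/dt = a_i * (sigma_i - sum_j rho_ij * a_j).
# Asymmetric competition (invented values) yields winnerless competition: each
# mode dominates transiently, in the reproducible order 0 -> 1 -> 2.
sigma = np.ones(3)
rho = np.array([[1.0, 2.0, 0.5],
                [0.5, 1.0, 2.0],
                [2.0, 0.5, 1.0]])

a = np.array([0.9, 1e-4, 1e-4])   # start near the "mode 0 wins" saddle
dt = 0.01
winners = []
for _ in range(60000):
    a = a + dt * a * (sigma - rho @ a)
    winners.append(int(np.argmax(a)))

# Positions where the dominant mode changes reveal the sequential switching.
switches = [winners[i] for i in range(1, len(winners)) if winners[i] != winners[i - 1]]
```

Each saddle (one mode dominant) is visited transiently, which is the structural-stability-with-sensitivity property the review emphasizes: the sequence of saddles is reproducible even though no single state is an attractor.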
217. Park C, Worth RM, Rubchinsky LL. Neural dynamics in parkinsonian brain: the boundary between synchronized and nonsynchronized dynamics. Phys Rev E 2011; 83:042901. [PMID: 21599224] [PMCID: PMC3100589] [DOI: 10.1103/physreve.83.042901]
Abstract
Synchronous oscillatory dynamics is frequently observed in the human brain. We analyze the fine temporal structure of phase-locking in a realistic network model and match it with the experimental data from Parkinsonian patients. We show that the experimentally observed intermittent synchrony can be generated just by moderately increased coupling strength in the basal ganglia circuits due to the lack of dopamine. Comparison of the experimental and modeling data suggests that brain activity in Parkinson's disease resides in the large boundary region between synchronized and nonsynchronized dynamics. Being on the edge of synchrony may allow for easy formation of transient neuronal assemblies.
Affiliation(s)
- Choongseok Park, Department of Mathematical Sciences and Center for Mathematical Biosciences, Indiana University Purdue University Indianapolis, Indianapolis, Indiana 46202, USA
219. Kay RB, Meyer EA, Illig KR, Brunjes PC. Spatial distribution of neural activity in the anterior olfactory nucleus evoked by odor and electrical stimulation. J Comp Neurol 2011; 519:277-89. [PMID: 21165975] [DOI: 10.1002/cne.22519]
Abstract
Several lines of evidence indicate that complex odorant stimuli are parsed into separate data streams in the glomeruli of the olfactory bulb, yielding a combinatorial "odotopic map." However, this pattern does not appear to be maintained in the piriform cortex, where stimuli appear to be coded in a distributed fashion. The anterior olfactory nucleus (AON) is intermediate and reciprocally interconnected between these two structures, and also provides a route for the interhemispheric transfer of olfactory information. The present study examined potential coding strategies used by the AON. Rats were exposed to either caproic acid, butyric acid, limonene, or purified air and the spatial distribution of Fos-immunolabeled cells was quantified. The two major subregions of the AON exhibited different results. Distinct odor-specific spatial patterns of activity were observed in pars externa, suggesting that it employs a topographic strategy for odor representation similar to the olfactory bulb. A spatially distributed pattern that did not appear to depend on odor identity was observed in pars principalis, suggesting that it employs a distributed representation of odors more similar to that seen in the piriform cortex.
Affiliation(s)
- Rachel B Kay, Department of Psychology, University of Virginia, Charlottesville, VA 22904, USA
220. Kurikawa T, Kaneko K. Learning shapes spontaneous activity itinerating over memorized states. PLoS One 2011; 6:e17432. [PMID: 21408170] [PMCID: PMC3050897] [DOI: 10.1371/journal.pone.0017432]
Abstract
Learning is a process that shapes neural dynamical systems so that an appropriate output pattern is generated for a given input. Often, such a memory is considered to be stored in one of the attractors of the neural dynamical system, with the attractor selected by the initial neural state specified by an input. Neither neural activity observed in the absence of inputs nor the changes in activity caused when an input is provided were studied extensively in the past. However, recent experimental studies have reported the existence of structured spontaneous neural activity and of changes in it when an input is provided. With this background, we propose that memory recall occurs when the spontaneous neural activity changes to an appropriate output activity upon application of an input, a phenomenon known as bifurcation in dynamical systems theory. We introduce a reinforcement-learning-based layered neural network model with two synaptic time scales; in this network, I/O relations are successively memorized when the difference between the time scales is appropriate. After the learning process is complete, the neural dynamics are shaped so that they change appropriately with each input. As the number of memorized patterns is increased, the spontaneous neural activity generated after learning itinerates over the previously learned output patterns. This theoretical finding shows remarkable agreement with recent experimental reports in which spontaneous neural activity in the visual cortex, in the absence of stimuli, itinerates over patterns evoked by previously applied signals. Our results suggest that itinerant spontaneous activity can be a natural outcome of successive learning of several patterns, and that it facilitates bifurcation of the network when an input is provided.
Affiliation(s)
- Tomoki Kurikawa, Department of Basic Science, University of Tokyo, Tokyo, Japan.
221. Oşan R, Chen G, Feng R, Tsien JZ. Differential consolidation and pattern reverberations within episodic cell assemblies in the mouse hippocampus. PLoS One 2011; 6:e16507. [PMID: 21347227] [PMCID: PMC3039647] [DOI: 10.1371/journal.pone.0016507]
Abstract
One hallmark feature of the consolidation of episodic memory is that only a fraction of the original information, usually in a more abstract form, is selected for long-term memory storage. How does the brain perform these differential memory consolidations? To investigate the neural network mechanism that governs this selective consolidation process, we use a set of distinct fearful events to study whether and how hippocampal CA1 cells engage in selective memory encoding and consolidation. We show that these distinct episodes activate a unique assembly of CA1 episodic cells, or neural cliques, whose response selectivity ranges from general to specific features. A series of parametric analyses further reveal that post-learning CA1 episodic pattern replays, or reverberations, are mostly mediated by cells exhibiting event intensity-invariant responses, not by the intensity-sensitive cells. More importantly, reactivation cross-correlations displayed by intensity-invariant cells encoding general episodic features during the immediate post-learning period tend to be stronger than those displayed by invariant cells encoding specific features. These differential reactivations within the CA1 episodic cell populations can thus provide the hippocampus with a selection mechanism to consolidate preferentially the more generalized knowledge for long-term memory storage.
Affiliation(s)
- Remus Oşan, Department of Pharmacology and Department of Mathematics and Statistics, Boston University, Boston, Massachusetts, United States of America
- Guifen Chen, Brain and Behavior Discovery Institute and Department of Neurology, MCG, Georgia Health Sciences University, Augusta, Georgia, United States of America
- Ruiben Feng, Brain and Behavior Discovery Institute and Department of Neurology, MCG, Georgia Health Sciences University, Augusta, Georgia, United States of America
- Joe Z. Tsien, Brain and Behavior Discovery Institute and Department of Neurology, MCG, Georgia Health Sciences University, Augusta, Georgia, United States of America
222. Friston K, Mattout J, Kilner J. Action understanding and active inference. Biol Cybern 2011; 104:137-60. [PMID: 21327826] [PMCID: PMC3491875] [DOI: 10.1007/s00422-011-0424-z]
Abstract
We have suggested that the mirror-neuron system might be usefully understood as implementing Bayes-optimal perception of actions emitted by oneself or others. To substantiate this claim, we present neuronal simulations that show the same representations can prescribe motor behavior and encode motor intentions during action-observation. These simulations are based on the free-energy formulation of active inference, which is formally related to predictive coding. In this scheme, (generalised) states of the world are represented as trajectories. When these states include motor trajectories they implicitly entail intentions (future motor states). Optimizing the representation of these intentions enables predictive coding in a prospective sense. Crucially, the same generative models used to make predictions can be deployed to predict the actions of self or others by simply changing the bias or precision (i.e. attention) afforded to proprioceptive signals. We illustrate these points using simulations of handwriting, which show neuronally plausible generation and recognition of itinerant (wandering) motor trajectories. We then use the same simulations to produce synthetic electrophysiological responses to violations of intentional expectations. Our results affirm that a Bayes-optimal approach provides a principled framework, which accommodates current thinking about the mirror-neuron system. Furthermore, it endorses the general formulation of action as active inference.
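The logic of active inference can be caricatured in one dimension (this is a toy illustration, not the paper's handwriting model): perception updates an internal estimate mu and action changes the world state x, but both descend the same free energy, here just a sum of precision-weighted squared prediction errors, so the agent ends up making its prior expectations come true.

```python
# Toy free energy: F = 0.5*pi_s*(s - mu)^2 + 0.5*pi_p*(mu - prior)^2,
# with a noiseless sensory mapping s = x. Perception does gradient descent
# on F with respect to mu; action does gradient descent on F with respect
# to x (the quantity it can change). All values are invented for illustration.
prior = 1.0            # the agent expects to sense this value
pi_s, pi_p = 1.0, 1.0  # sensory and prior precisions
lr = 0.05

x, mu = 0.0, 0.0
for _ in range(2000):
    s = x                                           # sensation
    dF_dmu = -pi_s * (s - mu) + pi_p * (mu - prior)
    dF_dx = pi_s * (s - mu)                         # via ds/dx = 1
    mu -= lr * dF_dmu                               # perception
    x -= lr * dF_dx                                 # action
```

Both variables converge to the prior: raising the precision on proprioceptive signals makes the same generative model act, while lowering it makes the model merely infer, which is the bias/precision switch between generation and recognition described in the abstract.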
Affiliation(s)
- Karl Friston
- The Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, Queen Square, UK.
223
Chen LL, Madhavan R, Rapoport BI, Anderson WS. Real-time brain oscillation detection and phase-locked stimulation using autoregressive spectral estimation and time-series forward prediction. IEEE Trans Biomed Eng 2011; 60:753-62. [PMID: 21292589 DOI: 10.1109/tbme.2011.2109715] [Citation(s) in RCA: 55] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Neural oscillations are important features in a working central nervous system, facilitating efficient communication across large networks of neurons. They are implicated in a diverse range of processes such as synchronization and synaptic plasticity, and can be seen in a variety of cognitive processes. For example, hippocampal theta oscillations are thought to be a crucial component of memory encoding and retrieval. To better study the role of these oscillations in various cognitive processes, and to be able to build clinical applications around them, accurate and precise estimations of the instantaneous frequency and phase are required. Here, we present methodology based on autoregressive modeling to accomplish this in real time. This allows the targeting of stimulation to a specific phase of a detected oscillation. We first assess performance of the algorithm on two signals where the exact phase and frequency are known. Then, using intracranial EEG recorded from two patients performing a Sternberg memory task, we characterize our algorithm's phase-locking performance on physiologic theta oscillations: optimizing algorithm parameters on the first patient using a genetic algorithm, we carried out cross-validation procedures on subsequent trials and electrodes within the same patient, as well as on data recorded from the second patient.
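The core of this approach (fit an autoregressive model to a recent window of the recording, run it forward to predict the oscillation's near future, and time the stimulus to a target phase) can be sketched in a few lines. This is a minimal illustration under our own assumptions, not the published implementation: the least-squares AR fit, the synthetic 6 Hz signal, and the names `fit_ar` and `forward_predict` are ours.

```python
import numpy as np

def fit_ar(x, order=6):
    """Least-squares fit of AR coefficients a[k] in x[t] ~ sum_k a[k] * x[t-1-k]."""
    cols = [x[order - 1 - k: len(x) - 1 - k] for k in range(order)]
    X = np.column_stack(cols)
    a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    return a

def forward_predict(x, a, n_steps):
    """Iterate the fitted AR model forward past the end of the observed window."""
    buf = list(x[-len(a):])
    preds = []
    for _ in range(n_steps):
        nxt = sum(a[k] * buf[-1 - k] for k in range(len(a)))
        preds.append(nxt)
        buf.append(nxt)
    return np.array(preds)

# Demo: a noisy 6 Hz "theta" oscillation sampled at 500 Hz.
fs = 500.0
t = np.arange(0.0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
theta = np.sin(2 * np.pi * 6.0 * t) + 0.05 * rng.standard_normal(t.size)

a = fit_ar(theta[:900], order=6)
pred = forward_predict(theta[:900], a, 50)  # predict the next 100 ms
stim_lag = int(np.argmax(pred))             # samples until the predicted peak
```

Here `stim_lag` is the number of samples to wait before delivering a pulse at the predicted oscillation peak; targeting a different phase amounts to picking a different point on `pred`.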
Affiliation(s)
- L Leon Chen
- Department of Neurosurgery, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA 02115, USA.
224
Lai D, Brandt S, Luksch H, Wessel R. Recurrent antitopographic inhibition mediates competitive stimulus selection in an attention network. J Neurophysiol 2010; 105:793-805. [PMID: 21160008 DOI: 10.1152/jn.00673.2010] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Topographically organized neurons represent multiple stimuli within complex visual scenes and compete for subsequent processing in higher visual centers. The underlying neural mechanisms of this process have long been elusive. We investigate an experimentally constrained model of a midbrain structure: the optic tectum and the reciprocally connected nucleus isthmi. We show that a recurrent antitopographic inhibition mediates the competitive stimulus selection between distant sensory inputs in this visual pathway. This recurrent antitopographic inhibition is fundamentally different from surround inhibition in that it projects to all locations of its input layer except the locus from which it receives input. At a larger scale, the model shows how a focal top-down input from a forebrain region, the arcopallial gaze field, biases the competitive stimulus selection via the combined activation of a local excitation and the recurrent antitopographic inhibition. Our findings reveal circuit mechanisms of competitive stimulus selection and should motivate a search for anatomical implementations of these mechanisms in a range of vertebrate attentional systems.
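The defining feature of the antitopographic kernel, inhibition that reaches every locus of the input layer except the one a unit receives input from, is easy to write down in matrix form. The following toy rate model is our own construction for illustration: the connectivity pattern follows the description above, but the dynamics and parameter values are generic, not the paper's isthmotectal model.

```python
import numpy as np

n = 50   # topographic loci
g = 2.0  # strength of the antitopographic inhibition

# Inhibit every locus except one's own (zero diagonal): the antitopographic kernel.
W = -g * (np.ones((n, n)) - np.eye(n))

I = np.zeros(n)            # two distant stimuli of unequal salience
I[10], I[40] = 1.0, 1.2

r = np.zeros(n)
dt = 0.1
for _ in range(600):       # relax rate dynamics: dr/dt = -r + [I + W r]_+
    r = r + dt * (-r + np.maximum(I + W @ r, 0.0))
```

With `g` large enough, the two co-active loci compete and only the more salient stimulus survives, the winner-take-all selection described above; weakening `g` lets both stimuli coexist.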
Affiliation(s)
- Dihui Lai
- Department of Physics, Washington University, St. Louis, MO 63130, USA.
225
Cury KM, Uchida N. Robust odor coding via inhalation-coupled transient activity in the mammalian olfactory bulb. Neuron 2010; 68:570-85. [PMID: 21040855 DOI: 10.1016/j.neuron.2010.09.040] [Citation(s) in RCA: 198] [Impact Index Per Article: 13.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/27/2010] [Indexed: 11/16/2022]
Abstract
It has been proposed that a single sniff generates a "snapshot" of the olfactory world. However, odor coding on this timescale is poorly understood, and it is not known whether coding is invariant to changes in respiration frequency. We investigated this by recording spike trains from the olfactory bulb in awake, behaving rats. During rapid sniffing, odor inhalation triggered rapid and reliable cell- and odor-specific temporal spike patterns. These fine temporal responses conveyed substantial odor information within the first ∼100 ms, and correlated with behavioral discrimination time on a trial-by-trial basis. Surprisingly, the initial transient portions of responses were highly conserved between rapid sniffing and slow breathing. Firing rates over the entire respiration cycle carried less odor information, did not correlate with behavior, and were poorly conserved across respiration frequency. These results suggest that inhalation-coupled transient activity forms a robust neural code that is invariant to changes in respiration behavior.
Affiliation(s)
- Kevin M Cury
- Department of Molecular and Cellular Biology, Harvard University, Center for Brain Science, 16 Divinity Avenue, Cambridge, MA 02138, USA
226
Komarov MA, Osipov GV, Burtsev MS. Adaptive functional systems: learning with chaos. CHAOS (WOODBURY, N.Y.) 2010; 20:045119. [PMID: 21198131 DOI: 10.1063/1.3521250] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
We propose a new model of adaptive behavior that combines a winnerless competition principle and chaos to learn new functional systems. The model consists of a complex network of nonlinear dynamical elements producing sequences of goal-directed actions. Each element describes the dynamics and activity of a functional system, understood as a distributed set of interacting physiological elements, such as nerves or muscles, that cooperate to achieve a certain goal at the level of the whole organism. During "normal" behavior, the dynamics of the system follows heteroclinic channels, but in a novel situation a chaotic search is activated and a new channel leading to the target state is gradually created, simulating the process of learning. The model was tested in single-goal and multigoal environments and demonstrated good potential for the generation of new adaptations.
Affiliation(s)
- M A Komarov
- Department of Control Theory, Nizhny Novgorod State University, Gagarin Avenue 23, 603950 Nizhny Novgorod, Russia
227
Abstract
A widely discussed hypothesis in neuroscience is that transiently active ensembles of neurons, known as "cell assemblies," underlie numerous operations of the brain, from encoding memories to reasoning. However, the mechanisms responsible for the formation and disbanding of cell assemblies and temporal evolution of cell assembly sequences are not well understood. I introduce and review three interconnected topics, which could facilitate progress in defining cell assemblies, identifying their neuronal organization, and revealing causal relationships between assembly organization and behavior. First, I hypothesize that cell assemblies are best understood in light of their output product, as detected by "reader-actuator" mechanisms. Second, I suggest that the hierarchical organization of cell assemblies may be regarded as a neural syntax. Third, constituents of the neural syntax are linked together by dynamically changing constellations of synaptic weights ("synapsembles"). The existing support for this tripartite framework is reviewed and strategies for experimental testing of its predictions are discussed.
Affiliation(s)
- György Buzsáki
- Center for Molecular and Behavioral Neuroscience, Rutgers, The State University of New Jersey, 197 University Avenue, Newark, NJ 07102, USA.
228
Thivierge JP, Cisek P. Spiking neurons that keep the rhythm. J Comput Neurosci 2010; 30:589-605. [PMID: 20886275 DOI: 10.1007/s10827-010-0280-1] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2009] [Revised: 09/16/2010] [Accepted: 09/21/2010] [Indexed: 10/19/2022]
Abstract
Detecting the temporal relationship among events in the environment is a fundamental goal of the brain. Following pulses of rhythmic stimuli, neurons of the retina and cortex produce activity that closely approximates the timing of an omitted pulse. This omitted stimulus response (OSR) is generally interpreted as a transient response to rhythmic input and is thought to form a basis of short-term perceptual memories. Despite its ubiquity across species and experimental protocols, the mechanisms underlying OSRs remain poorly understood. In particular, the highly transient nature of OSRs, typically limited to a single cycle after stimulation, cannot be explained by a simple mechanism that would remain locked to the frequency of stimulation. Here, we describe a set of realistic simulations that capture OSRs over a range of stimulation frequencies matching experimental work. The model does not require an explicit mechanism for learning temporal sequences. Instead, it relies on spike timing-dependent plasticity (STDP), a form of synaptic modification that is sensitive to the timing of pre- and post-synaptic action potentials. In the model, the transient nature of OSRs is attributed to the heterogeneous nature of neural properties and connections, creating intricate forms of activity that are continuously changing over time. Combined with STDP, neural heterogeneity enabled OSRs to arise in response to complex rhythmic patterns, as well as OSRs following a delay period. These results link the response of neurons to rhythmic patterns with the capacity of heterogeneous circuits to produce transient and highly flexible forms of neural activity.
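The STDP rule invoked here is commonly modeled as an exponential pair-based window: potentiation when the presynaptic spike precedes the postsynaptic spike, depression otherwise. A minimal sketch of that standard rule (the amplitudes and time constants are generic textbook values, not parameters from this model):

```python
import numpy as np

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a single pre/post spike pair.

    dt_ms = t_post - t_pre (ms): positive means pre fired before post,
    giving potentiation; negative gives depression. Both branches decay
    exponentially with the pair's temporal separation.
    """
    dt_ms = np.asarray(dt_ms, dtype=float)
    return np.where(dt_ms >= 0,
                    a_plus * np.exp(-dt_ms / tau_plus),
                    -a_minus * np.exp(dt_ms / tau_minus))
```

In a network simulation this function would be applied to every recent pre/post pair and the result accumulated into the synaptic weight, usually with hard bounds on the weight.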
Affiliation(s)
- Jean-Philippe Thivierge
- Department of Psychological and Brain Sciences, Indiana University, 1101 East Tenth Street, Bloomington, IN 47405, USA.
229
de Franciscis S, Torres JJ, Marro J. Unstable dynamics, nonequilibrium phases, and criticality in networked excitable media. PHYSICAL REVIEW. E, STATISTICAL, NONLINEAR, AND SOFT MATTER PHYSICS 2010; 82:041105. [PMID: 21230236 DOI: 10.1103/physreve.82.041105] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/25/2010] [Indexed: 05/30/2023]
Abstract
Excitable systems are of great theoretical and practical interest in mathematics, physics, chemistry, and biology. Here, we numerically study models of excitable media, namely, networks whose nodes may occasionally be dormant and whose connection weights are allowed to vary with the system activity on a short time scale, which is a convenient and realistic representation. The resulting global activity is quite sensitive to stimuli and eventually becomes unstable even in the absence of any stimuli. Outstanding consequences of such unstable dynamics are the spontaneous occurrence of various nonequilibrium phases--including associative-memory phases and one in which the global activity wanders irregularly, e.g., chaotically, among all or part of the dynamic attractors--and 1/f noise as the system is driven into the phase region corresponding to the most irregular behavior. A net result is a resilience that induces an efficient search in the model's attractor space, which can explain the origin of some observed behavior in neural, genetic, and ill-condensed matter systems. By extensive computer simulation we also address a previously conjectured relation between observed power-law distributions and the possible occurrence of a "critical state" during functionality of, e.g., cortical networks, and describe the precise nature of such criticality in the model, which may serve to guide future experiments.
Affiliation(s)
- S de Franciscis
- Departamento de Electromagnetismo y Física de la Materia, Institute Carlos I for Theoretical and Computational Physics, University of Granada, Granada, Spain
230
Klampfl S, Maass W. A theoretical basis for emergent pattern discrimination in neural systems through slow feature extraction. Neural Comput 2010; 22:2979-3035. [PMID: 20858129 DOI: 10.1162/neco_a_00050] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Neurons in the brain are able to detect and discriminate salient spatiotemporal patterns in the firing activity of presynaptic neurons. How they can learn to achieve this, especially without the help of a supervisor, remains an open question. We show that a well-known unsupervised learning algorithm for linear neurons, slow feature analysis (SFA), is able to acquire the discrimination capability of one of the best algorithms for supervised linear discrimination learning, the Fisher linear discriminant (FLD), given suitable input statistics. We demonstrate the power of this principle by showing that it enables readout neurons from simulated cortical microcircuits to learn without any supervision to discriminate between spoken digits and to detect repeated firing patterns that are embedded into a stream of noise spike trains with the same firing statistics. Both these computer simulations and our theoretical analysis show that slow feature extraction enables neurons to extract and collect information that is spread out over a trajectory of firing states lasting several hundred milliseconds. In addition, it enables neurons to learn without supervision to keep track of time (relative to a stimulus onset, or the initiation of a motor response). Hence, these results elucidate how the brain could compute with trajectories of firing states rather than only with fixed-point attractors. They also provide a theoretical basis for understanding recent experimental results on the emergence of view- and position-invariant classification of visual objects in inferior temporal cortex.
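For the linear case discussed above, SFA reduces to whitening the input and then finding the direction whose temporal derivative has minimal variance. A compact sketch of that procedure, using our own minimal implementation rather than the authors' code:

```python
import numpy as np

def linear_sfa(X, n_out=1):
    """Linear slow feature analysis.

    Whiten X, then take the whitened directions whose temporal derivative
    has the smallest variance (slowest first). X has shape (T, n).
    """
    Xc = X - X.mean(axis=0)
    d, E = np.linalg.eigh(Xc.T @ Xc / len(Xc))
    S = E / np.sqrt(d)                 # whitening matrix (scales eigencolumns)
    Z = Xc @ S                         # whitened signal, identity covariance
    dZ = np.diff(Z, axis=0)            # discrete temporal derivative
    d2, E2 = np.linalg.eigh(dZ.T @ dZ / len(dZ))
    W = S @ E2[:, :n_out]              # eigh sorts ascending -> slowest first
    return Xc @ W, W

# Demo: recover a slow source from a linear mixture with a fast one.
t = np.linspace(0.0, 10.0, 2000)
slow = np.sin(2 * np.pi * 0.25 * t)
fast = np.sin(2 * np.pi * 10.0 * t + 0.7)
A = np.array([[1.0, 0.5],
              [0.4, 1.0]])            # mixing matrix: both channels see both sources
X = np.column_stack([slow, fast]) @ A.T
Y, _ = linear_sfa(X)
```

`Y[:, 0]` recovers the slow source (up to sign and scale) from the mixture, even though both sources are present in every input channel.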
Affiliation(s)
- Stefan Klampfl
- Institute for Theoretical Computer Science, Graz University of Technology, A-8010 Graz, Austria.
231
Dynamical principles of emotion-cognition interaction: mathematical images of mental disorders. PLoS One 2010; 5:e12547. [PMID: 20877723 PMCID: PMC2943469 DOI: 10.1371/journal.pone.0012547] [Citation(s) in RCA: 41] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2010] [Accepted: 08/11/2010] [Indexed: 01/08/2023] Open
Abstract
The key contribution of this work is to introduce a mathematical framework for understanding self-organized dynamics in the brain that can explain certain aspects of itinerant behavior. Specifically, we introduce a model based upon the coupling of generalized Lotka-Volterra systems. This coupling is based upon competition for common resources. The system can be regarded as a normal or canonical form for any distributed system that shows self-organized dynamics entailing winnerless competition. Crucially, we show that some of the fundamental instabilities that arise in these coupled systems are remarkably similar to endogenous activity seen in the brain (using EEG and fMRI). Furthermore, by changing a small subset of the system's parameters we can produce bifurcations and changes in the metastable sequential dynamics, which bear a remarkable similarity to pathological brain states seen in psychiatry. In what follows, we consider the coupling of two macroscopic modes of brain activity, which, in a purely descriptive fashion, we label as cognitive and emotional modes. Our aim is to examine the dynamical structures that emerge when coupling these two modes and to relate them tentatively to brain activity in normal and non-normal states.
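The coupling aside, the basic generalized Lotka-Volterra mechanism behind the winnerless competition described here can be simulated in a few lines. The competition matrix below is a standard illustrative choice for a three-mode heteroclinic cycle, not a parameterization from the paper, and the small activity floor is our own device standing in for background input:

```python
import numpy as np

# Generalized Lotka-Volterra rates with asymmetric competition (rho) produce
# winnerless competition: each mode transiently dominates, then yields.
sigma = np.ones(3)                       # growth rates
rho = np.array([[1.0, 1.25, 0.8],        # asymmetric competition matrix
                [0.8, 1.0, 1.25],
                [1.25, 0.8, 1.0]])

x = np.array([0.6, 0.3, 0.1])
dt, floor = 0.01, 1e-3
traj = []
for _ in range(40000):
    x = x + dt * x * (sigma - rho @ x)   # dx_i/dt = x_i (sigma_i - sum_j rho_ij x_j)
    x = np.maximum(x, floor)             # keep modes primed (assumption of the sketch)
    traj.append(x.copy())
traj = np.array(traj)
winners = traj.argmax(axis=1)
```

`winners` shows each mode transiently dominating in a fixed sequence (0, 1, 2, 0, ...), the sequential metastable dynamics the abstract refers to.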
232
Afraimovich V, Young T, Muezzinoglu MK, Rabinovich MI. Nonlinear dynamics of emotion-cognition interaction: when emotion does not destroy cognition? Bull Math Biol 2010; 73:266-84. [PMID: 20821062 DOI: 10.1007/s11538-010-9572-x] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2010] [Accepted: 07/05/2010] [Indexed: 11/26/2022]
Abstract
Emotion (i.e., spontaneous motivation and subsequent implementation of a behavior) and cognition (i.e., problem solving by information processing) are essential to how we, as humans, respond to changes in our environment. Recent studies in cognitive science suggest that emotion and cognition are subserved by different, although heavily integrated, neural systems. Understanding the time-varying relationship of emotion and cognition is a challenging goal with important implications for neuroscience. We formulate here a dynamical model of emotion-cognition interaction that is based on the following principles: (1) the temporal evolution of the cognitive and emotion modes is governed by the incoming stimuli and by competition within and among the modes (competition principle); (2) metastable states exist in the unified emotion-cognition phase space; and (3) the brain processes information with robust and reproducible transients through the sequence of metastable states. Such a model can take advantage of the often ignored temporal structure of the emotion-cognition interaction to provide a robust and generalizable method for understanding the relationship between brain activation and complex human behavior. The mathematical images of these robust and reproducible transient dynamics are the Stable Heteroclinic Sequence (SHS) and the Stable Heteroclinic Channel (SHC), which have been hypothesized to be possible mechanisms leading to the sequential transient behavior observed in networks. We investigate the modularity of SHCs: given an SHS and an SHC supported in one part of a network (cognition), we study the conditions under which that SHC continues to function in the presence of interfering activity from other parts of the network (emotion).
233
Friston KJ, Dolan RJ. Computational and dynamic models in neuroimaging. Neuroimage 2010; 52:752-65. [PMID: 20036335 PMCID: PMC2910283 DOI: 10.1016/j.neuroimage.2009.12.068] [Citation(s) in RCA: 108] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2009] [Revised: 12/13/2009] [Accepted: 12/14/2009] [Indexed: 11/27/2022] Open
Abstract
This article reviews the substantial impact computational neuroscience has had on neuroimaging over the past years. It builds on the distinction between models of the brain as a computational machine and computational models of neuronal dynamics per se; i.e., models of brain function and biophysics. Both sorts of model borrow heavily from computational neuroscience, and both have enriched the analysis of neuroimaging data and the type of questions we address. To illustrate the role of functional models in imaging neuroscience, we focus on optimal control and decision (game) theory; the models used here provide a mechanistic account of neuronal computations and the latent (mental) states represented by the brain. In terms of biophysical modelling, we focus on dynamic causal modelling, with a special emphasis on recent advances in neural-mass models for hemodynamic and electrophysiological time series. Each example emphasises the role of generative models, which embed our hypotheses or questions, and the importance of model comparison (i.e., hypothesis testing). We return to this theme when contextualising recent trends in relation to each other.
Affiliation(s)
- Karl J Friston
- The Wellcome Trust Centre for Neuroimaging, University College London, UK.
234
Wang XJ. Neurophysiological and computational principles of cortical rhythms in cognition. Physiol Rev 2010; 90:1195-268. [PMID: 20664082 DOI: 10.1152/physrev.00035.2008] [Citation(s) in RCA: 1234] [Impact Index Per Article: 82.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022] Open
Abstract
Synchronous rhythms represent a core mechanism for sculpting temporal coordination of neural activity in the brain-wide network. This review focuses on oscillations in the cerebral cortex that occur during cognition, in alert behaving conditions. Over the last two decades, experimental and modeling work has made great strides in elucidating the detailed cellular and circuit basis of these rhythms, particularly gamma and theta rhythms. The underlying physiological mechanisms are diverse (ranging from resonance and pacemaker properties of single cells to multiple scenarios for population synchronization and wave propagation), but also exhibit unifying principles. A major conceptual advance was the realization that synaptic inhibition plays a fundamental role in rhythmogenesis, either in an interneuronal network or in a reciprocal excitatory-inhibitory loop. Computational functions of synchronous oscillations in cognition are still a matter of debate among systems neuroscientists, in part because the notion of regular oscillation seems to contradict the common observation that spiking discharges of individual neurons in the cortex are highly stochastic and far from being clocklike. However, recent findings have led to a framework that goes beyond the conventional theory of coupled oscillators and reconciles the apparent dichotomy between irregular single neuron activity and field potential oscillations. From this perspective, a plethora of studies will be reviewed on the involvement of long-distance neuronal coherence in cognitive functions such as multisensory integration, working memory, and selective attention. Finally, implications of abnormal neural synchronization are discussed as they relate to mental disorders like schizophrenia and autism.
Affiliation(s)
- Xiao-Jing Wang
- Department of Neurobiology and Kavli Institute of Neuroscience, Yale University School of Medicine, New Haven, Connecticut 06520, USA.
235
Abstract
A fundamental goal in vision science is to determine how many neurons in how many areas are required to compute a coherent interpretation of the visual scene. Here I propose six principles of the cortical dynamics of visual processing in the first 150 ms following the appearance of a visual stimulus. Fast synaptic communication between neurons depends on the driving neurons and the biophysical history and driving forces of the target neurons. Under these constraints, the retina communicates changes in the field of view, driving large populations of neurons in visual areas into a dynamic sequence of feed-forward communication and integration of the inward current of the change signal into the dendrites of higher-order area neurons (30-70 ms). Simultaneously, an even larger number of neurons within each area receiving feed-forward input are pre-excited to sub-threshold levels. The higher-order area neurons communicate the results of their computations as feedback, adding inward current to the excited and pre-excited neurons in lower areas. This feedback reconciles computational differences between higher and lower areas (75-120 ms). This brings the lower-area neurons into a new dynamic regime characterized by reduced driving forces and sparse firing, reflecting the visual areas' interpretation of the current scene (140 ms). The population membrane potentials, net inward/outward currents, and firing are well behaved at the mesoscopic scale, such that decoding in retinotopic cortical space shows the visual areas' interpretation of the current scene. These dynamics have plausible biophysical explanations. The principles are theoretical, predictive, supported by recent experiments, and easily lend themselves to experimental tests or computational modeling.
Affiliation(s)
- Per E. Roland
- Department of Neuroscience, Division of Brain Research, Karolinska Institutet, Stockholm, Sweden
236
Sequentially switching cell assemblies in random inhibitory networks of spiking neurons in the striatum. J Neurosci 2010; 30:5894-911. [PMID: 20427650 DOI: 10.1523/jneurosci.5540-09.2010] [Citation(s) in RCA: 92] [Impact Index Per Article: 6.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
The striatum is composed of GABAergic medium spiny neurons with inhibitory collaterals forming a sparse random asymmetric network and receiving an excitatory glutamatergic cortical projection. Because the inhibitory collaterals are sparse and weak, their role in striatal network dynamics is puzzling. However, here we show by simulation of a striatal inhibitory network model composed of spiking neurons that cells form assemblies that fire in sequential coherent episodes and display complex identity-temporal spiking patterns even when cortical excitation is simply constant or fluctuating noisily. Strongly correlated large-scale firing rate fluctuations on slow behaviorally relevant timescales of hundreds of milliseconds are shown by members of the same assembly whereas members of different assemblies show strong negative correlation, and we show how randomly connected spiking networks can generate this activity. Cells display highly irregular spiking with high coefficients of variation, broadly distributed low firing rates, and interspike interval distributions that are consistent with exponentially tailed power laws. Although firing rates vary coherently on slow timescales, precise spiking synchronization is absent in general. Our model only requires the minimal but striatally realistic assumptions of sparse to intermediate random connectivity, weak inhibitory synapses, and sufficient cortical excitation so that some cells are depolarized above the firing threshold during up states. Our results are in good qualitative agreement with experimental studies, consistent with recently determined striatal anatomy and physiology, and support a new view of endogenously generated metastable state switching dynamics of the striatal network underlying its information processing operations.
237
Park C, Worth RM, Rubchinsky LL. Fine temporal structure of beta oscillations synchronization in subthalamic nucleus in Parkinson's disease. J Neurophysiol 2010; 103:2707-16. [PMID: 20181734 PMCID: PMC2867579 DOI: 10.1152/jn.00724.2009] [Citation(s) in RCA: 80] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2009] [Accepted: 02/22/2010] [Indexed: 11/22/2022] Open
Abstract
Synchronous oscillatory dynamics in the beta frequency band is a characteristic feature of neuronal activity of basal ganglia in Parkinson's disease and is hypothesized to be related to the disease's hypokinetic symptoms. This study explores the temporal structure of this synchronization during episodes of oscillatory beta-band activity. Phase synchronization (phase locking) between extracellular units and local field potentials (LFPs) from the subthalamic nucleus (STN) of parkinsonian patients is analyzed here at a high temporal resolution. We use methods of nonlinear dynamics theory to construct first-return maps for the phases of oscillations and quantify their dynamics. Synchronous episodes are interrupted by less synchronous episodes in an irregular yet structured manner. We estimate probabilities for different kinds of these "desynchronization events." There is a dominance of relatively frequent yet very brief desynchronization events with the most likely desynchronization lasting for about one cycle of oscillations. The chances of longer desynchronization events decrease with their duration. The observed synchronization may primarily reflect the relationship between synaptic input to STN and somatic/axonal output from STN at rest. The intermittent, transient character of synchrony even on very short time scales may reflect the possibility for the basal ganglia to carry out some informational function even in the parkinsonian state. The dominance of short desynchronization events suggests that even though the synchronization in parkinsonian basal ganglia is fragile enough to be frequently destabilized, it has the ability to reestablish itself very quickly.
Affiliation(s)
- Choongseok Park
- Department of Mathematical Sciences, Indiana University Purdue University Indianapolis, Indianapolis, IN 46202, USA
238
Abstract
A free-energy principle has been proposed recently that accounts for action, perception and learning. This Review looks at some key brain theories in the biological (for example, neural Darwinism) and physical (for example, information theory and optimal control theory) sciences from the free-energy perspective. Crucially, one key theme runs through each of these theories - optimization. Furthermore, if we look closely at what is optimized, the same quantity keeps emerging, namely value (expected reward, expected utility) or its complement, surprise (prediction error, expected cost). This is the quantity that is optimized under the free-energy principle, which suggests that several global brain theories might be unified within a free-energy framework.
Affiliation(s)
- Karl Friston
- The Wellcome Trust Centre for Neuroimaging, University College London, Queen Square, London, WC1N 3BG, UK.
239
Neural population-level memory traces in the mouse hippocampus. PLoS One 2009; 4:e8256. [PMID: 20016843 PMCID: PMC2788416 DOI: 10.1371/journal.pone.0008256] [Citation(s) in RCA: 41] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2009] [Accepted: 11/19/2009] [Indexed: 11/19/2022] Open
Abstract
One of the fundamental goals in neuroscience is to elucidate the formation and retrieval of the brain's associative memory traces in real time. Here, we describe real-time neural ensemble transient dynamics in the mouse hippocampal CA1 region and demonstrate their relationships with behavioral performance during both learning and recall. We employed the classic trace fear conditioning paradigm involving a neutral tone followed by a mild foot-shock 20 seconds later. Our large-scale recording and decoding methods revealed that conditioned tone responses and tone-shock association patterns were not present in CA1 during the first pairing, but emerged quickly after multiple pairings. These encoding patterns showed increased immediate replay, correlating tightly with increased immediate freezing during learning. Moreover, during contextual recall, these patterns reappeared in tandem six to fourteen times per minute, again correlating tightly with behavioral recall. Upon traced tone recall, while various fear memories were retrieved, the shock traces exhibited a unique recall peak around the 20-second trace interval, further signifying the memory of time for the expected shock. Therefore, our study has revealed various real-time associative memory traces during learning and recall in CA1, and demonstrates that real-time memory traces can be decoded on a moment-to-moment basis over any single trial.
240
Bick C, Rabinovich MI. Dynamical origin of the effective storage capacity in the brain's working memory. Phys Rev Lett 2009; 103:218101. [PMID: 20366069] [DOI: 10.1103/physrevlett.103.218101]
Abstract
The capacity of working memory (WM), a short-term buffer for information in the brain, is limited. We suggest a model for sequential WM that is based upon winnerless competition amongst representations of available informational items. Analytical results for the underlying mathematical model relate WM capacity and relative lateral inhibition in the corresponding neural network. This implies an upper bound for WM capacity, which is, under reasonable neurobiological assumptions, close to the "magical number seven."
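The winnerless-competition mechanism summarized above can be illustrated with a generalized Lotka-Volterra network, in which asymmetric lateral inhibition produces sequential switching among item representations rather than a single fixed winner. The following is an illustrative sketch under assumed parameter values (`n_items`, `rho`, the asymmetry term), not the authors' exact model:

```python
import numpy as np

# Illustrative sketch: sequential working memory as winnerless competition
# in a generalized Lotka-Volterra network. Each unit x_i is the activity of
# one stored item; the lateral-inhibition strength rho between items is the
# quantity the paper relates to WM capacity.

def simulate_wlc(n_items=5, rho=1.5, t_steps=20000, dt=0.01, seed=0):
    rng = np.random.default_rng(seed)
    # Inhibition matrix: self-inhibition 1, lateral inhibition rho with a
    # small asymmetric perturbation (asymmetry is what yields a heteroclinic
    # sequence of transiently winning items instead of winner-take-all).
    rho_mat = np.full((n_items, n_items), rho)
    rho_mat += 0.3 * rng.standard_normal((n_items, n_items))
    np.fill_diagonal(rho_mat, 1.0)
    x = rng.uniform(0.01, 0.1, n_items)
    winners = []
    for _ in range(t_steps):
        x += dt * x * (1.0 - rho_mat @ x)   # Lotka-Volterra dynamics
        x = np.clip(x, 1e-9, None)          # keep activities positive
        winners.append(int(np.argmax(x)))   # momentarily dominant item
    return winners

winners = simulate_wlc()
# Count how often the dominant item changes over the run.
switches = sum(1 for a, b in zip(winners, winners[1:]) if a != b)
```

Sweeping `rho` in such a sketch is one way to see the capacity effect the abstract describes: stronger relative lateral inhibition limits how many items can participate in a stable sequence.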
Affiliation(s)
- Christian Bick
- BioCircuits Institute, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093-0402, USA.
241
Embedding multiple trajectories in simulated recurrent neural networks in a self-organizing manner. J Neurosci 2009; 29:13172-81. [PMID: 19846705] [DOI: 10.1523/jneurosci.2358-09.2009]
Abstract
Complex neural dynamics produced by the recurrent architecture of neocortical circuits are critical to the cortex's computational power. However, the synaptic learning rules underlying the creation of stable propagation and reproducible neural trajectories within recurrent networks are not understood. Here, we examined synaptic learning rules with the goal of creating recurrent networks in which evoked activity would: (1) propagate throughout the entire network in response to a brief stimulus while avoiding runaway excitation; (2) exhibit spatially and temporally sparse dynamics; and (3) incorporate multiple neural trajectories, i.e., different input patterns should elicit distinct trajectories. We established that an unsupervised learning rule, termed presynaptic-dependent scaling (PSD), can achieve the proposed network dynamics. To quantify the structure of the trained networks, we developed a recurrence index, which revealed that presynaptic-dependent scaling generated a functionally feedforward network when training with a single stimulus. However, training the network with multiple input patterns established that: (1) multiple non-overlapping stable trajectories can be embedded in the network; and (2) the structure of the network became progressively more complex (recurrent) as the number of training patterns increased. In addition, we determined that PSD and spike-timing-dependent plasticity operating in parallel improved the ability of the network to incorporate multiple and less variable trajectories, but also shortened the duration of the neural trajectory. Together, these results establish one of the first learning rules that can embed multiple trajectories, each of which recruits all neurons, within recurrent neural networks in a self-organizing manner.
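The abstract names the PSD rule but not its equations. The toy sketch below implements one plausible reading of "presynaptic-dependent scaling" — incoming weights are scaled toward a target postsynaptic activity level, with each update gated by presynaptic activity so that only active inputs are rescaled. The network size, target rate, and learning rate are illustrative assumptions, not the paper's values:

```python
import numpy as np

# Toy homeostatic rule in the spirit of presynaptic-dependent scaling
# (an interpretation of the rule's name, not the paper's exact equations):
# dw[i, j] is proportional to presynaptic activity x[j] times the deviation
# of postsynaptic unit i's average activity from a target.

rng = np.random.default_rng(1)
n = 50
w = np.abs(rng.standard_normal((n, n))) * 0.05   # excitatory recurrent weights
np.fill_diagonal(w, 0.0)
target = 0.1          # desired mean activity per unit (assumed)
eta = 0.05            # learning rate (assumed)

x = rng.uniform(0.0, 0.2, n)
avg = np.zeros(n)
for step in range(2000):
    x = np.tanh(w @ x + 0.05)            # simple rate dynamics + weak drive
    avg = 0.99 * avg + 0.01 * x          # running-average activity
    # presynaptic-dependent scaling: dw[i, j] ∝ x[j] * (target - avg[i])
    w += eta * np.outer(target - avg, x)
    w = np.clip(w, 0.0, None)            # keep weights excitatory

mean_activity = float(avg.mean())
```

The point of the sketch is the qualitative behavior the abstract attributes to PSD: activity neither dies out nor runs away, because weights are driven up when units are underactive and down when they are overactive.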
242
Lazar A, Pipa G, Triesch J. SORN: a self-organizing recurrent neural network. Front Comput Neurosci 2009; 3:23. [PMID: 19893759] [PMCID: PMC2773171] [DOI: 10.3389/neuro.10.023.2009]
Abstract
Understanding the dynamics of recurrent neural networks is crucial for explaining how the brain processes information. In the neocortex, a range of different plasticity mechanisms shapes recurrent networks into effective information-processing circuits that learn appropriate representations for time-varying sensory stimuli. However, it has been difficult to mimic these abilities in artificial neural network models. Here we introduce SORN, a self-organizing recurrent network. It combines three distinct forms of local plasticity to learn spatio-temporal patterns in its input while maintaining its dynamics in a healthy regime suitable for learning. The SORN learns to encode information in the form of trajectories through its high-dimensional state space, reminiscent of recent biological findings on cortical coding. All three forms of plasticity are shown to be essential for the network's success.
Affiliation(s)
- Andreea Lazar
- Frankfurt Institute of Advanced Studies, Johann Wolfgang Goethe University Frankfurt am Main, Germany.
243
Abstract
The neural basis of olfactory information processing and olfactory percept formation is a topic of intense investigation as new genetic, optical, and psychophysical tools are brought to bear to identify the sites and interaction modes of cortical areas involved in the central processing of olfactory information. New methods for recording cellular interactions and network events in the awake, behaving brain during olfactory processing and odor-based decision making have shown remarkable new properties of neuromodulation and synaptic interactions distinct from those observed in anesthetized brains. Psychophysical, imaging, and computational studies point to the orbitofrontal cortex as the likely locus of odor percept formation in mammals, but further work is needed to identify a causal link between perceptual and neural events in this area.
Affiliation(s)
- Alan Gelperin
- Monell Chemical Senses Center, Philadelphia, Pennsylvania 19104, USA.
244
Abstract
Odors evoke complex spatiotemporal responses in the insect antennal lobe (AL) and mammalian olfactory bulb. However, the behavioral relevance of spatiotemporal coding remains unclear. In the present work we combined behavioral analyses with calcium imaging of odor-induced activity in the honeybee AL to evaluate the relevance of this temporal dimension in the olfactory code. We used a new method for evaluating the odor similarity of binary mixtures in behavioral studies, which involved testing whether odor-sampling time must match between training and testing conditions for odor recognition during associative learning. Using graded changes in the similarity of the mixture ratios, we found high correlations between the behavioral generalization across those mixtures and a gradient of activation in AL output. Furthermore, short odor stimuli of 500 ms or less affected how well odors were matched with a memory template, and this time corresponded to a shift from a sampling-time-dependent to a sampling-time-independent memory. Accordingly, 375 ms corresponded to the time required for spatiotemporal AL activity patterns to reach maximal separation according to imaging studies. Finally, we compared spatiotemporal representations of binary mixtures in trained and untrained animals. AL activity was modified by conditioning to improve the separation of odor representations. These data suggest that one role of reinforcement is to "tune" the AL such that relevant odors become more discriminable.
245
Friston K, Kiebel S. Predictive coding under the free-energy principle. Philos Trans R Soc Lond B Biol Sci 2009; 364:1211-21. [PMID: 19528002] [DOI: 10.1098/rstb.2008.0300]
Abstract
This paper considers prediction and perceptual categorization as an inference problem that is solved by the brain. We assume that the brain models the world as a hierarchy or cascade of dynamical systems that encode causal structure in the sensorium. Perception is equated with the optimization or inversion of these internal models, to explain sensory data. Given a model of how sensory data are generated, we can invoke a generic approach to model inversion, based on a free energy bound on the model's evidence. The ensuing free-energy formulation furnishes equations that prescribe the process of recognition, i.e. the dynamics of neuronal activity that represent the causes of sensory input. Here, we focus on a very general model, whose hierarchical and dynamical structure enables simulated brains to recognize and predict trajectories or sequences of sensory states. We first review hierarchical dynamical models and their inversion. We then show that the brain has the necessary infrastructure to implement this inversion and illustrate this point using synthetic birds that can recognize and categorize birdsongs.
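Stated compactly, the inversion scheme described above treats recognition as a gradient descent on free energy in generalized coordinates of motion. The following is a hedged reconstruction of the scheme's central equation in the notation typically used for this formulation, where $D$ is the derivative (shift) operator, $\tilde{s}$ the generalized sensory data, and $\tilde{\mu}$ the conditional expectations of the causes:

```latex
% Recognition dynamics: conditional expectations \tilde{\mu} follow the
% predicted motion D\tilde{\mu} while descending the free-energy gradient.
\dot{\tilde{\mu}} = D\tilde{\mu} - \partial_{\tilde{\mu}} F(\tilde{s}, \tilde{\mu})
```

At the fixed point of this flow (in a frame of reference moving with the predicted motion), the gradient of free energy vanishes and $\tilde{\mu}$ encodes the most likely causes of the sensory input.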
Affiliation(s)
- Karl Friston
- The Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, Queen Square, London, UK.
246
Kiebel SJ, von Kriegstein K, Daunizeau J, Friston KJ. Recognizing sequences of sequences. PLoS Comput Biol 2009; 5:e1000464. [PMID: 19680429] [PMCID: PMC2714976] [DOI: 10.1371/journal.pcbi.1000464]
Abstract
The brain's decoding of fast sensory streams is currently impossible to emulate, even approximately, with artificial agents. For example, robust speech recognition is relatively easy for humans but exceptionally difficult for artificial speech recognition systems. In this paper, we propose that recognition can be simplified with an internal model of how sensory input is generated, when formulated in a Bayesian framework. We show that a plausible candidate for an internal or generative model is a hierarchy of ‘stable heteroclinic channels’. This model describes continuous dynamics in the environment as a hierarchy of sequences, where slower sequences cause faster sequences. Under this model, online recognition corresponds to the dynamic decoding of causal sequences, giving a representation of the environment with predictive power on several timescales. We illustrate the ensuing decoding or recognition scheme using synthetic sequences of syllables, where syllables are sequences of phonemes and phonemes are sequences of sound-wave modulations. By presenting anomalous stimuli, we find that the resulting recognition dynamics disclose inference at multiple time scales and are reminiscent of neuronal dynamics seen in the real brain.

Despite tremendous advances in neuroscience, we cannot yet build machines that recognize the world as effortlessly as we do. One reason might be that there are computational approaches to recognition that have not yet been exploited. Here, we demonstrate that the ability to recognize temporal sequences might play an important part. We show that an artificial decoding device can extract natural speech sounds from sound waves if speech is generated as dynamic and transient sequences of sequences. In principle, this means that artificial recognition can be implemented robustly and online using dynamic systems theory and Bayesian inference.
247
Friston K, Kiebel S. Cortical circuits for perceptual inference. Neural Netw 2009; 22:1093-104. [PMID: 19635656] [PMCID: PMC2796185] [DOI: 10.1016/j.neunet.2009.07.023]
Abstract
This paper assumes that cortical circuits have evolved to enable inference about the causes of sensory input received by the brain. This provides a principled specification of what neural circuits have to achieve. Here, we attempt to address how the brain makes inferences by casting inference as an optimisation problem. We look at how the ensuing recognition dynamics could be supported by directed connections and message-passing among neuronal populations, given our knowledge of intrinsic and extrinsic neuronal connections. We assume that the brain models the world as a dynamic system, which imposes causal structure on the sensorium. Perception is equated with the optimisation or inversion of this internal model, to explain sensory input. Given a model of how sensory data are generated, we use a generic variational approach to model inversion to furnish equations that prescribe recognition; i.e., the dynamics of neuronal activity that represents the causes of sensory input. Here, we focus on a model whose hierarchical and dynamical structure enables simulated brains to recognise and predict sequences of sensory states. We first review these models and their inversion under a variational free-energy formulation. We then show that the brain has the necessary infrastructure to implement this inversion and present simulations using synthetic birds that generate and recognise birdsongs.
Affiliation(s)
- Karl Friston
- The Wellcome Trust Centre for Neuroimaging, University College London, Queen Square, London, United Kingdom.
248
Siekmeier PJ. Evidence of multistability in a realistic computer simulation of hippocampus subfield CA1. Behav Brain Res 2009; 200:220-31. [PMID: 19378385] [DOI: 10.1016/j.bbr.2009.01.021]
Abstract
The manner in which the hippocampus processes neural signals is thought to be central to the memory encoding process. A theoretically oriented literature has suggested that this is carried out via "attractors", or distinctive spatio-temporal patterns of activity. However, these ideas have not been thoroughly investigated using computational models featuring both realistic single-cell physiology and detailed cell-to-cell connectivity. Here we present a 452-cell simulation based on Traub et al.'s pyramidal cell [Traub RD, Jefferys JG, Miles R, Whittington MA, Toth K. A branching dendritic model of a rodent CA3 pyramidal neurone. J Physiol (Lond) 1994;481:79-95] and interneuron [Traub RD, Miles R. Pyramidal cell-to-inhibitory cell spike transduction explicable by active dendritic conductances in inhibitory cell. J Comput Neurosci 1995;2:291-8] models, incorporating patterns of synaptic connectivity based on an extensive review of the neuroanatomic literature. When stimulated with a one-second physiologically realistic input, our simulated tissue shows the ability to hold activity online for several seconds; furthermore, its spiking activity, as measured by frequency and interspike interval (ISI) distributions, resembles that of in vivo hippocampus. An interesting emergent property of the system is its tendency to transition from stable state to stable state, a behavior consistent with recent experimental findings [Sasaki T, Matsuki N, Ikegaya Y. Metastability of active CA3 networks. J Neurosci 2007;27:517-28]. Inspection of spike trains and simulated blockade of K(AHP) channels suggest that this is mediated by spike frequency adaptation. This finding, in conjunction with studies showing that apamin, a K(AHP) channel blocker, enhances the memory consolidation process in laboratory animals, suggests that the formation of stable attractor states is central to the process by which memories are encoded. Ways in which this methodology could shed light on the etiology of mental illness, such as schizophrenia, are discussed.
Affiliation(s)
- Peter J Siekmeier
- Harvard Medical School and McLean Hospital, 115 Mill Street, Belmont, MA 02478, USA.
249
Orosz G, Ashwin P, Townley S. Learning of spatio-temporal codes in a coupled oscillator system. IEEE Trans Neural Netw 2009; 20:1135-47. [PMID: 19482575] [DOI: 10.1109/tnn.2009.2016658]
Abstract
In this paper, we consider a learning strategy that allows one to transmit information between two coupled phase oscillator systems (called teaching and learning systems) via frequency adaptation. The dynamics of these systems can be modeled with reference to a number of partially synchronized cluster states and transitions between them. Forcing the teaching system by steady but spatially nonhomogeneous inputs produces cyclic sequences of transitions between the cluster states, that is, information about inputs is encoded via a "winnerless competition" process into spatio-temporal codes. The large variety of codes can be learned by the learning system that adapts its frequencies to those of the teaching system. We visualize the dynamics using "weighted order parameters (WOPs)" that are analogous to "local field potentials" in neural systems. Since spatio-temporal coding is a mechanism that appears in olfactory systems, the developed learning rules may help to extract information from these neural ensembles.
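The frequency-adaptation idea summarized above can be illustrated with a pair of Kuramoto-style phase-oscillator populations. The sketch below is a hedged simplification (one-to-one teacher-to-learner coupling, assumed coupling and adaptation gains), not the paper's exact equations: the learner nudges each of its natural frequencies in the direction that reduces the phase mismatch with its teacher oscillator, and so converges on the teacher's frequencies.

```python
import numpy as np

# Teacher-learner frequency adaptation between two phase-oscillator
# populations (illustrative parameters; one-to-one coupling assumed).

rng = np.random.default_rng(2)
n, dt, k_couple, eps = 8, 0.01, 0.5, 0.05
omega_teach = rng.uniform(0.8, 1.2, n)      # teacher's natural frequencies
omega_learn = rng.uniform(0.5, 1.5, n)      # learner starts mismatched
phi_t = rng.uniform(0, 2 * np.pi, n)
phi_l = rng.uniform(0, 2 * np.pi, n)

for _ in range(100000):
    drive = np.sin(phi_t - phi_l)           # phase mismatch signal
    # frequency adaptation: move each natural frequency in the direction
    # that reduces the phase mismatch with its teacher oscillator
    omega_learn += dt * eps * drive
    phi_t += dt * omega_teach               # teacher runs freely
    phi_l += dt * (omega_learn + k_couple * drive)

freq_error = float(np.abs(omega_learn - omega_teach).max())
```

Once a learner oscillator phase-locks to its teacher, the adaptation term drives the residual frequency mismatch to zero, which is the sense in which the teacher's spatio-temporal code is "learned".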
Affiliation(s)
- Gábor Orosz
- Department of Mechanical Engineering, University of California at Santa Barbara, Santa Barbara, CA 93106, USA.
250
Muezzinoglu MK, Huerta R, Abarbanel HDI, Ryan MA, Rabinovich MI. Chemosensor-driven artificial antennal lobe transient dynamics enable fast recognition and working memory. Neural Comput 2009; 21:1018-37. [PMID: 19018701] [DOI: 10.1162/neco.2008.05-08-780]
Abstract
The speed and accuracy of odor recognition in insects can hardly be explained by the raw descriptors provided by olfactory receptors alone, owing to their slow time constants and high variability. The animal overcomes these barriers by means of the antennal lobe (AL) dynamics, which consolidate the classificatory information in the receptor signal into a spatiotemporal code that is enriched in odor sensitivity, particularly in its transient. Inspired by this fact, we propose an easily implementable AL-like network and show that it significantly expedites and enhances the identification of odors from slow and noisy artificial polymer sensor responses. The device owes its efficiency to two intrinsic mechanisms: inhibition (which triggers a competition) and integration (due to the dynamical nature of the network). The former functions as a sharpening filter, extracting the features of the receptor signal that favor odor separation, whereas the latter implements a working memory by accumulating the extracted features in trajectories. This cooperation boosts odor specificity during the receptor transient, which is essential for fast odor recognition.
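The two mechanisms credited above — lateral inhibition for contrast sharpening and integration as a working memory — can be illustrated on synthetic data. The sketch below is an assumed toy pipeline (synthetic slow sensor responses, a simple mean-subtraction form of inhibition, cumulative-sum integration, nearest-template classification), not the authors' network:

```python
import numpy as np

# Toy illustration: classify slow, noisy multichannel "chemosensor"
# responses by (1) lateral inhibition across channels and (2) integrating
# the sharpened features along a trajectory (a working memory).

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 200)
slow = 1 - np.exp(-t / 0.4)                  # slow sensor rise (synthetic)

def sensor_response(pattern, noise=0.1):
    """Noisy, slow multichannel response to an 'odor' pattern."""
    return np.outer(slow, pattern) + noise * rng.standard_normal((len(t), len(pattern)))

def al_like(readings, k_inh=0.6):
    """Lateral inhibition (subtract scaled channel mean), then integrate."""
    sharpened = readings - k_inh * readings.mean(axis=1, keepdims=True)
    return np.cumsum(sharpened, axis=0)      # integrated trajectory

odor_a = np.array([1.0, 0.8, 0.2, 0.1])
odor_b = np.array([0.2, 0.1, 1.0, 0.9])
template_a = al_like(sensor_response(odor_a))[-1]
template_b = al_like(sensor_response(odor_b))[-1]

# Classify a fresh noisy presentation of odor A by nearest template.
probe = al_like(sensor_response(odor_a))[-1]
label = "A" if np.linalg.norm(probe - template_a) < np.linalg.norm(probe - template_b) else "B"
```

Integration is what buys the noise robustness here: per-sample noise averages out along the trajectory while the odor-specific signal accumulates, which is the working-memory role the abstract describes.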
Affiliation(s)
- Mehmet K Muezzinoglu
- Institute for Nonlinear Science, University of California, San Diego, La Jolla, CA 92093-0402, U.S.A.