1. Penny W. Stochastic attractor models of visual working memory. PLoS One 2024;19:e0301039. PMID: 38568927; PMCID: PMC10990203; DOI: 10.1371/journal.pone.0301039.
Abstract
This paper investigates models of working memory in which memory traces evolve according to stochastic attractor dynamics. Such models have previously been shown to account for response biases that are manifest across multiple trials of a visual working memory task. Here we adapt this approach by making the stable fixed points correspond to the multiple items to be remembered within a single trial, in accordance with standard dynamical perspectives on memory, and find evidence that this multi-item model can provide a better account of behavioural data from continuous-report tasks. Additionally, the multi-item model proposes a simple mechanism by which swap errors arise: memory traces diffuse away from their initial state and are captured by the attractors of other items. Swap-error curves reveal the evolution of this process as a continuous function of time throughout the maintenance interval and can be inferred from experimental data. Consistent with previous findings, we find that empirical memory performance is not well characterised by a purely diffusive process but rather by a stochastic process that also embodies error-correcting dynamics.
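A minimal sketch of the single-trial dynamics this abstract describes can be written in a few lines. The item locations, drift strength, noise level and durations below are illustrative assumptions, not the paper's fitted values: a memory trace on a circular feature space drifts toward the nearest of several item attractors while diffusing, and traces captured by a non-target attractor count as swap errors.

```python
# Hypothetical sketch (not the paper's code) of a multi-item stochastic
# attractor: Euler-Maruyama on a circular feature space, drift toward the
# currently nearest attractor plus diffusion. All parameters illustrative.
import math
import random

def circ_dist(a, b):
    """Signed circular distance from b to a, in (-pi, pi]."""
    return math.atan2(math.sin(a - b), math.cos(a - b))

def simulate_trace(target, items, k=2.0, sigma=0.6, dt=0.01, T=2.0, rng=random):
    """Integrate d(theta) = -k*d dt + sigma dW, d = distance to nearest item."""
    theta = target
    for _ in range(int(T / dt)):
        nearest = min(items, key=lambda it: abs(circ_dist(theta, it)))
        theta += (-k * circ_dist(theta, nearest) * dt
                  + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0))
    return theta

def swap_rate(sigma, n_trials=1000, seed=0):
    """Fraction of traces ending nearest a non-target item (a swap error)."""
    rng = random.Random(seed)
    items = [0.0, 2 * math.pi / 3, 4 * math.pi / 3]  # three remembered items
    swaps = 0
    for _ in range(n_trials):
        final = simulate_trace(0.0, items, sigma=sigma, rng=rng)
        nearest = min(items, key=lambda it: abs(circ_dist(final, it)))
        swaps += nearest != 0.0
    return swaps / n_trials
```

Sweeping the maintenance duration T instead of sigma traces out the swap-error curve as a continuous function of time, as in the paper.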
Affiliation(s)
- W. Penny: School of Psychology, University of East Anglia, Norwich, United Kingdom
2. Mitjans AG, Linares DP, Naranjo CL, Gonzalez AA, Li M, Wang Y, Reyes RG, Bringas-Vega ML, Minati L, Evans AC, Valdés-Sosa PA. Accurate and Efficient Simulation of Very High-Dimensional Neural Mass Models with Distributed-Delay Connectome Tensors. Neuroimage 2023;274:120137. PMID: 37116767; DOI: 10.1016/j.neuroimage.2023.120137.
Abstract
This paper introduces methods and a novel toolbox that efficiently integrate high-dimensional Neural Mass Models (NMMs) specified by two essential components. The first is the set of nonlinear Random Differential Equations (RDEs) describing the dynamics of each neural mass. The second is the highly sparse three-dimensional Connectome Tensor (CT) that encodes the strength of the connections and the delays of information transfer along the axons of each connection. To date, simplistic assumptions prevail about the delays in the CT, which are often assumed to be Dirac-delta functions. In reality, delays are distributed due to the heterogeneous conduction velocities of the axons connecting neural masses. Such distributed-delay CTs are challenging to model. Our approach implements these models by leveraging several innovations. Semi-analytical integration of the RDEs is done with the Local Linearization (LL) scheme for each neural mass model, ensuring dynamical fidelity to the original continuous-time nonlinear dynamics; this semi-analytic LL integration is highly computationally efficient. In addition, a tensor representation of the CT facilitates parallel computation and seamlessly allows modeling distributed-delay CTs with any level of complexity or realism. Consequently, our algorithm scales linearly with the number of neural masses and the number of equations that represent them, in contrast with more traditional methods that scale quadratically at best. To illustrate the toolbox's usefulness, we simulate a single Zetterberg-Jansen-Rit (ZJR) cortical column, a single thalamo-cortical unit, and a toy example comprising 1000 interconnected ZJR columns. These simulations demonstrate the consequences of modifying the CT, especially of introducing distributed delays, and illustrate the complexity of explaining EEG oscillations, e.g., split alpha peaks, which appear only for distinct neural masses. We provide an open-source script for the toolbox.
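The Local Linearization idea the abstract leans on can be illustrated in one dimension (the toolbox applies it per neural mass with matrix exponentials; this scalar sketch and the logistic test problem are assumptions for illustration, not the toolbox API): each step solves the ODE linearized at the current state exactly, x(n+1) = x(n) + (exp(J*h) - 1)/J * f(x(n)) with J = f'(x(n)), which tracks the continuous-time dynamics far better than explicit Euler at the same step size.

```python
# Hedged 1-D illustration of the Local Linearization (LL) scheme on the
# logistic ODE dx/dt = x(1-x), whose exact solution is known.
import math

def f(x):
    return x * (1.0 - x)        # logistic growth

def df(x):
    return 1.0 - 2.0 * x        # scalar Jacobian

def ll_step(x, h):
    """One LL step: exact solution of the locally linearized ODE."""
    J = df(x)
    if abs(J) < 1e-12:          # limit (exp(J*h)-1)/J -> h as J -> 0
        return x + h * f(x)
    return x + (math.exp(J * h) - 1.0) / J * f(x)

def euler_step(x, h):
    return x + h * f(x)

def integrate(step, x0, h, T):
    x = x0
    for _ in range(int(round(T / h))):
        x = step(x, h)
    return x

def exact(x0, t):
    return 1.0 / (1.0 + (1.0 / x0 - 1.0) * math.exp(-t))
```

At a coarse step (h = 0.5) the LL trajectory stays noticeably closer to the exact solution than Euler, which is the accuracy-per-step trade the toolbox exploits at scale.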
Affiliation(s)
- Anisleidy González Mitjans: University of Electronic Science and Technology of China, Chengdu, Sichuan, China; Department of Mathematics, University of Havana, Havana, Cuba
- Deirel Paz Linares: University of Electronic Science and Technology of China, Chengdu, Sichuan, China; Department of Neuroinformatics, Cuban Neuroscience Center, Havana, Cuba
- Carlos López Naranjo: University of Electronic Science and Technology of China, Chengdu, Sichuan, China
- Ariosky Areces Gonzalez: University of Electronic Science and Technology of China, Chengdu, Sichuan, China; Department of Informatics, University of Pinar del Rio, Pinar del Rio, Cuba
- Min Li: University of Electronic Science and Technology of China, Chengdu, Sichuan, China
- Ying Wang: University of Electronic Science and Technology of China, Chengdu, Sichuan, China
- María L Bringas-Vega: University of Electronic Science and Technology of China, Chengdu, Sichuan, China; Department of Neuroinformatics, Cuban Neuroscience Center, Havana, Cuba
- Ludovico Minati: University of Electronic Science and Technology of China, Chengdu, Sichuan, China; Center for Mind/Brain Sciences (CIMeC), University of Trento, 38100 Trento, Italy
- Alan C Evans: McGill Centre for Integrative Neuroscience, Ludmer Centre for Neuroinformatics and Mental Health, Montreal Neurological Institute, Canada
- Pedro A Valdés-Sosa: University of Electronic Science and Technology of China, Chengdu, Sichuan, China; Department of Neuroinformatics, Cuban Neuroscience Center, Havana, Cuba
3. Ang CYS, Chiew YS, Wang X, Mat Nor MB, Cove ME, Chase JG. Predicting mechanically ventilated patients' future respiratory system elastance: a stochastic modelling approach. Comput Biol Med 2022;151:106275. PMID: 36375413; DOI: 10.1016/j.compbiomed.2022.106275.
Abstract
BACKGROUND AND OBJECTIVE: Respiratory mechanics of mechanically ventilated patients evolve significantly with time, disease state and mechanical ventilation (MV) treatment. Existing deterministic data prediction methods fail to comprehensively describe the multiple sources of heterogeneity of biological systems. This research presents two respiratory mechanics stochastic models with increased prediction accuracy and range, offering improved clinical utility in MV treatment.
METHODS: Two stochastic models (SM2 and SM3) were developed using retrospective patient respiratory elastance (Ers) from two clinical cohorts, averaged over time intervals of 10 and 30 min respectively. A stochastic model from a previous study (SM1) was used to benchmark performance. The stochastic models were clinically validated on an independent retrospective clinical cohort of 14 patients. Differences in predictive ability were evaluated using the difference in percentile lines and cumulative distribution density (CDD) curves.
RESULTS: Clinical validation shows all three models captured more than 98% (median) of future Ers data within the 5th to 95th percentile range. Comparisons of stochastic model percentile lines reported a maximum mean absolute percentage difference of 5.2%. The absolute differences of the CDD curves were less than 0.25 in the range 5 < Ers (cmH2O/L) < 85, suggesting similar predictive capabilities within this clinically relevant Ers range.
CONCLUSION: The new stochastic models significantly improve prediction, clinical utility, and thus feasibility for synchronisation with clinical interventions. Paired with other MV protocols, the stochastic models developed can potentially form part of decision support systems, providing guided, personalised, and safe MV treatment.
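A minimal sketch of the kind of stochastic model described here: conditional percentile bands for the next interval's elastance given the current one, estimated by binning retrospective (current, next) pairs, then validated by the fraction of held-out transitions falling inside the 5th-95th band. The bin layout and the bounded-random-walk data generator are assumptions standing in for real patient Ers series.

```python
# Hypothetical sketch of percentile-band stochastic prediction on synthetic
# elastance data; everything numeric here is illustrative.
import random

def percentile(sorted_xs, p):
    """Nearest-rank percentile of an ascending list (p in [0, 100])."""
    i = int(round(p / 100.0 * (len(sorted_xs) - 1)))
    return sorted_xs[min(len(sorted_xs) - 1, max(0, i))]

def fit_bands(pairs, n_bins=20, lo=5.0, hi=85.0):
    """Per-bin 5th-95th percentile band of next Ers given current Ers."""
    width = (hi - lo) / n_bins
    by_bin = [[] for _ in range(n_bins)]
    for cur, nxt in pairs:
        by_bin[min(n_bins - 1, max(0, int((cur - lo) / width)))].append(nxt)
    bands = []
    for vals in by_bin:
        vals.sort()
        bands.append((percentile(vals, 5), percentile(vals, 95)) if vals else None)
    return bands, lo, width

def coverage(model, pairs):
    """Fraction of transitions whose next value lies inside its band."""
    bands, lo, width = model
    inside = total = 0
    for cur, nxt in pairs:
        band = bands[min(len(bands) - 1, max(0, int((cur - lo) / width)))]
        if band is not None:
            total += 1
            inside += band[0] <= nxt <= band[1]
    return inside / total

def synthetic_ers(n=20000, seed=3):
    """Illustrative bounded random walk standing in for interval-averaged Ers."""
    rng = random.Random(seed)
    x, pairs = 30.0, []
    for _ in range(n):
        nxt = min(85.0, max(5.0, x + rng.gauss(0.0, 2.0)))
        pairs.append((x, nxt))
        x = nxt
    return pairs
```

On held-out data from the same process, coverage of a 5th-95th band should sit near 90%, mirroring the validation statistic reported in the abstract.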
Affiliation(s)
- Xin Wang: School of Engineering, Monash University Malaysia, Selangor, Malaysia
- Mohd Basri Mat Nor: Kulliyah of Medicine, International Islamic University Malaysia, Kuantan, 25200, Malaysia
- Matthew E Cove: Division of Respiratory & Critical Care Medicine, Department of Medicine, National University Health System, Singapore
- J Geoffrey Chase: Center of Bioengineering, University of Canterbury, Christchurch, New Zealand
4. Callaham JL, Loiseau JC, Rigas G, Brunton SL. Nonlinear stochastic modelling with Langevin regression. Proc Math Phys Eng Sci 2021;477:20210092. PMID: 35153564; PMCID: PMC8299553; DOI: 10.1098/rspa.2021.0092.
Abstract
Many physical systems characterized by nonlinear multiscale interactions can be modelled by treating unresolved degrees of freedom as random fluctuations. However, even when the microscopic governing equations and qualitative macroscopic behaviour are known, it is often difficult to derive a stochastic model that is consistent with observations. This is especially true for systems such as turbulence where the perturbations do not behave like Gaussian white noise, introducing non-Markovian behaviour to the dynamics. We address these challenges with a framework for identifying interpretable stochastic nonlinear dynamics from experimental data, using forward and adjoint Fokker-Planck equations to enforce statistical consistency. If the form of the Langevin equation is unknown, a simple sparsifying procedure can provide an appropriate functional form. We demonstrate that this method can learn stochastic models in two artificial examples: recovering a nonlinear Langevin equation forced by coloured noise and approximating the second-order dynamics of a particle in a double-well potential with the corresponding first-order bifurcation normal form. Finally, we apply Langevin regression to experimental measurements of a turbulent bluff body wake and show that the statistical behaviour of the centre of pressure can be described by the dynamics of the corresponding laminar flow driven by nonlinear state-dependent noise.
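The core identification step can be sketched in its simplest form: estimate the drift and diffusion of a Langevin equation from a time series via conditional increment moments. The Ornstein-Uhlenbeck test system and all parameter values are assumptions for illustration; the paper's full method additionally enforces Fokker-Planck-based statistical consistency and sparsity, which this toy omits.

```python
# Toy drift/diffusion identification from a simulated path of
# dx = -a*x dt + b dW (Ornstein-Uhlenbeck); parameters illustrative.
import math
import random

def simulate_ou(a=1.0, b=0.5, dt=0.01, n=200_000, seed=7):
    """Euler-Maruyama sample path of dx = -a*x dt + b dW."""
    rng = random.Random(seed)
    xs, x = [], 0.0
    for _ in range(n):
        xs.append(x)
        x += -a * x * dt + b * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    xs.append(x)
    return xs

def estimate_langevin(xs, dt):
    """Least-squares drift slope (through the origin) and diffusion amplitude
    from the first and second conditional moments of the increments."""
    dxs = [x1 - x0 for x0, x1 in zip(xs, xs[1:])]
    num = sum(x * dx for x, dx in zip(xs, dxs))
    den = sum(x * x for x in xs[:-1])
    a_hat = -num / (den * dt)                                        # drift
    b_hat = math.sqrt(sum(dx * dx for dx in dxs) / (len(dxs) * dt))  # diffusion
    return a_hat, b_hat
```

When the perturbations are not white, as the abstract emphasizes for turbulence, these naive moment estimates become biased, which is precisely what motivates the adjoint Fokker-Planck machinery of Langevin regression.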
Affiliation(s)
- J. L. Callaham: Department of Mechanical Engineering, University of Washington, Seattle, WA 98195, USA
- J.-C. Loiseau: Laboratoire DynFluid, Arts et Métiers ParisTech, 75013 Paris, France
- G. Rigas: Department of Aeronautics, Imperial College London, London SW7 2AZ, UK
- S. L. Brunton: Department of Mechanical Engineering, University of Washington, Seattle, WA 98195, USA
5. Dematties D, Rizzi S, Thiruvathukal GK, Pérez MD, Wainselboim A, Zanutto BS. A Computational Theory for the Emergence of Grammatical Categories in Cortical Dynamics. Front Neural Circuits 2020;14:12. PMID: 32372918; PMCID: PMC7179825; DOI: 10.3389/fncir.2020.00012.
Abstract
There is general agreement in psycholinguistics that syntax and meaning are unified precisely and very quickly during online sentence processing. Although several theories have advanced arguments regarding the neurocomputational bases of this phenomenon, we argue that these theories could benefit from incorporating neurophysiological data on the constraints cortical dynamics impose in brain tissue. In addition, some theories presuppose the integration of complex optimization methods in neural tissue. In this paper we attempt to fill these gaps by introducing a computational model inspired by the dynamics of cortical tissue. In our modeling approach, proximal afferent dendrites produce stochastic cellular activations, distal dendritic branches contribute independently to somatic depolarization by means of dendritic spikes, and prediction failures produce massive firing events that prevent the formation of sparse distributed representations. The model combines semantic and coarse-grained syntactic constraints for each word in a sentence context until discrimination of grammatically related word functions emerges spontaneously from the sole correlation of lexical information from different sources, without applying complex optimization methods. By means of support vector machine techniques, we show that the sparse activation features returned by our approach, bootstrapping from the features returned by word-embedding mechanisms, are well suited to accomplish grammatical-function classification of individual words in a sentence. In this way we develop a biologically guided computational explanation for linguistically relevant unification processes in cortex which connects psycholinguistics to neurobiological accounts of language. We also suggest that the computational hypotheses established in this research could foster future work on biologically inspired learning algorithms for natural language processing applications.
Affiliation(s)
- Dario Dematties: Universidad de Buenos Aires, Facultad de Ingeniería, Instituto de Ingeniería Biomédica, Buenos Aires, Argentina
- Silvio Rizzi: Argonne National Laboratory, Lemont, IL, United States
- George K Thiruvathukal: Argonne National Laboratory, Lemont, IL, United States; Computer Science Department, Loyola University Chicago, Chicago, IL, United States
- Mauricio David Pérez: Microwaves in Medical Engineering Group, Division of Solid-State Electronics, Department of Electrical Engineering, Uppsala University, Uppsala, Sweden
- Alejandro Wainselboim: Centro Científico Tecnológico Conicet Mendoza, Instituto de Ciencias Humanas, Sociales y Ambientales, Mendoza, Argentina
- B Silvano Zanutto: Universidad de Buenos Aires, Facultad de Ingeniería, Instituto de Ingeniería Biomédica, Buenos Aires, Argentina; Instituto de Biología y Medicina Experimental-CONICET, Buenos Aires, Argentina
6. Mitchell BA, Lauharatanahirun N, Garcia JO, Wymbs N, Grafton S, Vettel JM, Petzold LR. A Minimum Free Energy Model of Motor Learning. Neural Comput 2019;31:1945-1963. PMID: 31393824; DOI: 10.1162/neco_a_01219.
Abstract
Even highly trained behaviors demonstrate variability, which is correlated with performance on current and future tasks. An objective of motor learning that is general enough to explain these phenomena has not been precisely formulated. In this six-week longitudinal learning study, participants practiced a set of motor sequences each day, and neuroimaging data were collected on days 1, 14, 28, and 42 to capture the neural correlates of the learning process. In our analysis, we first modeled the underlying neural and behavioral dynamics during learning. Our results demonstrate that the densities of whole-brain response, task-active regional response, and behavioral performance evolve according to a Fokker-Planck equation during the acquisition of a motor skill. We show that this implies that the brain concurrently optimizes the entropy of a joint density over neural response and behavior (as measured by sampling over multiple trials and subjects) and the expected performance under this density; we call this formulation of learning minimum free energy learning (MFEL). This model provides an explanation as to how behavioral variability can be tuned while simultaneously improving performance during learning. We then develop a novel variant of inverse reinforcement learning to retrieve the cost function optimized by the brain during the learning process, as well as the parameter used to tune variability. We show that this population-level analysis can be used to derive a learning objective that each subject optimizes during his or her study. In this way, MFEL effectively acts as a unifying principle, allowing users to precisely formulate learning objectives and infer their structure.
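The entropy-performance trade-off MFEL formalizes can be caricatured with a one-parameter toy: for a quadratic cost and a Gaussian behavioral density, the free energy F(sigma) = E[x^2] - T*H(sigma) is minimized at sigma^2 = T/2, so a temperature-like weight T directly tunes how much behavioral variability is retained. The quadratic cost and Gaussian family are illustrative assumptions, not the paper's inferred objective.

```python
# Toy minimum-free-energy trade-off: expected quadratic cost minus a
# temperature-weighted Gaussian entropy, minimized over the density width.
import math

def free_energy(sigma, T):
    """F = E[x^2] - T*H for x ~ N(0, sigma^2)."""
    cost = sigma ** 2
    entropy = 0.5 * math.log(2.0 * math.pi * math.e * sigma ** 2)
    return cost - T * entropy

def optimal_sigma(T, grid=None):
    """Grid search for the variability level minimizing free energy."""
    grid = grid or [0.01 * i for i in range(1, 301)]
    return min(grid, key=lambda s: free_energy(s, T))
```

The analytic optimum sigma* = sqrt(T/2) shows the claimed effect in miniature: raising T (tolerating entropy) increases the optimal variability while the cost term keeps pulling performance up.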
Affiliation(s)
- B A Mitchell: Department of Computer Science, University of California, Santa Barbara, Santa Barbara, CA 93106, U.S.A.
- N Lauharatanahirun: Human Research and Engineering Directorate, The CCDC Army Research Laboratory, Aberdeen Proving Ground, MD 21005, U.S.A.; Annenberg School for Communication, University of Pennsylvania, Philadelphia, PA 19104, U.S.A.
- J O Garcia: Human Research and Engineering Directorate, The CCDC Army Research Laboratory, Aberdeen Proving Ground, MD 21005, U.S.A.; Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104, U.S.A.
- N Wymbs: Department of Physical Medicine and Rehabilitation, Johns Hopkins Medical Institution, Baltimore, MD 21205, U.S.A.
- S Grafton: Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA 93106, U.S.A.
- J M Vettel: Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA 93106, U.S.A.; Human Research and Engineering Directorate, The CCDC Army Research Laboratory, Aberdeen Proving Ground, MD 21005, U.S.A.; Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104, U.S.A.
- L R Petzold: Department of Computer Science and Department of Mechanical Engineering, University of California, Santa Barbara, Santa Barbara, CA 93106, U.S.A.
7. Gollo LL, Karim M, Harris JA, Morley JW, Breakspear M. Hierarchical and Nonlinear Dynamics in Prefrontal Cortex Regulate the Precision of Perceptual Beliefs. Front Neural Circuits 2019;13:27. PMID: 31068794; PMCID: PMC6491505; DOI: 10.3389/fncir.2019.00027.
Abstract
Actions are shaped not only by the content of our percepts but also by our confidence in them. To study the cortical representation of perceptual precision in decision making, we acquired functional imaging data whilst participants performed two vibrotactile forced-choice discrimination tasks: a fast-slow judgment and a same-different judgment. The first task requires a comparison of the perceived vibrotactile frequencies to decide which one is faster. The second task, however, requires that the estimated difference between those frequencies is weighed against the precision of each percept: if both stimuli are very precisely perceived, then any slight difference is more likely to be identified than if the percepts are uncertain. We additionally presented either pure sinusoidal or temporally degraded "noisy" stimuli, whose frequency/period differed slightly from cycle to cycle. In this way, we were able to manipulate perceptual precision. We report a constellation of cortical regions in the rostral prefrontal cortex (PFC), dorsolateral PFC (DLPFC) and superior frontal gyrus (SFG) associated with the perception of stimulus difference, the presence of stimulus noise and the interaction between these factors. Dynamic causal modeling (DCM) of these data suggested a nonlinear, hierarchical model, whereby activity in the rostral PFC (evoked by the presence of stimulus noise) mutually interacts with activity in the DLPFC (evoked by stimulus differences). This model of effective connectivity outperformed competing models with serial and parallel interactions, hence providing a unique insight into the hierarchical architecture underlying the representation and appraisal of perceptual belief and precision in the PFC.
Affiliation(s)
- Leonardo L Gollo: QIMR Berghofer Medical Research Institute, Brisbane, QLD, Australia; Centre of Excellence for Integrative Brain Function, QIMR Berghofer Medical Research Institute, Brisbane, QLD, Australia
- Muhsin Karim: School of Psychiatry, Faculty of Medicine, University of New South Wales, Sydney, NSW, Australia; The Black Dog Institute, Sydney, NSW, Australia
- Justin A Harris: School of Psychology, The University of Sydney, Sydney, NSW, Australia
- John W Morley: School of Medicine, Western Sydney University, Sydney, NSW, Australia
- Michael Breakspear: QIMR Berghofer Medical Research Institute, Brisbane, QLD, Australia; Centre of Excellence for Integrative Brain Function, QIMR Berghofer Medical Research Institute, Brisbane, QLD, Australia; School of Psychiatry, Faculty of Medicine, University of New South Wales, Sydney, NSW, Australia; The Black Dog Institute, Sydney, NSW, Australia; Metro North Mental Health Service, Brisbane, QLD, Australia; Hunter Medical Research Institute, University of Newcastle, New Lambton Heights, NSW, Australia
8. Howard N, Hussain A. The Fundamental Code Unit of the Brain: Towards a New Model for Cognitive Geometry. Cognit Comput 2018;10:426-436. PMID: 29881471; PMCID: PMC5971038; DOI: 10.1007/s12559-017-9538-5.
Abstract
This paper discusses the problems arising from the multidisciplinary nature of cognitive research and the need to conceptually unify insights from multiple fields into the phenomena that drive cognition. Specifically, the Fundamental Code Unit (FCU) is proposed as a means to better quantify the intelligent thought process at multiple levels of analysis, from the linguistic and behavioral output the brain produces down to the chemical and physical processes within the brain that drive it. The proposed method aims to model efficiently the complex decision-making processes performed by the brain.
9. Dynamic models of large-scale brain activity. Nat Neurosci 2017;20:340-352. PMID: 28230845; DOI: 10.1038/nn.4497.
10. Jurica P, Gepshtein S, Tyukin I, van Leeuwen C. Sensory optimization by stochastic tuning. Psychol Rev 2014;120:798-816. PMID: 24219849; DOI: 10.1037/a0034192.
Abstract
Individually, visual neurons are each selective for several aspects of stimulation, such as stimulus location, frequency content, and speed. Collectively, the neurons implement the visual system's preferential sensitivity to some stimuli over others, manifested in behavioral sensitivity functions. We ask how the individual neurons are coordinated to optimize visual sensitivity. We model synaptic plasticity in a generic neural circuit and find that stochastic changes in the strengths of synaptic connections entail fluctuations in the parameters of neural receptive fields. The fluctuations correlate with the uncertainty of sensory measurement in individual neurons: the higher the uncertainty, the larger the amplitude of fluctuation. We show that this simple relationship is sufficient for the stochastic fluctuations to steer the sensitivities of neurons toward a characteristic distribution, from which follows a sensitivity function that is observed in human psychophysics and that is predicted by a theory of optimal allocation of receptive fields. The optimal allocation arises in our simulations without supervision or feedback about system performance and independently of coupling between neurons, making the system highly adaptive and sensitive to prevailing stimulation.
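The central relationship here (larger fluctuations where measurement uncertainty is higher) can be caricatured in a few lines: a tuning parameter performing a random walk whose step size grows with local "uncertainty" spends most of its time where uncertainty is low, concentrating the distribution without any feedback or reward signal. The uncertainty profile and boundary handling below are assumptions for illustration.

```python
# Toy state-dependent random walk: no supervision, yet occupancy piles up
# where the (assumed) uncertainty profile is smallest.
import random

def uncertainty(x):
    """Illustrative profile: measurement uncertainty lowest at x = 0."""
    return 0.01 + 0.2 * abs(x)

def tune(steps=200_000, x0=0.9, seed=11):
    """Random walk on [-1, 1] with step amplitude proportional to local
    uncertainty; returns the fraction of time spent in |x| < 0.25."""
    rng = random.Random(seed)
    x, occupancy = x0, 0
    for _ in range(steps):
        x += uncertainty(x) * rng.gauss(0.0, 1.0)
        x = max(-1.0, min(1.0, x))   # crude clamp standing in for reflection
        occupancy += abs(x) < 0.25
    return occupancy / steps
```

With a constant step size the occupancy of that interval would sit near the uniform value of 0.25; the state-dependent amplitude alone drives it far higher, which is the unsupervised steering mechanism the abstract describes.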
11. Rigatos GG. Estimation of wave-type dynamics in neurons' membrane with the use of the Derivative-free nonlinear Kalman Filter. Neurocomputing 2014. DOI: 10.1016/j.neucom.2013.10.016.
12. Venkateswaran N, Sekhar S, Thirupatchur Sanjayasarathy T, Krishnan SN, Kabaleeswaran DK, Ramanathan S, Narayanasamy N, Jagathrakshakan SS, Vignesh SR. Energetics based spike generation of a single neuron: simulation results and analysis. Front Neuroenergetics 2012;4:2. PMID: 22347180; PMCID: PMC3269776; DOI: 10.3389/fnene.2012.00002.
Abstract
Existing current-based models that capture spike activity, though useful in studying the information processing capabilities of neurons, fail to throw light on their internal functioning. It is imperative to develop a model that captures the spike train of a neuron as a function of its intracellular parameters for non-invasive diagnosis of diseased neurons. This is the first article to present such an integrated model, one that quantifies the interdependency between spike activity and intracellular energetics. The spike trains generated by our integrated model will throw greater light on intracellular energetics than existing current-based models. Now, an abnormality in the spike of a diseased neuron can be linked to, and hence effectively analyzed at, the energetics level. Spectral analysis of the generated spike trains in a time-frequency domain will help identify abnormalities in the internals of a neuron. As a case study, the parameters of our model are tuned for Alzheimer's disease and its resultant spike trains are studied and presented. This initiative ultimately aims to encompass the entire molecular signaling pathways of neuronal bioenergetics, linking them to voltage spike initiation and propagation; owing to the lack of experimental data quantifying the interdependencies among the parameters, the model at this stage adopts a particular level of functionality and is shown as an approach to study and perform disease modeling at the spike-train and mitochondrial-bioenergetics level.
13. Spiegler A, Knösche TR, Schwab K, Haueisen J, Atay FM. Modeling brain resonance phenomena using a neural mass model. PLoS Comput Biol 2011;7:e1002298. PMID: 22215992; PMCID: PMC3245303; DOI: 10.1371/journal.pcbi.1002298.
Abstract
Stimulation with rhythmic light flicker (photic driving) plays an important role in the diagnosis of schizophrenia, mood disorder, migraine, and epilepsy. In particular, the adjustment of spontaneous brain rhythms to the stimulus frequency (entrainment) is used to assess the functional flexibility of the brain. We aim to gain deeper understanding of the mechanisms underlying this technique and to predict the effects of stimulus frequency and intensity. For this purpose, a modified Jansen and Rit neural mass model (NMM) of a cortical circuit is used. This mean field model has been designed to strike a balance between mathematical simplicity and biological plausibility. We reproduced the entrainment phenomenon observed in EEG during a photic driving experiment. More generally, we demonstrate that such a single area model can already yield very complex dynamics, including chaos, for biologically plausible parameter ranges. We chart the entire parameter space by means of characteristic Lyapunov spectra and Kaplan-Yorke dimension as well as time series and power spectra. Rhythmic and chaotic brain states were found virtually next to each other, such that small parameter changes can give rise to switching from one to another. Strikingly, this characteristic pattern of unpredictability generated by the model was matched to the experimental data with reasonable accuracy. These findings confirm that the NMM is a useful model of brain dynamics during photic driving. In this context, it can be used to study the mechanisms of, for example, perception and epileptic seizure generation. In particular, it enabled us to make predictions regarding the stimulus amplitude in further experiments for improving the entrainment effect.

Neuroscience aims to understand the enormously complex function of the normal and diseased brain. This, in turn, is the key to explaining human behavior and to developing novel diagnostic and therapeutic procedures. We develop and use models of mean activity in a single brain area, which provide a balance between tractability and plausibility. We use such a model to explain the resonance phenomenon in a photic driving experiment, which is routinely applied in the diagnosis of various diseases including epilepsy, migraine, schizophrenia and depression. Based on the model, we make predictions on the outcome of similar resonance experiments with periodic stimulation of the patients or participants. Our results are important for researchers and clinicians analyzing brain or behavioral data following periodic input.
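A minimal, unmodified Jansen-Rit column conveys the flavor of the model class; the paper's modifications, the flicker drive, and the Lyapunov analysis are not reproduced here, and the constant-input value below is just one assumed point of the standard oscillatory regime.

```python
# Standard Jansen-Rit neural mass model, Euler-integrated; the constant
# external input p = 220 is an illustrative choice in the alpha regime.
import math

def sigm(v, e0=2.5, r=0.56, v0=6.0):
    """Sigmoidal potential-to-rate conversion."""
    return 2.0 * e0 / (1.0 + math.exp(r * (v0 - v)))

def jansen_rit(p_ext, T=2.0, dt=1e-4):
    """Simulate the pyramidal membrane potential y1 - y2 (EEG proxy)."""
    A, B, a, b, C = 3.25, 22.0, 100.0, 50.0, 135.0
    C1, C2, C3, C4 = C, 0.8 * C, 0.25 * C, 0.25 * C
    y = [0.0] * 6          # y0..y2 and their derivatives y3..y5
    out = []
    for i in range(int(T / dt)):
        p = p_ext(i * dt)
        y0, y1, y2, y3, y4, y5 = y
        dy = [y3, y4, y5,
              A * a * sigm(y1 - y2) - 2 * a * y3 - a * a * y0,
              A * a * (p + C2 * sigm(C1 * y0)) - 2 * a * y4 - a * a * y1,
              B * b * C4 * sigm(C3 * y0) - 2 * b * y5 - b * b * y2]
        y = [yi + dt * di for yi, di in zip(y, dy)]
        out.append(y[1] - y[2])
    return out

def dominant_freq(sig, dt, discard=1.0):
    """Crude frequency estimate: upward mean-crossings per second."""
    sig = sig[int(discard / dt):]
    m = sum(sig) / len(sig)
    crossings = sum(1 for s0, s1 in zip(sig, sig[1:])
                    if (s0 - m) < 0 <= (s1 - m))
    return crossings / (len(sig) * dt)
```

Replacing the constant input with a sinusoid at the flicker frequency is the natural next step for reproducing entrainment as studied in the paper.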
Affiliation(s)
- Andreas Spiegler: Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
14. Dynamic causal modelling: A critical review of the biophysical and statistical foundations. Neuroimage 2011;58:312-322. DOI: 10.1016/j.neuroimage.2009.11.062.
15. Shaposhnyk V, Villa AEP. Reciprocal projections in hierarchically organized evolvable neural circuits affect EEG-like signals. Brain Res 2011;1434:266-276. PMID: 21890119; DOI: 10.1016/j.brainres.2011.08.018.
Abstract
Modular architecture is a hallmark of many brain circuits. In the cerebral cortex, in particular, it has been observed that reciprocal connections are often present between functionally interconnected areas that are hierarchically organized. We investigate the effect of reciprocal connections in a network of modules of simulated spiking neurons. The neural activity is recorded by means of virtual electrodes and EEG-like signals, called electrochipograms (EChG), analyzed by time- and frequency-domain methods. A major feature of our approach is the implementation of important bio-inspired processes that affect the connectivity within a neural module: synaptogenesis, cell death, spike-timing-dependent plasticity and synaptic pruning. These bio-inspired processes drive the build-up of auto-associative links within each module, which generate an areal activity, recorded by EChG, that reflects the changes in the corresponding functional connectivity within and between neuronal modules. We found that circuits with intra-layer reciprocal projections exhibited an enhanced stimulus-locked response. We show evidence that all networks of modules are able to process and maintain patterns of activity associated with the stimulus after its offset. The presence of feedback and horizontal projections was necessary to evoke cross-layer coherence in bursts of -frequency at regular intervals. These findings bring new insights to the understanding of the relation between the functional organization of neural circuits and the electrophysiological signals generated by large cell assemblies. This article is part of a Special Issue entitled "Neural Coding".
Collapse
Affiliation(s)
- Vladyslav Shaposhnyk
- Neuroheuristic Research Group, Information Science Inst., Univ. of Lausanne, Switzerland
| | | |
Collapse
|
16
|
Rosenbaum R, Marpeau F, Ma J, Barua A, Josić K. Finite volume and asymptotic methods for stochastic neuron models with correlated inputs. J Math Biol 2011; 65:1-34. [PMID: 21717104 DOI: 10.1007/s00285-011-0451-3] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2010] [Revised: 06/07/2011] [Indexed: 11/29/2022]
Abstract
We consider a pair of stochastic integrate and fire neurons receiving correlated stochastic inputs. The evolution of this system can be described by the corresponding Fokker-Planck equation with non-trivial boundary conditions resulting from the refractory period and firing threshold. We propose a finite volume method that is orders of magnitude faster than the Monte Carlo methods traditionally used to model such systems. The resulting numerical approximations are proved to be accurate, nonnegative and integrate to 1. We also approximate the transient evolution of the system using an Ornstein-Uhlenbeck process, and use the result to examine the properties of the joint output of cell pairs. The results suggest that the joint output of a cell pair is most sensitive to changes in input variance, and less sensitive to changes in input mean and correlation.
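A minimal sketch of the kind of system this abstract analyses: two leaky integrate-and-fire neurons receiving partially correlated noise, simulated by Euler-Maruyama rather than the paper's finite-volume method. All parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def simulate_lif_pair(c=0.5, mu=1.1, sigma=0.3, tau=20.0, v_th=1.0,
                      v_reset=0.0, dt=0.1, steps=100_000, seed=0):
    """Euler-Maruyama simulation of two leaky integrate-and-fire neurons
    receiving partially correlated white-noise input (input correlation c).
    Returns the two spike-time lists. Parameters are illustrative."""
    rng = np.random.default_rng(seed)
    v = np.zeros(2)
    spikes = ([], [])
    for t in range(steps):
        # one shared and two private noise sources give input correlation c
        common = rng.standard_normal()
        private = rng.standard_normal(2)
        noise = np.sqrt(c) * common + np.sqrt(1.0 - c) * private
        v += (mu - v) * dt / tau + sigma * np.sqrt(dt / tau) * noise
        for k in range(2):
            if v[k] >= v_th:          # threshold crossing: emit spike, reset
                v[k] = v_reset
                spikes[k].append(t * dt)
    return spikes
```

Sweeping `c`, `mu` or `sigma` and correlating the binned output spike counts is one way to probe the sensitivity claims of the abstract, at Monte Carlo cost.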
Collapse
|
17
|
Raman K. A stochastic differential equation analysis of cerebrospinal fluid dynamics. Fluids Barriers CNS 2011; 8:9. [PMID: 21349157 PMCID: PMC3042983 DOI: 10.1186/2045-8118-8-9] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2010] [Accepted: 01/18/2011] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND Clinical measurements of intracranial pressure (ICP) over time show fluctuations around the deterministic time path predicted by a classic mathematical model in hydrocephalus research. Thus an important issue in mathematical research on hydrocephalus remains unaddressed--modeling the effect of noise on CSF dynamics. Our objective is to mathematically model the noise in the data. METHODS The classic model relating the temporal evolution of ICP in pressure-volume studies to infusions is a nonlinear differential equation based on natural physical analogies between CSF dynamics and an electrical circuit. Brownian motion was incorporated into the differential equation describing CSF dynamics to obtain a nonlinear stochastic differential equation (SDE) that accommodates the fluctuations in ICP. RESULTS The SDE is explicitly solved and the dynamic probabilities of exceeding critical levels of ICP under different clinical conditions are computed. A key finding is that the probabilities display strong threshold effects with respect to noise. Above the noise threshold, the probabilities are significantly influenced by the resistance to CSF outflow and the intensity of the noise. CONCLUSIONS Fluctuations in the CSF formation rate increase fluctuations in the ICP and they should be minimized to lower the patient's risk. The nonlinear SDE provides a scientific methodology for dynamic risk management of patients. The dynamic output of the SDE matches the noisy ICP data generated by the actual intracranial dynamics of patients better than the classic model used in prior research.
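The abstract's key quantity is the dynamic probability of ICP exceeding a critical level under noise. A Monte Carlo sketch of that computation, using a toy mean-reverting SDE; the drift form and every parameter value here are hypothetical stand-ins, not the paper's model.

```python
import numpy as np

def icp_exceedance_prob(p_base=10.0, p_crit=20.0, r_out=10.0, i_inf=0.5,
                        sigma=2.0, t_end=60.0, dt=0.01, n_paths=2000, seed=1):
    """Monte Carlo estimate of the probability that pressure exceeds p_crit
    within t_end, for the toy SDE
        dP = (i_inf - (P - p_base)/r_out) dt + sigma dW,
    whose drift pulls P toward the equilibrium p_base + r_out*i_inf (15 here).
    Illustrative parameters only."""
    rng = np.random.default_rng(seed)
    p = np.full(n_paths, p_base)
    exceeded = np.zeros(n_paths, dtype=bool)
    for _ in range(int(t_end / dt)):
        drift = i_inf - (p - p_base) / r_out
        p += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
        exceeded |= p >= p_crit   # record first passage above the critical level
    return exceeded.mean()
```

Even in this toy version the threshold effect described in the results is visible: below a certain noise intensity the exceedance probability is essentially zero, above it the probability rises steeply.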
Collapse
Affiliation(s)
- Kalyan Raman
- Medill IMC Department, Northwestern University, 1870 Campus Drive, Third Floor, Evanston, IL 60208, USA.
| |
Collapse
|
18
|
Chandrasekar VK, Sheeba JH, Lakshmanan M. Mass synchronization: occurrence and its control with possible applications to brain dynamics. CHAOS (WOODBURY, N.Y.) 2010; 20:045106. [PMID: 21198118 DOI: 10.1063/1.3527993] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
Occurrence of strong or mass synchronization of a large number of neuronal populations in the brain characterizes its pathological states. In order to establish an understanding of the mechanism underlying such pathological synchronization, we present a model of coupled populations of phase oscillators representing the interacting neuronal populations. Through numerical analysis, we discuss the occurrence of mass synchronization in the model, where a source population which gets strongly synchronized drives the target populations onto mass synchronization. We hypothesize and identify a possible cause for the occurrence of such a synchronization, which is so far unknown: pathological synchronization is caused not only by an increase in the coupling strength between the populations but also by the strength of synchronization of the drive population. We propose a demand controlled method to control this pathological synchronization by providing a delayed feedback where the strength and frequency of the synchronization determine the strength and the time delay of the feedback. We provide an analytical explanation for the occurrence of pathological synchronization and its control in the thermodynamic limit.
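The drive mechanism described above can be sketched with two Kuramoto-type populations in mean-field form: a strongly coupled source entrains a weakly coupled target. Couplings, frequency spread and population size below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def order_parameter(theta):
    """Kuramoto order parameter r in [0, 1]; r near 1 means mass synchronization."""
    return np.abs(np.mean(np.exp(1j * theta)))

def drive_target(k_cross, n=200, k_source=4.0, k_target=0.5,
                 t_end=50.0, dt=0.01, seed=2):
    """Source population (internal coupling k_source, supercritical) driving a
    target population (internal coupling k_target, subcritical) with strength
    k_cross. Returns the final order parameters (r_source, r_target)."""
    rng = np.random.default_rng(seed)
    w_s = rng.normal(0.0, 0.5, n)        # natural frequencies, source
    w_t = rng.normal(0.0, 0.5, n)        # natural frequencies, target
    th_s = rng.uniform(0, 2*np.pi, n)
    th_t = rng.uniform(0, 2*np.pi, n)
    for _ in range(int(t_end / dt)):
        zs = np.mean(np.exp(1j * th_s))  # source mean field
        zt = np.mean(np.exp(1j * th_t))  # target mean field
        th_s += dt * (w_s + k_source * np.abs(zs) * np.sin(np.angle(zs) - th_s))
        th_t += dt * (w_t + k_target * np.abs(zt) * np.sin(np.angle(zt) - th_t)
                      + k_cross * np.abs(zs) * np.sin(np.angle(zs) - th_t))
    return order_parameter(th_s), order_parameter(th_t)
```

Comparing the target's order parameter with and without the cross-coupling shows the point of the abstract: the target synchronizes because the drive population is synchronized, even though its own coupling is subcritical.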
Collapse
Affiliation(s)
- V K Chandrasekar
- Centre for Nonlinear Dynamics, School of Physics, Bharathidasan University, Tiruchirappalli, Tamilnadu 620 024, India.
| | | | | |
Collapse
|
19
|
Stochastic Processes and Neuronal Modelling: Quantum Harmonic Oscillator Dynamics in Neural Structures. Neural Process Lett 2010. [DOI: 10.1007/s11063-010-9151-z] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
20
|
Friston KJ, Dolan RJ. Computational and dynamic models in neuroimaging. Neuroimage 2010; 52:752-65. [PMID: 20036335 PMCID: PMC2910283 DOI: 10.1016/j.neuroimage.2009.12.068] [Citation(s) in RCA: 108] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2009] [Revised: 12/13/2009] [Accepted: 12/14/2009] [Indexed: 11/27/2022] Open
Abstract
This article reviews the substantial impact computational neuroscience has had on neuroimaging over the past years. It builds on the distinction between models of the brain as a computational machine and computational models of neuronal dynamics per se; i.e., models of brain function and biophysics. Both sorts of model borrow heavily from computational neuroscience, and both have enriched the analysis of neuroimaging data and the type of questions we address. To illustrate the role of functional models in imaging neuroscience, we focus on optimal control and decision (game) theory; the models used here provide a mechanistic account of neuronal computations and the latent (mental) states represented by the brain. In terms of biophysical modelling, we focus on dynamic causal modelling, with a special emphasis on recent advances in neural-mass models for hemodynamic and electrophysiological time series. Each example emphasises the role of generative models, which embed our hypotheses or questions, and the importance of model comparison (i.e., hypothesis testing). We return to this theme when contextualising recent trends in relation to each other.
Collapse
Affiliation(s)
- Karl J Friston
- The Wellcome Trust Centre for Neuroimaging, University College London, UK.
| | | |
Collapse
|
21
|
Lu W, Rossoni E, Feng J. On a Gaussian neuronal field model. Neuroimage 2010; 52:913-33. [DOI: 10.1016/j.neuroimage.2010.02.075] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2009] [Revised: 02/09/2010] [Accepted: 02/26/2010] [Indexed: 10/19/2022] Open
|
22
|
Abstract
Excitatory synapses are located in confined chemical spaces called the dendritic spines. These are atypical femtoliter-order microdomains where the behavior of even single molecules may have important biological consequences. Powerful chemical biological techniques have now been developed to decipher the dynamic stability of the synapses and to further interrogate the complex properties of neuronal circuits.
Collapse
Affiliation(s)
- Haruhiko Bito
- Department of Neurochemistry at University of Tokyo Graduate School of Medicine, Tokyo, Japan.
| |
Collapse
|
23
|
Marreiros AC, Kiebel SJ, Friston KJ. A dynamic causal model study of neuronal population dynamics. Neuroimage 2010; 51:91-101. [PMID: 20132892 PMCID: PMC3221045 DOI: 10.1016/j.neuroimage.2010.01.098] [Citation(s) in RCA: 44] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2009] [Accepted: 01/27/2010] [Indexed: 11/16/2022] Open
Abstract
In this paper, we compare mean-field and neural-mass models of electrophysiological responses using Bayesian model comparison. In previous work, we presented a mean-field model of neuronal dynamics as observed with magnetoencephalography and electroencephalography. Unlike neural-mass models, which consider only the mean activity of neuronal populations, mean-field models track the distribution (e.g., mean and dispersion) of population activity. This can be important if the mean affects the dispersion or vice versa. Here, we introduce a dynamical causal model based on mean-field (i.e., population density) models of neuronal activity, and use it to assess the evidence for a coupling between the mean and dispersion of hidden neuronal states using observed electromagnetic responses. We used Bayesian model comparison to compare homologous mean-field and neural-mass models, asking whether empirical responses support a role for population variance in shaping neuronal dynamics. We used the mismatch negativity (MMN) and somatosensory evoked potentials (SEP) as representative neuronal responses in physiological and non-physiological paradigms respectively. Our main conclusion was that although neural-mass models may be sufficient for cognitive paradigms, there is clear evidence for an effect of dispersion at the high levels of depolarization evoked in SEP paradigms. This suggests that (i) the dispersion of neuronal states within populations generating evoked brain signals can be manifest in observed brain signals and that (ii) the evidence for their effects can be accessed with dynamic causal model comparison.
Collapse
Affiliation(s)
- André C Marreiros
- The Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, UCL, 12 Queen Square, London, UK WC1N 3BG, UK.
| | | | | |
Collapse
|
24
|
Stevenson IH, Rebesco JM, Miller LE, Körding KP. Inferring functional connections between neurons. Curr Opin Neurobiol 2008; 18:582-8. [PMID: 19081241 DOI: 10.1016/j.conb.2008.11.005] [Citation(s) in RCA: 93] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2008] [Revised: 11/12/2008] [Accepted: 11/13/2008] [Indexed: 11/16/2022]
Abstract
A central question in neuroscience is how interactions between neurons give rise to behavior. In many electrophysiological experiments, the activity of a set of neurons is recorded while sensory stimuli or movement tasks are varied. Tools that aim to reveal underlying interactions between neurons from such data can be extremely useful. Traditionally, neuroscientists have studied these interactions using purely descriptive statistics (cross-correlograms or joint peri-stimulus time histograms). However, the interpretation of such data is often difficult, particularly as the number of recorded neurons grows. Recent research suggests that model-based, maximum likelihood methods can improve these analyses. In addition to estimating neural interactions, application of these techniques has improved decoding of external variables, created novel interpretations of existing electrophysiological data, and may provide new insight into how the brain represents information.
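A toy version of the model-based maximum-likelihood approach the review advocates: a Bernoulli GLM in which one neuron's spiking probability depends on the other neuron's previous spike, with the coupling recovered by gradient ascent on the log-likelihood. The generative parameters and learning settings are hypothetical, chosen only for illustration.

```python
import numpy as np

def fit_coupling(pre, post, lr=0.5, n_iter=2000):
    """Maximum-likelihood fit of (bias b, coupling w) in the Bernoulli GLM
    P(post spikes at t) = sigmoid(b + w * pre[t-1]), by gradient ascent."""
    x, y = pre[:-1].astype(float), post[1:].astype(float)
    b = w = 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(b + w * x)))
        resid = y - p                  # gradient of the Bernoulli log-likelihood
        b += lr * np.mean(resid)
        w += lr * np.mean(resid * x)
    return b, w

def simulate_pair(n=20_000, b_true=-2.0, w_true=2.0, seed=3):
    """Ground-truth pair: pre fires at rate 0.2; post's spiking probability
    rises from sigmoid(b_true) to sigmoid(b_true + w_true) one bin after a
    pre spike."""
    rng = np.random.default_rng(seed)
    pre = rng.random(n) < 0.2
    drive = np.zeros(n)
    drive[1:] = w_true * pre[:-1]
    post = rng.random(n) < 1.0 / (1.0 + np.exp(-(b_true + drive)))
    return pre, post
```

Unlike a cross-correlogram, the fitted coupling has a direct generative interpretation, which is the advantage over purely descriptive statistics that the abstract highlights.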
Collapse
Affiliation(s)
- Ian H Stevenson
- Department of Physiology, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA
| | | | | | | |
Collapse
|
25
|
Exadaktylos AK, Evangelopoulos DS, Wullschleger M, Bürki L, Zimmermann H. Strategic emergency department design: An approach to capacity planning in healthcare provision in overcrowded emergency rooms. J Trauma Manag Outcomes 2008; 2:11. [PMID: 19014621 PMCID: PMC2596780 DOI: 10.1186/1752-2897-2-11] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2008] [Accepted: 11/17/2008] [Indexed: 11/22/2022]
Abstract
Healthcare professionals and the public have increasing concerns about the ability of emergency departments to meet current demands. Increased demand for emergency services, mainly caused by a growing number of minor and moderate injuries, has reached crisis proportions, especially in the United Kingdom. Numerous efforts have been made to explore the complex causes because it is becoming more and more important to provide adequate healthcare within tight budgets. Optimisation of patient pathways in the emergency department is therefore an important factor. This paper explores the possibilities offered by dynamic simulation tools to improve patient pathways using the emergency department of a busy university teaching hospital in Switzerland as an example.
Collapse
|
26
|
Marreiros AC, Kiebel SJ, Daunizeau J, Harrison LM, Friston KJ. Population dynamics under the Laplace assumption. Neuroimage 2008; 44:701-14. [PMID: 19013532 DOI: 10.1016/j.neuroimage.2008.10.008] [Citation(s) in RCA: 65] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2008] [Revised: 09/30/2008] [Accepted: 10/10/2008] [Indexed: 11/30/2022] Open
Abstract
In this paper, we describe a generic approach to modelling dynamics in neuronal populations. This approach models a full density on the states of neuronal populations but finesses this high-dimensional problem by re-formulating density dynamics in terms of ordinary differential equations on the sufficient statistics of the densities considered (cf. the method of moments). The particular form for the population density we adopt is a Gaussian density (cf. the Laplace assumption). This means population dynamics are described by equations governing the evolution of the population's mean and covariance. We derive these equations from the Fokker-Planck formalism and illustrate their application to a conductance-based model of neuronal exchanges. One interesting aspect of this formulation is that we can uncouple the mean and covariance to furnish a neural-mass model, which rests only on the population's mean. This enables us to compare equivalent mean-field and neural-mass models of the same populations and evaluate, quantitatively, the contribution of population variance to the expected dynamics. The mean-field model presented here will form the basis of a dynamic causal model of observed electromagnetic signals in future work.
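The reduction of density dynamics to ODEs on sufficient statistics can be made concrete with the simplest possible case: a linear (Ornstein-Uhlenbeck) SDE, where the Gaussian assumption is exact and the moment ODEs have closed-form solutions to check against. This is a sketch of the method of moments, not the paper's conductance-based model; parameter values are illustrative.

```python
import numpy as np

def gaussian_moment_odes(a=1.0, sigma=0.5, m0=2.0, s0=0.0, t_end=3.0, dt=0.001):
    """Density dynamics reduced to ODEs on sufficient statistics: for the
    linear SDE dx = -a*x dt + sigma dW, the Gaussian (Laplace) density is
    fully described by its mean m and variance S, which obey
        dm/dt = -a*m
        dS/dt = -2*a*S + sigma**2
    Returns (m, S) at t_end after forward-Euler integration."""
    m, s = m0, s0
    for _ in range(int(t_end / dt)):
        m += dt * (-a * m)
        s += dt * (-2.0 * a * s + sigma**2)
    return m, s
```

Keeping only the first equation gives a neural-mass model; retaining both gives the mean-field model, which is exactly the comparison the abstract describes. Here the two happen to decouple; in nonlinear models the variance feeds back into the mean equation.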
Collapse
Affiliation(s)
- André C Marreiros
- The Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, London, UK.
| | | | | | | | | |
Collapse
|
27
|
|
28
|
Deco G, Jirsa VK, Robinson PA, Breakspear M, Friston K. The dynamic brain: from spiking neurons to neural masses and cortical fields. PLoS Comput Biol 2008; 4:e1000092. [PMID: 18769680 PMCID: PMC2519166 DOI: 10.1371/journal.pcbi.1000092] [Citation(s) in RCA: 622] [Impact Index Per Article: 36.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
The cortex is a complex system, characterized by its dynamics and architecture, which underlie many functions such as action, perception, learning, language, and cognition. Its structural architecture has been studied for more than a hundred years; however, its dynamics have been addressed much less thoroughly. In this paper, we review and integrate, in a unifying framework, a variety of computational approaches that have been used to characterize the dynamics of the cortex, as evidenced at different levels of measurement. Computational models at different space-time scales help us understand the fundamental mechanisms that underpin neural processes and relate these processes to neuroscience data. Modeling at the single neuron level is necessary because this is the level at which information is exchanged between the computing elements of the brain; the neurons. Mesoscopic models tell us how neural elements interact to yield emergent behavior at the level of microcolumns and cortical columns. Macroscopic models can inform us about whole brain dynamics and interactions between large-scale neural systems such as cortical regions, the thalamus, and brain stem. Each level of description relates uniquely to neuroscience data, from single-unit recordings, through local field potentials to functional magnetic resonance imaging (fMRI), electroencephalogram (EEG), and magnetoencephalogram (MEG). Models of the cortex can establish which types of large-scale neuronal networks can perform computations and characterize their emergent properties. Mean-field and related formulations of dynamics also play an essential and complementary role as forward models that can be inverted given empirical data. This makes dynamic models critical in integrating theory and experiments. We argue that elaborating principled and informed models is a prerequisite for grounding empirical neuroscience in a cogent theoretical framework, commensurate with the achievements in the physical sciences.
Collapse
Affiliation(s)
- Gustavo Deco
- Institució Catalana de Recerca i Estudis Avançats (ICREA), Universitat Pompeu Fabra, Department of Technology, Computational Neuroscience, Barcelona, Spain.
| | | | | | | | | |
Collapse
|
29
|
Population dynamics: variance and the sigmoid activation function. Neuroimage 2008; 42:147-57. [PMID: 18547818 DOI: 10.1016/j.neuroimage.2008.04.239] [Citation(s) in RCA: 105] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2008] [Revised: 04/08/2008] [Accepted: 04/16/2008] [Indexed: 11/27/2022] Open
Abstract
This paper demonstrates how the sigmoid activation function of neural-mass models can be understood in terms of the variance or dispersion of neuronal states. We use this relationship to estimate the probability density on hidden neuronal states, using non-invasive electrophysiological (EEG) measures and dynamic causal modelling. The importance of implicit variance in neuronal states for neural-mass models of cortical dynamics is illustrated using both synthetic data and real EEG measurements of sensory evoked responses.
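The core relationship can be illustrated in a few lines: averaging a hard firing threshold over a Gaussian density of neuronal states yields a sigmoid (the Gaussian CDF) whose slope shrinks as the dispersion grows. This is a toy illustration of the abstract's point, with hypothetical threshold and dispersion values, not the paper's fitted nonlinearity.

```python
import numpy as np
from math import erf, sqrt

def effective_sigmoid(mu, theta=0.0, sd=1.0):
    """Expected firing of a population with a Heaviside threshold at theta,
    when neuronal states are distributed as N(mu, sd**2). The population
    nonlinearity is the Gaussian CDF; more dispersion means a flatter slope."""
    return 0.5 * (1.0 + erf((mu - theta) / (sd * sqrt(2.0))))

def monte_carlo_check(mu, sd, n=200_000, seed=4):
    """Direct average of the threshold function over sampled states."""
    x = np.random.default_rng(seed).normal(mu, sd, n)
    return np.mean(x >= 0.0)
```

The sigmoid's gain is thus not a free parameter but an implicit report of the variance of hidden neuronal states, which is what makes it estimable from data.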
Collapse
|
30
|
Stephan KE, Riera JJ, Deco G, Horwitz B. The Brain Connectivity Workshops: moving the frontiers of computational systems neuroscience. Neuroimage 2008; 42:1-9. [PMID: 18511300 DOI: 10.1016/j.neuroimage.2008.04.167] [Citation(s) in RCA: 40] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2008] [Revised: 04/03/2008] [Accepted: 04/11/2008] [Indexed: 11/30/2022] Open
Abstract
Understanding the link between neurobiology and cognition requires that neuroscience moves beyond mere structure-function correlations. An explicit systems perspective is needed in which putative mechanisms of how brain function is constrained by brain structure are mathematically formalized and made accessible for experimental investigation. Such a systems approach critically rests on a better understanding of brain connectivity in its various forms. Since 2002, frontier topics of connectivity and neural system analysis have been discussed in a multidisciplinary annual meeting, the Brain Connectivity Workshop (BCW), bringing together experimentalists and theorists from various fields. This article summarizes some of the main discussions at the two most recent workshops, 2006 at Sendai, Japan, and 2007 at Barcelona, Spain: (i) investigation of cortical micro- and macrocircuits, (ii) models of neural dynamics at multiple scales, (iii) analysis of "resting state" networks, and (iv) linking anatomical to functional connectivity. Finally, we outline some central challenges and research trajectories in computational systems neuroscience for the next years.
Collapse
Affiliation(s)
- Klaas Enno Stephan
- Wellcome Trust Centre for Neuroimaging, University College London, 12 Queen Square, London WC1N3BG, UK.
| | | | | | | |
Collapse
|
31
|
Chizhov AV, Graham LJ. Efficient evaluation of neuron populations receiving colored-noise current based on a refractory density method. PHYSICAL REVIEW. E, STATISTICAL, NONLINEAR, AND SOFT MATTER PHYSICS 2008; 77:011910. [PMID: 18351879 DOI: 10.1103/physreve.77.011910] [Citation(s) in RCA: 34] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/05/2007] [Revised: 10/20/2007] [Indexed: 05/26/2023]
Abstract
The expected firing probability of a stochastic neuron is approximated by a function of the expected subthreshold membrane potential, for the case of colored noise. We propose this approximation in order to extend the recently proposed white noise model [A. V. Chizhov and L. J. Graham, Phys. Rev. E 75, 011924 (2007)] to the case of colored noise, applying a refractory density approach to conductance-based neurons. The uncoupled neurons of a single population receive a common input and are dispersed by the noise. Within the framework of the model the effect of noise is expressed by the so-called hazard function, which is the probability density for a single neuron to fire given the average membrane potential in the presence of a noise term. To derive the hazard function we solve the Kolmogorov-Fokker-Planck equation for a mean voltage-driven neuron fluctuating due to colored noisy current. We show that a sum of both a self-similar solution for the case of slow changing mean voltage and a frozen stationary solution for fast changing mean voltage gives a satisfactory approximation for the hazard function in the arbitrary case. We demonstrate the quantitative effect of a temporal correlation of noisy input on the neuron dynamics in the case of leaky integrate-and-fire and detailed conductance-based neurons in response to an injected current step.
Collapse
Affiliation(s)
- Anton V Chizhov
- A.F. Ioffe Physico-Technical Institute of RAS, 26 Politekhnicheskaya Street, 194021 St. Petersburg, Russia.
| | | |
Collapse
|
32
|
Daunizeau J, Friston KJ. A mesostate-space model for EEG and MEG. Neuroimage 2007; 38:67-81. [PMID: 17761440 DOI: 10.1016/j.neuroimage.2007.06.034] [Citation(s) in RCA: 31] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2006] [Revised: 06/20/2007] [Accepted: 06/25/2007] [Indexed: 11/27/2022] Open
Abstract
We present a multi-scale generative model for EEG that entails a minimum number of assumptions about evoked brain responses, namely: (1) bioelectric activity is generated by a set of distributed sources, (2) the dynamics of these sources can be modelled as random fluctuations about a small number of mesostates, (3) mesostates evolve in a temporally structured way and are functionally connected (i.e. influence each other), and (4) the number of mesostates engaged by a cognitive task is small (e.g. between one and a few). A Variational Bayesian learning scheme is described that furnishes the posterior density on the model's parameters and its evidence. Since the number of meso-sources specifies the model, the model evidence can be used to compare models and find the optimum number of meso-sources. In addition to estimating the dynamics at each cortical dipole, the mesostate-space model and its inversion provide a description of brain activity at the level of the mesostates (i.e. in terms of the dynamics of meso-sources that are distributed over dipoles). The inclusion of a mesostate level allows one to compute posterior probability maps of each dipole being active (i.e. belonging to an active mesostate). Critically, this model accommodates constraints on the number of meso-sources, while retaining the flexibility of distributed source models in explaining data. In short, it bridges the gap between standard distributed and equivalent current dipole models. Furthermore, because it is explicitly spatiotemporal, the model can embed any stochastic dynamical causal model (e.g. a neural mass model) as a Markov process prior on the mesostate dynamics. The approach is evaluated and compared to standard inverse EEG techniques, using synthetic data and real data. The results demonstrate the added value of the mesostate-space model and its variational inversion.
Collapse
Affiliation(s)
- Jean Daunizeau
- The Wellcome Deparment of Imaging Neuroscience, Institute of Neurology, UCL, 12 Queen Square, London, UK.
| | | |
Collapse
|
33
|
Stephan KE, Harrison LM, Kiebel SJ, David O, Penny WD, Friston KJ. Dynamic causal models of neural system dynamics: current state and future extensions. J Biosci 2007; 32:129-44. [PMID: 17426386 PMCID: PMC2636905 DOI: 10.1007/s12038-007-0012-5] [Citation(s) in RCA: 152] [Impact Index Per Article: 8.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
Complex processes resulting from interaction of multiple elements can rarely be understood by analytical scientific approaches alone; additional, mathematical models of system dynamics are required. This insight, which disciplines like physics have embraced for a long time already, is gradually gaining importance in the study of cognitive processes by functional neuroimaging. In this field, causal mechanisms in neural systems are described in terms of effective connectivity. Recently, dynamic causal modelling (DCM) was introduced as a generic method to estimate effective connectivity from neuroimaging data in a Bayesian fashion. One of the key advantages of DCM over previous methods is that it distinguishes between neural state equations and modality-specific forward models that translate neural activity into a measured signal. Another strength is its natural relation to Bayesian model selection (BMS) procedures. In this article, we review the conceptual and mathematical basis of DCM and its implementation for functional magnetic resonance imaging data and event-related potentials. After introducing the application of BMS in the context of DCM, we conclude with an outlook to future extensions of DCM. These extensions are guided by the long-term goal of using dynamic system models for pharmacological and clinical applications, particularly with regard to synaptic plasticity.
Collapse
Affiliation(s)
- Klaas E Stephan
- Wellcome Department of Imaging Neuroscience, Institute of Neurology, University College London, 12 Queen Square, London WC1N 3BG, UK.
| | | | | | | | | | | |
Collapse
|
34
|
|
35
|
Friston K, Mattout J, Trujillo-Barreto N, Ashburner J, Penny W. Variational free energy and the Laplace approximation. Neuroimage 2006; 34:220-34. [PMID: 17055746 DOI: 10.1016/j.neuroimage.2006.08.035] [Citation(s) in RCA: 539] [Impact Index Per Article: 28.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2006] [Revised: 07/19/2006] [Accepted: 08/16/2006] [Indexed: 11/20/2022] Open
Abstract
This note derives the variational free energy under the Laplace approximation, with a focus on accounting for additional model complexity induced by increasing the number of model parameters. This is relevant when using the free energy as an approximation to the log-evidence in Bayesian model averaging and selection. By setting restricted maximum likelihood (ReML) in the larger context of variational learning and expectation maximisation (EM), we show how the ReML objective function can be adjusted to provide an approximation to the log-evidence for a particular model. This means ReML can be used for model selection, specifically to select or compare models with different covariance components. This is useful in the context of hierarchical models because it enables a principled selection of priors that, under simple hyperpriors, can be used for automatic model selection and relevance determination (ARD). Deriving the ReML objective function, from basic variational principles, discloses the simple relationships among Variational Bayes, EM and ReML. Furthermore, we show that EM is formally identical to a full variational treatment when the precisions are linear in the hyperparameters. Finally, we also consider, briefly, dynamic models and how these inform the regularisation of free energy ascent schemes, like EM and ReML.
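A minimal worked example of the Laplace approximation to the log-evidence, using a one-parameter linear-Gaussian model where the approximation is exact and can be checked against the closed form. The model and its parameter values are illustrative choices, not the note's ReML setting.

```python
import numpy as np

def laplace_log_evidence(y, sigma2=1.0, tau2=4.0):
    """Laplace approximation to log p(y) for y_i ~ N(theta, sigma2) with
    prior theta ~ N(0, tau2): the log-joint at its mode plus a curvature
    correction,
        F = log p(y|theta*) + log p(theta*) + 0.5*log(2*pi) - 0.5*log(h),
    where h is the negative second derivative of the log-joint at the mode.
    For this linear-Gaussian model the approximation is exact."""
    n = len(y)
    h = n / sigma2 + 1.0 / tau2              # posterior precision at the mode
    theta_map = (np.sum(y) / sigma2) / h     # posterior mean equals the MAP
    log_lik = (-0.5 * n * np.log(2*np.pi*sigma2)
               - 0.5 * np.sum((y - theta_map)**2) / sigma2)
    log_prior = -0.5 * np.log(2*np.pi*tau2) - 0.5 * theta_map**2 / tau2
    return log_lik + log_prior + 0.5*np.log(2*np.pi) - 0.5*np.log(h)

def exact_log_evidence(y, sigma2=1.0, tau2=4.0):
    """Closed-form log p(y): marginally y ~ N(0, sigma2*I + tau2*ones)."""
    n = len(y)
    cov = sigma2 * np.eye(n) + tau2 * np.ones((n, n))
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (n*np.log(2*np.pi) + logdet + y @ np.linalg.solve(cov, y))
```

The `-0.5*log(h)` term is the complexity penalty the abstract refers to: adding parameters increases the curvature determinant and lowers the approximate log-evidence unless the fit improves accordingly.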
Collapse
Affiliation(s)
- Karl Friston
- The Wellcome Department of Imaging Neuroscience, Institute of Neurology, UCL, 12 Queen Square, London, WC1N 3BG, UK.
| | | | | | | | | |
Collapse
|
36
|
Abstract
Inferences about brain function, using functional neuroimaging data, require models of how the data were caused. A variety of models are used in practice that range from conceptual models of functional anatomy to nonlinear mathematical models of hemodynamic responses (e.g. as measured by functional magnetic resonance imaging, fMRI) and neuronal responses. In this review, we discuss the most important models used to analyse functional imaging data and demonstrate how they are interrelated. Initially, we briefly review the anatomical foundations of current theories of brain function on which all mathematical models rest. We then introduce some basic statistical models (e.g. the general linear model) used for making classical (i.e. frequentist) and Bayesian inferences about where neuronal responses are expressed. The more challenging question, how these responses are caused, is addressed by models that incorporate biophysical constraints (e.g. forward models from the neural to the hemodynamic level) and/or consider causal interactions between several regions, i.e. models of effective connectivity. Some of the most refined models to date are neuronal mass models of electroencephalographic (EEG) responses. These models enable mechanistic inferences about how evoked responses are caused, at the level of neuronal subpopulations and the coupling among them.
Collapse
Affiliation(s)
- Klaas Enno Stephan
- The Wellcome Dept. of Cognitive Neurology, University College London Queen Square, London, UK WC1N 3BG
| | - Jeremie Mattout
- The Wellcome Dept. of Cognitive Neurology, University College London Queen Square, London, UK WC1N 3BG
| | - Olivier David
- The Wellcome Dept. of Cognitive Neurology, University College London Queen Square, London, UK WC1N 3BG
| | - Karl J. Friston
- The Wellcome Dept. of Cognitive Neurology, University College London Queen Square, London, UK WC1N 3BG
| |
Collapse
|
37
|
Valdés-Sosa PA, Kötter R, Friston KJ. Introduction: multimodal neuroimaging of brain connectivity. Philos Trans R Soc Lond B Biol Sci 2005; 360:865-7. [PMID: 16087431 PMCID: PMC1854938 DOI: 10.1098/rstb.2005.1655] [Citation(s) in RCA: 18] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Affiliation(s)
- Pedro A Valdés-Sosa
- Cuban Neuroscience Centre, Avenue 25 No. 15202 esquina 158, Cubanacan, Playa, PO Box 6412/6414, Area Code 11600, Ciudad Habana, Cuba.
38
Breakspear M, Stam CJ. Dynamics of a neural system with a multiscale architecture. Philos Trans R Soc Lond B Biol Sci 2005; 360:1051-74. [PMID: 16087448 PMCID: PMC1854927 DOI: 10.1098/rstb.2005.1643] [Citation(s) in RCA: 116] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
The architecture of the brain is characterized by a modular organization repeated across a hierarchy of spatial scales: neurons, minicolumns, cortical columns, functional brain regions, and so on. It is important to consider that the processes governing neural dynamics at any given scale are determined not only by the behaviour of other neural structures at that scale, but also by the emergent behaviour of smaller scales and the constraining influence of activity at larger scales. In this paper, we introduce a theoretical framework for neural systems in which the dynamics are nested within a multiscale architecture. In essence, the dynamics at each scale are determined by a coupled ensemble of nonlinear oscillators, which embody the principal scale-specific neurobiological processes. The dynamics at larger scales are 'slaved' to the emergent behaviour of smaller scales through a coupling function that depends on a multiscale wavelet decomposition. The approach is first explicated mathematically. Numerical examples are then given to illustrate phenomena such as between-scale bifurcations, and to show how synchronization in small-scale structures influences the dynamics in larger structures in an intuitive manner that cannot be captured by existing modelling approaches. A framework for relating the dynamical behaviour of the system to measured observables is presented, and further extensions to capture wave phenomena and mode coupling are suggested.
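The nested-oscillator idea can be illustrated with a toy two-scale system. The sketch below uses a standard Kuramoto mean-field ensemble whose emergent synchrony drives a single large-scale phase oscillator; this is only a loose analogue of the paper's wavelet-based coupling scheme, and all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Small scale: a Kuramoto ensemble of n phase oscillators.
# Large scale: one slow phase oscillator 'slaved' to the ensemble's mean field.
n, dt, steps = 50, 0.01, 5000
theta = rng.uniform(0, 2 * np.pi, n)        # small-scale phases
omega = rng.normal(10.0, 0.5, n)            # small-scale natural frequencies
K = 4.0                                     # within-scale coupling strength
phi, Omega, c = 0.0, 1.0, 2.0               # large-scale phase, frequency, coupling

for _ in range(steps):
    z = np.exp(1j * theta).mean()           # Kuramoto order parameter
    r, psi = np.abs(z), np.angle(z)         # synchrony level and mean phase
    # Small scale: standard Kuramoto mean-field update.
    theta += dt * (omega + K * r * np.sin(psi - theta))
    # Large scale driven by the emergent small-scale synchrony r.
    phi += dt * (Omega + c * r * np.sin(psi - phi))

print(round(r, 2))   # high r indicates small-scale synchronization
```

Because K here is well above the critical coupling for this frequency spread, the ensemble synchronizes and its order parameter r modulates how strongly the large scale is entrained.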
Affiliation(s)
- Michael Breakspear
- The Black Dog Institute, Prince of Wales Hospital and School of Psychiatry, University of New South Wales, Randwick, NSW 2031, Australia.
39
Paus T. Inferring causality in brain images: a perturbation approach. Philos Trans R Soc Lond B Biol Sci 2005; 360:1109-14.
Abstract
When engaged by a stimulus, different nodes of a neural circuit respond in a coordinated fashion. We often ask whether there is a cause and effect in such interregional interactions. This paper proposes that we can infer causality in functional connectivity by employing a 'perturb and measure' approach. In the human brain, this has been achieved by combining transcranial magnetic stimulation (TMS) with positron emission tomography (PET), functional magnetic resonance imaging or electroencephalography. Here, I will illustrate this approach by reviewing some of our TMS/PET work, and will conclude by discussing a few methodological and theoretical challenges facing those studying neural connectivity using a perturbation.
Affiliation(s)
- Tomás Paus
- Brain & Body Centre, University of Nottingham, UK.
40
Tass PA. Estimation of the transmission time of stimulus-locked responses: modelling and stochastic phase resetting analysis. Philos Trans R Soc Lond B Biol Sci 2005; 360:995-9. [PMID: 16087443 PMCID: PMC1854919 DOI: 10.1098/rstb.2005.1635] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
A model of two coupled phase oscillators is studied, where both oscillators are subject to random forces but only one oscillator is repetitively stimulated with a pulsatile stimulus. A pulse causes a reset, which is transmitted to the other oscillator via the coupling. The transmission time of the cross-trial (CT) averaged responses, i.e. the difference in time between the maxima of the CT averaged responses of the two oscillators, differs from the time difference between the maxima of the oscillators' resets. In fact, the transmission time of the CT averaged responses corresponds directly to the phase difference in the stable synchronized state, plus integer multiples of the oscillators' mean period. Consequently, CT averaged responses do not allow a reliable estimate of the time that elapses while the stimulus' action is transmitted between the two oscillators.
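The model described above can be sketched numerically: both phases receive independent noise, the stimulus resets only oscillator 1, the reset spreads to oscillator 2 via the coupling, and the responses are averaged across trials. This is a toy illustration with invented parameters, not Tass's full stochastic phase resetting analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

trials, steps, dt = 200, 1000, 0.001
w1 = w2 = 2 * np.pi * 10          # natural frequencies (10 Hz)
K, sigma = 50.0, 2.0              # coupling strength, noise intensity
x1 = np.zeros((trials, steps))    # observable of oscillator 1, per trial
x2 = np.zeros((trials, steps))    # observable of oscillator 2, per trial

p1 = np.zeros(trials)                      # stimulus resets oscillator 1 to phase 0
p2 = rng.uniform(0, 2 * np.pi, trials)     # oscillator 2 keeps its pre-stimulus phase
for t in range(steps):
    # Euler-Maruyama step: drift plus independent phase noise on each oscillator.
    p1 = p1 + dt * w1 + sigma * np.sqrt(dt) * rng.normal(size=trials)
    p2 = p2 + dt * (w2 + K * np.sin(p1 - p2)) + sigma * np.sqrt(dt) * rng.normal(size=trials)
    x1[:, t] = np.cos(p1)
    x2[:, t] = np.cos(p2)

avg1, avg2 = x1.mean(0), x2.mean(0)        # cross-trial (CT) averaged responses
lag = (np.argmax(avg2) - np.argmax(avg1)) * dt
print(lag)  # latency difference of the CT averages, in seconds
```

The CT average of oscillator 1 peaks immediately after the reset, while oscillator 2's average only builds up as its phase is captured by the coupling; the latency difference between the two averaged peaks reflects the locked phase difference rather than the physical transmission time, which is the paper's point.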
Affiliation(s)
- Peter A Tass
- Institute of Medicine and Virtual Institute of Medicine, Research Centre Jülich, 52425 Jülich, Germany
- Department of Stereotaxic and Functional Neurosurgery, University Hospital, 50924 Cologne, Germany
- Brain Imaging Centre West, 52425 Jülich, Germany