1
Tran The J, Ansermet JP, Magistretti PJ, Ansermet F. Hyperactivity of the default mode network in schizophrenia and free energy: A dialogue between Freudian theory of psychosis and neuroscience. Front Hum Neurosci 2022; 16:956831. [PMID: 36590059; PMCID: PMC9795812; DOI: 10.3389/fnhum.2022.956831]
Abstract
The economic conceptualization of Freudian metapsychology, based on an energetics model of the psyche's workings, shows remarkable commonalities with some recent discoveries in neuroscience, notably in the field of neuroenergetics. The pattern of cerebral activity at resting state, and the identification of a default mode network (DMN), a network of areas whose activity is detectable at baseline conditions by neuroimaging techniques, offer a promising field of research for the dialogue between psychoanalysis and neuroscience. In this article, we examine one significant clinical application of this interdisciplinary dialogue by looking at the role of the DMN in the psychopathology of schizophrenia. Anomalies in the functioning of the DMN have been observed in schizophrenia: studies have demonstrated hyperactivity of this network in schizophrenia patients, particularly among those in whom positive symptomatology is dominant. These data are particularly interesting when considered from the perspective of the psychoanalytic understanding of the positive symptoms of psychosis, most notably the Freudian hypothesis of delusions as an "attempt at recovery." Combining the neuroimaging data from schizophrenia patients with the Freudian hypothesis, we propose considering the hyperactivity of the DMN as a consequence of a massive reassociation of traces occurring in schizophrenia, a process that may constitute an attempt to minimize the excess of free energy present in psychosis. Modern models of active inference and the free energy principle (FEP) may shed light on these processes.
Affiliation(s)
- Jessica Tran The
- INSERM U1077 Neuropsychologie et Imagerie de la Mémoire Humaine, Caen, France; Centre Hospitalier Universitaire de Caen, Caen, France; Université de Caen Normandie, Caen, France; Ecole Pratique des Hautes Etudes, Université Paris Sciences et Lettres, Paris, France; Agalma Foundation Geneva, Geneva, Switzerland; Cyceron, Caen, France
- Pierre J. Magistretti
- Agalma Foundation Geneva, Geneva, Switzerland; Division of Biological and Environmental Sciences and Engineering (BESE), King Abdullah University of Science and Technology, Thuwal, Saudi Arabia; Brain Mind Institute, Swiss Federal Institute of Technology Lausanne, Lausanne, Switzerland
- Francois Ansermet
- Agalma Foundation Geneva, Geneva, Switzerland; Département de Psychiatrie, Faculté de Médecine, Université de Genève, Geneva, Switzerland
2
Friston K, Da Costa L, Hafner D, Hesp C, Parr T. Sophisticated Inference. Neural Comput 2021; 33:713-763.
Abstract
Active inference offers a first principle account of sentient behavior, from which special and important cases (for example, reinforcement learning, active learning, Bayes optimal inference, and Bayes optimal design) can be derived. Active inference finesses the exploitation-exploration dilemma in relation to prior preferences by placing information gain on the same footing as reward or value. In brief, active inference replaces value functions with functionals of (Bayesian) beliefs, in the form of an expected (variational) free energy. In this letter, we consider a sophisticated kind of active inference using a recursive form of expected free energy. Sophistication describes the degree to which an agent has beliefs about beliefs. We consider agents with beliefs about the counterfactual consequences of action for states of affairs and beliefs about those latent states. In other words, we move from simply considering beliefs about "what would happen if I did that" to "what I would believe about what would happen if I did that." The recursive form of the free energy functional effectively implements a deep tree search over actions and outcomes in the future. Crucially, this search is over sequences of belief states as opposed to states per se. We illustrate the competence of this scheme using numerical simulations of deep decision problems.
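The recursive scheme described in this abstract can be sketched in miniature. Below is a toy Python illustration (not the authors' implementation) of a two-state, two-action problem in which expected free energy is evaluated recursively over posterior belief states; for simplicity only the risk term (KL from predicted to preferred outcomes) is included, and the likelihood, preferences, and transitions are illustrative assumptions.

```python
import math

A = 0.8                              # observation likelihood p(o = s)
C = [math.log(0.1), math.log(0.9)]   # log-preferences over outcomes (prefer o = 1)

def obs_lik(o, s):
    return A if o == s else 1.0 - A

def predict(b, a):
    # action 0: stay; action 1: deterministically flip the hidden state
    return b if a == 0 else [b[1], b[0]]

def bayes_update(b, o):
    post = [obs_lik(o, s) * b[s] for s in (0, 1)]
    z = sum(post)
    return [p / z for p in post]

def expected_G(b, depth):
    """Recursive expected free energy over belief states: the agent plans over
    'what I would believe about what would happen if I did that'."""
    if depth == 0:
        return 0.0, None
    best_G, best_a = float('inf'), None
    for a in (0, 1):
        bp = predict(b, a)
        G = 0.0
        for o in (0, 1):
            po = sum(obs_lik(o, s) * bp[s] for s in (0, 1))   # predictive p(o | a)
            G += po * (math.log(po) - C[o])                   # risk: KL(predicted || preferred)
            G += po * expected_G(bayes_update(bp, o), depth - 1)[0]  # recurse over posteriors
        if G < best_G:
            best_G, best_a = G, a
    return best_G, best_a
```

Starting from a belief that the hidden state is 0 (b = [0.9, 0.1]), the planner switches toward the preferred state; increasing the depth searches a tree over sequences of belief states rather than states per se.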
Affiliation(s)
- Karl Friston
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London WC1N 3AR, U.K.
- Lancelot Da Costa
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London WC1N 3AR, U.K., and Department of Mathematics, Imperial College London, U.K.
- Danijar Hafner
- Department of Computer Science, University of Toronto, Toronto, ON M5S 2E4, Canada, and Google Research, Brain Team, Toronto, ON MSH 153, Canada
- Casper Hesp
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London WC1N 3AR, U.K., and Amsterdam Brain and Cognition Center, University of Amsterdam, Amsterdam 1001 NK, The Netherlands
- Thomas Parr
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London WC1N 3AR, U.K.
3
Şenöz İ, van de Laar T, Bagaev D, de Vries B. Variational Message Passing and Local Constraint Manipulation in Factor Graphs. Entropy 2021; 23:807. [PMID: 34202913; PMCID: PMC8303273; DOI: 10.3390/e23070807]
Abstract
Accurate evaluation of Bayesian model evidence for a given data set is a fundamental problem in model development. Since evidence evaluations are usually intractable, in practice variational free energy (VFE) minimization provides an attractive alternative, as the VFE is an upper bound on negative model log-evidence (NLE). In order to improve tractability of the VFE, it is common to manipulate the constraints in the search space for the posterior distribution of the latent variables. Unfortunately, constraint manipulation may also lead to a less accurate estimate of the NLE. Thus, constraint manipulation implies an engineering trade-off between tractability and accuracy of model evidence estimation. In this paper, we develop a unifying account of constraint manipulation for variational inference in models that can be represented by a (Forney-style) factor graph, for which we identify the Bethe Free Energy as an approximation to the VFE. We derive well-known message passing algorithms from first principles, as the result of minimizing the constrained Bethe Free Energy (BFE). The proposed method supports evaluation of the BFE in factor graphs for model scoring and development of new message passing-based inference algorithms that potentially improve evidence estimation accuracy.
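The central inequality here (the variational free energy upper-bounds the negative log-evidence, with equality at the exact posterior) can be checked directly on a tiny discrete model. This is a hypothetical two-state example, not the paper's factor-graph machinery; the prior and likelihood values are arbitrary.

```python
import math

def vfe(q, joint):
    """F = E_q[log q(z) - log p(x, z)] = KL(q || p(z|x)) - log p(x) for discrete z."""
    return sum(q[z] * (math.log(q[z]) - math.log(joint[z])) for z in range(len(q)))

prior = [0.5, 0.5]
lik   = [0.9, 0.2]                           # p(x = observed | z), a toy likelihood
joint = [prior[z] * lik[z] for z in (0, 1)]  # p(x, z)
nle   = -math.log(sum(joint))                # negative log-evidence, -log p(x)

q_any  = [0.6, 0.4]                          # an arbitrary normalized approximation
q_star = [j / sum(joint) for j in joint]     # the exact posterior p(z | x)
```

Here `vfe(q_any, joint)` exceeds `nle`, while `vfe(q_star, joint)` attains it; constraint manipulation (for example, restricting q to a factorized family) can only move the bound upward, which is the accuracy side of the trade-off the paper analyses.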
Affiliation(s)
- İsmail Şenöz
- Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands
- Thijs van de Laar
- Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands
- Dmitry Bagaev
- Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands
- Bert de Vries
- Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands
- GN Hearing, JF Kennedylaan 2, 5612 AB Eindhoven, The Netherlands
4
Catenacci Volpi N, Polani D. Space Emerges from What We Know: Spatial Categorisations Induced by Information Constraints. Entropy 2020; 22:1179. [PMID: 33286947; PMCID: PMC7597350; DOI: 10.3390/e22101179]
Abstract
Seeking goals with any level of competence requires agents to have an “understanding” of the structure of their world. While abstract formal descriptions of a world's structure in terms of geometric axioms can be formulated in principle, it is not likely that this is the representation actually employed by biological organisms, or that it should be used by biologically plausible models. Instead, we operate under the assumption that biological organisms are constrained in their information-processing capacities, an assumption that has in the past led to a number of insightful hypotheses and models for biologically plausible behaviour generation. Here we use this approach to study various types of spatial categorisations that emerge through such informational constraints imposed on embodied agents. We will see that geometrically rich spatial representations emerge when agents employ a trade-off between minimising the Shannon information used to describe locations within the environment and reducing the location error generated by the resulting approximate spatial description. In addition, agents do not always need to construct these representations from the ground up: they can obtain them by refining less precise spatial descriptions constructed previously. Importantly, we find that these can be optimal at both steps of refinement, as guaranteed by the successive refinement principle from information theory. Finally, clusters induced by these spatial representations via the information bottleneck method are able to reflect the environment's topology without relying on an explicit geometric description of the environment's structure. Our findings suggest that the fundamental geometric notions possessed by natural agents need not be part of their a priori knowledge but could emerge as a byproduct of the pressure to process information parsimoniously.
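The trade-off studied here, between the Shannon information of a spatial description and the location error it induces, can be sketched with a small Blahut-Arimoto-style alternation. This is a toy one-dimensional stand-in for the paper's embodied setting; the points, code positions, and β values below are illustrative assumptions.

```python
import math

def soft_categorise(points, codes, beta, iters=100):
    """Alternate encoder and marginal updates to trade I(X;Z) against beta * E[(x - c_z)^2]."""
    n, k = len(points), len(codes)
    p_z = [1.0 / k] * k
    for _ in range(iters):
        enc = []
        for x in points:   # encoder: p(z|x) proportional to p(z) exp(-beta * d(x, z))
            w = [p_z[z] * math.exp(-beta * (x - codes[z]) ** 2) for z in range(k)]
            s = sum(w)
            enc.append([wi / s for wi in w])
        p_z = [sum(row[z] for row in enc) / n for z in range(k)]  # marginal p(z)
    return enc

points = [0.0, 0.1, 0.9, 1.0]                                  # locations in a 1-D "environment"
enc_coarse = soft_categorise(points, [0.05, 0.95], beta=0.1)   # cheap code: categories blur
enc_fine   = soft_categorise(points, [0.05, 0.95], beta=50.0)  # precise code: crisp categories
```

At low β the encoder is nearly uniform (little information carried about location, large error); at high β the soft categories sharpen and align with the environment's two spatial clusters, mirroring how richer spatial categorisations emerge as the information constraint is relaxed.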
5
Cieri F, Esposito R. Psychoanalysis and Neuroscience: The Bridge Between Mind and Brain. Front Psychol 2019; 10:1983. [PMID: 31555159; PMCID: PMC6724748; DOI: 10.3389/fpsyg.2019.01983]
Abstract
In 1895, in the Project for a Scientific Psychology, Freud tried to integrate psychology and neurology in order to develop a neuroscientific psychology. From 1880 onward, Freud made no distinction between psychology and physiology, and his papers from the late 1880s to 1890 are very clear about this scientific overlap: like many of his contemporaries, Freud thought of psychology essentially as the physiology of the brain. Years later he had to abandon this ambition, recognizing that the technology of his time was not capable of pursuing it and that, until it was, psychoanalysis would have to rely on its more suitable clinical method. He also seemed skeptical of the phrenological drift typical of that time, in which every psychological function had to be located in a specific neuroanatomical area. He did not live to see the progress of neuroscience and its fruitful dialogue with psychoanalysis, which has occurred thanks in part to improvements in neuroimaging; these have made possible a remarkable advance in knowledge of the mind-brain system and a closer examination of psychoanalytic theories. After a century of research and clinical work, the discovery of neural networks, together with the free energy principle, allows us to see psychodynamic neuroscience in a new light as it explores the mind-brain system. In this manuscript, we summarize the important developments of psychodynamic neuroscience, with particular regard to the free energy principle and the resting state networks, especially the Default Mode Network in its link with the Self, emphasizing our view of a bridge between psychoanalysis and neuroscience. Finally, we open a discussion of the concept of the Alpha Function proposed by the psychoanalyst Wilfred Ruprecht Bion, continuing the association with neuroscience.
Affiliation(s)
- Filippo Cieri
- Department of Neurology, Cleveland Clinic Lou Ruvo Center for Brain Health, Las Vegas, NV, United States
- Roberto Esposito
- Department of Radiology, Azienda Ospedaliera Ospedali Riuniti Marche Nord, Pesaro, Italy
6
Gottwald S, Braun DA. Bounded Rational Decision-Making from Elementary Computations That Reduce Uncertainty. Entropy 2019; 21:375. [PMID: 33267089; PMCID: PMC7514859; DOI: 10.3390/e21040375]
Abstract
In its most basic form, decision-making can be viewed as a computational process that progressively eliminates alternatives, thereby reducing uncertainty. Such processes are generally costly, meaning that the amount of uncertainty that can be reduced is limited by the amount of available computational resources. Here, we introduce the notion of elementary computation based on a fundamental principle for probability transfers that reduce uncertainty. Elementary computations can be considered as the inverse of Pigou–Dalton transfers applied to probability distributions, closely related to the concepts of majorization, T-transforms, and generalized entropies that induce a preorder on the space of probability distributions. Consequently, we can define resource cost functions that are order-preserving and therefore monotonic with respect to the uncertainty reduction. This leads to a comprehensive notion of decision-making processes with limited resources. Along the way, we prove several new results on majorization theory, as well as on entropy and divergence measures.
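The paper's central object, an elementary computation as the inverse of a Pigou-Dalton transfer on a probability vector, has a very small illustration: moving mass from a poorer to a richer entry makes the distribution more concentrated and so reduces uncertainty. The numbers below are chosen purely for illustration.

```python
import math

def shannon_entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def elementary_computation(p, i, j, eps):
    """Inverse Pigou-Dalton transfer: move probability mass eps from a poorer
    entry j to a richer entry i (requires p[i] >= p[j]), sharpening the distribution."""
    assert p[i] >= p[j] and 0.0 <= eps <= p[j]
    q = list(p)
    q[i] += eps
    q[j] -= eps
    return q

p = [0.5, 0.3, 0.2]
q = elementary_computation(p, 0, 2, 0.1)   # q = [0.6, 0.3, 0.1] majorizes p
```

Because Shannon entropy is Schur-concave, every such step reduces it; the same monotonicity holds for the generalized entropies the paper uses as order-preserving resource cost functions.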
7
Di Franco A. Information-gain computation in the Fifth system. Int J Approx Reason 2019. [DOI: 10.1016/j.ijar.2018.11.013]
8
Biehl M, Guckelsberger C, Salge C, Smith SC, Polani D. Expanding the Active Inference Landscape: More Intrinsic Motivations in the Perception-Action Loop. Front Neurorobot 2018; 12:45. [PMID: 30214404; PMCID: PMC6125413; DOI: 10.3389/fnbot.2018.00045]
Abstract
Active inference is an ambitious theory that treats perception, inference, and action selection of autonomous agents under the heading of a single principle. It suggests biologically plausible explanations for many cognitive phenomena, including consciousness. In active inference, action selection is driven by an objective function that evaluates possible future actions with respect to current, inferred beliefs about the world. Active inference at its core is independent of extrinsic rewards, resulting in a high level of robustness across, for example, different environments or agent morphologies. In the literature, paradigms that share this independence have been summarized under the notion of intrinsic motivations. In general, and in contrast to active inference, these models of motivation come without a commitment to particular inference and action selection mechanisms. In this article, we study whether the inference and action selection machinery of active inference can also be used by alternatives to the originally included intrinsic motivation. The perception-action loop explicitly relates inference and action selection to the environment and agent memory, and is consequently used as the foundation for our analysis. We reconstruct the active inference approach, locate the original formulation within it, and show how alternative intrinsic motivations can be used while keeping many of the original features intact. Furthermore, we illustrate the connection to universal reinforcement learning by means of our formalism. Active inference research may profit from comparisons of the dynamics induced by alternative intrinsic motivations. Research on intrinsic motivations may profit from an additional way to implement intrinsically motivated agents that also shares the biological plausibility of active inference.
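The separation described here, between shared inference/action-selection machinery and a pluggable intrinsic objective, can be caricatured in a few lines. The predictive distributions and the two objectives below are toy stand-ins, not the article's formalism.

```python
import math

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def softmax_policy(values, beta=4.0):
    """Shared action-selection machinery: a softmax over objective values."""
    w = [math.exp(beta * v) for v in values]
    s = sum(w)
    return [wi / s for wi in w]

# toy predictive distributions over the next observation for each action
pred = {0: [0.5, 0.5], 1: [0.9, 0.1]}

# interchangeable intrinsic objectives evaluated on the same predictions
objectives = {
    "seek_uncertainty": lambda a: entropy(pred[a]),    # curiosity-like motivation
    "avoid_surprise":   lambda a: -entropy(pred[a]),   # active-inference-flavoured motivation
}

policies = {name: softmax_policy([obj(a) for a in (0, 1)])
            for name, obj in objectives.items()}
```

Swapping the objective while keeping the prediction and softmax machinery fixed changes which action is favoured, which is the kind of comparison across intrinsic motivations the article advocates.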
Affiliation(s)
- Christian Guckelsberger
- Computational Creativity Group, Department of Computing, Goldsmiths, University of London, London, United Kingdom
- Christoph Salge
- Game Innovation Lab, Department of Computer Science and Engineering, New York University, New York, NY, United States
- Sepia Lab, Adaptive Systems Research Group, Department of Computer Science, University of Hertfordshire, Hatfield, United Kingdom
- Simón C. Smith
- Sepia Lab, Adaptive Systems Research Group, Department of Computer Science, University of Hertfordshire, Hatfield, United Kingdom
- Institute of Perception, Action and Behaviour, School of Informatics, The University of Edinburgh, Edinburgh, United Kingdom
- Daniel Polani
- Sepia Lab, Adaptive Systems Research Group, Department of Computer Science, University of Hertfordshire, Hatfield, United Kingdom
9
Grau-Moya J, Krüger M, Braun DA. Non-Equilibrium Relations for Bounded Rational Decision-Making in Changing Environments. Entropy 2017; 20:1. [PMID: 33265092; PMCID: PMC7512193; DOI: 10.3390/e20010001]
Abstract
Living organisms from single cells to humans need to adapt continuously to respond to changes in their environment. The process of behavioural adaptation can be thought of as improving decision-making performance according to some utility function. Here, we consider an abstract model of organisms as decision-makers with limited information-processing resources that trade off between maximization of utility and computational costs measured by a relative entropy, in a similar fashion to thermodynamic systems undergoing isothermal transformations. Such systems minimize the free energy to reach equilibrium states that balance internal energy and entropic cost. When there is a fast change in the environment, these systems evolve in a non-equilibrium fashion because they are unable to follow the path of equilibrium distributions. Here, we apply concepts from non-equilibrium thermodynamics to characterize decision-makers that adapt to changing environments under the assumption that the temporal evolution of the utility function is externally driven and does not depend on the decision-maker’s action. This allows one to quantify performance loss due to imperfect adaptation in a general manner and, additionally, to find relations for decision-making similar to Crooks’ fluctuation theorem and Jarzynski’s equality. We provide simulations of several exemplary decision and inference problems in the discrete and continuous domains to illustrate the new relations.
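The performance loss from imperfect adaptation can be illustrated with the standard bounded-rational equilibrium, a policy proportional to prior times exp(β·utility). This is a schematic example; the utilities and β are arbitrary choices, and the full paper treats the continuous-time, non-equilibrium case.

```python
import math

def equilibrium(prior, utility, beta):
    """Free-energy-optimal policy: trades expected utility against KL to the prior."""
    w = [p * math.exp(beta * u) for p, u in zip(prior, utility)]
    z = sum(w)
    return [wi / z for wi in w]

prior = [1/3, 1/3, 1/3]
U_old = [1.0, 0.0, 0.0]
U_new = [0.0, 0.0, 1.0]      # the environment changes; the utility is externally driven

p_lagged  = equilibrium(prior, U_old, beta=2.0)   # still equilibrated to the old utility
p_adapted = equilibrium(prior, U_new, beta=2.0)   # equilibrated to the new utility

EU = lambda p, U: sum(pi * ui for pi, ui in zip(p, U))
loss = EU(p_adapted, U_new) - EU(p_lagged, U_new)  # cost of lagging behind the change
```

A decision-maker that cannot re-equilibrate after a fast environmental change pays exactly this kind of utility gap, which the paper quantifies in general via fluctuation-theorem-like relations.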
Affiliation(s)
- Jordi Grau-Moya
- Max Planck Institute for Intelligent Systems, Stuttgart 70569, Germany
- Max Planck Institute for Biological Cybernetics, Tübingen 72076, Germany
- PROWLER.io, Cambridge CB2 1LA, UK
- Matthias Krüger
- Max Planck Institute for Intelligent Systems, Stuttgart 70569, Germany
- 4th Institute for Theoretical Physics, Universität Stuttgart, Stuttgart 70569, Germany
- Daniel A. Braun
- Max Planck Institute for Intelligent Systems, Stuttgart 70569, Germany
- Max Planck Institute for Biological Cybernetics, Tübingen 72076, Germany
- Institute of Neural Information Processing, Universität Ulm, Ulm 89081, Germany
10
van Hoof H, Tanneberg D, Peters J. Generalized exploration in policy search. Mach Learn 2017. [DOI: 10.1007/s10994-017-5657-1]
11
Genewein T, Braun DA. Bio-inspired feedback-circuit implementation of discrete, free energy optimizing, winner-take-all computations. Biol Cybern 2016; 110:135-150. [PMID: 27023096; PMCID: PMC4903113; DOI: 10.1007/s00422-016-0684-8]
Abstract
Bayesian inference and bounded rational decision-making require the accumulation of evidence or utility, respectively, to transform a prior belief or strategy into a posterior probability distribution over hypotheses or actions. Crucially, this process cannot be simply realized by independent integrators, since the different hypotheses and actions also compete with each other. In continuous time, this competitive integration process can be described by a special case of the replicator equation. Here we investigate simple analog electric circuits that implement the underlying differential equation under the constraint that we only permit a limited set of building blocks that we regard as biologically interpretable, such as capacitors, resistors, voltage-dependent conductances and voltage- or current-controlled current and voltage sources. The appeal of these circuits is that they intrinsically perform normalization without requiring an explicit divisive normalization. However, even in idealized simulations, we find that these circuits are very sensitive to internal noise as they accumulate error over time. We discuss to what extent neural circuits could implement these operations, which might provide a generic competitive principle underlying both perception and action.
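The competitive integration process is a replicator equation, which a few lines of Euler integration make concrete. This is a plain numerical sketch, not the analog-circuit implementation studied in the paper; the evidence values and step size are illustrative.

```python
def replicator_step(p, f, dt):
    """Euler step of dp_i/dt = p_i * (f_i - <f>): evidence accumulation with competition."""
    avg = sum(pi * fi for pi, fi in zip(p, f))
    return [pi + pi * (fi - avg) * dt for pi, fi in zip(p, f)]

p = [1/3, 1/3, 1/3]
log_lik = [0.0, 0.5, 1.0]       # constant evidence streams for three hypotheses
for _ in range(2000):
    p = replicator_step(p, log_lik, dt=0.01)
```

Normalisation is preserved intrinsically, since the update terms p_i(f_i - ⟨f⟩) sum to zero; this is the feature the circuits exploit to avoid explicit divisive normalisation, and the best-supported hypothesis ends up winning the competition.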
Affiliation(s)
- Tim Genewein
- Max Planck Institute for Biological Cybernetics, Spemannstr. 38, 72076 Tübingen, Germany
- Max Planck Institute for Intelligent Systems, Tübingen, Germany
- Graduate Training Centre of Neuroscience, Tübingen, Germany
- Daniel A. Braun
- Max Planck Institute for Biological Cybernetics, Spemannstr. 38, 72076 Tübingen, Germany
- Max Planck Institute for Intelligent Systems, Tübingen, Germany
12
Genewein T, Hez E, Razzaghpanah Z, Braun DA. Structure Learning in Bayesian Sensorimotor Integration. PLoS Comput Biol 2015; 11:e1004369. [PMID: 26305797; PMCID: PMC4549275; DOI: 10.1371/journal.pcbi.1004369]
Abstract
Previous studies have shown that sensorimotor processing can often be described by Bayesian learning, in particular the integration of prior and feedback information depending on its degree of reliability. Here we test the hypothesis that the integration process itself can be tuned to the statistical structure of the environment. We exposed human participants to a reaching task in a three-dimensional virtual reality environment where we could displace the visual feedback of their hand position in a two-dimensional plane. When introducing statistical structure between the two dimensions of the displacement, we found that over the course of several days participants adapted their feedback integration process in order to exploit this structure for performance improvement. In control experiments we found that this adaptation process critically depended on performance feedback and could not be induced by verbal instructions. Our results suggest that structural learning is an important meta-learning component of Bayesian sensorimotor integration.

The human sensorimotor system has to process highly structured information that is affected by uncertainty and variability at all levels. Previously, it has been shown that sensorimotor processing is very efficient at extracting structure even in variable environments, and it has also been shown how sensorimotor integration takes into account uncertainty when processing novel information. In particular, the latter integration process has been shown to be consistent with Bayesian theory. Here we show how the two processes of structure learning and sensorimotor integration work together in a single experiment. We find that when human participants learn a novel motor skill they not only successfully extract structural knowledge from variable data, but they also exploit this structural knowledge for near-optimal sensorimotor integration.
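The reliability-weighted integration at the core of this task has a closed form for a Gaussian prior and Gaussian feedback, sketched below. It is one-dimensional for clarity; the experiment's structural-learning component would additionally adapt the covariance structure across the two displacement dimensions.

```python
def integrate(prior_mu, prior_var, obs_mu, obs_var):
    """Bayesian integration of prior and feedback, weighted by reliability."""
    w = prior_var / (prior_var + obs_var)              # weight given to the observation
    mu = prior_mu + w * (obs_mu - prior_mu)            # posterior mean
    var = prior_var * obs_var / (prior_var + obs_var)  # posterior variance
    return mu, var

mu_eq, var_eq = integrate(0.0, 1.0, 2.0, 1.0)   # equally reliable: split the difference
mu_rel, _     = integrate(0.0, 1.0, 2.0, 0.1)   # reliable feedback dominates the estimate
```

Tuning the variances used in this formula to the environment's actual statistics is exactly the kind of adaptation of the integration process that the study demonstrates.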
Affiliation(s)
- Tim Genewein
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Max Planck Institute for Intelligent Systems, Tübingen, Germany
- Graduate Training Centre of Neuroscience, Tübingen, Germany
- Eduard Hez
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Max Planck Institute for Intelligent Systems, Tübingen, Germany
- University of Tübingen, Tübingen, Germany
- Zeynab Razzaghpanah
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Max Planck Institute for Intelligent Systems, Tübingen, Germany
- University of Tübingen, Tübingen, Germany
- Daniel A. Braun
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Max Planck Institute for Intelligent Systems, Tübingen, Germany
13
Friston K, Schwartenbeck P, FitzGerald T, Moutoussis M, Behrens T, Dolan RJ. The anatomy of choice: dopamine and decision-making. Philos Trans R Soc Lond B Biol Sci 2014; 369:20130481. [PMID: 25267823; PMCID: PMC4186234; DOI: 10.1098/rstb.2013.0481]
Abstract
This paper considers goal-directed decision-making in terms of embodied or active inference. We associate bounded rationality with approximate Bayesian inference that optimizes a free energy bound on model evidence. Several constructs such as expected utility, exploration or novelty bonuses, softmax choice rules and optimism bias emerge as natural consequences of free energy minimization. Previous accounts of active inference have focused on predictive coding. In this paper, we consider variational Bayes as a scheme that the brain might use for approximate Bayesian inference. This scheme provides formal constraints on the computational anatomy of inference and action, which appear to be remarkably consistent with neuroanatomy. Active inference contextualizes optimal decision theory within embodied inference, where goals become prior beliefs. For example, expected utility theory emerges as a special case of free energy minimization, where the sensitivity or inverse temperature (associated with softmax functions and quantal response equilibria) has a unique and Bayes-optimal solution. Crucially, this sensitivity corresponds to the precision of beliefs about behaviour. The changes in precision during variational updates are remarkably reminiscent of empirical dopaminergic responses, and they may provide a new perspective on the role of dopamine in assimilating reward prediction errors to optimize decision-making.
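The role played by precision, the inverse temperature of the softmax choice rule that the paper links to dopaminergic responses, is easy to exhibit numerically. The option values and precision levels below are illustrative only.

```python
import math

def softmax(values, precision):
    """Softmax choice rule; `precision` is the inverse temperature."""
    w = [math.exp(precision * v) for v in values]
    z = sum(w)
    return [wi / z for wi in w]

choice_imprecise = softmax([1.0, 0.0], precision=0.5)  # noisy beliefs: near-random choice
choice_precise   = softmax([1.0, 0.0], precision=8.0)  # confident beliefs: near-deterministic
```

Increasing precision concentrates choice on the higher-valued option, which is the sense in which updates to precision modulate action selection in the scheme described above.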
Affiliation(s)
- Karl Friston
- The Wellcome Trust Centre for Neuroimaging, University College London, 12 Queen Square, London WC1N 3BG, UK
- Philipp Schwartenbeck
- The Wellcome Trust Centre for Neuroimaging, University College London, 12 Queen Square, London WC1N 3BG, UK
- Thomas FitzGerald
- The Wellcome Trust Centre for Neuroimaging, University College London, 12 Queen Square, London WC1N 3BG, UK
- Michael Moutoussis
- The Wellcome Trust Centre for Neuroimaging, University College London, 12 Queen Square, London WC1N 3BG, UK
- Timothy Behrens
- The Wellcome Trust Centre for Neuroimaging, University College London, 12 Queen Square, London WC1N 3BG, UK
- Centre for Functional MRI of the Brain, The John Radcliffe Hospital, Headley Way, Oxford OX3 9DU, UK
- Raymond J Dolan
- The Wellcome Trust Centre for Neuroimaging, University College London, 12 Queen Square, London WC1N 3BG, UK
14
15
Ortega PA, Braun DA. Generalized Thompson sampling for sequential decision-making and causal inference. Complex Adaptive Systems Modeling 2014; 2:2. [DOI: 10.1186/2194-3206-2-2]
Abstract
Purpose
Sampling an action according to the probability that the action is believed to be the optimal one is sometimes called Thompson sampling.
Methods
Although mostly applied to bandit problems, Thompson sampling can also be used to solve sequential adaptive control problems, when the optimal policy is known for each possible environment. The predictive distribution over actions can then be constructed by a Bayesian superposition of the policies weighted by their posterior probability of being optimal.
Results
Here we discuss two important features of this approach. First, we show to what extent such generalized Thompson sampling can be regarded as an optimal strategy under limited information-processing capabilities that constrain the sampling complexity of the decision-making process. Second, we show how such Thompson sampling can be extended to solve causal inference problems when interacting with an environment in a sequential fashion.
Conclusion
In summary, our results suggest that Thompson sampling might not merely be a useful heuristic, but a principled method to address problems of adaptive sequential decision-making and causal inference.
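The core idea, sampling an action according to the posterior probability that it is optimal, is shown below for the simplest case of a Beta-Bernoulli bandit; the generalized sequential version in the paper instead superposes whole policies weighted by their posterior probability of being optimal. The arm probabilities, horizon, and seed are arbitrary.

```python
import random

def thompson_bandit(true_probs, horizon, seed=0):
    """Beta-Bernoulli Thompson sampling: pull the arm whose posterior sample is best."""
    rng = random.Random(seed)
    k = len(true_probs)
    alpha, beta, pulls = [1] * k, [1] * k, [0] * k
    for _ in range(horizon):
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        a = samples.index(max(samples))              # action believed optimal, by sampling
        r = 1 if rng.random() < true_probs[a] else 0
        alpha[a] += r                                # Bayesian posterior update per arm
        beta[a] += 1 - r
        pulls[a] += 1
    return pulls

pulls = thompson_bandit([0.2, 0.8], horizon=500)
```

Randomizing over posterior samples rather than maximizing keeps the sampling complexity of each decision bounded, which is the sense in which the paper treats Thompson sampling as optimal under limited information processing.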
16
Abstract
This paper describes a free energy principle that tries to explain the ability of biological systems to resist a natural tendency to disorder. It appeals to circular causality of the sort found in synergetic formulations of self-organization (e.g., the slaving principle) and models of coupled dynamical systems, using nonlinear Fokker-Planck equations. Here, circular causality is induced by separating the states of a random dynamical system into external and internal states, where external states are subject to random fluctuations and internal states are not. This reduces the problem to finding some (deterministic) dynamics of the internal states that ensure the system visits a limited number of external states; in other words, the measure of its (random) attracting set, or the Shannon entropy of the external states, is small. We motivate a solution using a principle of least action based on variational free energy (from statistical physics) and establish the conditions under which it is formally equivalent to the information bottleneck method. This approach has proved useful in understanding the functional architecture of the brain. The generality of variational free energy minimisation and corresponding information theoretic formulations may speak to interesting applications beyond the neurosciences; e.g., in molecular or evolutionary biology.
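For the simplest Gaussian case, the deterministic dynamics of the internal states reduce to a gradient descent on variational free energy. The sketch below uses an identity generative mapping and illustrative precisions (assumptions for this example, not the paper's general formulation) and shows the internal state settling on the posterior mode.

```python
def descend_free_energy(o, prior_mu, var_o=1.0, var_p=1.0, lr=0.1, steps=200):
    """Internal state mu descends F(mu) = (o - mu)^2/(2 var_o) + (mu - prior)^2/(2 var_p)."""
    mu = prior_mu
    for _ in range(steps):
        dF = (mu - o) / var_o + (mu - prior_mu) / var_p   # gradient of the Gaussian-form F
        mu -= lr * dF                                     # deterministic internal dynamics
    return mu

mu = descend_free_energy(o=2.0, prior_mu=0.0)
```

With equal precisions the fixed point lies halfway between the prior and the observation, i.e. at the posterior mode; shrinking var_o pulls the internal state toward the data, keeping surprise about external states small.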