1. Granwald T, Dayan P, Lengyel M, Guitart-Masip M. A task-invariant prior explains trial-by-trial active avoidance behaviour across gain and loss tasks. Communications Psychology 2025; 3:82. PMID: 40404877; PMCID: PMC12098998; DOI: 10.1038/s44271-025-00254-1.
Abstract
Failing to make decisions that would actively avoid negative outcomes is central to helplessness. In a Bayesian framework, deciding whether to act is informed by beliefs about the world that can be characterised as priors. However, these priors have not been previously quantified. Here we administered two tasks in which 279 participants decided whether to attempt active avoidance actions. In both tasks, participants chose between a passive option that was certain to result in a negative outcome of varying size and a costly active option that offered a probability of avoiding it. The tasks differed in framing and valence, allowing us to test whether the prior generating biases in behaviour is problem-specific or task-independent and general. We performed extensive comparisons of models offering different structural explanations of the data, finding that a Bayesian model with a task-invariant prior for active avoidance provided the best fit to participants' trial-by-trial behaviour. The parameters of this prior were reliable, and participants' self-rated positive affect was weakly correlated with this prior, such that participants with an optimistic prior reported higher levels of positive affect. These results show that individual differences in prior beliefs can explain decisions to engage in active avoidance of negative outcomes, providing evidence for a Bayesian conceptualisation of helplessness.
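To make the choice rule concrete, the following minimal sketch (not the authors' fitted model) shows how a Beta prior over the probability of successful avoidance can be combined with a trial's loss and action cost into an expected-value decision, and how the prior is updated after the outcome; all names and numbers are illustrative.

```python
# Illustrative Beta prior over the probability that an avoidance attempt succeeds.
prior_a, prior_b = 4.0, 2.0          # an "optimistic" prior believes avoidance usually works
loss = 10.0                          # size of the negative outcome if it is not avoided
action_cost = 2.0                    # fixed cost of attempting active avoidance

p_success = prior_a / (prior_a + prior_b)             # prior mean belief in success

ev_passive = -loss                                    # passive option: certain loss
ev_active = -action_cost - (1.0 - p_success) * loss   # active option: pay cost, avoid loss with prob p
choose_active = ev_active > ev_passive
print(f"P(success)={p_success:.2f}, EV(active)={ev_active:.2f}, "
      f"EV(passive)={ev_passive:.2f}, choose active: {choose_active}")

# After observing whether the attempt succeeded, the belief updates by Beta-Bernoulli conjugacy.
succeeded = 1                        # 1 = avoided, 0 = not avoided
prior_a, prior_b = prior_a + succeeded, prior_b + (1 - succeeded)
```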
Affiliation(s)
- Tobias Granwald
- Aging Research Center, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet and Stockholm University, Stockholm, Sweden.
- Center for Cognitive and Computational Neuropsychiatry (CCNP), Karolinska Institutet, Stockholm, Sweden.
- Peter Dayan
- MPI for Biological Cybernetics, Tübingen, Germany
- University of Tübingen, Tübingen, Germany
- Máté Lengyel
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK
- Center for Cognitive Computation, Department of Cognitive Science, Central European University, Budapest, Hungary
- Marc Guitart-Masip
- Aging Research Center, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet and Stockholm University, Stockholm, Sweden.
- Center for Cognitive and Computational Neuropsychiatry (CCNP), Karolinska Institutet, Stockholm, Sweden.
- Center for Psychiatry Research, Region Stockholm, Stockholm, Sweden.
2. Pjanovic V, Zavatone-Veth J, Masset P, Keemink S, Nardin M. Combining Sampling Methods with Attractor Dynamics in Spiking Models of Head-Direction Systems. bioRxiv 2025:2025.02.25.640158. PMID: 40060526; PMCID: PMC11888369; DOI: 10.1101/2025.02.25.640158.
Abstract
Uncertainty is a fundamental aspect of the natural environment, requiring the brain to infer and integrate noisy signals to guide behavior effectively. Sampling-based inference has been proposed as a mechanism for dealing with uncertainty, particularly in early sensory processing. However, it is unclear how to reconcile sampling-based methods with operational principles of higher-order brain areas, such as attractor dynamics of persistent neural representations. In this study, we present a spiking neural network model for the head-direction (HD) system that combines sampling-based inference with attractor dynamics. To achieve this, we derive the required spiking neural network dynamics and interactions to perform sampling from a large family of probability distributions, including variables encoded with Poisson noise. We then propose a method that allows the network to update its estimate of the current head direction by integrating angular velocity samples (derived from noisy inputs) with a pull towards a circular manifold, thereby maintaining consistent attractor dynamics. This model makes specific, testable predictions about the HD system that can be examined in future neurophysiological experiments: it predicts correlated subthreshold voltage fluctuations; distinctive short- and long-term firing correlations among neurons; and characteristic statistics of the movement of the neural activity "bump" representing the head direction. Overall, our approach extends previous theories on probabilistic sampling with spiking neurons, offers a novel perspective on the computations responsible for orientation and navigation, and supports the hypothesis that sampling-based methods can be combined with attractor dynamics to provide a viable framework for studying neural dynamics across the brain.
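As a rough intuition for how velocity sampling and an attractor can coexist, here is a toy continuous-state sketch, not the spiking network derived in the paper: a two-dimensional state is pulled onto a ring (the attractor) while noisy angular-velocity samples rotate the encoded heading; all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

dt, tau = 0.01, 0.1
z = np.array([1.0, 0.0])               # point near the unit circle encodes the heading
true_omega = 1.5                       # underlying angular velocity (rad/s)

for _ in range(2000):
    omega_sample = true_omega + rng.normal(0.0, 0.5)        # noisy velocity sample
    r = np.linalg.norm(z)
    attractor_pull = (1.0 - r) * z / (tau * max(r, 1e-9))   # push the radius back towards 1
    rotation = omega_sample * np.array([-z[1], z[0]])       # rotate along the ring
    diffusion = rng.normal(0.0, 0.05, size=2)               # sampling-like jitter
    z = z + dt * (attractor_pull + rotation) + np.sqrt(dt) * diffusion

print("decoded heading (rad):", float(np.arctan2(z[1], z[0])))
```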
Affiliation(s)
- Vojko Pjanovic
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Department of Machine Learning and Neural Computing, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Netherlands
- Jacob Zavatone-Veth
- Society of Fellows and Center for Brain Science, Harvard University, Cambridge, MA, USA
- Paul Masset
- Department of Psychology, McGill University, Montréal QC, Canada
- Sander Keemink
- Department of Machine Learning and Neural Computing, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Netherlands
- Michele Nardin
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
3. Axenie C. Antifragile control systems in neuronal processing: a sensorimotor perspective. Biological Cybernetics 2025; 119:7. PMID: 39954086; PMCID: PMC11829851; DOI: 10.1007/s00422-025-01003-7.
Abstract
The stability-robustness-resilience-adaptiveness continuum in neuronal processing follows a hierarchical structure that explains interactions and information processing among the different time scales. Interestingly, using "canonical" neuronal computational circuits, such as Homeostatic Activity Regulation, Winner-Take-All, and Hebbian Temporal Correlation Learning, one can extend the behavior spectrum towards antifragility. Already formalized in both probability theory and dynamical systems, antifragility can explain and define the interesting interplay among neural circuits, found, for instance, in sensorimotor control in the face of uncertainty and volatility. This perspective proposes a new framework to analyze and describe closed-loop neuronal processing using principles of antifragility, targeting sensorimotor control. Our objective is twofold. First, we introduce antifragile control as a conceptual framework to quantify closed-loop neuronal network behaviors that gain from uncertainty and volatility. Second, we introduce neuronal network design principles, opening the path to neuromorphic implementations and transfer to technical systems.
Affiliation(s)
- Cristian Axenie
- Department of Computer Science and Center for Artificial Intelligence, Technische Hochschule Nürnberg Georg Simon Ohm, Keßlerplatz 12, 90489, Nuremberg, Germany.
4. Lam NH, Mukherjee A, Wimmer RD, Nassar MR, Chen ZS, Halassa MM. Prefrontal transthalamic uncertainty processing drives flexible switching. Nature 2025; 637:127-136. PMID: 39537928; PMCID: PMC11841214; DOI: 10.1038/s41586-024-08180-8.
Abstract
Making adaptive decisions in complex environments requires appropriately identifying sources of error [1,2]. The frontal cortex is critical for adaptive decisions, but its neurons show mixed selectivity to task features [3] and their uncertainty estimates [4], raising the question of how errors are attributed to their most likely causes. Here, by recording neural responses from tree shrews (Tupaia belangeri) performing a hierarchical decision task with rule reversals, we find that the mediodorsal thalamus independently represents cueing and rule uncertainty. This enables the relevant thalamic population to drive prefrontal reconfiguration following a reversal by appropriately attributing errors to an environmental change. Mechanistic dissection of behavioural switching revealed a transthalamic pathway for cingulate cortical error monitoring [5,6] to reconfigure prefrontal executive control [7]. Overall, our work highlights a potential role for the thalamus in demixing cortical signals while providing a low-dimensional pathway for cortico-cortical communication.
Affiliation(s)
- Norman H Lam
- Department of Neuroscience, Tufts University, Boston, MA, USA
- Ralf D Wimmer
- Department of Neuroscience, Tufts University, Boston, MA, USA
- Matthew R Nassar
- Department of Neuroscience, Brown University, Providence, RI, USA
- Zhe Sage Chen
- Department of Neuroscience and Physiology, Grossman School of Medicine, New York University, New York, NY, USA
- Department of Psychiatry, Grossman School of Medicine, New York University, New York, NY, USA
- Michael M Halassa
- Department of Neuroscience, Tufts University, Boston, MA, USA.
- Department of Psychiatry, Tufts University School of Medicine, Boston, MA, USA.
5. Goodwin I, Hester R, Garrido MI. Temporal stability of Bayesian belief updating in perceptual decision-making. Behav Res Methods 2024; 56:6349-6362. PMID: 38129733; PMCID: PMC11335944; DOI: 10.3758/s13428-023-02306-y.
Abstract
Bayesian inference suggests that perception is inferred from a weighted integration of prior contextual beliefs with current sensory evidence (likelihood) about the world around us. The perceived precision or uncertainty associated with prior and likelihood information is used to guide perceptual decision-making, such that more weight is placed on the source of information with greater precision. This provides a framework for understanding a spectrum of clinical transdiagnostic symptoms associated with aberrant perception, as well as individual differences in the general population. While behavioral paradigms are commonly used to characterize individual differences in perception as a stable characteristic, measurement reliability in these behavioral tasks is rarely assessed. To remedy this gap, we empirically evaluate the reliability of a perceptual decision-making task that quantifies individual differences in Bayesian belief updating in terms of the relative precision weighting afforded to prior and likelihood information (i.e., sensory weight). We analyzed data from participants (n = 37) who performed this task twice. We found that the precision afforded to prior and likelihood information showed high internal consistency and good test-retest reliability (ICC = 0.73, 95% CI [0.53, 0.85]) when averaged across participants, as well as at the individual level using hierarchical modeling. Our results provide support for the assumption that Bayesian belief updating operates as a stable characteristic in perceptual decision-making. We discuss the utility and applicability of reliable perceptual decision-making paradigms as a measure of individual differences in the general population, as well as a diagnostic tool in psychiatric research.
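The quantity being test-retested here is, in essence, a precision weight. A minimal sketch of that computation (illustrative numbers, not the task's stimuli or the paper's hierarchical model):

```python
mu_prior, sigma_prior = 0.0, 2.0      # prior belief about the stimulus
x_obs, sigma_lik = 3.0, 1.0           # noisy sensory observation (likelihood)

prec_prior = 1.0 / sigma_prior**2
prec_lik = 1.0 / sigma_lik**2

sensory_weight = prec_lik / (prec_lik + prec_prior)       # weight on current evidence
mu_post = sensory_weight * x_obs + (1.0 - sensory_weight) * mu_prior
var_post = 1.0 / (prec_lik + prec_prior)

print(f"sensory weight = {sensory_weight:.2f}, posterior = N({mu_post:.2f}, {var_post:.2f})")
```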
Affiliation(s)
- Isabella Goodwin
- Melbourne School of Psychological Sciences, The University of Melbourne, Parkville Campus, Melbourne, Victoria, 3010, Australia.
- Robert Hester
- Melbourne School of Psychological Sciences, The University of Melbourne, Parkville Campus, Melbourne, Victoria, 3010, Australia
- Marta I Garrido
- Melbourne School of Psychological Sciences, The University of Melbourne, Parkville Campus, Melbourne, Victoria, 3010, Australia
- Graeme Clark Institute for Biomedical Engineering, The University of Melbourne, Melbourne, Victoria, Australia
6. Granier A, Petrovici MA, Senn W, Wilmes KA. Confidence and second-order errors in cortical circuits. PNAS Nexus 2024; 3:pgae404. PMID: 39346625; PMCID: PMC11437657; DOI: 10.1093/pnasnexus/pgae404.
Abstract
Minimization of cortical prediction errors has been considered a key computational goal of the cerebral cortex underlying perception, action, and learning. However, it is still unclear how the cortex should form and use information about uncertainty in this process. Here, we formally derive neural dynamics that minimize prediction errors under the assumption that cortical areas must not only predict the activity in other areas and sensory streams but also jointly project their confidence (inverse expected uncertainty) in their predictions. In the resulting neuronal dynamics, the integration of bottom-up and top-down cortical streams is dynamically modulated based on confidence in accordance with the Bayesian principle. Moreover, the theory predicts the existence of cortical second-order errors, comparing confidence and actual performance. These errors are propagated through the cortical hierarchy alongside classical prediction errors and are used to learn the weights of synapses responsible for formulating confidence. We propose a detailed mapping of the theory to cortical circuitry, discuss entailed functional interpretations, and provide potential directions for experimental work.
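A minimal scalar sketch of the two error types (not the derived cortical circuit dynamics): first-order errors correct the prediction, weighted by confidence, and second-order errors compare confidence with actual performance and adapt it; all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

mu = 0.0                 # prediction of lower-level activity
log_prec = 0.0           # log confidence (inverse expected variance of the error)
lr_mu, lr_conf = 0.05, 0.02

for _ in range(5000):
    x = rng.normal(2.0, 1.5)                 # observed lower-level activity
    prec = np.exp(log_prec)
    err1 = x - mu                            # first-order (prediction) error
    mu += lr_mu * prec * err1                # confidence-weighted correction
    err2 = prec * err1**2 - 1.0              # second-order error: positive on average when overconfident
    log_prec -= lr_conf * err2               # reduce confidence when it overstates accuracy

print(f"learned prediction {mu:.2f}, learned error s.d. {np.exp(-0.5 * log_prec):.2f}")
```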
Affiliation(s)
- Arno Granier
- Department of Physiology, University of Bern, Bühlplatz 5, Bern 3012, Switzerland
- Graduate School for Cellular and Biomedical Sciences, University of Bern, Bern, Switzerland
- Mihai A Petrovici
- Department of Physiology, University of Bern, Bühlplatz 5, Bern 3012, Switzerland
- Walter Senn
- Department of Physiology, University of Bern, Bühlplatz 5, Bern 3012, Switzerland
- Katharina A Wilmes
- Department of Physiology, University of Bern, Bühlplatz 5, Bern 3012, Switzerland
7. Kessler F, Frankenstein J, Rothkopf CA. Human navigation strategies and their errors result from dynamic interactions of spatial uncertainties. Nat Commun 2024; 15:5677. PMID: 38971789; PMCID: PMC11227593; DOI: 10.1038/s41467-024-49722-y.
Abstract
Goal-directed navigation requires continuously integrating uncertain self-motion and landmark cues into an internal sense of location and direction, concurrently planning future paths, and sequentially executing motor actions. Here, we provide a unified account of these processes with a computational model of probabilistic path planning in the framework of optimal feedback control under uncertainty. This model gives rise to diverse human navigational strategies previously believed to be distinct behaviors and predicts quantitatively both the errors and the variability of navigation across numerous experiments. The model furthermore explains how sequential egocentric landmark observations form an uncertain allocentric cognitive map and how this internal map is used both in route planning and during the execution of movements, and it reconciles seemingly contradictory results about cue-integration behavior in navigation. Taken together, the present work provides a parsimonious explanation of how patterns of human goal-directed navigation behavior arise from the continuous and dynamic interactions of spatial uncertainties in perception, cognition, and action.
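One ingredient of such an account, the uncertainty-weighted fusion of path integration with landmark fixes, can be sketched with a one-dimensional Kalman filter; noise levels are illustrative, and the paper's model additionally covers planning and feedback control.

```python
import numpy as np

rng = np.random.default_rng(2)

pos_true, pos_est, var_est = 0.0, 0.0, 0.01
q_motion, r_landmark = 0.05, 0.5           # self-motion and landmark noise variances

for t in range(100):
    step = 1.0                                              # intended movement
    pos_true += step + rng.normal(0.0, np.sqrt(q_motion))   # actual movement is noisy
    pos_est += step                                         # dead-reckoning prediction
    var_est += q_motion                                     # uncertainty grows while path integrating

    if t % 10 == 9:                                         # occasional landmark observation
        z = pos_true + rng.normal(0.0, np.sqrt(r_landmark))
        gain = var_est / (var_est + r_landmark)             # relative reliability of the landmark
        pos_est += gain * (z - pos_est)
        var_est *= (1.0 - gain)

print(f"true position {pos_true:.2f}, estimate {pos_est:.2f} +/- {np.sqrt(var_est):.2f}")
```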
Affiliation(s)
- Fabian Kessler
- Centre for Cognitive Science & Institute of Psychology, Technical University of Darmstadt, Darmstadt, Germany.
- Julia Frankenstein
- Centre for Cognitive Science & Institute of Psychology, Technical University of Darmstadt, Darmstadt, Germany
- Constantin A Rothkopf
- Centre for Cognitive Science & Institute of Psychology, Technical University of Darmstadt, Darmstadt, Germany
- Frankfurt Institute for Advanced Studies, Goethe University, Frankfurt, Germany
8. Wolff M, Halassa MM. The mediodorsal thalamus in executive control. Neuron 2024; 112:893-908. PMID: 38295791; DOI: 10.1016/j.neuron.2024.01.002.
Abstract
Executive control, the ability to organize thoughts and action plans in real time, is a defining feature of higher cognition. Classical theories have emphasized cortical contributions to this process, but recent studies have reinvigorated interest in the role of the thalamus. Although it is well established that local thalamic damage diminishes cognitive capacity, such observations have been difficult to inform functional models. Recent progress in experimental techniques is beginning to enrich our understanding of the anatomical, physiological, and computational substrates underlying thalamic engagement in executive control. In this review, we discuss this progress and particularly focus on the mediodorsal thalamus, which regulates the activity within and across frontal cortical areas. We end with a synthesis that highlights frontal thalamocortical interactions in cognitive computations and discusses its functional implications in normal and pathological conditions.
Affiliation(s)
- Mathieu Wolff
- University of Bordeaux, CNRS, INCIA, UMR 5287, 33000 Bordeaux, France.
- Michael M Halassa
- Department of Neuroscience, Tufts University School of Medicine, Boston, MA, USA; Department of Psychiatry, Tufts University School of Medicine, Boston, MA, USA.
9. Peters B, DiCarlo JJ, Gureckis T, Haefner R, Isik L, Tenenbaum J, Konkle T, Naselaris T, Stachenfeld K, Tavares Z, Tsao D, Yildirim I, Kriegeskorte N. How does the primate brain combine generative and discriminative computations in vision? arXiv 2024:arXiv:2401.06005v1. PMID: 38259351; PMCID: PMC10802669.
Abstract
Vision is widely understood as an inference problem. However, two contrasting conceptions of the inference process have each been influential in research on biological vision as well as the engineering of machine vision. The first emphasizes bottom-up signal flow, describing vision as a largely feedforward, discriminative inference process that filters and transforms the visual information to remove irrelevant variation and represent behaviorally relevant information in a format suitable for downstream functions of cognition and behavioral control. In this conception, vision is driven by the sensory data, and perception is direct because the processing proceeds from the data to the latent variables of interest. The notion of "inference" in this conception is that of the engineering literature on neural networks, where feedforward convolutional neural networks processing images are said to perform inference. The alternative conception is that of vision as an inference process in Helmholtz's sense, where the sensory evidence is evaluated in the context of a generative model of the causal processes that give rise to it. In this conception, vision inverts a generative model through an interrogation of the sensory evidence in a process often thought to involve top-down predictions of sensory data to evaluate the likelihood of alternative hypotheses. The authors include scientists rooted, in roughly equal numbers, in each of the two conceptions, who are motivated to overcome what might be a false dichotomy between them and to engage the other perspective in the realm of theory and experiment. The primate brain employs an unknown algorithm that may combine the advantages of both conceptions. We explain and clarify the terminology, review the key empirical evidence, and propose an empirical research program that transcends the dichotomy and sets the stage for revealing the mysterious hybrid algorithm of primate vision.
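The contrast can be made concrete on a toy problem (nothing to do with real vision): for two equal-variance Gaussian feature classes, inverting the generative model with Bayes' rule yields exactly the same posterior as a feedforward discriminative (logistic) readout with matched weights, which is one reason the dichotomy can blur.

```python
import numpy as np

mu0, mu1, sigma, prior1 = -1.0, 1.0, 1.0, 0.5   # illustrative class-conditional model

def generative_posterior(x):
    """Evaluate class likelihoods under the generative model and invert with Bayes' rule."""
    l0 = np.exp(-0.5 * ((x - mu0) / sigma) ** 2)
    l1 = np.exp(-0.5 * ((x - mu1) / sigma) ** 2)
    return prior1 * l1 / (prior1 * l1 + (1 - prior1) * l0)

def discriminative_posterior(x):
    """Direct feedforward mapping: a logistic unit with analytically matched weights."""
    w = (mu1 - mu0) / sigma**2
    b = (mu0**2 - mu1**2) / (2 * sigma**2) + np.log(prior1 / (1 - prior1))
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

x = np.linspace(-3.0, 3.0, 13)
print(np.allclose(generative_posterior(x), discriminative_posterior(x)))   # True
```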
Affiliation(s)
- Benjamin Peters
- Zuckerman Mind Brain Behavior Institute, Columbia University
- School of Psychology & Neuroscience, University of Glasgow
- James J DiCarlo
- Department of Brain and Cognitive Sciences, MIT
- McGovern Institute for Brain Research, MIT
- NSF Center for Brains, Minds and Machines, MIT
- Quest for Intelligence, Schwarzman College of Computing, MIT
- Ralf Haefner
- Brain and Cognitive Sciences, University of Rochester
- Center for Visual Science, University of Rochester
- Leyla Isik
- Department of Cognitive Science, Johns Hopkins University
- Joshua Tenenbaum
- Department of Brain and Cognitive Sciences, MIT
- NSF Center for Brains, Minds and Machines, MIT
- Computer Science and Artificial Intelligence Laboratory, MIT
- Talia Konkle
- Department of Psychology, Harvard University
- Center for Brain Science, Harvard University
- Kempner Institute for Natural and Artificial Intelligence, Harvard University
- Zenna Tavares
- Zuckerman Mind Brain Behavior Institute, Columbia University
- Data Science Institute, Columbia University
- Doris Tsao
- Dept of Molecular & Cell Biology, University of California Berkeley
- Howard Hughes Medical Institute
- Ilker Yildirim
- Department of Psychology, Yale University
- Department of Statistics and Data Science, Yale University
- Nikolaus Kriegeskorte
- Zuckerman Mind Brain Behavior Institute, Columbia University
- Department of Psychology, Columbia University
- Department of Neuroscience, Columbia University
- Department of Electrical Engineering, Columbia University
10. Lin CHS, Do TT, Unsworth L, Garrido MI. Are we really Bayesian? Probabilistic inference shows sub-optimal knowledge transfer. PLoS Comput Biol 2024; 20:e1011769. PMID: 38190413; PMCID: PMC10798629; DOI: 10.1371/journal.pcbi.1011769.
Abstract
Numerous studies have found that the Bayesian framework, which formulates the optimal integration of the knowledge of the world (i.e. prior) and current sensory evidence (i.e. likelihood), captures human behaviours sufficiently well. However, there are debates regarding whether humans use precise but cognitively demanding Bayesian computations for behaviours. Across two studies, we trained participants to estimate hidden locations of a target drawn from priors with different levels of uncertainty. In each trial, scattered dots provided noisy likelihood information about the target location. Participants learned the priors and combined prior and likelihood information to infer target locations in a Bayesian fashion. We then introduced a transfer condition presenting a trained prior and a likelihood that had never been paired with that prior during training. How well participants integrate this novel likelihood with their learned prior is an indicator of whether participants perform Bayesian computations. In one study, participants experienced the newly introduced likelihood, which was paired with a different prior, during training. Participants changed their likelihood weighting in the expected directions, although the degrees of change were significantly lower than Bayes-optimal predictions. In the other study, the novel likelihoods were never used during training. We found that people integrated a new likelihood within the range of their previous learning experience (interpolation) better than one outside it (extrapolation), and that they were quantitatively Bayes-suboptimal in both. We replicated the findings of both studies in a validation dataset. Our results showed that Bayesian behaviours may not always be achieved by a full Bayesian computation. Future studies can apply our approach to different tasks to enhance the understanding of decision-making mechanisms.
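The logic of the transfer test can be sketched as follows (hypothetical numbers, not the study's design or data): the Bayes-optimal weight on the likelihood follows from the prior and likelihood variances, and a participant's effective weight can be recovered from the slope of their estimates on the dot-cloud centroids.

```python
import numpy as np

rng = np.random.default_rng(3)

mu_prior, sigma_prior = 0.0, 2.0
sigma_lik_new = 0.5                           # a likelihood width never paired with this prior
w_optimal = (1 / sigma_lik_new**2) / (1 / sigma_lik_new**2 + 1 / sigma_prior**2)

w_used = 0.75                                 # hypothetical sub-optimal weighting at transfer
targets = rng.normal(mu_prior, sigma_prior, size=500)
centroids = targets + rng.normal(0.0, sigma_lik_new, size=500)
estimates = w_used * centroids + (1 - w_used) * mu_prior + rng.normal(0.0, 0.1, size=500)

w_recovered = np.polyfit(centroids - mu_prior, estimates - mu_prior, 1)[0]
print(f"Bayes-optimal weight {w_optimal:.2f}, recovered weight {w_recovered:.2f}")
```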
Affiliation(s)
- Chin-Hsuan Sophie Lin
- Melbourne School of Psychological Sciences, The University of Melbourne, Melbourne, Australia
- Trang Thuy Do
- Melbourne School of Psychological Sciences, The University of Melbourne, Melbourne, Australia
- Lee Unsworth
- Melbourne School of Psychological Sciences, The University of Melbourne, Melbourne, Australia
- Marta I. Garrido
- Melbourne School of Psychological Sciences, The University of Melbourne, Melbourne, Australia
- Graeme Clark Institute for Biomedical Engineering, The University of Melbourne, Melbourne, Australia
11. Lange RD, Shivkumar S, Chattoraj A, Haefner RM. Bayesian encoding and decoding as distinct perspectives on neural coding. Nat Neurosci 2023; 26:2063-2072. PMID: 37996525; PMCID: PMC11003438; DOI: 10.1038/s41593-023-01458-6.
Abstract
The Bayesian brain hypothesis is one of the most influential ideas in neuroscience. However, unstated differences in how Bayesian ideas are operationalized make it difficult to draw general conclusions about how Bayesian computations map onto neural circuits. Here, we identify one such unstated difference: some theories ask how neural circuits could recover information about the world from sensory neural activity (Bayesian decoding), whereas others ask how neural circuits could implement inference in an internal model (Bayesian encoding). These two approaches require profoundly different assumptions and lead to different interpretations of empirical data. We contrast them in terms of motivations, empirical support and relationship to neural data. We also use a simple model to argue that encoding and decoding models are complementary rather than competing. Appreciating the distinction between Bayesian encoding and Bayesian decoding will help to organize future work and enable stronger empirical tests about the nature of inference in the brain.
Affiliation(s)
- Richard D Lange
- Department of Neurobiology, University of Pennsylvania, Philadelphia, PA, USA.
- Department of Computer Science, Rochester Institute of Technology, Rochester, NY, USA.
- Sabyasachi Shivkumar
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Ankani Chattoraj
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Ralf M Haefner
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
12. Walker EY, Pohl S, Denison RN, Barack DL, Lee J, Block N, Ma WJ, Meyniel F. Studying the neural representations of uncertainty. Nat Neurosci 2023; 26:1857-1867. PMID: 37814025; DOI: 10.1038/s41593-023-01444-y.
Abstract
The study of the brain's representations of uncertainty is a central topic in neuroscience. Unlike most quantities whose neural representations are studied, uncertainty is a property of an observer's beliefs about the world, which poses specific methodological challenges. We analyze how the literature on the neural representations of uncertainty addresses those challenges and distinguish between 'code-driven' and 'correlational' approaches. Code-driven approaches make assumptions about the neural code for representing world states and the associated uncertainty. By contrast, correlational approaches search for relationships between uncertainty and neural activity without constraints on the neural representation of the world state that this uncertainty accompanies. To compare these two approaches, we apply several criteria for neural representations: sensitivity, specificity, invariance and functionality. Our analysis reveals that the two approaches lead to different but complementary findings, shaping new research questions and guiding future experiments.
Affiliation(s)
- Edgar Y Walker
- Department of Physiology and Biophysics, Computational Neuroscience Center, University of Washington, Seattle, WA, USA
- Stephan Pohl
- Department of Philosophy, New York University, New York, NY, USA
- Rachel N Denison
- Department of Psychological & Brain Sciences, Boston University, Boston, MA, USA
- David L Barack
- Department of Neuroscience, University of Pennsylvania, Philadelphia, PA, USA
- Department of Philosophy, University of Pennsylvania, Philadelphia, PA, USA
- Jennifer Lee
- Center for Neural Science, New York University, New York, NY, USA
- Ned Block
- Department of Philosophy, New York University, New York, NY, USA
- Wei Ji Ma
- Center for Neural Science, New York University, New York, NY, USA
- Department of Psychology, New York University, New York, NY, USA
- Florent Meyniel
- Cognitive Neuroimaging Unit, INSERM, CEA, CNRS, Université Paris-Saclay, NeuroSpin center, Gif-sur-Yvette, France.
13. Maes A, Barahona M, Clopath C. Long- and short-term history effects in a spiking network model of statistical learning. Sci Rep 2023; 13:12939. PMID: 37558704; PMCID: PMC10412617; DOI: 10.1038/s41598-023-39108-3.
Abstract
The statistical structure of the environment is often important when making decisions. There are multiple theories of how the brain represents statistical structure. One such theory states that neural activity spontaneously samples from probability distributions. In other words, the network spends more time in states which encode high-probability stimuli. Starting from the neural assembly, increasingly thought to be the building block of computation in the brain, we focus on how arbitrary prior knowledge about the external world can both be learned and spontaneously recollected. We present a model based upon learning the inverse of the cumulative distribution function. Learning is entirely unsupervised using biophysical neurons and biologically plausible learning rules. We show how this prior knowledge can then be accessed to compute expectations and signal surprise in downstream networks. Sensory history effects emerge from the model as a consequence of ongoing learning.
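The computational principle the network is trained to implement, sampling by pushing uniform noise through a learned inverse cumulative distribution function, looks like this in a few lines (a conventional sketch, not the spiking circuit or its plasticity rules):

```python
import numpy as np

rng = np.random.default_rng(4)

stimuli = rng.normal(1.0, 0.5, size=5000)        # stimulus values experienced during learning
quantiles = np.linspace(0.0, 1.0, 101)
inv_cdf = np.quantile(stimuli, quantiles)         # learned lookup table for F^-1

u = rng.uniform(0.0, 1.0, size=5000)              # spontaneous, uniformly distributed drive
samples = np.interp(u, quantiles, inv_cdf)        # spontaneous samples match the learned statistics

print("stimulus mean/std:", round(stimuli.mean(), 2), round(stimuli.std(), 2))
print("sampled  mean/std:", round(samples.mean(), 2), round(samples.std(), 2))
```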
Affiliation(s)
- Amadeus Maes
- Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, USA.
- Department of Bioengineering, Imperial College London, London, UK.
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London, UK
14. Gallinaro JV, Scholl B, Clopath C. Synaptic weights that correlate with presynaptic selectivity increase decoding performance. PLoS Comput Biol 2023; 19:e1011362. PMID: 37549193; PMCID: PMC10434873; DOI: 10.1371/journal.pcbi.1011362.
Abstract
The activity of neurons in the visual cortex is often characterized by tuning curves, which are thought to be shaped by Hebbian plasticity during development and sensory experience. This leads to the prediction that neural circuits should be organized such that neurons with similar functional preference are connected with stronger weights. In support of this idea, previous experimental and theoretical work has provided evidence for a model of the visual cortex characterized by such functional subnetworks. A recent experimental study, however, has found that the postsynaptic preferred stimulus was defined by the total number of spines activated by a given stimulus and independent of their individual strength. While this result might seem to contradict previous literature, there are many factors that define how a given synaptic input influences postsynaptic selectivity. Here, we designed a computational model in which postsynaptic functional preference is defined by the number of inputs activated by a given stimulus. Using a plasticity rule in which synaptic weights tend to correlate with presynaptic selectivity and are independent of functional similarity between pre- and postsynaptic activity, we find that this model can be used to decode presented stimuli in a manner that is comparable to maximum likelihood inference.
Affiliation(s)
- Júlia V. Gallinaro
- Bioengineering Department, Imperial College London, London, United Kingdom
- Benjamin Scholl
- Department of Neuroscience, Perelman School of Medicine, University of Pennsylvania, Philadephia, Pennsylvania, United States of America
- Claudia Clopath
- Bioengineering Department, Imperial College London, London, United Kingdom
15. Bounmy T, Eger E, Meyniel F. A characterization of the neural representation of confidence during probabilistic learning. Neuroimage 2023; 268:119849. PMID: 36640947; DOI: 10.1016/j.neuroimage.2022.119849.
Abstract
Learning in a stochastic and changing environment is a difficult task. Models of learning typically postulate that observations that deviate from the learned predictions are surprising and used to update those predictions. Bayesian accounts further posit the existence of a confidence-weighting mechanism: learning should be modulated by the confidence level that accompanies those predictions. However, the neural bases of this confidence are much less known than those of surprise. Here, we used a dynamic probability learning task and high-field MRI to identify putative cortical regions involved in the representation of confidence about predictions during human learning. We devised a stringent test based on the conjunction of four criteria. We localized several regions in parietal and frontal cortices whose activity is sensitive to the confidence of an ideal observer, specifically so with respect to potential confounds (surprise and predictability), and in a way that is invariant to which item is predicted. We also tested for functionality in two ways. First, we localized regions whose activity patterns at the subject level showed an effect of both confidence and surprise in qualitative agreement with the confidence-weighting principle. Second, we found neural representations of ideal confidence that also accounted for subjective confidence. Taken together, those results identify a set of cortical regions potentially implicated in the confidence-weighting of learning.
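The ideal-observer quantities at stake can be sketched with a leaky Beta-Bernoulli observer (an illustrative approximation, not the exact model used in the paper): confidence is the precision of the posterior over the hidden probability, and surprise is the negative log predicted probability of each observation.

```python
import numpy as np

rng = np.random.default_rng(5)

a, b, leak = 1.0, 1.0, 0.98             # Beta pseudo-counts with forgetting (for a changing world)
p_hidden = 0.75

for t in range(300):
    x = rng.random() < p_hidden                     # binary observation
    p_pred = a / (a + b)                            # predicted probability of observing a 1
    surprise = -np.log(p_pred if x else 1 - p_pred)
    var = a * b / ((a + b) ** 2 * (a + b + 1))      # variance of the Beta posterior
    confidence = 1.0 / var                          # precision of the current belief
    a, b = leak * a + x, leak * b + (1 - x)         # leaky conjugate update

print(f"belief {a / (a + b):.2f}, confidence {confidence:.1f}, last surprise {surprise:.2f}")
```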
Affiliation(s)
- Tiffany Bounmy
- Cognitive Neuroimaging Unit, CEA DRF/Joliot, INSERM, Université Paris-Saclay, NeuroSpin Center, Gif-sur-Yvette, France; Université de Paris, Paris, France.
- Evelyn Eger
- Cognitive Neuroimaging Unit, CEA DRF/Joliot, INSERM, Université Paris-Saclay, NeuroSpin Center, Gif-sur-Yvette, France
- Florent Meyniel
- Cognitive Neuroimaging Unit, CEA DRF/Joliot, INSERM, Université Paris-Saclay, NeuroSpin Center, Gif-sur-Yvette, France.
16. Blank H, Alink A, Büchel C. Multivariate functional neuroimaging analyses reveal that strength-dependent face expectations are represented in higher-level face-identity areas. Commun Biol 2023; 6:135. PMID: 36725984; PMCID: PMC9892564; DOI: 10.1038/s42003-023-04508-8.
Abstract
Perception is an active inference in which prior expectations are combined with sensory input. It is still unclear how the strength of prior expectations is represented in the human brain. The strength, or precision, of a prior could be represented with its content, potentially in higher-level sensory areas. We used multivariate analyses of functional magnetic resonance imaging data to test whether expectation strength is represented together with the expected face in high-level face-sensitive regions. Participants were trained to associate images of scenes with subsequently presented images of different faces. Each scene predicted three faces, each with either low, intermediate, or high probability. We found that anticipation enhances the similarity of response patterns in the face-sensitive anterior temporal lobe to response patterns specifically associated with the image of the expected face. In contrast, during face presentation, activity increased for unexpected faces in a typical prediction error network, containing areas such as the caudate and the insula. Our findings show that strength-dependent face expectations are represented in higher-level face-identity areas, supporting hierarchical theories of predictive processing according to which higher-level sensory regions represent weighted priors.
Affiliation(s)
- Helen Blank
- Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
- Arjen Alink
- Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
- Christian Büchel
- Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
17. Heald JB, Lengyel M, Wolpert DM. Contextual inference in learning and memory. Trends Cogn Sci 2023; 27:43-64. PMID: 36435674; PMCID: PMC9789331; DOI: 10.1016/j.tics.2022.10.004.
Abstract
Context is widely regarded as a major determinant of learning and memory across numerous domains, including classical and instrumental conditioning, episodic memory, economic decision-making, and motor learning. However, studies across these domains remain disconnected due to the lack of a unifying framework formalizing the concept of context and its role in learning. Here, we develop a unified vernacular allowing direct comparisons between different domains of contextual learning. This leads to a Bayesian model positing that context is unobserved and needs to be inferred. Contextual inference then controls the creation, expression, and updating of memories. This theoretical approach reveals two distinct components that underlie adaptation, proper and apparent learning, respectively referring to the creation and updating of memories versus time-varying adjustments in their expression. We review a number of extensions of the basic Bayesian model that allow it to account for increasingly complex forms of contextual learning.
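The core mechanism, inferring which context is active and gating both the updating and the expression of memories by that inference, can be sketched as follows (a deliberately stripped-down toy, not the full model reviewed here; all constants are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

memories = np.array([1.0, -1.0])       # one memory per context, initialised slightly apart
context_prior = np.array([0.5, 0.5])
obs_noise, lr = 1.0, 0.2
perturbation = {0: 5.0, 1: -5.0}       # what the environment actually does in each context

for t in range(200):
    ctx = 0 if t < 100 else 1                                   # unsignalled context switch
    y = perturbation[ctx] + rng.normal(0.0, obs_noise)
    lik = np.exp(-0.5 * ((y - memories) / obs_noise) ** 2)      # how well each memory predicts y
    resp = context_prior * lik
    resp /= resp.sum()                                          # inferred context responsibilities
    memories += lr * resp * (y - memories)                      # proper learning, gated by responsibility
    output = resp @ memories                                    # apparent learning: weighted expression

print("memories:", memories.round(2), "final responsibilities:", resp.round(2))
```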
Affiliation(s)
- James B Heald
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA.
- Máté Lengyel
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK; Center for Cognitive Computation, Department of Cognitive Science, Central European University, Budapest, Hungary.
- Daniel M Wolpert
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK.
18. Efficient coding theory of dynamic attentional modulation. PLoS Biol 2022; 20:e3001889. PMID: 36542662; PMCID: PMC9831638; DOI: 10.1371/journal.pbio.3001889.
Abstract
Activity of sensory neurons is driven not only by external stimuli but also by feedback signals from higher brain areas. Attention is one particularly important internal signal whose presumed role is to modulate sensory representations such that they only encode information currently relevant to the organism at minimal cost. This hypothesis has, however, not yet been expressed in a normative computational framework. Here, by building on normative principles of probabilistic inference and efficient coding, we developed a model of dynamic population coding in the visual cortex. By continuously adapting the sensory code to changing demands of the perceptual observer, an attention-like modulation emerges. This modulation can dramatically reduce the amount of neural activity without deteriorating the accuracy of task-specific inferences. Our results suggest that a range of seemingly disparate cortical phenomena such as intrinsic gain modulation, attention-related tuning modulation, and response variability could be manifestations of the same underlying principles, which combine efficient sensory coding with optimal probabilistic inference in dynamic environments.
19. Mancini F, Zhang S, Seymour B. Computational and neural mechanisms of statistical pain learning. Nat Commun 2022; 13:6613. PMID: 36329014; PMCID: PMC9633765; DOI: 10.1038/s41467-022-34283-9.
Abstract
Pain invariably changes over time. These fluctuations contain statistical regularities which, in theory, could be learned by the brain to generate expectations and control responses. We demonstrate that humans learn to extract these regularities and explicitly predict the likelihood of forthcoming pain intensities in a manner consistent with optimal Bayesian inference with dynamic update of beliefs. Healthy participants received probabilistic, volatile sequences of low and high-intensity electrical stimuli to the hand during brain fMRI. The inferred frequency of pain correlated with activity in sensorimotor cortical regions and dorsal striatum, whereas the uncertainty of these inferences was encoded in the right superior parietal cortex. Unexpected changes in stimulus frequencies drove the update of internal models by engaging premotor, prefrontal and posterior parietal regions. This study extends our understanding of sensory processing of pain to include the generation of Bayesian internal models of the temporal statistics of pain.
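A minimal sketch of the kind of inference involved (not the model fitted to the fMRI data): the hidden frequency of high-intensity stimuli is tracked on a grid, beliefs are diffused slightly each trial to allow for change, and the posterior width stands in for inference uncertainty; the volatility value and change point are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

grid = np.linspace(0.01, 0.99, 99)           # candidate values of p(high-intensity stimulus)
belief = np.ones_like(grid) / grid.size      # uniform initial belief
volatility = 0.02                            # per-trial probability that p jumps

p_true = 0.2
for t in range(400):
    if t == 200:
        p_true = 0.8                                              # unsignalled change point
    x = rng.random() < p_true                                     # high (1) or low (0) intensity
    belief = (1 - volatility) * belief + volatility / grid.size   # allow for change
    belief *= grid if x else (1 - grid)                           # Bernoulli likelihood
    belief /= belief.sum()

mean = belief @ grid
sd = np.sqrt(belief @ (grid - mean) ** 2)
print(f"inferred p(high) = {mean:.2f} +/- {sd:.2f}")
```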
Affiliation(s)
- Flavia Mancini
- Department of Engineering, University of Cambridge, Trumpington Street, Cambridge, CB2 1PZ, UK.
- Suyi Zhang
- Wellcome Centre for Integrative Neuroimaging, John Radcliffe Hospital, Headington, Oxford, OX3 9DU, UK
- Ben Seymour
- Wellcome Centre for Integrative Neuroimaging, John Radcliffe Hospital, Headington, Oxford, OX3 9DU, UK
- Center for Information and Neural Networks (CiNet), 1-4 Yamadaoka, Suita City, Osaka, 565-0871, Japan
20.
Abstract
Vision and learning have long been considered to be two areas of research linked only distantly. However, recent developments in vision research have changed the conceptual definition of vision from a signal-evaluating process to a goal-oriented interpreting process, and this shift binds learning, together with the resulting internal representations, intimately to vision. In this review, we consider various types of learning (perceptual, statistical, and rule/abstract) associated with vision in the past decades and argue that they represent differently specialized versions of the fundamental learning process, which must be captured in its entirety when applied to complex visual processes. We show why the generalized version of statistical learning can provide the appropriate setup for such a unified treatment of learning in vision, what computational framework best accommodates this kind of statistical learning, and what plausible neural scheme could feasibly implement this framework. Finally, we list the challenges that the field of statistical learning faces in fulfilling the promise of being the right vehicle for advancing our understanding of vision in its entirety. Expected final online publication date for the Annual Review of Vision Science, Volume 8 is September 2022. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
Affiliation(s)
- József Fiser
- Department of Cognitive Science, Center for Cognitive Computation, Central European University, Vienna 1100, Austria;
- Gábor Lengyel
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, New York 14627, USA
21.
Abstract
Recent breakthroughs in artificial intelligence (AI) have enabled machines to plan in tasks previously thought to be uniquely human. Meanwhile, the planning algorithms implemented by the brain itself remain largely unknown. Here, we review neural and behavioral data in sequential decision-making tasks that elucidate the ways in which the brain does, and does not, plan. To systematically review available biological data, we create a taxonomy of planning algorithms by summarizing the relevant design choices for such algorithms in AI. Across species, recording techniques, and task paradigms, we find converging evidence that the brain represents future states consistent with a class of planning algorithms within our taxonomy: focused, depth-limited, and serial. However, we argue that current data are insufficient for addressing more detailed algorithmic questions. We propose a new approach leveraging AI advances to drive experiments that can adjudicate between competing candidate algorithms.
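To illustrate the class of algorithms singled out by the data (focused, depth-limited, serial search), here is a toy planner on a hand-made deterministic graph; the environment, rewards, and depth are purely illustrative.

```python
# Toy environment: states, immediate rewards, and deterministic transitions.
rewards = {"A": 0, "B": 0, "C": 1, "D": 10, "E": 2, "F": 3}
edges = {"A": ["B", "C"], "B": ["D"], "C": ["E", "F"], "D": [], "E": [], "F": []}

def plan(state, depth, discount=0.9):
    """Serially expand one successor at a time, no deeper than `depth`."""
    if depth == 0 or not edges[state]:
        return rewards[state], [state]
    best_value, best_path = float("-inf"), [state]
    for nxt in edges[state]:                          # serial: one branch after another
        value, path = plan(nxt, depth - 1, discount)  # depth-limited recursion
        value = rewards[state] + discount * value
        if value > best_value:
            best_value, best_path = value, [state] + path
    return best_value, best_path

print(plan("A", 1))   # shallow search is lured by C's immediate reward and picks ['A', 'C']
print(plan("A", 2))   # one level deeper routes through B to the distal reward: ['A', 'B', 'D']
```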
22. Matsumoto M, Abe H, Tanaka K, Matsumoto K. Different types of uncertainty distinguished by monkey prefrontal neurons. Cereb Cortex Commun 2022; 3:tgac002. PMID: 35169710; PMCID: PMC8842276; DOI: 10.1093/texcom/tgac002.
Abstract
To adapt one's behavior, in a timely manner, to an environment that changes in many different aspects, one must be sensitive to uncertainty about each aspect of the environment. Although the medial prefrontal cortex has been implicated in the representation and reduction of a variety of uncertainties, it is unknown whether different types of uncertainty are distinguished by distinct neuronal populations. To investigate how the prefrontal cortex distinguishes between different types of uncertainty, we recorded neuronal activities from the medial and lateral prefrontal cortices of monkeys performing a visual feedback-based action-learning task in which uncertainty of coming feedback and that of context change varied asynchronously. We found that the activities of two groups of prefrontal cells represented the two different types of uncertainty. These results suggest that different types of uncertainty are represented by distinct neural populations in the prefrontal cortex.
Affiliation(s)
- Madoka Matsumoto
- Department of Preventive Intervention for Psychiatric Disorders, National Institute of Mental Health, National Center of Neurology and Psychiatry, 4-1-1 Ogawahigashi, Kodaira, Tokyo 187-8553, Japan
- Brain Science Institute, Tamagawa University, 6-1-1 Tamagawa-gakuen, Machida, Tokyo 194-8610, Japan
- Laboratory for Molecular Analysis of Higher Brain Function, Center for Brain Science, RIKEN, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan
- Laboratory for Cognitive Brain Mapping, Center for Brain Science, RIKEN, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan
- Hiroshi Abe
- Laboratory for Molecular Analysis of Higher Brain Function, Center for Brain Science, RIKEN, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan
- Laboratory for Cognitive Brain Mapping, Center for Brain Science, RIKEN, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan
- Keiji Tanaka
- Laboratory for Cognitive Brain Mapping, Center for Brain Science, RIKEN, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan
- Kenji Matsumoto
- Brain Science Institute, Tamagawa University, 6-1-1 Tamagawa-gakuen, Machida, Tokyo 194-8610, Japan
- Laboratory for Cognitive Brain Mapping, Center for Brain Science, RIKEN, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan
23. Masset P, Zavatone-Veth JA, Connor JP, Murthy VN, Pehlevan C. Natural gradient enables fast sampling in spiking neural networks. Advances in Neural Information Processing Systems 2022; 35:22018-22034. PMID: 37476623; PMCID: PMC10358281.
Abstract
For animals to navigate an uncertain world, their brains need to estimate uncertainty at the timescales of sensations and actions. Sampling-based algorithms afford a theoretically grounded framework for probabilistic inference in neural circuits, but it remains unknown how one can implement fast sampling algorithms in biologically plausible spiking networks. Here, we propose to leverage the population geometry, controlled by the neural code and the neural dynamics, to implement fast samplers in spiking neural networks. We first show that two classes of spiking samplers (efficient balanced spiking networks that simulate Langevin sampling, and networks with probabilistic spike rules that implement Metropolis-Hastings sampling) can be unified within a common framework. We then show that careful choice of population geometry, corresponding to the natural space of parameters, enables rapid inference of parameters drawn from strongly correlated high-dimensional distributions in both networks. Our results suggest design principles for algorithms for sampling-based probabilistic inference in spiking neural networks, yielding potential inspiration for neuromorphic computing and testable predictions for neurobiology.
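The benefit of choosing the right geometry can be seen already in a plain (non-spiking) simulation; this is only a caricature of the paper's point, with illustrative step size, covariance, and chain length: a Langevin sampler targeting a strongly correlated Gaussian decorrelates slowly, while preconditioning the dynamics with the target covariance (a natural-gradient-like choice) speeds mixing at the same step size.

```python
import numpy as np

rng = np.random.default_rng(8)

Sigma = np.array([[1.0, 0.95], [0.95, 1.0]])     # strongly correlated 2D Gaussian target
Sigma_inv = np.linalg.inv(Sigma)
L = np.linalg.cholesky(Sigma)
dt, n_steps, lag = 0.05, 20000, 100

def autocorrelation(trace, lag):
    trace = trace - trace.mean()
    return float(np.dot(trace[:-lag], trace[lag:]) / np.dot(trace, trace))

x_plain, x_nat = np.zeros(2), np.zeros(2)
trace_plain, trace_nat = [], []
for _ in range(n_steps):
    # standard Langevin: drift -0.5 * Sigma^-1 x with isotropic noise
    x_plain = x_plain - 0.5 * dt * (Sigma_inv @ x_plain) + np.sqrt(dt) * rng.normal(size=2)
    # covariance-preconditioned Langevin: isotropic drift -0.5 x with correlated noise
    x_nat = x_nat - 0.5 * dt * x_nat + np.sqrt(dt) * (L @ rng.normal(size=2))
    trace_plain.append(x_plain[0])
    trace_nat.append(x_nat[0])

print("lag-100 autocorrelation, plain:         ", round(autocorrelation(np.array(trace_plain), lag), 2))
print("lag-100 autocorrelation, preconditioned:", round(autocorrelation(np.array(trace_nat), lag), 2))
```

Both chains leave the same target distribution (up to discretization error); only how quickly successive samples decorrelate differs.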
Affiliation(s)
- Paul Masset
- Center for Brain Science, Harvard University Cambridge, MA 02138
- Department of Molecular and Cellular Biology, Harvard University Cambridge, MA 02138
- Jacob A Zavatone-Veth
- Center for Brain Science, Harvard University Cambridge, MA 02138
- Department of Physics, Harvard University Cambridge, MA 02138
- J Patrick Connor
- John A. Paulson School of Engineering and Applied Sciences, Harvard University Cambridge, MA 02138
- Venkatesh N Murthy
- Center for Brain Science, Harvard University Cambridge, MA 02138
- Department of Molecular and Cellular Biology, Harvard University Cambridge, MA 02138
- Cengiz Pehlevan
- Center for Brain Science, Harvard University Cambridge, MA 02138
- John A. Paulson School of Engineering and Applied Sciences, Harvard University Cambridge, MA 02138