1
Lakshminarasimhan K, Buck J, Kellendonk C, Horga G. A corticostriatal learning mechanism linking excess striatal dopamine and auditory hallucinations. bioRxiv 2025:2025.03.18.643990. [PMID: 40166304 PMCID: PMC11956939 DOI: 10.1101/2025.03.18.643990]
Abstract
Auditory hallucinations are linked to elevated striatal dopamine, but their underlying computational mechanisms have been obscured by regional heterogeneity in striatal dopamine signaling. To address this, we developed a normative circuit model in which corticostriatal plasticity in the ventral striatum is modulated by reward prediction errors to drive reinforcement learning, while plasticity in the sensory-dorsal striatum is modulated by sensory prediction errors derived from internal beliefs to drive self-supervised learning. We then validated the key predictions of this model using dopamine recordings across striatal regions in mice, as well as human behavior in a hybrid learning task. Finally, we found that changes in learning resulting from optogenetic stimulation of the sensory striatum in mice, and individual variability in hallucination proneness in humans, are best explained by selectively enhancing dopamine levels in the model's sensory striatum. These findings identify plasticity mechanisms underlying biased learning of sensory expectations as a biologically plausible link between excess dopamine and hallucinations.
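The dual-error learning scheme this abstract describes can be illustrated with a deliberately minimal sketch (not the paper's circuit model: the scalar weights, learning rate, and additive `excess_da` term are illustrative assumptions). Ventral-striatal weights follow a reward prediction error, sensory-striatal weights follow a belief-derived sensory prediction error, and tonically elevated dopamine in the sensory channel biases the learned sensory expectation upward:

```python
def run_trials(n_trials, reward, belief, excess_da=0.0, alpha=0.2):
    """Toy dual-pathway learner: w_v tracks reward via a reward
    prediction error (RPE); w_s tracks the internal belief via a
    sensory prediction error (SPE). excess_da adds a tonic dopamine
    offset to the SPE channel only (an illustrative assumption)."""
    w_v, w_s = 0.0, 0.0
    for _ in range(n_trials):
        w_v += alpha * (reward - w_v)                # RPE-driven update
        w_s += alpha * (belief - w_s + excess_da)    # SPE + excess dopamine
    return w_v, w_s

# With excess sensory-striatal dopamine, the sensory expectation w_s
# converges above the true belief (0.5 + 0.3): a hallucination-like bias.
w_v, w_s_normal = run_trials(200, reward=1.0, belief=0.5)
_, w_s_excess = run_trials(200, reward=1.0, belief=0.5, excess_da=0.3)
```

In this simplified form, excess dopamine shifts the fixed point of the sensory pathway rather than merely speeding learning, which is one way biased sensory expectations could arise.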
Affiliation(s)
- Kaushik Lakshminarasimhan
- Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY, USA
- Justin Buck
- Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY, USA
- Department of Psychiatry, Columbia University, New York, NY, USA
- Christoph Kellendonk
- Department of Psychiatry, Columbia University, New York, NY, USA
- Department of Molecular Pharmacology and Therapeutics, Columbia University, New York, NY, USA
- Guillermo Horga
- Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY, USA
- Department of Psychiatry, Columbia University, New York, NY, USA
2
Rubinstein JF, Singh M, Kowler E. Bayesian approaches to smooth pursuit of random dot kinematograms: effects of varying RDK noise and the predictability of RDK direction. J Neurophysiol 2024; 131:394-416. [PMID: 38149327 PMCID: PMC11551001 DOI: 10.1152/jn.00116.2023]
Abstract
Smooth pursuit eye movements respond on the basis of both immediate and anticipated target motion, where anticipations may be derived from either memory or perceptual cues. To study the combined influence of both immediate sensory motion and anticipation, subjects pursued clear or noisy random dot kinematograms (RDKs) whose mean directions were chosen from Gaussian distributions with SDs = 10° (narrow prior) or 45° (wide prior). Pursuit directions were consistent with Bayesian theory in that transitions over time from dependence on the prior to near-total dependence on immediate sensory motion (likelihood) took longer with the noisier RDKs and with the narrower, more reliable, prior. Results were fit to Bayesian models in which parameters representing the variability of the likelihood either were or were not constrained to be the same for both priors. The unconstrained model provided a statistically better fit, with the influence of the prior in the constrained model smaller than predicted from strict reliability-based weighting of prior and likelihood. Factors that may have contributed to this outcome include prior variability different from nominal values, low-level sensorimotor learning with the narrow prior, or departures of pursuit from strict adherence to reliability-based weighting. Although modifications of, or alternatives to, the normative Bayesian model will be required, these results, along with previous studies, suggest that Bayesian approaches are a promising framework to understand how pursuit combines immediate sensory motion, past history, and informative perceptual cues to accurately track the target motion that is most likely to occur in the immediate future.
NEW & NOTEWORTHY Smooth pursuit eye movements respond on the basis of anticipated, as well as immediate, target motions. Bayesian models using reliability-based weighting of previous (prior) and immediate target motions (likelihood) accounted for many, but not all, aspects of pursuit of clear and noisy random dot kinematograms with different levels of predictability. Bayesian approaches may solve the long-standing problem of how pursuit combines immediate sensory motion and anticipation of future motion to configure an effective response.
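The reliability-based weighting at the heart of this account has a closed form for a Gaussian prior and likelihood: each is weighted by its precision (inverse variance). A short sketch using the study's prior SDs (10° and 45°) and an assumed likelihood SD of 20° (the likelihood SD is illustrative, not a fitted value):

```python
def combine_gaussian(prior_mean, prior_sd, like_mean, like_sd):
    """Precision-weighted fusion of a Gaussian prior and likelihood:
    the posterior mean is an average weighted by inverse variances."""
    w_prior = prior_sd**-2 / (prior_sd**-2 + like_sd**-2)
    post_mean = w_prior * prior_mean + (1 - w_prior) * like_mean
    post_sd = (prior_sd**-2 + like_sd**-2) ** -0.5
    return post_mean, post_sd

# The narrow prior (SD = 10 deg) pulls a 30-deg motion sample toward the
# prior mean far more strongly than the wide prior (SD = 45 deg) does.
m_narrow, _ = combine_gaussian(0.0, 10.0, 30.0, 20.0)
m_wide, _ = combine_gaussian(0.0, 45.0, 30.0, 20.0)
```

This is the "strict reliability-based weighting" against which the constrained model's smaller prior influence was compared.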
Affiliation(s)
- Jason F Rubinstein
- Department of Psychology, Rutgers University, Piscataway, New Jersey, United States
- Manish Singh
- Department of Psychology, Rutgers University, Piscataway, New Jersey, United States
- Eileen Kowler
- Department of Psychology, Rutgers University, Piscataway, New Jersey, United States
3
Lin CHS, Do TT, Unsworth L, Garrido MI. Are we really Bayesian? Probabilistic inference shows sub-optimal knowledge transfer. PLoS Comput Biol 2024; 20:e1011769. [PMID: 38190413 PMCID: PMC10798629 DOI: 10.1371/journal.pcbi.1011769]
Abstract
Numerous studies have found that the Bayesian framework, which formulates the optimal integration of knowledge of the world (i.e., the prior) and current sensory evidence (i.e., the likelihood), captures human behaviours well. However, it remains debated whether humans use precise but cognitively demanding Bayesian computations to produce these behaviours. Across two studies, we trained participants to estimate hidden locations of a target drawn from priors with different levels of uncertainty. In each trial, scattered dots provided noisy likelihood information about the target location. Participants learned the priors and combined prior and likelihood information to infer target locations in a Bayesian fashion. We then introduced a transfer condition presenting a trained prior and a likelihood that had never been paired with it during training. How well participants integrate this novel likelihood with their learned prior indicates whether they perform full Bayesian computations. In one study, participants had experienced the newly introduced likelihood during training, paired with a different prior. Participants changed their likelihood weighting in the expected direction, although the degree of change was significantly lower than Bayes-optimal predictions. In the other study, the novel likelihoods were never used during training. We found that people integrated a novel likelihood within the range of their previous learning experience (interpolation) better than one outside it (extrapolation), and were quantitatively Bayes-suboptimal in both cases. We replicated the findings of both studies in a validation dataset. Our results show that Bayesian behaviours may not always be achieved by a full Bayesian computation. Future studies can apply our approach to different tasks to enhance the understanding of decision-making mechanisms.
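Bayes-optimal transfer in this design reduces to re-computing the likelihood weight whenever the likelihood's reliability changes; participants moved in this direction but not far enough. A sketch of the optimal weight (the specific SD values below are illustrative, not the study's parameters):

```python
def likelihood_weight(prior_sd, like_sd):
    """Bayes-optimal weight on the likelihood when combining it with a
    Gaussian prior: w = sd_prior^2 / (sd_prior^2 + sd_like^2)."""
    return prior_sd**2 / (prior_sd**2 + like_sd**2)

# An optimal observer sharply down-weights a noisier novel likelihood;
# the study found participants shifted in this direction, but by less.
w_trained = likelihood_weight(prior_sd=15.0, like_sd=5.0)    # reliable dots
w_novel = likelihood_weight(prior_sd=15.0, like_sd=15.0)     # noisier dots
```

Comparing a participant's empirical weight change against this prediction is what quantifies "Bayes-suboptimal" transfer.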
Affiliation(s)
- Chin-Hsuan Sophie Lin
- Melbourne School of Psychological Sciences, The University of Melbourne, Melbourne, Australia
- Trang Thuy Do
- Melbourne School of Psychological Sciences, The University of Melbourne, Melbourne, Australia
- Lee Unsworth
- Melbourne School of Psychological Sciences, The University of Melbourne, Melbourne, Australia
- Marta I. Garrido
- Melbourne School of Psychological Sciences, The University of Melbourne, Melbourne, Australia
- Graeme Clark Institute for Biomedical Engineering, The University of Melbourne, Melbourne, Australia
4
Feather J, Leclerc G, Mądry A, McDermott JH. Model metamers reveal divergent invariances between biological and artificial neural networks. Nat Neurosci 2023; 26:2017-2034. [PMID: 37845543 PMCID: PMC10620097 DOI: 10.1038/s41593-023-01442-0]
Abstract
Deep neural network models of sensory systems are often proposed to learn representational transformations with invariances like those in the brain. To reveal these invariances, we generated 'model metamers', stimuli whose activations within a model stage are matched to those of a natural stimulus. Metamers for state-of-the-art supervised and unsupervised neural network models of vision and audition were often completely unrecognizable to humans when generated from late model stages, suggesting differences between model and human invariances. Targeted model changes improved human recognizability of model metamers but did not eliminate the overall human-model discrepancy. The human recognizability of a model's metamers was well predicted by their recognizability by other models, suggesting that models contain idiosyncratic invariances in addition to those required by the task. Metamer recognizability dissociated from both traditional brain-based benchmarks and adversarial vulnerability, revealing a distinct failure mode of existing sensory models and providing a complementary benchmark for model assessment.
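Metamer generation as described here amounts to starting from noise and descending on an activation-matching loss at a chosen model stage. A toy sketch with a single linear stage (the real models are deep nonlinear networks; the stage size, step count, and learning rate are illustrative assumptions), where the stage's null space plays the role of the model's invariances:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 64)) / 8.0   # toy model stage: 64-d input -> 16-d features

def make_metamer(reference, steps=2000, lr=0.3):
    """Gradient descent on ||W x - W ref||^2 from a random start: x ends
    with (near-)identical stage activations, while its component in the
    stage's null space (the 'invariances') remains arbitrary."""
    x = rng.normal(size=reference.shape)
    for _ in range(steps):
        x -= lr * W.T @ (W @ (x - reference))
    return x

ref = rng.normal(size=64)
met = make_metamer(ref)   # matched inside the model, yet far from ref as a stimulus
```

The mismatch between `met` and `ref` in input space, despite matched activations, is the model-side analogue of metamers that humans fail to recognize.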
Affiliation(s)
- Jenelle Feather
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA.
- McGovern Institute, Massachusetts Institute of Technology, Cambridge, MA, USA.
- Center for Brains, Minds and Machines, Massachusetts Institute of Technology, Cambridge, MA, USA.
- Center for Computational Neuroscience, Flatiron Institute, New York, NY, USA.
- Guillaume Leclerc
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Aleksander Mądry
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Josh H McDermott
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA.
- McGovern Institute, Massachusetts Institute of Technology, Cambridge, MA, USA.
- Center for Brains, Minds and Machines, Massachusetts Institute of Technology, Cambridge, MA, USA.
- Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA, USA.
5
Charlton JA, Młynarski WF, Bai YH, Hermundstad AM, Goris RLT. Environmental dynamics shape perceptual decision bias. PLoS Comput Biol 2023; 19:e1011104. [PMID: 37289753 DOI: 10.1371/journal.pcbi.1011104]
Abstract
To interpret the sensory environment, the brain combines ambiguous sensory measurements with knowledge that reflects context-specific prior experience. But environmental contexts can change abruptly and unpredictably, resulting in uncertainty about the current context. Here we address two questions: how should context-specific prior knowledge optimally guide the interpretation of sensory stimuli in changing environments, and do human decision-making strategies resemble this optimum? We probe these questions with a task in which subjects report the orientation of ambiguous visual stimuli drawn from three dynamically switching distributions, representing different environmental contexts. We derive predictions for an ideal Bayesian observer that leverages knowledge about the statistical structure of the task, including the dynamics of the environment, to maximize decision accuracy. We show that its decisions are biased by the dynamically changing task context. The magnitude of this decision bias depends on the observer's continually evolving belief about the current context. The model therefore predicts that decision bias will grow not only as the context is indicated more reliably, but also as the stability of the environment increases and as the number of trials since the last context switch grows. Analysis of human choice data validates all three predictions, suggesting that the brain leverages knowledge of the statistical structure of environmental change when interpreting ambiguous sensory signals.
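The ideal observer's context belief can be written as a hidden-Markov-style update: project the belief through the switch dynamics, multiply by the likelihood of the evidence under each context, and renormalize. A minimal sketch assuming a symmetric switch process (the three-context likelihood values and stay probability below are illustrative, not the task's parameters):

```python
import numpy as np

def update_context_belief(belief, like_per_context, p_stay):
    """One step of belief updating over K contexts that each persist with
    probability p_stay and otherwise switch uniformly to another context
    (a simplifying assumption about the switch process)."""
    K = len(belief)
    T = np.full((K, K), (1.0 - p_stay) / (K - 1))  # transition matrix
    np.fill_diagonal(T, p_stay)
    predicted = T.T @ belief                # prior belief after a possible switch
    posterior = predicted * like_per_context
    return posterior / posterior.sum()

# Repeated evidence favoring context 0 sharpens the belief; a more stable
# environment (higher p_stay) and more trials since a switch both strengthen it.
belief = np.ones(3) / 3
for _ in range(5):
    belief = update_context_belief(belief, np.array([0.6, 0.3, 0.1]), p_stay=0.9)
```

The sharpness of `belief` after several consistent trials is what scales the predicted decision bias.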
Affiliation(s)
- Julie A Charlton
- Center for Perceptual Systems, University of Texas at Austin, Austin, Texas, United States of America
- Wiktor F Młynarski
- Yoon H Bai
- Center for Perceptual Systems, University of Texas at Austin, Austin, Texas, United States of America
- Ann M Hermundstad
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, United States of America
- Robbe L T Goris
- Center for Perceptual Systems, University of Texas at Austin, Austin, Texas, United States of America
6
A General Framework for Inferring Bayesian Ideal Observer Models from Psychophysical Data. eNeuro 2023; 10:ENEURO.0144-22.2022. [PMID: 36316119 PMCID: PMC9833051 DOI: 10.1523/eneuro.0144-22.2022]
Abstract
A central question in neuroscience is how sensory inputs are transformed into percepts. At this point, it is clear that this process is strongly influenced by prior knowledge of the sensory environment. Bayesian ideal observer models provide a useful link between data and theory that can help researchers evaluate how prior knowledge is represented and integrated with incoming sensory information. However, the statistical prior employed by a Bayesian observer cannot be measured directly, and must instead be inferred from behavioral measurements. Here, we review the general problem of inferring priors from psychophysical data, and the simple solution that follows from assuming a prior that is a Gaussian probability distribution. As our understanding of sensory processing advances, however, there is an increasing need for methods to flexibly recover the shape of Bayesian priors that are not well approximated by elementary functions. To address this issue, we describe a novel approach that applies to arbitrary prior shapes, which we parameterize using mixtures of Gaussian distributions. After incorporating a simple approximation, this method produces an analytical solution for psychophysical quantities that can be numerically optimized to recover the shapes of Bayesian priors. This approach offers advantages in flexibility, while still providing an analytical framework for many scenarios. We provide a MATLAB toolbox implementing key computations described herein.
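The analytical convenience the abstract describes comes from conjugacy: with a Gaussian likelihood, a mixture-of-Gaussians prior yields a mixture-of-Gaussians posterior, with each component updated in closed form and re-weighted by its marginal likelihood. A sketch of the posterior mean (the authors provide a MATLAB toolbox; this is an illustrative Python rendering of the standard identity, not their code):

```python
import numpy as np

def mog_posterior_mean(weights, means, sds, like_mean, like_sd):
    """Posterior mean for a mixture-of-Gaussians prior and a Gaussian
    likelihood: each component updates conjugately, and mixture weights
    are re-scaled by each component's marginal likelihood of the data."""
    weights, means, sds = (np.asarray(a, dtype=float) for a in (weights, means, sds))
    prec = sds**-2 + like_sd**-2                       # posterior precisions
    post_means = (means * sds**-2 + like_mean * like_sd**-2) / prec
    marg_var = sds**2 + like_sd**2                     # predictive variance
    marg = weights * np.exp(-0.5 * (like_mean - means)**2 / marg_var) / np.sqrt(marg_var)
    resp = marg / marg.sum()                           # component responsibilities
    return float(resp @ post_means)

# One component reduces to standard precision weighting; with two modes,
# the estimate is pulled toward the mode nearest the evidence.
m_uni = mog_posterior_mean([1.0], [0.0], [10.0], like_mean=30.0, like_sd=20.0)
m_bi = mog_posterior_mean([0.5, 0.5], [-20.0, 20.0], [5.0, 5.0], like_mean=15.0, like_sd=10.0)
```

Fitting the mixture weights, means, and SDs to psychophysical biases is then a numerical optimization over this closed-form expression.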
7
Lin CHS, Garrido MI. Towards a cross-level understanding of Bayesian inference in the brain. Neurosci Biobehav Rev 2022; 137:104649. [PMID: 35395333 DOI: 10.1016/j.neubiorev.2022.104649]
Abstract
Perception emerges from unconscious probabilistic inference, which guides behaviour in our ubiquitously uncertain environment. Bayesian decision theory is a prominent computational model that describes how people make rational decisions using noisy and ambiguous sensory observations. However, critical questions have been raised about the validity of the Bayesian framework in explaining the mental process of inference. First, some natural behaviours deviate from the Bayesian optimum. Second, the neural mechanisms that support Bayesian computations in the brain are yet to be understood. Taking Marr's cross-level approach, we review the recent progress made in addressing these challenges. We first review studies that combined behavioural paradigms and modelling approaches to explain both optimal and suboptimal behaviours. Next, we evaluate the theoretical advances and the current evidence for ecologically feasible algorithms and neural implementations in the brain, which may enable probabilistic inference. We argue that this cross-level approach is necessary for the worthwhile pursuit of mechanistic accounts of human behaviour.
Affiliation(s)
- Chin-Hsuan Sophie Lin
- Melbourne School of Psychological Sciences, The University of Melbourne, Australia; Australian Research Council for Integrative Brain Function, Australia.
- Marta I Garrido
- Melbourne School of Psychological Sciences, The University of Melbourne, Australia; Australian Research Council for Integrative Brain Function, Australia
8
Sohn H, Narain D. Neural implementations of Bayesian inference. Curr Opin Neurobiol 2021; 70:121-129. [PMID: 34678599 DOI: 10.1016/j.conb.2021.09.008]
Abstract
Bayesian inference has emerged as a general framework that captures how organisms make decisions under uncertainty. Recent experimental findings reveal disparate mechanisms for how the brain generates behaviors predicted by normative Bayesian theories. Here, we identify two broad classes of neural implementations for Bayesian inference: a modular class, where each probabilistic component of Bayesian computation is independently encoded, and a transform class, where uncertain measurements are converted to Bayesian estimates through latent processes. Many recent experimental findings on probabilistic inference fall broadly into these classes. We identify potential avenues for synthesis across the two classes, as well as the disparities that, at present, cannot be reconciled. We conclude that distinguishing among implementation hypotheses for Bayesian inference will require greater engagement between theoretical and experimental neuroscientists in an effort that spans different scales of analysis, circuits, tasks, and species.
Affiliation(s)
- Hansem Sohn
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Devika Narain
- Department of Neuroscience, Erasmus University Medical Center, Rotterdam, 3015 CN, the Netherlands.