1
Duffy JS, Bellgrove MA, Murphy PR, O'Connell RG. Disentangling sources of variability in decision-making. Nat Rev Neurosci 2025; 26:247-262. PMID: 40114010. DOI: 10.1038/s41583-025-00916-3.
Abstract
Even the most highly trained observers presented with identical choice-relevant stimuli will reliably exhibit substantial trial-to-trial variability in the timing and accuracy of their choices. Although such intra-individual variability (IIV) is a pervasive feature of choice behaviour and a prominent phenotype for numerous clinical disorders, our capacity to disentangle its sources remains limited. In principle, computational models of decision-making offer a means of parsing and estimating these sources, but methodological limitations have prevented this potential from being fully realized in practice. In this Review, we first discuss current limitations of algorithmic models for understanding variability in decision-making behaviour. We then highlight recent advances in behavioural paradigm design, novel analyses of cross-trial behavioural and neural dynamics, and the development of neurally grounded computational models that together make it possible to link distinct components of IIV to well-defined neural processes. Taken together, these methods are opening up new avenues for systematically analysing the neural origins of IIV, paving the way for a more refined, holistic understanding of decision-making in health and disease.
Affiliation(s)
- Jade S Duffy
- Trinity College Institute of Neuroscience and School of Psychology, Trinity College Dublin, Dublin, Ireland
- Mark A Bellgrove
- School of Psychological Sciences and Turner Institute for Brain and Mental Health, Monash University, Melbourne, Victoria, Australia
- Peter R Murphy
- Department of Psychology, Maynooth University, Kildare, Ireland
- Redmond G O'Connell
- Trinity College Institute of Neuroscience and School of Psychology, Trinity College Dublin, Dublin, Ireland
2
Quinton JC, Gautheron F, Smeding A. Embodied sequential sampling models and dynamic neural fields for decision-making: Why hesitate between two when a continuum is the answer. Neural Netw 2024; 179:106526. PMID: 39053301. DOI: 10.1016/j.neunet.2024.106526.
Abstract
Because the two alternative options in a forced-choice task are separated by design, two classes of computational models of decision-making have thrived independently in the literature for nearly five decades. Sequential sampling models (SSMs) focus on response times and keypresses in binary decisions in experimental paradigms, whereas dynamic neural fields (DNFs) focus on continuous sensorimotor dimensions and tasks found in perception and robotics. Recent attempts have been made to address limitations in their application to other domains, but strong similarities and compatibilities between prominent models from both classes have hardly been considered. This article attempts to bridge the gap between these classes of models, and simultaneously between disciplines and paradigms relying on binary or continuous responses. A unifying formulation of representative SSM and DNF equations is proposed, varying the number of units that interact and compete to reach a decision. The embodiment of decisions is also considered by coupling cognitive and sensorimotor processes, enabling the model to generate decision trajectories at the trial level. The resulting mechanistic model can therefore target different paradigms (forced choices or continuous response scales) and measures (final responses or dynamics). The validity of the model is assessed statistically by fitting empirical distributions obtained from human participants in moral decision-making mouse-tracking tasks, for which both dichotomous and nuanced responses are meaningful. Comparing equations at the theoretical level and model parametrizations at the empirical level, the implications for psychological decision-making processes, as well as the fundamental assumptions and limitations of the models and paradigms, are discussed.
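For readers unfamiliar with the SSM class, the following is a minimal, self-contained sketch of its canonical member, the drift-diffusion model; the parameter values are illustrative and are not taken from Quinton et al.

```python
import numpy as np

def simulate_ddm(n_trials=200, drift=0.2, bound=1.0, dt=0.001,
                 noise_sd=1.0, non_decision=0.3, seed=0):
    """Simulate a two-alternative drift-diffusion process.

    Evidence x accumulates from 0 with mean rate `drift` plus Gaussian
    noise until it crosses +bound (choice +1) or -bound (choice -1).
    Returns arrays of choices and response times (in seconds).
    """
    rng = np.random.default_rng(seed)
    choices = np.empty(n_trials, dtype=int)
    rts = np.empty(n_trials)
    for i in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < bound:
            x += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
            t += dt
        choices[i] = 1 if x > 0 else -1
        rts[i] = t + non_decision        # add non-decision time
    return choices, rts

choices, rts = simulate_ddm()
print(f"P(+1) = {np.mean(choices == 1):.2f}, mean RT = {rts.mean():.3f} s")
```

With a positive drift the upper boundary is favoured, and the right-skewed RT distributions typical of keypress data fall out of the first-passage times.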
Affiliation(s)
- Flora Gautheron
- Univ. Grenoble Alpes, CNRS, Grenoble INP, LJK, 38000 Grenoble, France; Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, LIP/PC2S, 38000 Grenoble, France
- Annique Smeding
- Univ. Savoie Mont Blanc, Univ. Grenoble Alpes, LIP/PC2S, 73000 Chambéry, France
3
Cihak HL, Kilpatrick ZP. Robustly encoding certainty in a metastable neural circuit model. Phys Rev E 2024; 110:034404. PMID: 39425424. PMCID: PMC11778249. DOI: 10.1103/physreve.110.034404.
Abstract
Localized persistent neural activity can encode delayed estimates of continuous variables. Common experiments require that subjects store and report the feature value (e.g., orientation) of a particular cue (e.g., an oriented bar on a screen) after a delay. Visualizing the recorded activity of neurons along their feature tuning reveals activity bumps whose centers wander stochastically, degrading the estimate over time. Bump position therefore represents the remembered estimate. Recent work suggests that bump amplitude may represent estimate certainty, reflecting a probabilistic population code for a Bayesian posterior. Idealized models of this type are fragile because of the fine tuning common to constructed continuum attractors in dynamical systems. Here we propose an alternative metastable model that robustly supports multiple bump amplitudes by extending neural circuit models to include quantized nonlinearities. Asymptotic projections of circuit activity produce low-dimensional evolution equations for the amplitude and position of bump solutions in response to external stimuli and noise perturbations. Analysis of the reduced equations accurately characterizes phase variance and the dynamics of amplitude transitions between stable discrete values. More salient cues generate bumps of higher amplitude that wander less, consistent with experiments showing that certainty correlates with more accurate memories.
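The wandering-bump degradation described above can be caricatured as pure diffusion of the bump's position. This sketch is not the authors' reduced amplitude-phase equations; it only illustrates that report variance grows roughly linearly with delay, with the diffusion coefficient standing in for the (inverse) effect of bump amplitude.

```python
import numpy as np

def bump_phase_variance(diffusion, delay, n_trials=2000, dt=0.01, seed=1):
    """Random-walk caricature of a wandering bump: the remembered angle
    diffuses around the cue location, so report variance grows with the
    delay. Higher-amplitude bumps would correspond to smaller `diffusion`."""
    rng = np.random.default_rng(seed)
    n_steps = int(delay / dt)
    # independent Gaussian increments of variance 2*D*dt per step
    steps = np.sqrt(2 * diffusion * dt) * rng.standard_normal((n_trials, n_steps))
    final_phase = steps.sum(axis=1)
    return final_phase.var()

v_short = bump_phase_variance(diffusion=0.05, delay=1.0)
v_long = bump_phase_variance(diffusion=0.05, delay=4.0)
print(v_short, v_long)  # theory: var = 2*D*t, i.e. ~0.1 vs ~0.4
```

Quadrupling the delay roughly quadruples the phase variance, matching the diffusive degradation the abstract describes.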
Affiliation(s)
- Heather L. Cihak
- Department of Applied Mathematics, University of Colorado, Boulder, Colorado 80309, USA
- Zachary P. Kilpatrick
- Department of Applied Mathematics, University of Colorado, Boulder, Colorado 80309, USA
4
Ibañez S, Sengupta N, Luebke JI, Wimmer K, Weaver CM. Myelin dystrophy impairs signal transmission and working memory in a multiscale model of the aging prefrontal cortex. eLife 2024; 12:RP90964. PMID: 39028036. PMCID: PMC11259433. DOI: 10.7554/elife.90964.
Abstract
Normal aging leads to myelin alterations in the rhesus monkey dorsolateral prefrontal cortex (dlPFC), which are positively correlated with degree of cognitive impairment. It is hypothesized that remyelination with shorter and thinner myelin sheaths partially compensates for myelin degradation, but computational modeling has not yet explored these two phenomena together systematically. Here, we used a two-pronged modeling approach to determine how age-related myelin changes affect a core cognitive function: spatial working memory. First, we built a multicompartment pyramidal neuron model fit to monkey dlPFC empirical data, with an axon including myelinated segments having paranodes, juxtaparanodes, internodes, and tight junctions. This model was used to quantify conduction velocity (CV) changes and action potential (AP) failures after demyelination and subsequent remyelination. Next, we incorporated the single neuron results into a spiking neural network model of working memory. While complete remyelination nearly recovered axonal transmission and network function to unperturbed levels, our models predict that biologically plausible levels of myelin dystrophy, if uncompensated by other factors, can account for substantial working memory impairment with aging. The present computational study unites empirical data from ultrastructure up to behavior during normal aging, and has broader implications for many demyelinating conditions, such as multiple sclerosis or schizophrenia.
Affiliation(s)
- Sara Ibañez
- Department of Anatomy & Neurobiology, Boston University Chobanian & Avedisian School of Medicine, Boston, United States
- Centre de Recerca Matemàtica, Edifici C, Campus Bellaterra, Bellaterra, Spain
- Departament de Matemàtiques, Universitat Autònoma de Barcelona, Edifici C, Bellaterra, Spain
- Nilapratim Sengupta
- Department of Anatomy & Neurobiology, Boston University Chobanian & Avedisian School of Medicine, Boston, United States
- Department of Mathematics, Franklin and Marshall College, Lancaster, United States
- Jennifer I Luebke
- Department of Anatomy & Neurobiology, Boston University Chobanian & Avedisian School of Medicine, Boston, United States
- Klaus Wimmer
- Centre de Recerca Matemàtica, Edifici C, Campus Bellaterra, Bellaterra, Spain
- Departament de Matemàtiques, Universitat Autònoma de Barcelona, Edifici C, Bellaterra, Spain
- Christina M Weaver
- Department of Mathematics, Franklin and Marshall College, Lancaster, United States
5
Secer G, Knierim JJ, Cowan NJ. Continuous Bump Attractor Networks Require Explicit Error Coding for Gain Recalibration. Research Square [preprint] 2024: rs.3.rs-4209280. PMID: 38699376. PMCID: PMC11065082. DOI: 10.21203/rs.3.rs-4209280/v1.
Abstract
Representations of continuous variables are crucial to create internal models of the external world. A prevailing model of how the brain maintains these representations is given by continuous bump attractor networks (CBANs) in a broad range of brain functions across different areas, such as spatial navigation in hippocampal/entorhinal circuits and working memory in prefrontal cortex. Through recurrent connections, a CBAN maintains a persistent activity bump, whose peak location can vary along a neural space, corresponding to different values of a continuous variable. To track the value of a continuous variable changing over time, a CBAN updates the location of its activity bump based on inputs that encode the changes in the continuous variable (e.g., movement velocity in the case of spatial navigation)-a process akin to mathematical integration. This integration process is not perfect and accumulates error over time. For error correction, CBANs can use additional inputs providing ground-truth information about the continuous variable's correct value (e.g., visual landmarks for spatial navigation). These inputs enable the network dynamics to automatically correct any representation error. Recent experimental work on hippocampal place cells has shown that, beyond correcting errors, ground-truth inputs also fine-tune the gain of the integration process, a crucial factor that links the change in the continuous variable to the updating of the activity bump's location. However, existing CBAN models lack this plasticity, offering no insights into the neural mechanisms and representations involved in the recalibration of the integration gain. In this paper, we explore this gap by using a ring attractor network, a specific type of CBAN, to model the experimental conditions that demonstrated gain recalibration in hippocampal place cells. Our analysis reveals the necessary conditions for neural mechanisms behind gain recalibration within a CBAN. Unlike error correction, which occurs through network dynamics based on ground-truth inputs, gain recalibration requires an additional neural signal that explicitly encodes the error in the network's representation via a rate code. Finally, we propose a modified ring attractor network as an example CBAN model that verifies our theoretical findings. Combining an error-rate code with Hebbian synaptic plasticity, this model achieves recalibration of integration gain in a CBAN, ensuring accurate representation for continuous variables.
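The paper's central claim — that gain recalibration needs an explicitly coded error combined with Hebbian plasticity — can be illustrated with a deliberately stripped-down scalar integrator. The delta-rule update below is an assumed stand-in for the error-rate code plus Hebbian plasticity, not the authors' ring-attractor circuit.

```python
import numpy as np

def recalibrate_gain(g_true=1.0, g0=0.7, eta=0.05, n_laps=200, seed=2):
    """Toy scalar path integrator with landmark-driven gain recalibration.

    Each 'lap' the animal travels a distance d; the network's estimate
    integrates velocity with gain g. At the landmark, an explicit error
    signal (true minus estimated position) drives a delta-rule update
    of g -- a crude stand-in for the error-rate code with Hebbian
    plasticity proposed in the paper."""
    rng = np.random.default_rng(seed)
    g = g0
    for _ in range(n_laps):
        d = rng.uniform(0.5, 1.5)   # distance actually travelled
        est = g * d                 # path-integrated estimate
        err = g_true * d - est      # landmark reveals the error explicitly
        g += eta * err * d          # error x input: delta/Hebbian-like rule
    return g

print(recalibrate_gain())  # converges toward g_true = 1.0
```

Without the explicit error term the update has nothing to correlate with the input, which is the toy analogue of why landmark-driven bump resetting alone cannot retune the gain.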
Affiliation(s)
- Gorkem Secer
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA
- Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
- James J Knierim
- Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
- Solomon H. Snyder Department of Neuroscience, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Kavli Neuroscience Discovery Institute, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Noah J Cowan
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
6
Secer G, Knierim JJ, Cowan NJ. Continuous Bump Attractor Networks Require Explicit Error Coding for Gain Recalibration. bioRxiv [preprint] 2024: 2024.02.12.579874. PMID: 38562699. PMCID: PMC10983875. DOI: 10.1101/2024.02.12.579874.
Abstract
Representations of continuous variables are crucial to create internal models of the external world. A prevailing model of how the brain maintains these representations is given by continuous bump attractor networks (CBANs) in a broad range of brain functions across different areas, such as spatial navigation in hippocampal/entorhinal circuits and working memory in prefrontal cortex. Through recurrent connections, a CBAN maintains a persistent activity bump, whose peak location can vary along a neural space, corresponding to different values of a continuous variable. To track the value of a continuous variable changing over time, a CBAN updates the location of its activity bump based on inputs that encode the changes in the continuous variable (e.g., movement velocity in the case of spatial navigation)-a process akin to mathematical integration. This integration process is not perfect and accumulates error over time. For error correction, CBANs can use additional inputs providing ground-truth information about the continuous variable's correct value (e.g., visual landmarks for spatial navigation). These inputs enable the network dynamics to automatically correct any representation error. Recent experimental work on hippocampal place cells has shown that, beyond correcting errors, ground-truth inputs also fine-tune the gain of the integration process, a crucial factor that links the change in the continuous variable to the updating of the activity bump's location. However, existing CBAN models lack this plasticity, offering no insights into the neural mechanisms and representations involved in the recalibration of the integration gain. In this paper, we explore this gap by using a ring attractor network, a specific type of CBAN, to model the experimental conditions that demonstrated gain recalibration in hippocampal place cells. Our analysis reveals the necessary conditions for neural mechanisms behind gain recalibration within a CBAN. Unlike error correction, which occurs through network dynamics based on ground-truth inputs, gain recalibration requires an additional neural signal that explicitly encodes the error in the network's representation via a rate code. Finally, we propose a modified ring attractor network as an example CBAN model that verifies our theoretical findings. Combining an error-rate code with Hebbian synaptic plasticity, this model achieves recalibration of integration gain in a CBAN, ensuring accurate representation for continuous variables.
Affiliation(s)
- Gorkem Secer
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA
- Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
- James J Knierim
- Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
- Solomon H. Snyder Department of Neuroscience, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Kavli Neuroscience Discovery Institute, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Noah J Cowan
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
7
Cerracchio E, Miletić S, Forstmann BU. Modelling decision-making biases. Front Comput Neurosci 2023; 17:1222924. PMID: 37927545. PMCID: PMC10622807. DOI: 10.3389/fncom.2023.1222924.
Abstract
Biases are a fundamental aspect of everyday decision-making. A variety of modelling approaches have been suggested to capture decision-making biases. Statistical models are a means to describe the data, but the results are usually interpreted according to a verbal theory, which can lead to ambiguous interpretations of the data. Mathematical cognitive models of decision-making outline the structure of the decision process with formal assumptions, providing advantages in terms of prediction, simulation, and interpretability compared to statistical models. We compare studies that used both signal detection theory and evidence accumulation models as models of decision-making biases, concluding that the latter provide a more comprehensive account of decision-making phenomena by including response time behavior. We conclude by reviewing recent studies investigating attention and expectation biases with evidence accumulation models. Previous findings, which reported an exclusive influence of attention on the speed of evidence accumulation and of prior probability on the starting point, are challenged by novel results suggesting an additional effect of attention on non-decision time and of prior probability on drift rate.
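For concreteness, the signal detection theory indices that such analyses start from can be computed in a few lines. These are the textbook equal-variance SDT formulas, not code from the review.

```python
from statistics import NormalDist

def dprime_criterion(hits, misses, fas, crs):
    """Equal-variance signal detection theory indices from a yes/no
    confusion table: d' (sensitivity) and c (response bias/criterion),
    where c != 0 indicates a biased placement of the decision criterion."""
    z = NormalDist().inv_cdf          # probit (inverse standard-normal CDF)
    hit_rate = hits / (hits + misses)
    fa_rate = fas / (fas + crs)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

d, c = dprime_criterion(hits=80, misses=20, fas=20, crs=80)
print(f"d' = {d:.3f}, c = {c:.3f}")
```

Note what the review points out: these indices summarize accuracy alone, whereas evidence accumulation models additionally constrain the account with response times.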
Affiliation(s)
- Ettore Cerracchio
- Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
8
Ma H, Qi Y, Gong P, Zhang J, Lu WL, Feng J. Self-Organization of Nonlinearly Coupled Neural Fluctuations Into Synergistic Population Codes. Neural Comput 2023; 35:1820-1849. PMID: 37725705. DOI: 10.1162/neco_a_01612.
Abstract
Neural activity in the brain exhibits correlated fluctuations that may strongly influence the properties of neural population coding. However, how such correlated neural fluctuations may arise from the intrinsic neural circuit dynamics and subsequently affect the computational properties of neural population activity remains poorly understood. The main difficulty lies in resolving the nonlinear coupling between correlated fluctuations and the overall dynamics of the system. In this study, we investigate the emergence of synergistic neural population codes from the intrinsic dynamics of correlated neural fluctuations in a neural circuit model capturing realistic nonlinear noise coupling of spiking neurons. We show that a rich repertoire of spatial correlation patterns naturally emerges in a bump attractor network, and we further reveal the dynamical regime under which the interplay between differential and noise correlations leads to synergistic codes. Moreover, we find that negative correlations may induce stable bound states between two bumps, a phenomenon previously unobserved in firing rate models. These noise-induced effects of bump attractors lead to a number of computational advantages, including enhanced working memory capacity and efficient spatiotemporal multiplexing, and can account for a range of cognitive and behavioral phenomena related to working memory. This study offers a dynamical approach to investigating realistic correlated neural fluctuations and insights into their roles in cortical computations.
Affiliation(s)
- Hengyuan Ma
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China
- Yang Qi
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China
- Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence, Fudan University, Ministry of Education, Shanghai 200433, China
- Pulin Gong
- School of Physics, University of Sydney, Sydney, NSW 2006, Australia
- Jie Zhang
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China
- Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence, Fudan University, Ministry of Education, Shanghai 200433, China
- Wen-Lian Lu
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China
- Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence, Fudan University, Ministry of Education, Shanghai 200433, China
- Jianfeng Feng
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China
- Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence, Fudan University, Ministry of Education, Shanghai 200433, China
- Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK
9
Ibañez S, Sengupta N, Luebke JI, Wimmer K, Weaver CM. Myelin dystrophy in the aging prefrontal cortex leads to impaired signal transmission and working memory decline: a multiscale computational study. bioRxiv [preprint] 2023: 2023.08.30.555476. PMID: 37693412. PMCID: PMC10491254. DOI: 10.1101/2023.08.30.555476.
Abstract
Normal aging leads to myelin alterations in the rhesus monkey dorsolateral prefrontal cortex (dlPFC), which are often correlated with cognitive impairment. It is hypothesized that remyelination with shorter and thinner myelin sheaths partially compensates for myelin degradation, but computational modeling has not yet explored these two phenomena together systematically. Here, we used a two-pronged modeling approach to determine how age-related myelin changes affect a core cognitive function: spatial working memory. First, we built a multicompartment pyramidal neuron model fit to monkey dlPFC data, with an axon including myelinated segments having paranodes, juxtaparanodes, internodes, and tight junctions, to quantify conduction velocity (CV) changes and action potential (AP) failures after demyelination and subsequent remyelination in a population of neurons. Lasso regression identified distinctive parameter sets likely to modulate an axon's susceptibility to CV changes following demyelination versus remyelination. Next, we incorporated the single neuron results into a spiking neural network model of working memory. While complete remyelination nearly recovered axonal transmission and network function to unperturbed levels, our models predict that biologically plausible levels of myelin dystrophy, if uncompensated by other factors, can account for substantial working memory impairment with aging. The present computational study unites empirical data from electron microscopy up to behavior during aging, and has broader implications for many demyelinating conditions, such as multiple sclerosis or schizophrenia.
Affiliation(s)
- Sara Ibañez
- Department of Anatomy & Neurobiology, Boston University Chobanian & Avedisian School of Medicine, Boston, MA 02118, USA
- Centre de Recerca Matemàtica, Edifici C, Campus Bellaterra, 08193 Bellaterra, Spain
- Nilapratim Sengupta
- Department of Anatomy & Neurobiology, Boston University Chobanian & Avedisian School of Medicine, Boston, MA 02118, USA
- Department of Mathematics, Franklin and Marshall College, Lancaster, PA 17604, USA
- Jennifer I Luebke
- Department of Anatomy & Neurobiology, Boston University Chobanian & Avedisian School of Medicine, Boston, MA 02118, USA
- Klaus Wimmer
- Centre de Recerca Matemàtica, Edifici C, Campus Bellaterra, 08193 Bellaterra, Spain
- Christina M Weaver
- Department of Mathematics, Franklin and Marshall College, Lancaster, PA 17604, USA
10
Maes A, Barahona M, Clopath C. Long- and short-term history effects in a spiking network model of statistical learning. Sci Rep 2023; 13:12939. PMID: 37558704. PMCID: PMC10412617. DOI: 10.1038/s41598-023-39108-3.
Abstract
The statistical structure of the environment is often important when making decisions. There are multiple theories of how the brain represents statistical structure. One such theory states that neural activity spontaneously samples from probability distributions. In other words, the network spends more time in states which encode high-probability stimuli. Starting from the neural assembly, increasingly thought to be the building block for computation in the brain, we focus on how arbitrary prior knowledge about the external world can both be learned and spontaneously recollected. We present a model based upon learning the inverse of the cumulative distribution function. Learning is entirely unsupervised, using biophysical neurons and biologically plausible learning rules. We show how this prior knowledge can then be accessed to compute expectations and signal surprise in downstream networks. Sensory history effects emerge from the model as a consequence of ongoing learning.
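The inverse-CDF idea at the core of the model corresponds to classical inverse-transform sampling. Below is a hedged sketch in which an empirical quantile function stands in for the learned network readout; none of the names or values come from the paper.

```python
import numpy as np

def inverse_cdf_sampler(samples_from_env, n_draws=5000, seed=3):
    """Inverse-transform sampling: approximate the inverse CDF of an
    observed stimulus distribution from data, then 'recollect' the prior
    by mapping uniform noise through it -- a caricature of sampling-based
    recall of learned statistics."""
    rng = np.random.default_rng(seed)
    sorted_obs = np.sort(samples_from_env)
    u = rng.uniform(0, 1, n_draws)       # uniform noise as the driver
    # empirical inverse CDF = quantile lookup on the observed samples
    return np.quantile(sorted_obs, u)

# 'environment' stimuli drawn from a Gaussian prior (illustrative)
env = np.random.default_rng(4).normal(loc=2.0, scale=0.5, size=10_000)
recalled = inverse_cdf_sampler(env)
print(recalled.mean(), recalled.std())  # recovers roughly 2.0 and 0.5
```

Because any distribution can be generated by pushing uniform noise through its inverse CDF, learning that one function suffices to spontaneously sample the whole prior.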
Affiliation(s)
- Amadeus Maes
- Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, USA
- Department of Bioengineering, Imperial College London, London, UK
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London, UK
11
Verbe A, Martinez D, Viollet S. Sensory fusion in the hoverfly righting reflex. Sci Rep 2023; 13:6138. PMID: 37061548. PMCID: PMC10105705. DOI: 10.1038/s41598-023-33302-z.
Abstract
We study how falling hoverflies use sensory cues to trigger appropriate roll righting behavior. Before being released in a free fall, flies were placed upside-down with their legs contacting the substrate. The prior leg proprioceptive information about their initial orientation sufficed for the flies to right themselves properly. However, flies also use visual and antennal cues to recover faster and to disambiguate sensory conflicts. Surprisingly, in one of the experimental conditions tested, hoverflies flew upside-down while still actively flapping their wings. In all the other conditions, flies were able to right themselves using two roll dynamics: fast (~50 ms) and slow (~110 ms) in the presence of consistent and conflicting cues, respectively. These findings suggest that a nonlinear sensory integration of the three types of sensory cues occurred. A ring attractor model was developed and discussed to account for this cue integration process.
Affiliation(s)
- Anna Verbe
- Aix-Marseille Université, CNRS, ISM, 13009 Marseille, France
- PNI, Princeton University, Washington Road, Princeton, NJ 08540, USA
- Dominique Martinez
- Aix-Marseille Université, CNRS, ISM, 13009 Marseille, France
- Université de Lorraine, CNRS, LORIA, 54000 Nancy, France
12
Kutschireiter A, Basnak MA, Wilson RI, Drugowitsch J. Bayesian inference in ring attractor networks. Proc Natl Acad Sci U S A 2023; 120:e2210622120. PMID: 36812206. PMCID: PMC9992764. DOI: 10.1073/pnas.2210622120.
Abstract
Working memories are thought to be held in attractor networks in the brain. These attractors should keep track of the uncertainty associated with each memory, so as to weigh it properly against conflicting new evidence. However, conventional attractors do not represent uncertainty. Here, we show how uncertainty could be incorporated into an attractor, specifically a ring attractor that encodes head direction. First, we introduce a rigorous normative framework (the circular Kalman filter) for benchmarking the performance of a ring attractor under conditions of uncertainty. Next, we show that the recurrent connections within a conventional ring attractor can be retuned to match this benchmark. This allows the amplitude of network activity to grow in response to confirmatory evidence, while shrinking in response to poor-quality or strongly conflicting evidence. This "Bayesian ring attractor" performs near-optimal angular path integration and evidence accumulation. Indeed, we show that a Bayesian ring attractor is consistently more accurate than a conventional ring attractor. Moreover, near-optimal performance can be achieved without exact tuning of the network connections. Finally, we use large-scale connectome data to show that the network can achieve near-optimal performance even after we incorporate biological constraints. Our work demonstrates how attractors can implement a dynamic Bayesian inference algorithm in a biologically plausible manner, and it makes testable predictions with direct relevance to the head direction system as well as any neural system that tracks direction, orientation, or periodic rhythms.
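The amplitude-as-certainty idea admits a compact caricature: track the circular estimate as a 2D vector whose angle is the heading and whose length is the certainty. This is an illustrative reduction, not the circular Kalman filter or the connectome-constrained network used in the paper.

```python
import numpy as np

def vector_update(z, obs_angle, obs_weight=1.0, decay=0.95):
    """Track a circular variable with uncertainty as a 2D vector z:
    the angle of z is the heading estimate, its length the certainty.
    Confirmatory observations lengthen z; conflicting observations
    largely cancel, so the length shrinks toward the decayed baseline."""
    return decay * z + obs_weight * np.array([np.cos(obs_angle),
                                              np.sin(obs_angle)])

# consistent evidence: 20 observations all pointing at 0.5 rad
z = np.zeros(2)
for _ in range(20):
    z = vector_update(z, 0.5)
len_consistent = np.linalg.norm(z)

# conflicting evidence: 20 observations at random directions
z2 = np.zeros(2)
rng = np.random.default_rng(5)
for _ in range(20):
    z2 = vector_update(z2, rng.uniform(-np.pi, np.pi))
len_conflicting = np.linalg.norm(z2)

print(len_consistent, len_conflicting)
```

Consistent evidence drives the certainty (vector length) up while the angle locks onto the true heading; conflicting evidence leaves a short vector, mirroring the amplitude growth and shrinkage described above.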
Affiliation(s)
- Rachel I. Wilson
- Department of Neurobiology, Harvard Medical School, Boston, MA 02115, USA
- Jan Drugowitsch
- Department of Neurobiology, Harvard Medical School, Boston, MA 02115, USA
13
Esnaola-Acebes JM, Roxin A, Wimmer K. Flexible integration of continuous sensory evidence in perceptual estimation tasks. Proc Natl Acad Sci U S A 2022; 119:e2214441119. PMID: 36322720. PMCID: PMC9659402. DOI: 10.1073/pnas.2214441119.
Abstract
Temporal accumulation of evidence is crucial for making accurate judgments based on noisy or ambiguous sensory input. The integration process leading to categorical decisions is thought to rely on competition between neural populations, each encoding a discrete categorical choice. How recurrent neural circuits integrate evidence for continuous perceptual judgments is unknown. Here, we show that a continuous bump attractor network can integrate a circular feature, such as stimulus direction, nearly optimally. As required by optimal integration, the population activity of the network unfolds on a two-dimensional manifold, in which the position of the network's activity bump tracks the stimulus average, and, simultaneously, the bump amplitude tracks stimulus uncertainty. Moreover, the temporal weighting of sensory evidence by the network depends on the relative strength of the stimulus compared to the internally generated bump dynamics, yielding either early (primacy), uniform, or late (recency) weighting. The model can flexibly switch between these regimes by changing a single control parameter, the global excitatory drive. We show that this mechanism can quantitatively explain individual temporal weighting profiles of human observers, and we validate the model prediction that temporal weighting impacts reaction times. Our findings point to continuous attractor dynamics as a plausible neural mechanism underlying stimulus integration in perceptual estimation tasks.
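The primacy/uniform/recency regimes controlled by a single parameter can be previewed with a linearized caricature: for an accumulator dx/dt = λx + s(t), the stimulus at time t contributes to the final state with weight e^(λ(T−t)), so λ > 0 (self-excitation dominant) yields primacy and λ < 0 (leak dominant) yields recency. This linear sketch is an assumed simplification, not the bump attractor model itself.

```python
import numpy as np

def temporal_weights(lam, n_frames=8):
    """Normalized influence of each stimulus frame on the final state
    of a linear accumulator dx/dt = lam*x + s(t): frame t is weighted
    by exp(lam * (T - t)). lam > 0 gives primacy (early frames count
    most), lam < 0 recency, lam = 0 uniform weighting."""
    t = np.arange(n_frames)
    w = np.exp(lam * (n_frames - 1 - t))
    return w / w.sum()

primacy = temporal_weights(0.5)    # unstable dynamics dominate
recency = temporal_weights(-0.5)   # leaky dynamics dominate
uniform = temporal_weights(0.0)    # perfect integration
```

Moving one scalar (the effective instability, standing in for the model's global excitatory drive) continuously morphs the weighting profile between the three regimes.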
Affiliation(s)
- Jose M. Esnaola-Acebes
- Computational Neuroscience Group, Centre de Recerca Matemàtica, 08193 Bellaterra (Barcelona), Spain
- Alex Roxin
- Computational Neuroscience Group, Centre de Recerca Matemàtica, 08193 Bellaterra (Barcelona), Spain
- Klaus Wimmer
- Computational Neuroscience Group, Centre de Recerca Matemàtica, 08193 Bellaterra (Barcelona), Spain