1. Asabuki T, Fukai T. Predictive learning rules generate a cortical-like replay of probabilistic sensory experiences. eLife 2025; 13:RP92712. PMID: 40522720; PMCID: PMC12169850; DOI: 10.7554/elife.92712.
Abstract
The brain is thought to construct an optimal internal model that accurately represents the probabilistic structure of the environment. Evidence suggests that spontaneous brain activity provides such a model by cycling through activity patterns evoked by previous sensory experiences with the experienced probabilities. The brain's spontaneous activity emerges from internally driven neural population dynamics. However, how cortical neural networks encode internal models into spontaneous activity is poorly understood. Recent computational and experimental studies suggest that a cortical neuron can implement complex computations, including predictive responses, through soma-dendrite interactions. Here, we show that a recurrent network of spiking neurons subject to the same predictive learning principle provides a novel mechanism to learn the spontaneous replay of probabilistic sensory experiences. In this network, the learning rules minimize probability mismatches between stimulus-evoked and internally driven activities in all excitatory and inhibitory neurons. This learning paradigm generates stimulus-specific cell assemblies that internally remember their activation probabilities using within-assembly recurrent connections. Our model contrasts with previous models that encode the statistical structure of sensory experiences into Markovian transition patterns among cell assemblies. We demonstrate that the spontaneous activity of our model closely replicates the behavioral biases of monkeys performing perceptual decision making. Our results suggest that interactions between intracellular processes and recurrent network dynamics are more crucial for learning cognitive behaviors than previously thought.
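The core idea of matching the statistics of internally generated activity to the statistics of experienced stimuli can be illustrated with a deliberately simplified sketch. This is not the authors' spiking network; the assembly biases `w`, the softmax sampling rule, and the KL objective below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Experienced stimulus probabilities (three stimuli / cell assemblies).
p_stim = np.array([0.6, 0.3, 0.1])

# Learnable biases controlling how often each assembly reactivates spontaneously.
w = np.zeros(3)

def replay_probs(w):
    """Probability that each assembly is reactivated during spontaneous activity."""
    e = np.exp(w - w.max())
    return e / e.sum()

# Minimize KL(p_stim || replay) by gradient descent; the gradient is replay - p_stim.
lr = 0.5
for _ in range(200):
    w -= lr * (replay_probs(w) - p_stim)

# Spontaneous "replay": sample assemblies from the learned distribution.
samples = rng.choice(3, size=10000, p=replay_probs(w))
print("experienced:", p_stim)
print("replayed:   ", np.bincount(samples, minlength=3) / 10000)
```

After training, the empirical replay frequencies approximately match the experienced stimulus probabilities, which is the probability-matching property the abstract describes at the level of cell assemblies.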

Affiliation(s)
- Toshitake Asabuki
- Okinawa Institute of Science and Technology Graduate University, Okinawa, Japan
- RIKEN Center for Brain Science, RIKEN ECL Research Unit, Wako, Japan
- RIKEN Pioneering Research Institute (PRI), Wako, Japan
- Tomoki Fukai
- Okinawa Institute of Science and Technology Graduate University, Okinawa, Japan

2. Voigts J, Kanitscheider I, Miller NJ, Toloza EHS, Newman JP, Fiete IR, Harnett MT. Spatial reasoning via recurrent neural dynamics in mouse retrosplenial cortex. Nat Neurosci 2025; 28:1293-1299. PMID: 40481228; DOI: 10.1038/s41593-025-01944-z.
Abstract
From visual perception to language, sensory stimuli change their meaning depending on previous experience. Recurrent neural dynamics can interpret stimuli based on externally cued context, but it is unknown whether they can compute and employ internal hypotheses to resolve ambiguities. Here we show that mouse retrosplenial cortex (RSC) can form several hypotheses over time and perform spatial reasoning through recurrent dynamics. In our task, mice navigated using ambiguous landmarks that are identified through their mutual spatial relationship, requiring sequential refinement of hypotheses. Neurons in RSC and in artificial neural networks encoded mixtures of hypotheses, location and sensory information, and were constrained by robust low-dimensional dynamics. RSC encoded hypotheses as locations in activity space with divergent trajectories for identical sensory inputs, enabling their correct interpretation. Our results indicate that interactions between internal hypotheses and external sensory data in recurrent circuits can provide a substrate for complex sequential cognitive reasoning.

Affiliation(s)
- Jakob Voigts
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA
- McGovern Institute for Brain Research, MIT, Cambridge, MA, USA
- HHMI Janelia Research Campus, Ashburn, VA, USA
- Nicholas J Miller
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA
- McGovern Institute for Brain Research, MIT, Cambridge, MA, USA
- Enrique H S Toloza
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA
- McGovern Institute for Brain Research, MIT, Cambridge, MA, USA
- Department of Physics, MIT, Cambridge, MA, USA
- Jonathan P Newman
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA
- Picower Institute for Learning and Memory, MIT, Cambridge, MA, USA
- Open Ephys Inc., Atlanta, GA, USA
- Ila R Fiete
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA
- McGovern Institute for Brain Research, MIT, Cambridge, MA, USA
- Mark T Harnett
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA
- McGovern Institute for Brain Research, MIT, Cambridge, MA, USA

3. Dimakou A, Pezzulo G, Zangrossi A, Corbetta M. The predictive nature of spontaneous brain activity across scales and species. Neuron 2025; 113:1310-1332. PMID: 40101720; DOI: 10.1016/j.neuron.2025.02.009.
Abstract
Emerging research suggests the brain operates as a "prediction machine," continuously anticipating sensory, motor, and cognitive outcomes. Central to this capability is the brain's spontaneous activity: ongoing internal processes independent of external stimuli. Neuroimaging and computational studies support that this activity is integral to maintaining and refining mental models of our environment, body, and behaviors, akin to generative models in computation. During rest, spontaneous activity expands the variability of potential representations, enhancing the accuracy and adaptability of these models. When performing tasks, internal models direct brain regions to anticipate sensory and motor states, optimizing performance. This review synthesizes evidence from various species, from C. elegans to humans, highlighting three key aspects of spontaneous brain activity's role in prediction: the similarity between spontaneous and task-related activity, the encoding of behavioral and interoceptive priors, and the high metabolic cost of this activity, underscoring prediction as a fundamental function of brains across species.

Affiliation(s)
- Anastasia Dimakou
- Padova Neuroscience Center, Padova, Italy; Veneto Institute of Molecular Medicine, VIMM, Padova, Italy
- Giovanni Pezzulo
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
- Andrea Zangrossi
- Padova Neuroscience Center, Padova, Italy; Department of General Psychology, University of Padova, Padova, Italy
- Maurizio Corbetta
- Padova Neuroscience Center, Padova, Italy; Veneto Institute of Molecular Medicine, VIMM, Padova, Italy; Department of Neuroscience, University of Padova, Padova, Italy

4. Shimazaki H. Neural coding: Foundational concepts, statistical formulations, and recent advances. Neurosci Res 2025; 214:75-80. PMID: 40107457; DOI: 10.1016/j.neures.2025.03.001.
Abstract
Neural coding refers to the processes by which external stimuli are translated into neural activity and represented in a manner that drives behavior. Research in this field aims to elucidate these processes by identifying the neural activity and mechanisms responsible for stimulus recognition and behavioral execution. This article provides a concise review of foundational studies and key concepts in neural coding, along with statistical formulations and recent advances in population coding research enabled by large-scale recordings.

5. Mannella F, Pezzulo G. Transitive inference as probabilistic preference learning. Psychon Bull Rev 2025; 32:674-689. PMID: 39438427; DOI: 10.3758/s13423-024-02600-6.
Abstract
Transitive inference (TI) is a cognitive task that assesses an organism's ability to infer novel relations between items based on previously acquired knowledge. TI is known for exhibiting various behavioral and neural signatures, such as the serial position effect (SPE), symbolic distance effect (SDE), and the brain's capacity to maintain and merge separate ranking models. We propose a novel framework that casts TI as a probabilistic preference learning task, using one-parameter Mallows models. We present a series of simulations that highlight the effectiveness of our novel approach. We show that the Mallows ranking model natively reproduces SDE and SPE. Furthermore, extending the model using Bayesian selection showcases its capacity to generate and merge ranking hypotheses as pairs with connecting symbols. Finally, we employ neural networks to replicate Mallows models, demonstrating how this framework aligns with observed prefrontal neural activity during TI. Our innovative approach sheds new light on the nature of TI, emphasizing the potential of probabilistic preference learning for unraveling its underlying neural mechanisms.
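The one-parameter Mallows model assigns each ranking a probability proportional to exp(-theta * d(pi, pi0)), where d is the Kendall tau distance to a reference ranking pi0. A minimal sketch (the item count, the theta value, and the brute-force enumeration are illustrative choices, not the paper's implementation) shows how pairwise choice accuracy grows with symbolic distance, i.e., the symbolic distance effect:

```python
import itertools
import numpy as np

def kendall_tau(perm, ref):
    """Number of item pairs ordered differently in perm and ref."""
    pos = {item: i for i, item in enumerate(perm)}
    ref_pos = {item: i for i, item in enumerate(ref)}
    d = 0
    for a, b in itertools.combinations(ref, 2):
        d += (pos[a] - pos[b]) * (ref_pos[a] - ref_pos[b]) < 0
    return d

items = list(range(5))          # reference ranking 0 < 1 < 2 < 3 < 4
theta = 1.0                     # dispersion parameter
perms = list(itertools.permutations(items))
weights = np.array([np.exp(-theta * kendall_tau(p, items)) for p in perms])
probs = weights / weights.sum()

# P(i ranked before j) under the Mallows distribution, grouped by symbolic distance |i - j|.
for i, j in [(0, 1), (0, 2), (0, 3), (0, 4)]:
    p_correct = sum(pr for p, pr in zip(perms, probs) if p.index(i) < p.index(j))
    print(f"distance {j - i}: P({i} before {j}) = {p_correct:.3f}")
```

The printed probabilities increase with the distance between the two items, reproducing the qualitative SDE pattern directly from the model's definition.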

Affiliation(s)
- Francesco Mannella
- Institute of Cognitive Sciences and Technologies, National Research Council, 00185, Rome, Italy
- Giovanni Pezzulo
- Institute of Cognitive Sciences and Technologies, National Research Council, 00185, Rome, Italy

6. Hertäg L, Wilmes KA, Clopath C. Uncertainty estimation with prediction-error circuits. Nat Commun 2025; 16:3036. PMID: 40155399; PMCID: PMC11953419; DOI: 10.1038/s41467-025-58311-6.
Abstract
Neural circuits continuously integrate noisy sensory stimuli with predictions that often do not perfectly match, requiring the brain to combine these conflicting feedforward and feedback inputs according to their uncertainties. However, how the brain tracks both stimulus and prediction uncertainty remains unclear. Here, we show that a hierarchical prediction-error network can estimate both the sensory and prediction uncertainty with positive and negative prediction-error neurons. Consistent with prior hypotheses, we demonstrate that neural circuits rely more on predictions when sensory inputs are noisy and the environment is stable. By perturbing inhibitory interneurons within the prediction-error circuit, we reveal their role in uncertainty estimation and input weighting. Finally, we link our model to biased perception, showing how stimulus and prediction uncertainty contribute to the contraction bias.
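The weighting logic can be illustrated with a toy scalar version of reliability-weighted integration, in which sensory variance is estimated online from squared prediction errors. The running-average estimator and the Gaussian cue-combination formula below are generic textbook choices, not the interneuron circuit model described in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy scalar setting: an internal prediction and a noisy sensory channel.
prediction, prediction_var = 2.0, 1.0   # top-down prediction and its uncertainty
sensory_var_true = 4.0                  # true (unknown) variance of the sensory input
stimulus = 2.0                          # the prediction happens to be accurate here

# Estimate sensory variance online from squared prediction errors.
var_hat, alpha = 1.0, 0.02
for _ in range(2000):
    sample = stimulus + rng.normal(0.0, np.sqrt(sensory_var_true))
    err = sample - prediction                 # mismatch signal ("prediction error")
    var_hat += alpha * (err**2 - var_hat)     # running average of squared errors

# Reliability-weighted (inverse-variance) combination of prediction and a new sample.
sample = stimulus + rng.normal(0.0, np.sqrt(sensory_var_true))
w_sens = (1.0 / var_hat) / (1.0 / var_hat + 1.0 / prediction_var)
estimate = w_sens * sample + (1.0 - w_sens) * prediction

print(f"estimated sensory variance ~ {var_hat:.2f} (true {sensory_var_true})")
print(f"weight on sensory input = {w_sens:.2f}; combined estimate = {estimate:.2f}")
```

Because the estimated sensory variance here exceeds the prediction variance, the combined estimate leans on the prediction, matching the qualitative claim that predictions dominate when sensory inputs are noisy.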

Affiliation(s)
- Loreen Hertäg
- Modeling of Cognitive Processes, TU Berlin, Berlin, Germany
- Claudia Clopath
- Bioengineering Department, Imperial College London, London, UK

7. Luthra S, Luor A, Tierney AT, Dick F, Holt LL. Statistical learning dynamically shapes auditory perception. bioRxiv 2025:2024.09.09.612146. PMID: 39314310; PMCID: PMC11418995; DOI: 10.1101/2024.09.09.612146.
Abstract
Humans and other animals use information about how likely it is for something to happen. The absolute and relative probability of an event influences a remarkable breadth of behaviors, from foraging for food to comprehending linguistic constructions, even when these probabilities are learned implicitly. It is less clear how, and under what circumstances, statistical learning of simple probabilities might drive changes in perception and cognition. Here, across a series of 29 experiments, we probe listeners' sensitivity to task-irrelevant changes in the probability distribution of tones' acoustic frequency across tone-in-noise detection and tone duration decisions. We observe that the task-irrelevant frequency distribution influences the ability to detect a sound and the speed with which perceptual decisions about its duration are made. The shape of the probability distribution, its range, and a tone's relative position within that range impact observed patterns of suppression and enhancement of tone detection and decision making. Perceptual decisions are also modulated by a newly discovered perceptual bias, with lower frequencies in the distribution more often and more rapidly perceived as longer, and higher frequencies as shorter. Perception is sensitive to rapid distribution changes, but distributional learning from previous probability distributions also carries over. In fact, massed exposure to a single point along the dimension results in seemingly maladaptive loss of sensitivity, occurring entirely in the absence of feedback or reward, along a range of subsequently encountered frequencies. This points to a gain mechanism that suppresses sensitivity to regions along a perceptual dimension that are less likely to be encountered.

Affiliation(s)
- Sahil Luthra
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA 15213
- Austin Luor
- Department of Psychology & Center for Perceptual Systems, The University of Texas at Austin, Austin, TX 78712
- Adam T. Tierney
- Department of Psychological Sciences, Birkbeck College, University of London, United Kingdom WC1E 7HX
- Frederic Dick
- Experimental Psychology, University College London, United Kingdom WC1E 6BT
- Lori L. Holt
- Department of Psychology & Center for Perceptual Systems, The University of Texas at Austin, Austin, TX 78712

8. Lowet AS, Zheng Q, Meng M, Matias S, Drugowitsch J, Uchida N. An opponent striatal circuit for distributional reinforcement learning. Nature 2025; 639:717-726. PMID: 39972123; PMCID: PMC12007193; DOI: 10.1038/s41586-024-08488-5.
Abstract
Machine learning research has achieved large performance gains on a wide range of tasks by expanding the learning target from mean rewards to entire probability distributions of rewards, an approach known as distributional reinforcement learning (RL) [1]. The mesolimbic dopamine system is thought to underlie RL in the mammalian brain by updating a representation of mean value in the striatum [2], but little is known about whether, where and how neurons in this circuit encode information about higher-order moments of reward distributions [3]. Here, to fill this gap, we used high-density probes (Neuropixels) to record striatal activity from mice performing a classical conditioning task in which reward mean, reward variance and stimulus identity were independently manipulated. In contrast to traditional RL accounts, we found robust evidence for abstract encoding of variance in the striatum. Chronic ablation of dopamine inputs disorganized these distributional representations in the striatum without interfering with mean value coding. Two-photon calcium imaging and optogenetics revealed that the two major classes of striatal medium spiny neurons, D1 and D2, contributed to this code by preferentially encoding the right and left tails of the reward distribution, respectively. We synthesize these findings into a new model of the striatum and mesolimbic dopamine that harnesses the opponency between D1 and D2 medium spiny neurons [4-9] to reap the computational benefits of distributional RL.
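A generic way to see how a population can encode more than the mean is expectile-style distributional TD learning, in which units with different ratios of positive to negative learning rates converge to different statistics of the same reward distribution. The sketch below is the standard construction from the distributional RL literature, not the authors' D1/D2 striatal model; the bimodal reward distribution and the learning rates are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

# Bimodal reward distribution: variance matters, not just the mean (mean = 5.0).
def draw_reward():
    return rng.choice([1.0, 9.0], p=[0.5, 0.5])

# Each unit has its own positive/negative learning rates (asymmetry = "optimism").
alpha_pos = np.array([0.02, 0.10, 0.18])
alpha_neg = np.array([0.18, 0.10, 0.02])
values = np.zeros(3)

for _ in range(50000):
    r = draw_reward()
    delta = r - values                              # per-unit prediction errors
    values += np.where(delta > 0, alpha_pos, alpha_neg) * delta

# Asymmetric units converge to different expectiles: pessimistic ones below the
# mean, balanced ones near the mean, optimistic ones above it.
print("learned values:", np.round(values, 2))
```

Reading out the spread of such values across a population recovers information about reward variance, which is the kind of higher-order statistic the recordings address.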

Affiliation(s)
- Adam S Lowet
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, USA
- Program in Neuroscience, Harvard University, Boston, MA, USA
- Qiao Zheng
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Melissa Meng
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, USA
- Sara Matias
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, USA
- Jan Drugowitsch
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Naoshige Uchida
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, USA

9. Pjanovic V, Zavatone-Veth J, Masset P, Keemink S, Nardin M. Combining Sampling Methods with Attractor Dynamics in Spiking Models of Head-Direction Systems. bioRxiv 2025:2025.02.25.640158. PMID: 40060526; PMCID: PMC11888369; DOI: 10.1101/2025.02.25.640158.
Abstract
Uncertainty is a fundamental aspect of the natural environment, requiring the brain to infer and integrate noisy signals to guide behavior effectively. Sampling-based inference has been proposed as a mechanism for dealing with uncertainty, particularly in early sensory processing. However, it is unclear how to reconcile sampling-based methods with operational principles of higher-order brain areas, such as attractor dynamics of persistent neural representations. In this study, we present a spiking neural network model for the head-direction (HD) system that combines sampling-based inference with attractor dynamics. To achieve this, we derive the required spiking neural network dynamics and interactions to perform sampling from a large family of probability distributions, including variables encoded with Poisson noise. We then propose a method that allows the network to update its estimate of the current head direction by integrating angular velocity samples (derived from noisy inputs) with a pull towards a circular manifold, thereby maintaining consistent attractor dynamics. This model makes specific, testable predictions about the HD system that can be examined in future neurophysiological experiments: it predicts correlated subthreshold voltage fluctuations; distinctive short- and long-term firing correlations among neurons; and characteristic statistics of the movement of the neural activity "bump" representing the head direction. Overall, our approach extends previous theories on probabilistic sampling with spiking neurons, offers a novel perspective on the computations responsible for orientation and navigation, and supports the hypothesis that sampling-based methods can be combined with attractor dynamics to provide a viable framework for studying neural dynamics across the brain.

Affiliation(s)
- Vojko Pjanovic
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Department of Machine Learning and Neural Computing, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Netherlands
- Jacob Zavatone-Veth
- Society of Fellows and Center for Brain Science, Harvard University, Cambridge, MA, USA
- Paul Masset
- Department of Psychology, McGill University, Montréal QC, Canada
- Sander Keemink
- Department of Machine Learning and Neural Computing, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Netherlands
- Michele Nardin
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA

10. Vékony T, Farkas BC, Brezóczki B, Mittner M, Csifcsák G, Simor P, Németh D. Mind wandering enhances statistical learning. iScience 2025; 28:111703. PMID: 39906558; PMCID: PMC11791256; DOI: 10.1016/j.isci.2024.111703.
Abstract
The human brain spends 30-50% of its waking hours engaged in mind-wandering (MW), a common phenomenon in which individuals either spontaneously or deliberately shift their attention away from external tasks to task-unrelated internal thoughts. Despite the significant amount of time dedicated to MW, its underlying reasons remain unexplained. Our pre-registered study investigates the potential adaptive aspects of MW, particularly its role in predictive processes measured by statistical learning. We simultaneously assessed visuomotor task performance as well as the capability to extract probabilistic information from the environment while assessing task focus (on-task vs. MW). We found that MW was associated with enhanced extraction of hidden, but predictable patterns. This finding suggests that MW may have functional relevance in human cognition by shaping behavior and predictive processes. Overall, our results highlight the importance of considering the adaptive aspects of MW, and its potential to enhance certain fundamental cognitive abilities.

Affiliation(s)
- Teodóra Vékony
- Centre de Recherche en Neurosciences de Lyon CRNL U1028 UMR5292, INSERM, CNRS, Université Claude Bernard Lyon 1, 69500 Bron, France
- Gran Canaria Cognitive Research Center, Department of Education and Psychology, University of Atlántico Medio, 35017 Las Palmas de Gran Canaria, Spain
- Bence C. Farkas
- UVSQ, INSERM, CESP, Université Paris-Saclay, 94807 Villejuif, France
- Institut du Psychotraumatisme de l'Enfant et de l'Adolescent, Conseil Départemental Yvelines et Hauts-de-Seine et Centre Hospitalier des Versailles, 78000 Versailles, France
- Centre de Recherche en Épidémiologie et en Santé des Populations, INSERM U1018, Université Paris-Saclay, Université Versailles Saint-Quentin, 94807 Paris, France
- Bianka Brezóczki
- Doctoral School of Psychology, Eötvös Loránd University, 1064 Budapest, Hungary
- Institute of Psychology, Eötvös Loránd University, 1064 Budapest, Hungary
- Institute of Cognitive Neuroscience and Psychology, HUN-REN Research Centre for Natural Sciences, 1117 Budapest, Hungary
- Matthias Mittner
- Department of Psychology, UiT The Arctic University of Norway, 9037 Tromsø, Norway
- Gábor Csifcsák
- Department of Psychology, UiT The Arctic University of Norway, 9037 Tromsø, Norway
- Péter Simor
- Institute of Psychology, Eötvös Loránd University, 1064 Budapest, Hungary
- Institute of Behavioral Sciences, Semmelweis University, 1085 Budapest, Hungary
- IMéRA Institute for Advanced Studies of Aix-Marseille University, 13004 Marseille, France
- Dezső Németh
- Centre de Recherche en Neurosciences de Lyon CRNL U1028 UMR5292, INSERM, CNRS, Université Claude Bernard Lyon 1, 69500 Bron, France
- Gran Canaria Cognitive Research Center, Department of Education and Psychology, University of Atlántico Medio, 35017 Las Palmas de Gran Canaria, Spain
- BML-NAP Research Group, Institute of Psychology, Eötvös Loránd University & Institute of Cognitive Neuroscience and Psychology, HUN-REN Research Centre for Natural Sciences, 1071 Budapest, Hungary

11. Axenie C. Antifragile control systems in neuronal processing: a sensorimotor perspective. Biological Cybernetics 2025; 119:7. PMID: 39954086; PMCID: PMC11829851; DOI: 10.1007/s00422-025-01003-7.
Abstract
The stability-robustness-resilience-adaptiveness continuum in neuronal processing follows a hierarchical structure that explains interactions and information processing among the different time scales. Interestingly, using "canonical" neuronal computational circuits, such as Homeostatic Activity Regulation, Winner-Take-All, and Hebbian Temporal Correlation Learning, one can extend the behavior spectrum towards antifragility. Cast already in both probability theory and dynamical systems, antifragility can explain and define the interesting interplay among neural circuits, found, for instance, in sensorimotor control in the face of uncertainty and volatility. This perspective proposes a new framework to analyze and describe closed-loop neuronal processing using principles of antifragility, targeting sensorimotor control. Our objective is two-fold. First, we introduce antifragile control as a conceptual framework to quantify closed-loop neuronal network behaviors that gain from uncertainty and volatility. Second, we introduce neuronal network design principles, opening the path to neuromorphic implementations and transfer to technical systems.

Affiliation(s)
- Cristian Axenie
- Department of Computer Science and Center for Artificial Intelligence, Technische Hochschule Nürnberg Georg Simon Ohm, Keßlerplatz 12, 90489, Nuremberg, Germany

12. Lakhera S, Herbert E, Gjorgjieva J. Modeling the Emergence of Circuit Organization and Function during Development. Cold Spring Harb Perspect Biol 2025; 17:a041511. PMID: 38858072; PMCID: PMC11864115; DOI: 10.1101/cshperspect.a041511.
Abstract
Developing neural circuits show unique patterns of spontaneous activity and structured network connectivity shaped by diverse activity-dependent plasticity mechanisms. Based on extensive experimental work characterizing patterns of spontaneous activity in different brain regions over development, theoretical and computational models have played an important role in delineating the generation and function of individual features of spontaneous activity and their role in the plasticity-driven formation of circuit connectivity. Here, we review recent modeling efforts that explore how the developing cortex and hippocampus generate spontaneous activity, focusing on specific connectivity profiles and the gradual strengthening of inhibition as the key drivers behind the observed developmental changes in spontaneous activity. We then discuss computational models that mechanistically explore how different plasticity mechanisms use this spontaneous activity to instruct the formation and refinement of circuit connectivity, from the formation of single neuron receptive fields to sensory feature maps and recurrent architectures. We end by highlighting several open challenges regarding the functional implications of the discussed circuit changes, wherein models could provide the missing step linking immature developmental and mature adult information processing capabilities.

Affiliation(s)
- Shreya Lakhera
- School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
- Elizabeth Herbert
- School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
- Julijana Gjorgjieva
- School of Life Sciences, Technical University of Munich, 85354 Freising, Germany

13. Hou K, Zorzi M, Testolin A. Estimating the distribution of numerosity and non-numerical visual magnitudes in natural scenes using computer vision. Psychological Research 2024; 89:31. PMID: 39625570; DOI: 10.1007/s00426-024-02064-2.
Abstract
Humans share with many animal species the ability to perceive and approximately represent the number of objects in visual scenes. This ability improves throughout childhood, suggesting that learning and development play a key role in shaping our number sense. This hypothesis is further supported by computational investigations based on deep learning, which have shown that numerosity perception can spontaneously emerge in neural networks that learn the statistical structure of images with a varying number of items. However, neural network models are usually trained using synthetic datasets that might not faithfully reflect the statistical structure of natural environments, and there is also growing interest in using more ecological visual stimuli to investigate numerosity perception in humans. In this work, we exploit recent advances in computer vision algorithms to design and implement an original pipeline that can be used to estimate the distribution of numerosity and non-numerical magnitudes in large-scale datasets containing thousands of real images depicting objects in daily life situations. We show that in natural visual scenes the frequency of appearance of different numerosities follows a power law distribution. Moreover, we show that the correlational structure for numerosity and continuous magnitudes is stable across datasets and scene types (homogeneous vs. heterogeneous object sets). We suggest that considering such "ecological" pattern of covariance is important to understand the influence of non-numerical visual cues on numerosity judgements.
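Given per-image object counts from any detector, the reported power-law shape of the numerosity distribution can be checked with a simple log-log fit. The synthetic counts and the least-squares estimator below are illustrative; the paper's computer-vision pipeline and datasets are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for per-image object counts (e.g., output of an object detector or segmenter).
# Here we synthesize counts from a discrete power law with exponent ~2 as a placeholder.
exponent_true = 2.0
support = np.arange(1, 101)
p = support.astype(float) ** (-exponent_true)
p /= p.sum()
counts = rng.choice(support, size=20000, p=p)

# Empirical frequency of each numerosity, then a least-squares fit in log-log space.
values, freqs = np.unique(counts, return_counts=True)
mask = freqs > 5                                  # drop poorly sampled tail bins
slope, intercept = np.polyfit(np.log(values[mask]), np.log(freqs[mask]), 1)

print(f"fitted power-law exponent ~ {-slope:.2f} (generating exponent {exponent_true})")
```

A straight line in log-log coordinates, with the fitted slope recovering the generating exponent, is the signature of the power-law frequency pattern described in the abstract.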

Affiliation(s)
- Kuinan Hou
- Department of General Psychology, University of Padova, Padua, Italy
- Marco Zorzi
- Department of General Psychology, University of Padova, Padua, Italy
- IRCCS San Camillo Hospital, Lido, VE, Italy
- Alberto Testolin
- Department of General Psychology, University of Padova, Padua, Italy
- Department of Mathematics, University of Padova, Padua, Italy

14. Keysers C, Silani G, Gazzola V. Predictive coding for the actions and emotions of others and its deficits in autism spectrum disorders. Neurosci Biobehav Rev 2024; 167:105877. PMID: 39260714; DOI: 10.1016/j.neubiorev.2024.105877.
Abstract
Traditionally, the neural basis of social perception has been studied by showing participants brief examples of the actions or emotions of others presented in randomized order to prevent participants from anticipating what others do and feel. This approach is optimal to isolate the importance of information flow from lower to higher cortical areas. The degree to which feedback connections and Bayesian hierarchical predictive coding contribute to how mammals process more complex social stimuli has been less explored, and will be the focus of this review. We illustrate paradigms that start to capture how participants predict the actions and emotions of others under more ecological conditions, and discuss the brain activity measurement methods suitable to reveal the importance of feedback connections in these predictions. Together, these efforts draw a richer picture of social cognition in which predictive coding and feedback connections play significant roles. We further discuss how the notion of predictive coding is influencing how we think of autism spectrum disorder.

Affiliation(s)
- Christian Keysers
- Social Brain Lab, Netherlands Institute for Neuroscience, Royal Netherlands Academy of Art and Sciences, Meibergdreef 47, Amsterdam 1105 BA, the Netherlands; Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands
- Giorgia Silani
- Department of Clinical and Health Psychology, University of Vienna, Wien, Austria
- Valeria Gazzola
- Social Brain Lab, Netherlands Institute for Neuroscience, Royal Netherlands Academy of Art and Sciences, Meibergdreef 47, Amsterdam 1105 BA, the Netherlands; Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands

15. Geadah V, Barello G, Greenidge D, Charles AS, Pillow JW. Sparse-Coding Variational Autoencoders. Neural Comput 2024; 36:2571-2601. PMID: 39383030; DOI: 10.1162/neco_a_01715.
Abstract
The sparse coding model posits that the visual system has evolved to efficiently code natural stimuli using a sparse set of features from an overcomplete dictionary. However, the original sparse coding model suffered from two key limitations: (1) computing the neural response to an image patch required minimizing a nonlinear objective function via recurrent dynamics and (2) fitting relied on approximate inference methods that ignored uncertainty. Although subsequent work has developed several methods to overcome these obstacles, we propose a novel solution inspired by the variational autoencoder (VAE) framework. We introduce the sparse coding variational autoencoder (SVAE), which augments the sparse coding model with a probabilistic recognition model parameterized by a deep neural network. This recognition model provides a neurally plausible feedforward implementation for the mapping from image patches to neural activities and enables a principled method for fitting the sparse coding model to data via maximization of the evidence lower bound (ELBO). The SVAE differs from standard VAEs in three key respects: the latent representation is overcomplete (there are more latent dimensions than image pixels), the prior is sparse or heavy-tailed instead of gaussian, and the decoder network is a linear projection instead of a deep network. We fit the SVAE to natural image data under different assumed prior distributions and show that it obtains higher test performance than previous fitting methods. Finally, we examine the response properties of the recognition network and show that it captures important nonlinear properties of neurons in the early visual pathway.
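The three departures from a standard VAE (overcomplete latent, sparse Laplace prior, linear decoder) show up directly in how the ELBO is assembled. Below is a minimal, untrained sketch of a single-sample ELBO estimate; the dimensions, the one-layer recognition mapping, and all parameter values are placeholders, not the fitted model from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

D, K = 64, 128          # pixels per patch, latent dimensions (overcomplete: K > D)
sigma_x = 0.5           # decoder (likelihood) noise scale
b = 1.0                 # Laplace prior scale

# Linear decoder ("dictionary") and a one-layer recognition mapping (placeholders).
W = rng.normal(0, 1 / np.sqrt(K), size=(D, K))
A_mu = rng.normal(0, 0.1, (K, D))
A_logsig = rng.normal(0, 0.1, (K, D))

def elbo_single_sample(x):
    """One-sample Monte Carlo estimate of the evidence lower bound for patch x."""
    mu, log_sig = A_mu @ x, A_logsig @ x          # recognition model q(z|x) = N(mu, diag(sig^2))
    sig = np.exp(log_sig)
    z = mu + sig * rng.normal(size=K)             # reparameterized sample from q
    log_lik = (-0.5 * np.sum((x - W @ z) ** 2) / sigma_x**2
               - 0.5 * D * np.log(2 * np.pi * sigma_x**2))          # linear Gaussian decoder
    log_prior = -np.sum(np.abs(z)) / b - K * np.log(2 * b)          # sparse Laplace prior
    log_q = (-0.5 * np.sum(((z - mu) / sig) ** 2)
             - np.sum(np.log(sig)) - 0.5 * K * np.log(2 * np.pi))   # Gaussian recognition density
    return log_lik + log_prior - log_q            # ELBO = E_q[log p(x,z) - log q(z|x)]

x = rng.normal(size=D)                            # stand-in image patch
print("ELBO estimate:", elbo_single_sample(x))
```

Training would then maximize this quantity with respect to W and the recognition parameters; the sketch only shows how the three SVAE-specific modeling choices enter the objective.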

Affiliation(s)
- Victor Geadah
- Program in Applied and Computational Mathematics, Princeton University, Princeton, NJ 08544, U.S.A.
- Gabriel Barello
- Institute of Neuroscience, University of Oregon, Eugene, OR 97403, U.S.A.
- Daniel Greenidge
- Department of Computer Science, Princeton University, Princeton, NJ 08544, U.S.A.
- Adam S Charles
- Department of Biomedical Engineering, Center for Imaging Science, and Kavli Neuroscience Discovery Institute, Baltimore, MD 21218, U.S.A.
- Jonathan W Pillow
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, U.S.A.

16. DiBerardino PAV, Filipowicz ALS, Danckert J, Anderson B. Plinko: Eliciting beliefs to build better models of statistical learning and mental model updating. Br J Psychol 2024; 115:759-786. PMID: 39096484; DOI: 10.1111/bjop.12724.
Abstract
Prior beliefs are central to Bayesian accounts of cognition, but many of these accounts do not directly measure priors. More specifically, initial states of belief heavily influence how new information is assumed to be utilized when updating a particular model. Despite this, prior and posterior beliefs are either inferred from sequential participant actions or elicited through impoverished means. We had participants play a version of the game 'Plinko' to first elicit individual participant priors in a theoretically agnostic manner. Subsequent learning and updating of participant beliefs was then directly measured. We show that participants hold various priors that cluster around prototypical probability distributions that in turn influence learning. In follow-up studies, we show that participant priors are stable over time and that the ability to update beliefs is influenced by a simple environmental manipulation (i.e., a short break). These data reveal the importance of directly measuring participant beliefs rather than assuming or inferring them as has been widely done in the literature to date. The Plinko game provides a flexible and fecund means for examining statistical learning and mental model updating.

Affiliation(s)
- James Danckert
- Department of Psychology, University of Waterloo, Waterloo, Ontario, Canada
- Britt Anderson
- Department of Psychology, University of Waterloo, Waterloo, Ontario, Canada

17. Castillo IO, Schrater P, Pitkow X. Control when confidence is costly. arXiv 2024:arXiv:2406.14427v2. PMID: 39575123; PMCID: PMC11581108.
Abstract
We develop a version of stochastic control that accounts for computational costs of inference. Past studies identified efficient coding without control, or efficient control that neglects the cost of synthesizing information. Here we combine these concepts into a framework where agents rationally approximate inference for efficient control. Specifically, we study Linear Quadratic Gaussian (LQG) control with an added internal cost on the relative precision of the posterior probability over the world state. This creates a trade-off: an agent can obtain more utility overall by sacrificing some task performance, if doing so saves enough bits during inference. We discover that the rational strategy that solves the joint inference and control problem goes through phase transitions depending on the task demands, switching from a costly but optimal inference to a family of suboptimal inferences related by rotation transformations, each of which misestimates the stability of the world. In all cases, the agent moves more to think less. This work provides a foundation for a new type of rational computation that could be used by both brains and machines for efficient but computationally constrained control.
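One way to write down the kind of objective described here, as a hedged schematic rather than the paper's exact formulation, is a standard LQG cost augmented with a per-timestep charge on the precision of the agent's posterior:

```latex
\min_{\pi}\;
\mathbb{E}\!\left[\sum_{t} \left( x_t^{\top} Q\, x_t + u_t^{\top} R\, u_t \right)\right]
\;+\; \lambda \sum_{t} C\!\left(\frac{\Lambda_t}{\Lambda_t^{\mathrm{opt}}}\right)
```

Here $x_t$ is the world state, $u_t$ the control, $\Lambda_t$ the precision of the agent's approximate posterior over $x_t$, $\Lambda_t^{\mathrm{opt}}$ the precision of the exact (Kalman) posterior, $C$ an increasing cost on relative precision, and $\lambda$ the price of inference. In this schematic, "moving more to think less" corresponds to accepting a larger state and control cost in the first term in exchange for a smaller inference charge in the second.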

Affiliation(s)
- Paul Schrater
- Departments of Computer Science and Psychology, University of Minnesota, Minneapolis, MN 55455
- Xaq Pitkow
- Departments of Electrical and Computer Engineering and Computer Science, Rice University, Houston, TX 77005
- Neuroscience Institute and Department of Machine Learning, Carnegie Mellon University, Pittsburgh, PA 15213
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030

18. Oliviers G, Bogacz R, Meulemans A. Learning probability distributions of sensory inputs with Monte Carlo predictive coding. PLoS Comput Biol 2024; 20:e1012532. PMID: 39475902; PMCID: PMC11524488; DOI: 10.1371/journal.pcbi.1012532.
Abstract
It has been suggested that the brain employs probabilistic generative models to optimally interpret sensory information. This hypothesis has been formalised in distinct frameworks, focusing on explaining separate phenomena. On one hand, classic predictive coding theory proposed how the probabilistic models can be learned by networks of neurons employing local synaptic plasticity. On the other hand, neural sampling theories have demonstrated how stochastic dynamics enable neural circuits to represent the posterior distributions of latent states of the environment. These frameworks were brought together by variational filtering that introduced neural sampling to predictive coding. Here, we consider a variant of variational filtering for static inputs, to which we refer as Monte Carlo predictive coding (MCPC). We demonstrate that the integration of predictive coding with neural sampling results in a neural network that learns precise generative models using local computation and plasticity. The neural dynamics of MCPC infer the posterior distributions of the latent states in the presence of sensory inputs, and can generate likely inputs in their absence. Furthermore, MCPC captures the experimental observations on the variability of neural activity during perceptual tasks. By combining predictive coding and neural sampling, MCPC can account for both sets of neural data that previously had been explained by these individual frameworks.
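The combination of predictive coding with sampling can be illustrated in the simplest possible setting: a one-dimensional linear-Gaussian generative model, where Langevin dynamics on the predictive-coding energy draws samples whose mean and variance can be checked against the exact posterior. This sketch only illustrates the general sampling idea under those assumptions; it is not the network architecture or plasticity rules of MCPC.

```python
import numpy as np

rng = np.random.default_rng(5)

# Generative model: z ~ N(mu0, s0^2), x | z ~ N(w*z, sx^2). Observe a single x.
mu0, s0, w, sx = 0.0, 1.0, 2.0, 0.5
x = 1.5

def energy_grad(z):
    """Gradient of E(z) = (z - mu0)^2 / (2 s0^2) + (x - w z)^2 / (2 sx^2)."""
    return (z - mu0) / s0**2 - w * (x - w * z) / sx**2

# Langevin dynamics: noisy gradient descent on the energy samples the posterior p(z|x).
eta, n_steps, burn_in = 1e-3, 60000, 10000
z, samples = 0.0, []
for t in range(n_steps):
    z += -eta * energy_grad(z) + np.sqrt(2 * eta) * rng.normal()
    if t >= burn_in:
        samples.append(z)
samples = np.array(samples)

# Exact posterior for comparison (conjugate linear-Gaussian case).
post_var = 1.0 / (1.0 / s0**2 + w**2 / sx**2)
post_mean = post_var * (mu0 / s0**2 + w * x / sx**2)
print(f"sampled mean/var: {samples.mean():.3f} / {samples.var():.3f}")
print(f"exact   mean/var: {post_mean:.3f} / {post_var:.3f}")
```

Removing the noise term recovers ordinary predictive-coding inference (convergence to the posterior mode); keeping it turns the same dynamics into a sampler of the full posterior, which is the key move the abstract describes.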

Affiliation(s)
- Gaspard Oliviers
- MRC Brain Network Dynamics Unit, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
- Rafal Bogacz
- MRC Brain Network Dynamics Unit, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom

19. Székely A, Török B, Kiss M, Janacsek K, Németh D, Orbán G. Identifying Transfer Learning in the Reshaping of Inductive Biases. Open Mind (Camb) 2024; 8:1107-1128. PMID: 39296349; PMCID: PMC11410354; DOI: 10.1162/opmi_a_00158.
Abstract
Transfer learning, the reuse of newly acquired knowledge under novel circumstances, is a critical hallmark of human intelligence that has frequently been pitted against the capacities of artificial learning agents. Yet, the computations relevant to transfer learning have been little investigated in humans. The benefit of efficient inductive biases (meta-level constraints that shape learning, often referred to as priors in the Bayesian learning approach) has been both theoretically and experimentally established. Efficiency of inductive biases depends on their capacity to generalize earlier experiences. We argue that successful transfer learning upon task acquisition is ensured by updating inductive biases, and transfer of knowledge hinges upon capturing the structure of the task in the inductive bias that can be reused in novel tasks. To explore this, we trained participants on a non-trivial visual stimulus sequence task (Alternating Serial Response Times, ASRT); during the Training phase, participants were exposed to one specific sequence for multiple days, then in the Transfer phase, the sequence changed, while the underlying structure of the task remained the same. Our results show that beyond the acquisition of the stimulus sequence, our participants were also able to update their inductive biases. Acquisition of the new sequence was considerably sped up by earlier exposure, but this enhancement was specific to individuals showing signatures of abandoning initial inductive biases. Enhancement of learning was reflected in the development of a new internal model. Additionally, our findings highlight the ability of participants to construct an inventory of internal models and alternate between them based on environmental demands. Further, investigation of behavior during transfer revealed that it is the subjective internal model of individuals that predicts transfer across tasks. Our results demonstrate that even imperfect learning in a challenging environment helps learning in a new context by reusing subjective and partial knowledge about environmental regularities.

Affiliation(s)
- Anna Székely
- Department of Computational Sciences, HUN-REN Wigner Research Centre for Physics, Konkoly-Thege Miklós út 29-33., H-1121, Budapest, Hungary
- Department of Cognitive Science, Faculty of Natural Sciences, Budapest University of Technology and Economics, Műegyetem rkp. 3., H-1111 Budapest, Hungary
- Mariann Kiss
- Department of Cognitive Science, Faculty of Natural Sciences, Budapest University of Technology and Economics, Műegyetem rkp. 3., H-1111 Budapest, Hungary
- Karolina Janacsek
- Centre for Thinking and Learning, Institute for Lifecourse Development, School of Human Sciences, Faculty of Education, Health and Human Sciences, University of Greenwich, Greenwich, SE10 9LS, United Kingdom
- Institute of Psychology, Faculty of Education and Psychology, Eötvös Loránd University, 1071 Budapest, Damjanich u. 41-43, Hungary
- Dezső Németh
- Université Claude Bernard Lyon 1, CNRS, INSERM, Centre de Recherche en Neurosciences de Lyon CRNL U1028 UMR5292, 69500, Bron, France
- NAP Research Group, Institute of Psychology, Eötvös Loránd University & Institute of Cognitive Neuroscience and Psychology, HUN-REN Research Centre for Natural Sciences, Budapest, 1071, Hungary
- Department of Education and Psychology, Faculty of Social Sciences, University of Atlántico Medio, 35017, Las Palmas de Gran Canaria, Spain
- Gergő Orbán
- Department of Computational Sciences, HUN-REN Wigner Research Centre for Physics, Konkoly-Thege Miklós út 29-33., H-1121, Budapest, Hungary

20. Zhu Z, Qi Y, Lu W, Feng J. Learning to integrate parts for whole through correlated neural variability. PLoS Comput Biol 2024; 20:e1012401. PMID: 39226329; PMCID: PMC11398653; DOI: 10.1371/journal.pcbi.1012401.
Abstract
Neural activity in the cortex exhibits a wide range of firing variability and rich correlation structures. Studies on neural coding indicate that correlated neural variability can influence the quality of neural codes, either beneficially or adversely. However, the mechanisms by which correlated neural variability is transformed and processed across neural populations to achieve meaningful computation remain largely unclear. Here we propose a theory of covariance computation with spiking neurons which offers a unifying perspective on neural representation and computation with correlated noise. We employ a recently proposed computational framework known as the moment neural network to resolve the nonlinear coupling of correlated neural variability with a task-driven approach to constructing neural network models for performing covariance-based perceptual tasks. In particular, we demonstrate how perceptual information initially encoded entirely within the covariance of upstream neurons' spiking activity can be passed, in a near-lossless manner, to the mean firing rate of downstream neurons, which in turn can be used to inform inference. The proposed theory of covariance computation addresses an important question of how the brain extracts perceptual information from noisy sensory stimuli to generate a stable perceptual whole and indicates a more direct role that correlated variability plays in cortical information processing.

Affiliation(s)
- Zhichao Zhu
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China
- Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence (Fudan University), Ministry of Education, Shanghai, China
- Yang Qi
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China
- Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence (Fudan University), Ministry of Education, Shanghai, China
- MOE Frontiers Center for Brain Science, Fudan University, Shanghai, China
- Wenlian Lu
- School of Mathematical Sciences, Fudan University, Shanghai, China
- Shanghai Center for Mathematical Sciences, Shanghai, China
- Shanghai Key Laboratory for Contemporary Applied Mathematics, Shanghai, China
- Key Laboratory of Mathematics for Nonlinear Science, Shanghai, China
- Jianfeng Feng
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China
- Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence (Fudan University), Ministry of Education, Shanghai, China
- MOE Frontiers Center for Brain Science, Fudan University, Shanghai, China
- Zhangjiang Fudan International Innovation Center, Shanghai, China

21. El Rassi Y, Handjaras G, Perciballi C, Leo A, Papale P, Corbetta M, Ricciardi E, Betti V. A visual representation of the hand in the resting somatomotor regions of the human brain. Sci Rep 2024; 14:18298. PMID: 39112629; PMCID: PMC11306329; DOI: 10.1038/s41598-024-69248-z.
Abstract
Hand visibility affects motor control, perception, and attention, as visual information is integrated into an internal model of somatomotor control. Spontaneous brain activity, i.e., at rest, in the absence of an active task, is correlated among somatomotor regions that are jointly activated during motor tasks. Recent studies suggest that spontaneous activity patterns not only replay task activation patterns but also maintain a model of the body's and environment's statistical regularities (priors), which may be used to predict upcoming behavior. Here, we test whether spontaneous activity in the human somatomotor cortex as measured using fMRI is modulated by visual stimuli that display hands vs. non-hand stimuli and by the use/action they represent. A multivariate pattern analysis was performed to examine the similarity between spontaneous activity patterns and task-evoked patterns to the presentation of natural hands, robot hands, gloves, or control stimuli (food). In the left somatomotor cortex, we observed a stronger (multivoxel) spatial correlation between resting state activity and natural hand picture patterns compared to other stimuli. No task-rest similarity was found in the visual cortex. Spontaneous activity patterns in somatomotor brain regions code for the visual representation of human hands and their use.

Affiliation(s)
- Yara El Rassi
- IMT School for Advanced Studies Lucca, 55100, Lucca, Italy
- Andrea Leo
- IMT School for Advanced Studies Lucca, 55100, Lucca, Italy
- Department of Translational Research and Advanced Technologies in Medicine and Surgery, University of Pisa, 56126, Pisa, Italy
- Paolo Papale
- IMT School for Advanced Studies Lucca, 55100, Lucca, Italy
- Department of Vision & Cognition, Netherlands Institute for Neuroscience (KNAW), Meibergdreef 47, 1105 BA, Amsterdam, The Netherlands
- Maurizio Corbetta
- Department of Neuroscience and Padova Neuroscience Center (PNC), University of Padua, 35131, Padua, Italy
- Venetian Institute of Molecular Medicine (VIMM), 35129, Padua, Italy
- Viviana Betti
- IRCCS Fondazione Santa Lucia, 00179, Rome, Italy
- Department of Psychology, Sapienza University of Rome, 00185, Rome, Italy

22. Malkin J, O'Donnell C, Houghton CJ, Aitchison L. Signatures of Bayesian inference emerge from energy-efficient synapses. eLife 2024; 12:RP92595. PMID: 39106188; PMCID: PMC11302983; DOI: 10.7554/elife.92595.
Abstract
Biological synaptic transmission is unreliable, and this unreliability likely degrades neural circuit performance. While there are biophysical mechanisms that can increase reliability, for instance by increasing vesicle release probability, these mechanisms cost energy. We examined four such mechanisms along with the associated scaling of the energetic costs. We then embedded these energetic costs for reliability in artificial neural networks (ANNs) with trainable stochastic synapses, and trained these networks on standard image classification tasks. The resulting networks revealed a tradeoff between circuit performance and the energetic cost of synaptic reliability. Additionally, the optimised networks exhibited two testable predictions consistent with pre-existing experimental data. Specifically, synapses with lower variability tended to have (1) higher input firing rates and (2) lower learning rates. Surprisingly, these predictions also arise when synapse statistics are inferred through Bayesian inference. Indeed, we were able to find a formal, theoretical link between the performance-reliability cost tradeoff and Bayesian inference. This connection suggests two incompatible possibilities: evolution may have chanced upon a scheme for implementing Bayesian inference by optimising energy efficiency, or alternatively, energy-efficient synapses may display signatures of Bayesian inference without actually using Bayes to reason about uncertainty.

Affiliation(s)
- James Malkin
- Faculty of Engineering, University of Bristol, Bristol, United Kingdom
- Cian O'Donnell
- Faculty of Engineering, University of Bristol, Bristol, United Kingdom
- Intelligent Systems Research Centre, School of Computing, Engineering, and Intelligent Systems, Ulster University, Derry/Londonderry, United Kingdom
- Conor J Houghton
- Faculty of Engineering, University of Bristol, Bristol, United Kingdom

23. Bredenberg C, Savin C. Desiderata for Normative Models of Synaptic Plasticity. Neural Comput 2024; 36:1245-1285. PMID: 38776950; DOI: 10.1162/neco_a_01671.
Abstract
Normative models of synaptic plasticity use computational rationales to arrive at predictions of behavioral and network-level adaptive phenomena. In recent years, there has been an explosion of theoretical work in this realm, but experimental confirmation remains limited. In this review, we organize work on normative plasticity models in terms of a set of desiderata that, when satisfied, are designed to ensure that a given model demonstrates a clear link between plasticity and adaptive behavior, is consistent with known biological evidence about neural plasticity, and yields specific, testable predictions. As a prototype, we include a detailed analysis of the REINFORCE algorithm. We also discuss how new models have begun to improve on the identified criteria and suggest avenues for further development. Overall, we provide a conceptual guide to help develop neural learning theories that are precise, powerful, and experimentally testable.
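Since the review uses REINFORCE as its prototype, a minimal bandit version is easy to state: the policy-gradient update nudges the log-probability of the chosen action in proportion to reward minus a baseline. This generic sketch is standard REINFORCE, not the review's specific network or plasticity formulation; the reward probabilities and learning rates are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)

n_actions = 3
true_reward = np.array([0.2, 0.5, 0.8])   # Bernoulli reward probabilities per action
theta = np.zeros(n_actions)               # policy parameters (action preferences)
baseline, lr, beta = 0.0, 0.1, 0.05

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

for _ in range(20000):
    probs = softmax(theta)
    a = rng.choice(n_actions, p=probs)
    r = float(rng.random() < true_reward[a])
    # REINFORCE: theta += lr * (r - baseline) * grad log pi(a); for a softmax policy
    # the gradient of log pi(a) with respect to theta is one_hot(a) - probs.
    grad_logp = -probs
    grad_logp[a] += 1.0
    theta += lr * (r - baseline) * grad_logp
    baseline += beta * (r - baseline)      # running-average reward baseline

print("final policy:", np.round(softmax(theta), 3))   # should favor the best action
```

Cast as a plasticity rule, the update is a product of a global reward signal and a locally available eligibility term, which is exactly the property the review's desiderata are meant to probe.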

Affiliation(s)
- Colin Bredenberg
- Center for Neural Science, New York University, New York, NY 10003, U.S.A.
- Mila-Quebec AI Institute, Montréal, QC H2S 3H1, Canada
- Cristina Savin
- Center for Neural Science, New York University, New York, NY 10003, U.S.A.
- Center for Data Science, New York University, New York, NY 10011, U.S.A.

24. Fischer BJ, Shadron K, Ferger R, Peña JL. Single trial Bayesian inference by population vector readout in the barn owl's sound localization system. PLoS One 2024; 19:e0303843. PMID: 38771860; PMCID: PMC11108143; DOI: 10.1371/journal.pone.0303843.
Abstract
Bayesian models have proven effective in characterizing perception, behavior, and neural encoding across diverse species and systems. The neural implementation of Bayesian inference in the barn owl's sound localization system and behavior has been previously explained by a non-uniform population code model. This model specifies the neural population activity pattern required for a population vector readout to match the optimal Bayesian estimate. While prior analyses focused on trial-averaged comparisons of model predictions with behavior and single-neuron responses, it remains unknown whether this model can accurately approximate Bayesian inference on single trials under varying sensory reliability, a fundamental condition for natural perception and behavior. In this study, we utilized mathematical analysis and simulations to demonstrate that decoding a non-uniform population code via a population vector readout approximates the Bayesian estimate on single trials for varying sensory reliabilities. Our findings provide additional support for the non-uniform population code model as a viable explanation for the barn owl's sound localization pathway and behavior.
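A sketch of the comparison at the heart of this work, assuming a generic uniform population code with von Mises tuning and Poisson spiking: a population vector readout of one trial's spike counts is compared with the posterior mean under the same Poisson model. The owl-specific non-uniform code and anatomical details are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

# Preferred directions and von Mises-like tuning curves (uniform code here;
# the owl model uses a non-uniform code, which we do not reproduce).
prefs = np.linspace(-np.pi, np.pi, 60, endpoint=False)
def rates(theta, gain=20.0, kappa=2.0):
    return gain * np.exp(kappa * (np.cos(theta - prefs) - 1))

true_theta = 0.4
counts = rng.poisson(rates(true_theta))          # single-trial spike counts

# Population vector readout: sum of preferred-direction unit vectors
# weighted by spike counts.
pv = np.angle(np.sum(counts * np.exp(1j * prefs)))

# Bayesian readout: posterior over theta under the Poisson likelihood
# (flat prior), summarized by its circular mean.
grid = np.linspace(-np.pi, np.pi, 721)
loglik = np.array([np.sum(counts * np.log(rates(t)) - rates(t)) for t in grid])
post = np.exp(loglik - loglik.max()); post /= post.sum()
bayes = np.angle(np.sum(post * np.exp(1j * grid)))

print(f"true {true_theta:.2f}, population vector {pv:.2f}, Bayes {bayes:.2f}")
```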
Collapse
Affiliation(s)
- Brian J. Fischer
- Department of Mathematics, Seattle University, Seattle, Washington, United States of America
| | - Keanu Shadron
- Dominick P Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York, United States of America
| | - Roland Ferger
- Dominick P Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York, United States of America
| | - José L. Peña
- Dominick P Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York, United States of America
| |
Collapse
|
25
|
Fu J, Shrinivasan S, Baroni L, Ding Z, Fahey PG, Pierzchlewicz P, Ponder K, Froebe R, Ntanavara L, Muhammad T, Willeke KF, Wang E, Ding Z, Tran DT, Papadopoulos S, Patel S, Reimer J, Ecker AS, Pitkow X, Antolik J, Sinz FH, Haefner RM, Tolias AS, Franke K. Pattern completion and disruption characterize contextual modulation in the visual cortex. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2023.03.13.532473. [PMID: 36993321 PMCID: PMC10054952 DOI: 10.1101/2023.03.13.532473] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/20/2023]
Abstract
Vision is fundamentally context-dependent, with neuronal responses influenced not just by local features but also by surrounding contextual information. In the visual cortex, studies using simple grating stimuli indicate that congruent stimuli - where the center and surround share the same orientation - are more inhibitory than when orientations are orthogonal, potentially serving redundancy reduction and predictive coding. Understanding these center-surround interactions in relation to natural image statistics is challenging due to the high dimensionality of the stimulus space, yet crucial for deciphering the neuronal code of real-world sensory processing. Utilizing large-scale recordings from mouse V1, we trained convolutional neural networks (CNNs) to predict and synthesize surround patterns that either optimally suppressed or enhanced responses to center stimuli, confirmed by in vivo experiments. Contrary to the notion that congruent stimuli are suppressive, we found that surrounds that completed patterns based on natural image statistics were facilitatory, while disruptive surrounds were suppressive. Applying our CNN image synthesis method in macaque V1, we discovered that pattern completion within the near surround occurred more frequently with excitatory than with inhibitory surrounds, suggesting that our results in mice are conserved in macaques. Further, experiments and model analyses confirmed previous studies reporting the opposite effect with grating stimuli in both species. Using the MICrONS functional connectomics dataset, we observed that neurons with similar feature selectivity formed excitatory connections regardless of their receptive field overlap, aligning with the pattern completion phenomenon observed for excitatory surrounds. Finally, our empirical results emerged in a normative model of perception implementing Bayesian inference, where neuronal responses are modulated by prior knowledge of natural scene statistics. In summary, our findings identify a novel relationship between contextual information and natural scene statistics and provide evidence for a role of contextual modulation in hierarchical inference.
Collapse
|
26
|
Goris RLT, Coen-Cagli R, Miller KD, Priebe NJ, Lengyel M. Response sub-additivity and variability quenching in visual cortex. Nat Rev Neurosci 2024; 25:237-252. [PMID: 38374462 PMCID: PMC11444047 DOI: 10.1038/s41583-024-00795-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/24/2024] [Indexed: 02/21/2024]
Abstract
Sub-additivity and variability are ubiquitous response motifs in the primary visual cortex (V1). Response sub-additivity enables the construction of useful interpretations of the visual environment, whereas response variability indicates the factors that limit the precision with which the brain can do this. There is increasing evidence that experimental manipulations that elicit response sub-additivity often also quench response variability. Here, we provide an overview of these phenomena and suggest that they may have common origins. We discuss empirical findings and recent model-based insights into the functional operations, computational objectives and circuit mechanisms underlying V1 activity. These different modelling approaches all predict that response sub-additivity and variability quenching often co-occur. The phenomenology of these two response motifs, as well as many of the insights obtained about them in V1, generalize to other cortical areas. Thus, the connection between response sub-additivity and variability quenching may be a canonical motif across the cortex.
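One of the model classes discussed in this review, divisive normalization, gives a compact illustration of what response sub-additivity means; the toy parameters below are arbitrary and the sketch ignores variability quenching.

```python
import numpy as np

def dn_response(drive_center, drive_surround, sigma=1.0, n=2.0):
    """Toy divisive normalization: the surround adds little direct drive but
    contributes to the normalization pool, yielding sub-additive responses."""
    num = drive_center ** n
    den = sigma ** n + drive_center ** n + drive_surround ** n
    return num / den

r_center = dn_response(2.0, 0.0)
r_surround = dn_response(0.0, 2.0)          # surround alone drives ~nothing
r_both = dn_response(2.0, 2.0)
print(r_center, r_surround, r_both)
print("sub-additive:", r_both < r_center + r_surround)
```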
Collapse
Affiliation(s)
- Robbe L T Goris
- Center for Perceptual Systems, University of Texas at Austin, Austin, TX, USA.
| | - Ruben Coen-Cagli
- Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx, NY, USA
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- Department of Ophthalmology and Visual Sciences, Albert Einstein College of Medicine, Bronx, NY, USA
| | - Kenneth D Miller
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Kavli Institute for Brain Science, Columbia University, New York, NY, USA
- Dept. of Neuroscience, College of Physicians and Surgeons, Columbia University, New York, NY, USA
- Morton B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Swartz Program in Theoretical Neuroscience, Columbia University, New York, NY, USA
| | - Nicholas J Priebe
- Center for Learning and Memory, University of Texas at Austin, Austin, TX, USA
| | - Máté Lengyel
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK
- Center for Cognitive Computation, Department of Cognitive Science, Central European University, Budapest, Hungary
| |
Collapse
|
27
|
Pezzulo G, D'Amato L, Mannella F, Priorelli M, Van de Maele T, Stoianov IP, Friston K. Neural representation in active inference: Using generative models to interact with-and understand-the lived world. Ann N Y Acad Sci 2024; 1534:45-68. [PMID: 38528782 DOI: 10.1111/nyas.15118] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/27/2024]
Abstract
This paper considers neural representation through the lens of active inference, a normative framework for understanding brain function. It delves into how living organisms employ generative models to minimize the discrepancy between predictions and observations (as scored with variational free energy). The ensuing analysis suggests that the brain learns generative models to navigate the world adaptively, not (or not solely) to understand it. Different living organisms may possess an array of generative models, spanning from those that support action-perception cycles to those that underwrite planning and imagination; namely, from explicit models that entail variables for predicting concurrent sensations, like objects, faces, or people, to action-oriented models that predict action outcomes. It then elucidates how generative models and belief dynamics might link to neural representation and the implications of different types of generative models for understanding an agent's cognitive capabilities in relation to its ecological niche. The paper concludes with open questions regarding the evolution of generative models and the development of advanced cognitive abilities, and the gradual transition from pragmatic to detached neural representations. The analysis on offer foregrounds the diverse roles that generative models play in cognitive processes and the evolution of neural representation.
Collapse
Affiliation(s)
- Giovanni Pezzulo
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
| | - Leo D'Amato
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
- Polytechnic University of Turin, Turin, Italy
| | - Francesco Mannella
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
| | - Matteo Priorelli
- Institute of Cognitive Sciences and Technologies, National Research Council, Padua, Italy
| | - Toon Van de Maele
- IDLab, Department of Information Technology, Ghent University - imec, Ghent, Belgium
| | - Ivilin Peev Stoianov
- Institute of Cognitive Sciences and Technologies, National Research Council, Padua, Italy
| | - Karl Friston
- Wellcome Centre for Human Neuroimaging, Queen Square Institute of Neurology, University College London, London, UK
- VERSES Research Lab, Los Angeles, California, USA
| |
Collapse
|
28
|
Pan X, Coen-Cagli R, Schwartz O. Probing the Structure and Functional Properties of the Dropout-Induced Correlated Variability in Convolutional Neural Networks. Neural Comput 2024; 36:621-644. [PMID: 38457752 PMCID: PMC11164410 DOI: 10.1162/neco_a_01652] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2023] [Accepted: 12/04/2023] [Indexed: 03/10/2024]
Abstract
Computational neuroscience studies have shown that the structure of neural variability to an unchanged stimulus affects the amount of information encoded. Some artificial deep neural networks, such as those with Monte Carlo dropout layers, also have variable responses when the input is fixed. However, the structure of the trial-by-trial neural covariance in neural networks with dropout has not been studied, and its role in decoding accuracy is unknown. We studied the above questions in a convolutional neural network model with dropout in both the training and testing phases. We found that trial-by-trial correlation between neurons (i.e., noise correlation) is positive and low dimensional. Neurons that are close in a feature map have larger noise correlation. These properties are surprisingly similar to the findings in the visual cortex. We further analyzed the alignment of the main axes of the covariance matrix. We found that different images share a common trial-by-trial noise covariance subspace, and they are aligned with the global signal covariance. This evidence that the noise covariance is aligned with signal covariance suggests that noise covariance in dropout neural networks reduces network accuracy, which we further verified directly with a trial-shuffling procedure commonly used in neuroscience. These findings highlight a previously overlooked aspect of dropout layers that can affect network performance. Such dropout networks could also potentially be a computational model of neural variability.
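A sketch of the measurement procedure, assuming a small random ReLU network with Monte Carlo dropout kept on at test time; repeated passes on a fixed input stand in for repeated trials, and trial-by-trial correlations are computed as in neuroscience analyses. This toy network is not expected to reproduce the paper's specific findings (e.g., positive, low-dimensional correlations); it only illustrates the procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

# Tiny random two-layer network with dropout kept on at "test" time
# (Monte Carlo dropout). Repeated passes on a fixed input play the role of
# repeated trials in a neuroscience experiment.
W1 = rng.normal(0, 1, size=(100, 50))
W2 = rng.normal(0, 1, size=(30, 100))
x = rng.normal(size=50)                      # one fixed "image"
p_drop = 0.5

def mc_pass(rng):
    h = np.maximum(W1 @ x, 0)
    mask = rng.random(h.shape) > p_drop      # dropout mask, resampled per trial
    h = h * mask / (1 - p_drop)
    return np.maximum(W2 @ h, 0)

trials = np.stack([mc_pass(rng) for _ in range(500)])   # trials x units

# Trial-by-trial ("noise") correlations between unit responses to the
# unchanged input; units with zero variance give NaN rows, ignored below.
C = np.corrcoef(trials.T)
off_diag = C[~np.eye(C.shape[0], dtype=bool)]
print(f"mean noise correlation: {np.nanmean(off_diag):.3f}")
```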
Collapse
Affiliation(s)
- Xu Pan
- Department of Computer Science, University of Miami, Coral Gables, FL 33146, U.S.A.
| | - Ruben Coen-Cagli
- Department of Systems and Computational Biology, Dominick Purpura Department of Neuroscience, and Department of Ophthalmology and Visual Sciences, Albert Einstein College of Medicine, Bronx, NY 10461, U.S.A.
| | - Odelia Schwartz
- Department of Computer Science, University of Miami, Coral Gables, FL 33146, U.S.A.
| |
Collapse
|
29
|
Maggi S, Hock RM, O'Neill M, Buckley M, Moran PM, Bast T, Sami M, Humphries MD. Tracking subjects' strategies in behavioural choice experiments at trial resolution. eLife 2024; 13:e86491. [PMID: 38426402 PMCID: PMC10959529 DOI: 10.7554/elife.86491] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2023] [Accepted: 02/23/2024] [Indexed: 03/02/2024] Open
Abstract
Investigating how, when, and what subjects learn during decision-making tasks requires tracking their choice strategies on a trial-by-trial basis. Here, we present a simple but effective probabilistic approach to tracking choice strategies at trial resolution using Bayesian evidence accumulation. We show this approach identifies both successful learning and the exploratory strategies used in decision tasks performed by humans, non-human primates, rats, and synthetic agents. Both when subjects learn and when rules change, the exploratory strategies of win-stay and lose-shift, often considered complementary, are consistently used independently. Indeed, we find that the use of lose-shift is strong evidence that subjects have latently learnt the salient features of a new rewarded rule. Our approach can be extended to any discrete choice strategy, and its low computational cost is ideally suited for real-time analysis and closed-loop control.
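A hedged sketch of trial-resolution strategy tracking of this kind: a Beta posterior over "probability the subject is using strategy X" whose evidence decays so that recent trials dominate. The decay rate, prior, and synthetic agent below are illustrative and do not reproduce the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Track P(subject is using strategy) with a Beta posterior whose evidence
# decays, so old trials count less (gamma < 1). Illustrative values only.
gamma, alpha0, beta0 = 0.9, 1.0, 1.0

def update(alpha, beta, consistent):
    """One trial: decay old evidence, then add this trial's success/failure."""
    alpha = gamma * alpha + (1.0 if consistent else 0.0)
    beta = gamma * beta + (0.0 if consistent else 1.0)
    return alpha, beta

# Synthetic agent: its choices are consistent with "win-stay" on ~80% of trials.
alpha, beta = alpha0, beta0
for t in range(100):
    consistent = rng.random() < 0.8
    alpha, beta = update(alpha, beta, consistent)

map_prob = (alpha - 1) / (alpha + beta - 2)     # MAP of the Beta posterior
print(f"MAP P(win-stay): {map_prob:.2f}")
```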
Collapse
Affiliation(s)
- Silvia Maggi
- School of Psychology, University of Nottingham, Nottingham, United Kingdom
| | - Rebecca M Hock
- School of Psychology, University of Nottingham, Nottingham, United Kingdom
| | - Martin O'Neill
- School of Psychology, University of Nottingham, Nottingham, United Kingdom
- Department of Health & Nutritional Sciences, Atlantic Technological University, Sligo, Ireland
| | - Mark Buckley
- Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
| | - Paula M Moran
- School of Psychology, University of Nottingham, Nottingham, United Kingdom
- Department of Neuroscience, University of Nottingham, Nottingham, United Kingdom
| | - Tobias Bast
- School of Psychology, University of Nottingham, Nottingham, United Kingdom
- Department of Neuroscience, University of Nottingham, Nottingham, United Kingdom
| | - Musa Sami
- Institute of Mental Health, University of Nottingham, Nottingham, United Kingdom
| | - Mark D Humphries
- School of Psychology, University of Nottingham, Nottingham, United Kingdom
| |
Collapse
|
30
|
Zhu JQ, Sundh J, Spicer J, Chater N, Sanborn AN. The autocorrelated Bayesian sampler: A rational process for probability judgments, estimates, confidence intervals, choices, confidence judgments, and response times. Psychol Rev 2024; 131:456-493. [PMID: 37289507 PMCID: PMC11115360 DOI: 10.1037/rev0000427] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2022] [Revised: 01/23/2023] [Accepted: 02/16/2023] [Indexed: 06/10/2023]
Abstract
Normative models of decision-making that optimally transform noisy (sensory) information into categorical decisions qualitatively mismatch human behavior. Indeed, leading computational models have only achieved high empirical corroboration by adding task-specific assumptions that deviate from normative principles. In response, we offer a Bayesian approach that implicitly produces a posterior distribution of possible answers (hypotheses) in response to sensory information. We assume, however, that the brain has no direct access to this posterior and can only sample hypotheses according to their posterior probabilities. Accordingly, we argue that the primary problem of normative concern in decision-making is integrating stochastic hypotheses, rather than stochastic sensory information, to make categorical decisions. This implies that human response variability arises mainly from posterior sampling rather than sensory noise. Because human hypothesis generation is serially correlated, hypothesis samples will be autocorrelated. Guided by this new problem formulation, we develop a new process, the Autocorrelated Bayesian Sampler (ABS), which grounds autocorrelated hypothesis generation in a sophisticated sampling algorithm. The ABS provides a single mechanism that qualitatively explains many empirical effects of probability judgments, estimates, confidence intervals, choice, confidence judgments, response times, and their relationships. Our analysis demonstrates the unifying power of a perspective shift in the exploration of normative models. It also exemplifies the proposal that the "Bayesian brain" operates using samples, not probabilities, and that variability in human behavior may primarily reflect computational rather than sensory noise.
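The published ABS rests on a more sophisticated sampler; the sketch below uses a plain random-walk Metropolis chain only to illustrate the two core ingredients, autocorrelated hypothesis samples and judgments formed from a handful of them. The posterior, step size, and sample counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Posterior over a hypothesis (e.g., a proportion) that the observer cannot
# read out directly; they can only draw autocorrelated samples from it.
def log_post(theta, heads=7, flips=10):
    if theta <= 0 or theta >= 1:
        return -np.inf
    return heads * np.log(theta) + (flips - heads) * np.log(1 - theta)

def mh_chain(n, step=0.05):
    """Random-walk Metropolis: successive samples are serially correlated,
    standing in for autocorrelated hypothesis generation."""
    theta, out = 0.5, []
    for _ in range(n):
        prop = theta + step * rng.normal()
        if np.log(rng.random()) < log_post(prop) - log_post(theta):
            theta = prop
        out.append(theta)
    return np.array(out)

samples = mh_chain(2000)
lag1 = np.corrcoef(samples[:-1], samples[1:])[0, 1]
estimate = samples[-5:].mean()       # a judgment based on only a few samples
print(f"lag-1 autocorrelation {lag1:.2f}, estimate from 5 samples {estimate:.2f}")
```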
Collapse
Affiliation(s)
| | | | - Jake Spicer
- Department of Psychology, University of Warwick
| | - Nick Chater
- Warwick Business School, University of Warwick
| | | |
Collapse
|
31
|
Ryu J, Lee SH. Bounded contribution of human early visual cortex to the topographic anisotropy in spatial extent perception. Commun Biol 2024; 7:178. [PMID: 38351283 PMCID: PMC10864322 DOI: 10.1038/s42003-024-05846-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2023] [Accepted: 01/23/2024] [Indexed: 02/16/2024] Open
Abstract
To interact successfully with objects, it is crucial to accurately perceive their spatial extent, an enclosed region they occupy in space. Although the topographic representation of space in the early visual cortex (EVC) has been favored as a neural correlate of spatial extent perception, its exact nature and contribution to perception remain unclear. Here, we inspect the topographic representations of human individuals' EVC and perception in terms of how much their anisotropy is influenced by the orientation (co-axiality) and radial position (radiality) of stimuli. We report that while the anisotropy is influenced by both factors, its direction is primarily determined by radiality in EVC but by co-axiality in perception. Despite this mismatch, the individual differences in both radial and co-axial anisotropy are substantially shared between EVC and perception. Our findings suggest that spatial extent perception builds on EVC's spatial representation but requires an additional mechanism to transform its topographic bias.
Collapse
Affiliation(s)
- Juhyoung Ryu
- Department of Brain and Cognitive Sciences, Seoul National University, Seoul, 08826, Republic of Korea
| | - Sang-Hun Lee
- Department of Brain and Cognitive Sciences, Seoul National University, Seoul, 08826, Republic of Korea.
| |
Collapse
|
32
|
Jellinek S, Fiser J. Neural correlates tracking different aspects of the emerging representation of novel visual categories. Cereb Cortex 2024; 34:bhad544. [PMID: 38236744 PMCID: PMC10839850 DOI: 10.1093/cercor/bhad544] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2023] [Revised: 12/22/2023] [Accepted: 12/24/2023] [Indexed: 02/06/2024] Open
Abstract
Current studies investigating electroencephalogram correlates associated with categorization of sensory stimuli (P300 event-related potential, alpha event-related desynchronization, theta event-related synchronization) typically use an oddball paradigm with few, familiar, highly distinct stimuli, providing limited insight about the aspects of categorization (e.g. difficulty, membership, uncertainty) that the correlates are linked to. Using a more complex task, we investigated whether more specific links could be established between correlates and learning and how these links change during the emergence of new categories. In our study, participants learned to categorize novel stimuli varying continuously on multiple integral feature dimensions, while electroencephalogram was recorded from the beginning of the learning process. While there was no significant P300 event-related potential modulation, both alpha event-related desynchronization and theta event-related synchronization followed a characteristic trajectory in proportion with the gradual acquisition of the two categories. Moreover, the two correlates were modulated by different aspects of categorization: alpha event-related desynchronization by the difficulty of the task, and the magnitude of theta event-related synchronization by the identity and possibly the strength of category membership. Thus, neural signals commonly related to categorization are appropriate for tracking both the dynamic emergence of internal representation of categories, and different meaningful aspects of the categorization process.
Collapse
Affiliation(s)
- Sára Jellinek
- Department of Cognitive Science, Central European University, Quellenstraße 51-55, 1100 Vienna, Austria
- Center for Cognitive Computation, Central European University, Quellenstraße 51-55, 1100 Vienna, Austria
| | - József Fiser
- Department of Cognitive Science, Central European University, Quellenstraße 51-55, 1100 Vienna, Austria
- Center for Cognitive Computation, Central European University, Quellenstraße 51-55, 1100 Vienna, Austria
| |
Collapse
|
33
|
Abstract
Determining the psychological, computational, and neural bases of confidence and uncertainty holds promise for understanding foundational aspects of human metacognition. While a neuroscience of confidence has focused on the mechanisms underpinning subpersonal phenomena such as representations of uncertainty in the visual or motor system, metacognition research has been concerned with personal-level beliefs and knowledge about self-performance. I provide a road map for bridging this divide by focusing on a particular class of confidence computation: propositional confidence in one's own (hypothetical) decisions or actions. Propositional confidence is informed by the observer's models of the world and their cognitive system, which may be more or less accurate-thus explaining why metacognitive judgments are inferential and sometimes diverge from task performance. Disparate findings on the neural basis of uncertainty and performance monitoring are integrated into a common framework, and a new understanding of the locus of action of metacognitive interventions is developed.
Collapse
Affiliation(s)
- Stephen M Fleming
- Department of Experimental Psychology, Wellcome Centre for Human Neuroimaging, and Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, London, United Kingdom;
| |
Collapse
|
34
|
Peters B, DiCarlo JJ, Gureckis T, Haefner R, Isik L, Tenenbaum J, Konkle T, Naselaris T, Stachenfeld K, Tavares Z, Tsao D, Yildirim I, Kriegeskorte N. How does the primate brain combine generative and discriminative computations in vision? ARXIV 2024:arXiv:2401.06005v1. [PMID: 38259351 PMCID: PMC10802669] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Subscribe] [Scholar Register] [Indexed: 01/24/2024]
Abstract
Vision is widely understood as an inference problem. However, two contrasting conceptions of the inference process have each been influential in research on biological vision as well as the engineering of machine vision. The first emphasizes bottom-up signal flow, describing vision as a largely feedforward, discriminative inference process that filters and transforms the visual information to remove irrelevant variation and represent behaviorally relevant information in a format suitable for downstream functions of cognition and behavioral control. In this conception, vision is driven by the sensory data, and perception is direct because the processing proceeds from the data to the latent variables of interest. The notion of "inference" in this conception is that of the engineering literature on neural networks, where feedforward convolutional neural networks processing images are said to perform inference. The alternative conception is that of vision as an inference process in Helmholtz's sense, where the sensory evidence is evaluated in the context of a generative model of the causal processes that give rise to it. In this conception, vision inverts a generative model through an interrogation of the sensory evidence in a process often thought to involve top-down predictions of sensory data to evaluate the likelihood of alternative hypotheses. The authors include scientists rooted in roughly equal numbers in each of the conceptions and motivated to overcome what might be a false dichotomy between them and engage the other perspective in the realm of theory and experiment. The primate brain employs an unknown algorithm that may combine the advantages of both conceptions. We explain and clarify the terminology, review the key empirical evidence, and propose an empirical research program that transcends the dichotomy and sets the stage for revealing the mysterious hybrid algorithm of primate vision.
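A toy contrast between the two conceptions on a one-dimensional, two-class problem: the generative route inverts p(x | class) with Bayes' rule, while the discriminative route maps x directly to a decision variable. For equal-variance Gaussian classes the optimal discriminative mapping can be written down from the generative parameters and the two routes agree, which is one reason the dichotomy can blur in practice. All numbers are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Generative route: two Gaussian classes p(x | c) with equal variance,
# inverted with Bayes' rule to get p(c1 | x).
mu0, mu1, sigma, prior1 = -1.0, 1.0, 1.0, 0.5

def gauss(x, mu):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def posterior_generative(x):
    l1 = gauss(x, mu1) * prior1
    l0 = gauss(x, mu0) * (1 - prior1)
    return l1 / (l0 + l1)

# Discriminative route: a direct feedforward mapping from x to the decision
# variable. For this toy case the optimal mapping is linear-sigmoidal, with
# weights obtainable from the generative parameters.
w = (mu1 - mu0) / sigma ** 2
b = (mu0 ** 2 - mu1 ** 2) / (2 * sigma ** 2) + np.log(prior1 / (1 - prior1))

def posterior_discriminative(x):
    return sigmoid(w * x + b)

for x in (-2.0, 0.0, 0.5, 2.0):
    print(x, round(posterior_generative(x), 4), round(posterior_discriminative(x), 4))
```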
Collapse
Affiliation(s)
- Benjamin Peters
- Zuckerman Mind Brain Behavior Institute, Columbia University
- School of Psychology & Neuroscience, University of Glasgow
| | - James J DiCarlo
- Department of Brain and Cognitive Sciences, MIT
- McGovern Institute for Brain Research, MIT
- NSF Center for Brains, Minds and Machines, MIT
- Quest for Intelligence, Schwarzman College of Computing, MIT
| | | | - Ralf Haefner
- Brain and Cognitive Sciences, University of Rochester
- Center for Visual Science, University of Rochester
| | - Leyla Isik
- Department of Cognitive Science, Johns Hopkins University
| | - Joshua Tenenbaum
- Department of Brain and Cognitive Sciences, MIT
- NSF Center for Brains, Minds and Machines, MIT
- Computer Science and Artificial Intelligence Laboratory, MIT
| | - Talia Konkle
- Department of Psychology, Harvard University
- Center for Brain Science, Harvard University
- Kempner Institute for Natural and Artificial Intelligence, Harvard University
| | | | | | - Zenna Tavares
- Zuckerman Mind Brain Behavior Institute, Columbia University
- Data Science Institute, Columbia University
| | - Doris Tsao
- Dept of Molecular & Cell Biology, University of California Berkeley
- Howard Hughes Medical Institute
| | - Ilker Yildirim
- Department of Psychology, Yale University
- Department of Statistics and Data Science, Yale University
| | - Nikolaus Kriegeskorte
- Zuckerman Mind Brain Behavior Institute, Columbia University
- Department of Psychology, Columbia University
- Department of Neuroscience, Columbia University
- Department of Electrical Engineering, Columbia University
| |
Collapse
|
35
|
Zhang WH. Decentralized Neural Circuits of Multisensory Information Integration in the Brain. ADVANCES IN EXPERIMENTAL MEDICINE AND BIOLOGY 2024; 1437:1-21. [PMID: 38270850 DOI: 10.1007/978-981-99-7611-9_1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/26/2024]
Abstract
The brain combines multisensory inputs together to obtain a complete and reliable description of the world. Recent experiments suggest that several interconnected multisensory brain areas are simultaneously involved to integrate multisensory information. It was unknown how these mutually connected multisensory areas achieve multisensory integration. To answer this question, using biologically plausible neural circuit models we developed a decentralized system for information integration that comprises multiple interconnected multisensory brain areas. Through studying an example of integrating visual and vestibular cues to infer heading direction, we show that such a decentralized system is well consistent with experimental observations. In particular, we demonstrate that this decentralized system can optimally integrate information by implementing sampling-based Bayesian inference. The Poisson variability of spike generation provides appropriate variability to drive sampling, and the interconnections between multisensory areas store the correlation prior between multisensory stimuli. The decentralized system predicts that optimally integrated information emerges locally from the dynamics of the communication between brain areas and sheds new light on the interpretation of the connectivity between multisensory brain areas.
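For the heading-direction example, the optimal combination of two cues with independent Gaussian noise is the familiar reliability-weighted average, sketched below. The paper's contribution, a decentralized sampling-based circuit that reaches this answer, is not modeled here, and the noise levels are made up.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy heading-direction estimation from a visual and a vestibular cue.
# Under independent Gaussian noise, the optimal (Bayesian) combination is a
# reliability-weighted average of the two single-cue estimates.
true_heading = 10.0                       # degrees
sigma_vis, sigma_vest = 2.0, 4.0
vis = true_heading + sigma_vis * rng.normal()
vest = true_heading + sigma_vest * rng.normal()

w_vis = (1 / sigma_vis**2) / (1 / sigma_vis**2 + 1 / sigma_vest**2)
combined = w_vis * vis + (1 - w_vis) * vest
sigma_combined = np.sqrt(1 / (1 / sigma_vis**2 + 1 / sigma_vest**2))

print(f"visual {vis:.1f}, vestibular {vest:.1f}, combined {combined:.1f} "
      f"(+/- {sigma_combined:.1f} deg)")
```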
Collapse
Affiliation(s)
- Wen-Hao Zhang
- Lyda Hill Department of Bioinformatics and O'Donnell Brain Institute, UT Southwestern Medical Center, Dallas, TX, USA.
| |
Collapse
|
36
|
Lange RD, Shivkumar S, Chattoraj A, Haefner RM. Bayesian encoding and decoding as distinct perspectives on neural coding. Nat Neurosci 2023; 26:2063-2072. [PMID: 37996525 PMCID: PMC11003438 DOI: 10.1038/s41593-023-01458-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2021] [Accepted: 09/08/2023] [Indexed: 11/25/2023]
Abstract
The Bayesian brain hypothesis is one of the most influential ideas in neuroscience. However, unstated differences in how Bayesian ideas are operationalized make it difficult to draw general conclusions about how Bayesian computations map onto neural circuits. Here, we identify one such unstated difference: some theories ask how neural circuits could recover information about the world from sensory neural activity (Bayesian decoding), whereas others ask how neural circuits could implement inference in an internal model (Bayesian encoding). These two approaches require profoundly different assumptions and lead to different interpretations of empirical data. We contrast them in terms of motivations, empirical support and relationship to neural data. We also use a simple model to argue that encoding and decoding models are complementary rather than competing. Appreciating the distinction between Bayesian encoding and Bayesian decoding will help to organize future work and enable stronger empirical tests about the nature of inference in the brain.
Collapse
Affiliation(s)
- Richard D Lange
- Department of Neurobiology, University of Pennsylvania, Philadelphia, PA, USA.
- Department of Computer Science, Rochester Institute of Technology, Rochester, NY, USA.
| | - Sabyasachi Shivkumar
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
| | - Ankani Chattoraj
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
| | - Ralf M Haefner
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
| |
Collapse
|
37
|
Zhang WH, Wu S, Josić K, Doiron B. Sampling-based Bayesian inference in recurrent circuits of stochastic spiking neurons. Nat Commun 2023; 14:7074. [PMID: 37925497 PMCID: PMC10625605 DOI: 10.1038/s41467-023-41743-3] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2022] [Accepted: 09/15/2023] [Indexed: 11/06/2023] Open
Abstract
Two facts about cortex are widely accepted: neuronal responses show large spiking variability with near Poisson statistics and cortical circuits feature abundant recurrent connections between neurons. How these spiking and circuit properties combine to support sensory representation and information processing is not well understood. We build a theoretical framework showing that these two ubiquitous features of cortex combine to produce optimal sampling-based Bayesian inference. Recurrent connections store an internal model of the external world, and Poissonian variability of spike responses drives flexible sampling from the posterior stimulus distributions obtained by combining feedforward and recurrent neuronal inputs. We illustrate how this framework for sampling-based inference can be used by cortex to represent latent multivariate stimuli organized either hierarchically or in parallel. A neural signature of such network sampling are internally generated differential correlations whose amplitude is determined by the prior stored in the circuit, which provides an experimentally testable prediction for our framework.
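A simplified, rate-like sketch of sampling-based inference with a stored prior, assuming Gaussian prior and likelihood and Langevin dynamics in place of the paper's Poisson spiking network: a recurrent-style term implements the prior, a feedforward term carries the evidence, and noise makes the trajectory sample the posterior rather than settle at a point.

```python
import numpy as np

rng = np.random.default_rng(7)

# Prior over two latent features, "stored" as a correlation between them
# (the recurrent part); likelihood comes from a noisy feedforward input.
prior_cov = np.array([[1.0, 0.6], [0.6, 1.0]])
lik_cov = 0.5 * np.eye(2)
obs = np.array([1.0, 0.0])                    # feedforward evidence

prior_prec = np.linalg.inv(prior_cov)
lik_prec = np.linalg.inv(lik_cov)

def grad_log_post(z):
    # d/dz [log N(obs; z, lik_cov) + log N(z; 0, prior_cov)]
    return lik_prec @ (obs - z) - prior_prec @ z

# Langevin dynamics: drift up the posterior gradient plus noise, so the
# trajectory samples from the posterior instead of converging to a point.
dt, n_steps = 0.01, 20000
z = np.zeros(2)
samples = np.empty((n_steps, 2))
for t in range(n_steps):
    z = z + dt * grad_log_post(z) + np.sqrt(2 * dt) * rng.normal(size=2)
    samples[t] = z

post_prec = prior_prec + lik_prec
post_mean = np.linalg.solve(post_prec, lik_prec @ obs)
print("sample mean:            ", np.round(samples[5000:].mean(axis=0), 2))
print("analytic posterior mean:", np.round(post_mean, 2))
```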
Collapse
Affiliation(s)
- Wen-Hao Zhang
- Department of Neurobiology and Statistics, University of Chicago, Chicago, IL, USA
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL, USA
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Lyda Hill Department of Bioinformatics, UT Southwestern Medical Center, Dallas, TX, USA
| | - Si Wu
- School of Psychological and Cognitive Sciences, Peking University, Beijing, 100871, China
- IDG/McGovern Institute for Brain Research, Peking University, Beijing, 100871, China
- Peking-Tsinghua Center for Life Sciences, Peking University, Beijing, 100871, China
- Center of Quantitative Biology, Peking University, Beijing, 100871, China
| | - Krešimir Josić
- Department of Mathematics, University of Houston, Houston, TX, USA.
- Department of Biology and Biochemistry, University of Houston, Houston, TX, USA.
| | - Brent Doiron
- Department of Neurobiology and Statistics, University of Chicago, Chicago, IL, USA.
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL, USA.
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA.
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA.
| |
Collapse
|
38
|
Walker EY, Pohl S, Denison RN, Barack DL, Lee J, Block N, Ma WJ, Meyniel F. Studying the neural representations of uncertainty. Nat Neurosci 2023; 26:1857-1867. [PMID: 37814025 DOI: 10.1038/s41593-023-01444-y] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Accepted: 08/30/2023] [Indexed: 10/11/2023]
Abstract
The study of the brain's representations of uncertainty is a central topic in neuroscience. Unlike most quantities of which the neural representation is studied, uncertainty is a property of an observer's beliefs about the world, which poses specific methodological challenges. We analyze how the literature on the neural representations of uncertainty addresses those challenges and distinguish between 'code-driven' and 'correlational' approaches. Code-driven approaches make assumptions about the neural code for representing world states and the associated uncertainty. By contrast, correlational approaches search for relationships between uncertainty and neural activity without constraints on the neural representation of the world state that this uncertainty accompanies. To compare these two approaches, we apply several criteria for neural representations: sensitivity, specificity, invariance and functionality. Our analysis reveals that the two approaches lead to different but complementary findings, shaping new research questions and guiding future experiments.
Collapse
Affiliation(s)
- Edgar Y Walker
- Department of Physiology and Biophysics, Computational Neuroscience Center, University of Washington, Seattle, WA, USA
| | - Stephan Pohl
- Department of Philosophy, New York University, New York, NY, USA
| | - Rachel N Denison
- Department of Psychological & Brain Sciences, Boston University, Boston, MA, USA
| | - David L Barack
- Department of Neuroscience, University of Pennsylvania, Philadelphia, PA, USA
- Department of Philosophy, University of Pennsylvania, Philadelphia, PA, USA
| | - Jennifer Lee
- Center for Neural Science, New York University, New York, NY, USA
| | - Ned Block
- Department of Philosophy, New York University, New York, NY, USA
| | - Wei Ji Ma
- Center for Neural Science, New York University, New York, NY, USA
- Department of Psychology, New York University, New York, NY, USA
| | - Florent Meyniel
- Cognitive Neuroimaging Unit, INSERM, CEA, CNRS, Université Paris-Saclay, NeuroSpin center, Gif-sur-Yvette, France.
| |
Collapse
|
39
|
Zavatone-Veth JA, Masset P, Tong WL, Zak JD, Murthy VN, Pehlevan C. Neural Circuits for Fast Poisson Compressed Sensing in the Olfactory Bulb. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.06.21.545947. [PMID: 37961548 PMCID: PMC10634677 DOI: 10.1101/2023.06.21.545947] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/15/2023]
Abstract
Within a single sniff, the mammalian olfactory system can decode the identity and concentration of odorants wafted on turbulent plumes of air. Yet, it must do so given access only to the noisy, dimensionally-reduced representation of the odor world provided by olfactory receptor neurons. As a result, the olfactory system must solve a compressed sensing problem, relying on the fact that only a handful of the millions of possible odorants are present in a given scene. Inspired by this principle, past works have proposed normative compressed sensing models for olfactory decoding. However, these models have not captured the unique anatomy and physiology of the olfactory bulb, nor have they shown that sensing can be achieved within the 100-millisecond timescale of a single sniff. Here, we propose a rate-based Poisson compressed sensing circuit model for the olfactory bulb. This model maps onto the neuron classes of the olfactory bulb, and recapitulates salient features of their connectivity and physiology. For circuit sizes comparable to the human olfactory bulb, we show that this model can accurately detect tens of odors within the timescale of a single sniff. We also show that this model can perform Bayesian posterior sampling for accurate uncertainty estimation. Fast inference is possible only if the geometry of the neural code is chosen to match receptor properties, yielding a distributed neural code that is not axis-aligned to individual odor identities. Our results illustrate how normative modeling can help us map function onto specific neural circuits to generate new hypotheses.
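A sketch of the underlying compressed-sensing problem, assuming a hypothetical sparse nonnegative affinity matrix and recovering odor concentrations by MAP inference (Poisson negative log-likelihood plus an L1 penalty) with a generic projected-gradient solver. The paper's circuit implementation, cell types, and posterior sampling are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(8)

n_receptors, n_odors, n_present = 100, 400, 3
# Hypothetical sparse, nonnegative receptor-odor affinity matrix.
A = rng.uniform(0.0, 2.0, size=(n_receptors, n_odors))
A *= rng.random(A.shape) < 0.3

c_true = np.zeros(n_odors)
present = rng.choice(n_odors, n_present, replace=False)
c_true[present] = rng.uniform(5.0, 10.0, n_present)
y = rng.poisson(A @ c_true)                # receptor spike counts in one sniff

# MAP recovery: minimize the Poisson negative log-likelihood plus an L1
# sparsity penalty, with c >= 0, by projected gradient descent. This is a
# generic solver, not the bulb circuit proposed in the paper.
lam, lr = 2.0, 2e-4
c = np.full(n_odors, 0.05)
for _ in range(5000):
    rate = np.maximum(A @ c, 1e-9)
    grad = A.T @ (1.0 - y / rate) + lam
    c = np.maximum(0.0, c - lr * grad)

print("true odors:               ", np.sort(present))
print("largest recovered entries:", np.sort(np.argsort(c)[-n_present:]))
print("fraction of recovered concentration on true odors:",
      round(float(c[present].sum() / max(c.sum(), 1e-12)), 2))
```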
Collapse
Affiliation(s)
- Jacob A Zavatone-Veth
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- Department of Physics, Harvard University, Cambridge, MA 02138
| | - Paul Masset
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138
| | - William L Tong
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138
- Kempner Institute for the Study of Natural and Artificial Intelligence, Harvard University, Cambridge, MA 02138
| | - Joseph D Zak
- Department of Biological Sciences, University of Illinois at Chicago, Chicago, IL 60607
| | - Venkatesh N Murthy
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138
| | - Cengiz Pehlevan
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138
- Kempner Institute for the Study of Natural and Artificial Intelligence, Harvard University, Cambridge, MA 02138
| |
Collapse
|
40
|
Modirshanechi A, Becker S, Brea J, Gerstner W. Surprise and novelty in the brain. Curr Opin Neurobiol 2023; 82:102758. [PMID: 37619425 DOI: 10.1016/j.conb.2023.102758] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2023] [Revised: 06/30/2023] [Accepted: 07/20/2023] [Indexed: 08/26/2023]
Abstract
Notions of surprise and novelty have been used in various experimental and theoretical studies across multiple brain areas and species. However, 'surprise' and 'novelty' refer to different quantities in different studies, which raises concerns about whether these studies indeed relate to the same functionalities and mechanisms in the brain. Here, we address these concerns through a systematic investigation of how different aspects of surprise and novelty relate to different brain functions and physiological signals. We review recent classifications of definitions proposed for surprise and novelty along with links to experimental observations. We show that computational modeling and quantifiable definitions enable novel interpretations of previous findings and form a foundation for future theoretical and experimental studies.
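Two of the quantitative definitions commonly distinguished in this literature are Shannon surprise (the negative log probability of an observation under the current prediction) and Bayesian surprise (the KL divergence between posterior and prior beliefs). The sketch below computes both for a Beta-Bernoulli observer; the prior counts and grid-based KL are illustrative choices.

```python
import numpy as np

# A Beta-Bernoulli observer tracking the probability of a binary event.
# Shannon surprise: -log p(observation) under the current prediction.
# Bayesian surprise: KL(posterior || prior), i.e., how much beliefs moved.
alpha, beta = 3.0, 3.0                  # prior pseudo-counts
x = 1                                   # new observation

p_pred = alpha / (alpha + beta)         # predictive probability of x = 1
shannon_surprise = -np.log(p_pred if x == 1 else 1 - p_pred)

def beta_pdf(theta, a, b):
    # unnormalized Beta density; normalized numerically on the grid below
    return theta ** (a - 1) * (1 - theta) ** (b - 1)

grid = np.linspace(1e-4, 1 - 1e-4, 10_000)
prior = beta_pdf(grid, alpha, beta); prior /= prior.sum()
post = beta_pdf(grid, alpha + x, beta + 1 - x); post /= post.sum()
bayesian_surprise = np.sum(post * np.log(post / prior))

print(f"Shannon surprise {shannon_surprise:.3f} nats, "
      f"Bayesian surprise {bayesian_surprise:.3f} nats")
```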
Collapse
Affiliation(s)
- Alireza Modirshanechi
- Brain-Mind Institute, School of Life Sciences, EPFL, Lausanne, Switzerland; School of Computer and Communication Sciences, EPFL, Lausanne, Switzerland.
| | - Sophia Becker
- Brain-Mind Institute, School of Life Sciences, EPFL, Lausanne, Switzerland; School of Computer and Communication Sciences, EPFL, Lausanne, Switzerland. https://twitter.com/sophiabecker95
| | - Johanni Brea
- Brain-Mind Institute, School of Life Sciences, EPFL, Lausanne, Switzerland; School of Computer and Communication Sciences, EPFL, Lausanne, Switzerland
| | - Wulfram Gerstner
- Brain-Mind Institute, School of Life Sciences, EPFL, Lausanne, Switzerland; School of Computer and Communication Sciences, EPFL, Lausanne, Switzerland.
| |
Collapse
|
41
|
Peng XR, Bundil I, Schulreich S, Li SC. Neural correlates of valence-dependent belief and value updating during uncertainty reduction: An fNIRS study. Neuroimage 2023; 279:120327. [PMID: 37582418 DOI: 10.1016/j.neuroimage.2023.120327] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2023] [Revised: 08/07/2023] [Accepted: 08/11/2023] [Indexed: 08/17/2023] Open
Abstract
Selective use of new information is crucial for adaptive decision-making. Combining a gamble bidding task with assessment of cortical responses using functional near-infrared spectroscopy (fNIRS), we investigated potential effects of information valence on behavioral and neural processes of belief and value updating during uncertainty reduction in young adults. By modeling changes in the participants' expressed subjective values using a Bayesian model, we dissociated processes of (i) updating beliefs about statistical properties of the gamble, (ii) updating values of a gamble based on new information about its winning probabilities, as well as (iii) expectancy violation. The results showed that participants used new information to update their beliefs and values about the gambles in a quasi-optimal manner, as reflected in the selective updating only in situations with reducible uncertainty. Furthermore, their updating was valence-dependent: information indicating an increase in winning probability was underweighted, whereas information about a decrease in winning probability was updated in good agreement with predictions of Bayesian decision theory. Results of model-based and moderation analyses showed that this valence-dependent asymmetry was associated with a distinct contribution of expectancy violation, besides belief updating, to value updating after experiencing new positive information regarding winning probabilities. In line with the behavioral results, we replicated previous findings showing involvement of frontoparietal brain regions in the different components of updating. Furthermore, this study provided novel results suggesting a valence-dependent recruitment of brain regions. Individuals with stronger oxyhemoglobin responses during value updating were more in line with predictions of the Bayesian model when integrating new information indicating an increase in winning probability. Taken together, this study provides first evidence that expectancy violation contributes to sub-optimal valence-dependent updating during uncertainty reduction and suggests limitations of normative Bayesian decision theory.
Collapse
Affiliation(s)
- Xue-Rui Peng
- Chair of Lifespan Developmental Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany; Centre for Tactile Internet with Human-in-the-Loop, Technische Universität Dresden, Dresden, Germany.
| | - Indra Bundil
- Cardiff University Brain Research Imaging Centre, School of Psychology, Cardiff University, Cardiff, United Kingdom
| | - Stefan Schulreich
- Department of Nutritional Sciences, Faculty of Life Sciences, University of Vienna, Vienna, Austria; Department of Cognitive Psychology, Faculty of Psychology and Human Movement Science, Universität Hamburg, Hamburg, Germany
| | - Shu-Chen Li
- Chair of Lifespan Developmental Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany; Centre for Tactile Internet with Human-in-the-Loop, Technische Universität Dresden, Dresden, Germany.
| |
Collapse
|
42
|
Bredenberg C, Savin C. Desiderata for normative models of synaptic plasticity. ARXIV 2023:arXiv:2308.04988v1. [PMID: 37608931 PMCID: PMC10441445] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Subscribe] [Scholar Register] [Indexed: 08/24/2023]
Abstract
Normative models of synaptic plasticity use a combination of mathematics and computational simulations to arrive at predictions of behavioral and network-level adaptive phenomena. In recent years, there has been an explosion of theoretical work on these models, but experimental confirmation is relatively limited. In this review, we organize work on normative plasticity models in terms of a set of desiderata which, when satisfied, are designed to guarantee that a model has a clear link between plasticity and adaptive behavior, consistency with known biological evidence about neural plasticity, and specific testable predictions. We then discuss how new models have begun to improve on these criteria and suggest avenues for further development. As prototypes, we provide detailed analyses of two specific models - REINFORCE and the Wake-Sleep algorithm. We provide a conceptual guide to help develop neural learning theories that are precise, powerful, and experimentally testable.
Collapse
Affiliation(s)
- Colin Bredenberg
- Center for Neural Science, New York University, New York, NY 10003, USA
- Mila-Quebec AI Institute, 6666 Rue Saint-Urbain, Montréal, QC H2S 3H1
| | - Cristina Savin
- Center for Neural Science, New York University, New York, NY 10003, USA
- Center for Data Science, New York University, New York, NY 10011, USA
| |
Collapse
|
43
|
Maes A, Barahona M, Clopath C. Long- and short-term history effects in a spiking network model of statistical learning. Sci Rep 2023; 13:12939. [PMID: 37558704 PMCID: PMC10412617 DOI: 10.1038/s41598-023-39108-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2023] [Accepted: 07/20/2023] [Indexed: 08/11/2023] Open
Abstract
The statistical structure of the environment is often important when making decisions. There are multiple theories of how the brain represents statistical structure. One such theory states that neural activity spontaneously samples from probability distributions. In other words, the network spends more time in states that encode high-probability stimuli. Starting from the neural assembly, increasingly thought to be the building block for computation in the brain, we focus on how arbitrary prior knowledge about the external world can both be learned and spontaneously recollected. We present a model based upon learning the inverse of the cumulative distribution function. Learning is entirely unsupervised, using biophysical neurons and biologically plausible learning rules. We show how this prior knowledge can then be accessed to compute expectations and signal surprise in downstream networks. Sensory history effects emerge from the model as a consequence of ongoing learning.
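The core computation named here, sampling by passing uniform noise through the inverse of a cumulative distribution function, can be sketched directly; the spiking network, assemblies, and plasticity rules of the model are not reproduced, and the stimulus probabilities below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(9)

# Probabilities "experienced" for four stimuli during learning.
p = np.array([0.1, 0.2, 0.3, 0.4])
cdf = np.cumsum(p)

def sample_spontaneous(n):
    """Map uniform noise through the inverse CDF: the network analogue is
    spontaneous activity spending time in each state in proportion to p."""
    u = rng.random(n)
    return np.searchsorted(cdf, u)

states = sample_spontaneous(100_000)
recalled = np.bincount(states, minlength=4) / len(states)
print("experienced:", p)
print("recalled:   ", np.round(recalled, 3))
```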
Collapse
Affiliation(s)
- Amadeus Maes
- Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, USA.
- Department of Bioengineering, Imperial College London, London, UK.
| | | | - Claudia Clopath
- Department of Bioengineering, Imperial College London, London, UK
| |
Collapse
|
44
|
Gugel ZV, Maurais EG, Hong EJ. Chronic exposure to odors at naturally occurring concentrations triggers limited plasticity in early stages of Drosophila olfactory processing. eLife 2023; 12:e85443. [PMID: 37195027 PMCID: PMC10229125 DOI: 10.7554/elife.85443] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2022] [Accepted: 02/06/2023] [Indexed: 05/18/2023] Open
Abstract
In insects and mammals, olfactory experience in early life alters olfactory behavior and function in later life. In the vinegar fly Drosophila, flies chronically exposed to a high concentration of a monomolecular odor exhibit reduced behavioral aversion to the familiar odor when it is reencountered. This change in olfactory behavior has been attributed to selective decreases in the sensitivity of second-order olfactory projection neurons (PNs) in the antennal lobe that respond to the overrepresented odor. However, since odorant compounds do not occur at similarly high concentrations in natural sources, the role of odor experience-dependent plasticity in natural environments is unclear. Here, we investigated olfactory plasticity in the antennal lobe of flies chronically exposed to odors at concentrations that are typically encountered in natural odor sources. These stimuli were chosen to each strongly and selectively excite a single class of primary olfactory receptor neuron (ORN), thus facilitating a rigorous assessment of the selectivity of olfactory plasticity for PNs directly excited by overrepresented stimuli. Unexpectedly, we found that chronic exposure to three such odors did not result in decreased PN sensitivity but rather mildly increased responses to weak stimuli in most PN types. Odor-evoked PN activity in response to stronger stimuli was mostly unaffected by odor experience. When present, plasticity was observed broadly in multiple PN types and thus was not selective for PNs receiving direct input from the chronically active ORNs. We further investigated the DL5 olfactory coding channel and found that chronic odor-mediated excitation of its input ORNs did not affect PN intrinsic properties, local inhibitory innervation, ORN responses or ORN-PN synaptic strength; however, broad-acting lateral excitation evoked by some odors was increased. These results show that PN odor coding is only mildly affected by strong persistent activation of a single olfactory input, highlighting the stability of early stages of insect olfactory processing to significant perturbations in the sensory environment.
Collapse
Affiliation(s)
- Zhannetta V Gugel
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, United States
| | - Elizabeth G Maurais
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, United States
| | - Elizabeth J Hong
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, United States
| |
Collapse
|
45
|
Qian Q, Lu M, Sun D, Wang A, Zhang M. Rewards weaken cross-modal inhibition of return with visual targets. Perception 2023; 52:400-411. [PMID: 37186788 DOI: 10.1177/03010066231175016] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/17/2023]
Abstract
Previous studies have shown that rewards weaken visual inhibition of return (IOR). However, the specific mechanisms underlying the influence of rewards on cross-modal IOR remain unclear. Based on the Posner exogenous cue-target paradigm, the present study was conducted to investigate the effect of rewards on exogenous spatial cross-modal IOR in both visual cue with auditory target (VA) and auditory cue with visual target (AV) conditions. The results showed the following: in the AV condition, the IOR effect size in the high-reward condition was significantly lower than that in the low-reward condition. However, in the VA condition, there was no significant IOR in either the high- or low-reward condition and there was no significant difference between the two conditions. In other words, the use of rewards modulated exogenous spatial cross-modal IOR with visual targets; specifically, high rewards may have weakened IOR in the AV condition. Taken together, our study extended the effect of rewards on IOR to cross-modal attention conditions and demonstrated for the first time that higher motivation among individuals under high-reward conditions weakened the cross-modal IOR with visual targets. Moreover, the present study provided evidence for future research on the relationship between reward and attention.
Collapse
Affiliation(s)
| | | | | | | | - Ming Zhang
- Soochow University, China; Okayama University, Japan
| |
Collapse
|
46
|
Smeets JBJ, Brenner E. The cost of aiming for the best answers: Inconsistent perception. Front Integr Neurosci 2023; 17:1118240. [PMID: 37090903 PMCID: PMC10114592 DOI: 10.3389/fnint.2023.1118240] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2022] [Accepted: 03/20/2023] [Indexed: 04/05/2023] Open
Abstract
The laws of physics and mathematics describe the world we live in as internally consistent. As these rules provide a very effective description, and our interaction with the world is also very effective, it seems self-evident that our perception follows these laws. As a result, when trying to explain imperfections in perception, we tend to impose consistency and introduce concepts such as deformations of visual space. In this review, we provide numerous examples that show that in many situations we perceive related attributes to have inconsistent values. We discuss how our tendency to assume consistency leads to erroneous conclusions on how we process sensory information. We propose that perception is not about creating a consistent internal representation of the outside world, but about answering specific questions about the outside world. As the information used to answer a question is specific for that question, this naturally leads to inconsistencies in perception and to an apparent dissociation between some perceptual judgments and related actions.
47
Bounmy T, Eger E, Meyniel F. A characterization of the neural representation of confidence during probabilistic learning. Neuroimage 2023; 268:119849. [PMID: 36640947 DOI: 10.1016/j.neuroimage.2022.119849] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2022] [Revised: 12/09/2022] [Accepted: 12/29/2022] [Indexed: 01/13/2023] Open
Abstract
Learning in a stochastic and changing environment is a difficult task. Models of learning typically postulate that observations that deviate from the learned predictions are surprising and are used to update those predictions. Bayesian accounts further posit the existence of a confidence-weighting mechanism: learning should be modulated by the confidence level that accompanies those predictions. However, the neural bases of this confidence are much less well known than those of surprise. Here, we used a dynamic probability learning task and high-field MRI to identify putative cortical regions involved in the representation of confidence about predictions during human learning. We devised a stringent test based on the conjunction of four criteria. We localized several regions in parietal and frontal cortices whose activity is sensitive to the confidence of an ideal observer, does so specifically with respect to potential confounds (surprise and predictability), and does so in a way that is invariant to which item is predicted. We also tested for functionality in two ways. First, we localized regions whose activity patterns at the subject level showed an effect of both confidence and surprise in qualitative agreement with the confidence-weighting principle. Second, we found neural representations of ideal confidence that also accounted for subjective confidence. Taken together, these results identify a set of cortical regions potentially implicated in the confidence-weighting of learning.
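To make the confidence-weighting principle concrete, here is a minimal sketch of an ideal-observer-style model for a (possibly changing) Bernoulli probability, in which confidence is read out as the log-precision of a Beta posterior and surprise as the negative log-probability of each outcome. The leaky-count update and all parameter values are illustrative assumptions, not the model used in the paper.

```python
import numpy as np

def run_observer(outcomes, leak=0.95):
    """Track a Bernoulli probability with leaky Beta counts.

    Returns per-trial confidence (log-precision of the posterior) and
    surprise (-log predictive probability of the observed outcome).
    """
    a = b = 1.0  # Beta(1, 1) prior
    confidence, surprise = [], []
    for x in outcomes:
        p_pred = a / (a + b)                            # predicted P(x = 1)
        surprise.append(-np.log(p_pred if x == 1 else 1.0 - p_pred))
        var = a * b / ((a + b) ** 2 * (a + b + 1.0))    # Beta posterior variance
        confidence.append(-np.log(var))                 # high precision -> high confidence
        a = leak * a + x                                # leaky counts allow for volatility
        b = leak * b + (1 - x)
    return np.array(confidence), np.array(surprise)

rng = np.random.default_rng(1)
outcomes = rng.binomial(1, 0.75, size=300)              # stochastic stream of binary items
conf, surp = run_observer(outcomes)
print(f"mean confidence: {conf.mean():.2f}, mean surprise: {surp.mean():.2f}")
```

The confidence-weighting principle then says that a given amount of surprise should drive a smaller belief update when confidence is high than when it is low.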
Affiliation(s)
- Tiffany Bounmy
- Cognitive Neuroimaging Unit, CEA DRF/Joliot, INSERM, Université Paris-Saclay, NeuroSpin Center, Gif-sur-Yvette, France; Université de Paris, Paris, France
- Evelyn Eger
- Cognitive Neuroimaging Unit, CEA DRF/Joliot, INSERM, Université Paris-Saclay, NeuroSpin Center, Gif-sur-Yvette, France
- Florent Meyniel
- Cognitive Neuroimaging Unit, CEA DRF/Joliot, INSERM, Université Paris-Saclay, NeuroSpin Center, Gif-sur-Yvette, France
48
Torricelli F, Tomassini A, Pezzulo G, Pozzo T, Fadiga L, D'Ausilio A. Motor invariants in action execution and perception. Phys Life Rev 2023; 44:13-47. [PMID: 36462345 DOI: 10.1016/j.plrev.2022.11.003] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2022] [Accepted: 11/21/2022] [Indexed: 11/27/2022]
Abstract
The nervous system is sensitive to statistical regularities of the external world and forms internal models of these regularities to predict environmental dynamics. Given the inherently social nature of human behavior, being capable of building reliable predictive models of others' actions may be essential for successful interaction. While social prediction might seem to be a daunting task, the study of human motor control has accumulated ample evidence that our movements follow a series of kinematic invariants, which can be used by observers to reduce their uncertainty during social exchanges. Here, we provide an overview of the most salient regularities that shape biological motion, examine the role of these invariants in recognizing others' actions, and speculate that anchoring socially-relevant perceptual decisions to such kinematic invariants provides a key computational advantage for inferring conspecifics' goals and intentions.
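One classic example of the kinematic invariants studied in this literature is the two-thirds power law, which ties movement speed to path curvature. The sketch below, with an elliptical path and an arbitrary gain chosen purely for illustration, shows the relation in its tangential-velocity form v = K * curvature^(-1/3).

```python
import numpy as np

def speed_from_curvature(curvature, gain=1.0):
    """Two-thirds power law in its velocity form: v = gain * curvature**(-1/3)."""
    return gain * np.power(curvature, -1.0 / 3.0)

# Elliptical trajectory x = a*cos(t), y = b*sin(t): curvature varies along the path,
# so the predicted speed varies with it (slower in the high-curvature "corners").
t = np.linspace(0.0, 2.0 * np.pi, 500)
a_ax, b_ax = 2.0, 1.0
curvature = (a_ax * b_ax) / ((a_ax * np.sin(t)) ** 2 + (b_ax * np.cos(t)) ** 2) ** 1.5
speed = speed_from_curvature(curvature)
print(f"speed range along the ellipse: {speed.min():.2f} to {speed.max():.2f}")
```

An observer who has internalized such a regularity can, in principle, use it to constrain predictions about how an observed action will unfold.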
Affiliation(s)
- Francesco Torricelli
- Department of Neuroscience and Rehabilitation, University of Ferrara, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy; Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy
- Alice Tomassini
- Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy
- Giovanni Pezzulo
- Institute of Cognitive Sciences and Technologies, National Research Council, Via San Martino della Battaglia 44, 00185 Rome, Italy
- Thierry Pozzo
- Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy; INSERM UMR1093-CAPS, UFR des Sciences du Sport, Université Bourgogne Franche-Comté, F-21000, Dijon, France
- Luciano Fadiga
- Department of Neuroscience and Rehabilitation, University of Ferrara, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy; Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy
- Alessandro D'Ausilio
- Department of Neuroscience and Rehabilitation, University of Ferrara, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy; Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy
49
van den Brink RL, Hagena K, Wilming N, Murphy PR, Büchel C, Donner TH. Flexible sensory-motor mapping rules manifest in correlated variability of stimulus and action codes across the brain. Neuron 2023; 111:571-584.e9. [PMID: 36476977 DOI: 10.1016/j.neuron.2022.11.009] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2022] [Revised: 10/27/2022] [Accepted: 11/11/2022] [Indexed: 12/12/2022]
Abstract
Humans and non-human primates can flexibly switch between different arbitrary mappings from sensation to action to solve a cognitive task. It has remained unknown how the brain implements such flexible sensory-motor mapping rules. Here, we uncovered a dynamic reconfiguration of task-specific correlated variability between sensory and motor brain regions. Human participants switched between two rules for reporting visual orientation judgments during fMRI recordings. Rule switches were either signaled explicitly or inferred by the participants from ambiguous cues. We used behavioral modeling to reconstruct the time course of their belief about the active rule. In both contexts, the patterns of correlations between ongoing fluctuations in stimulus- and action-selective activity across visual- and action-related brain regions tracked participants' belief about the active rule. The rule-specific correlation patterns broke down around the time of behavioral errors. We conclude that internal beliefs about task state are instantiated in brain-wide, selective patterns of correlated variability.
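As a toy illustration of the signature described above, the sketch below simulates trial-by-trial fluctuations of a stimulus-selective signal and an action-selective signal under two mapping rules and shows that the sign of their correlation tracks the rule in force. The generative model and parameters are assumptions for illustration only, not the authors' analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials = 400

# Toy trial-wise signals: stimulus-selective fluctuations in a "visual" region and
# action-selective fluctuations in a "motor" region. Under rule A the stimulus->action
# mapping is identity; under rule B it is inverted, so the sign of the
# stimulus-action correlation flips with the active rule.
stim_signal = rng.standard_normal(n_trials)
noise = 0.7 * rng.standard_normal(n_trials)

action_signal_rule_a = stim_signal + noise    # report the stimulus "as seen"
action_signal_rule_b = -stim_signal + noise   # report the opposite

r_a = np.corrcoef(stim_signal, action_signal_rule_a)[0, 1]
r_b = np.corrcoef(stim_signal, action_signal_rule_b)[0, 1]
print(f"rule A correlation: {r_a:+.2f}, rule B correlation: {r_b:+.2f}")
```

Reading the active rule off such rule-specific correlation patterns is the intuition behind the analysis summarized above; a breakdown of the pattern would correspondingly accompany behavioral errors.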
Affiliation(s)
- Ruud L van den Brink
- Computational Cognitive Neuroscience Section, Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
- Keno Hagena
- Computational Cognitive Neuroscience Section, Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
- Niklas Wilming
- Computational Cognitive Neuroscience Section, Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
- Peter R Murphy
- Computational Cognitive Neuroscience Section, Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany; Trinity College Institute of Neuroscience and School of Psychology, Trinity College Dublin, D02 PN40 Dublin, Ireland; Department of Psychology, Maynooth University, Maynooth, Co. Kildare, Ireland
- Christian Büchel
- Institute for Systems Neuroscience, University Medical Center Hamburg-Eppendorf, 20251 Hamburg, Germany
- Tobias H Donner
- Computational Cognitive Neuroscience Section, Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
50
Children and adults rely on different heuristics for estimation of durations. Sci Rep 2023; 13:1077. [PMID: 36658160 PMCID: PMC9852441 DOI: 10.1038/s41598-023-27419-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2022] [Accepted: 01/02/2023] [Indexed: 01/20/2023] Open
Abstract
Time is a uniquely human yet culturally ubiquitous concept that is acquired over childhood and provides an underlying dimension for episodic memory and the estimation of durations. Because time, unlike distance, lacks a sensory representation, we hypothesized that subjects at different ages attribute different meanings to it when comparing durations: pre-kindergarten children compare the density of events, whereas adults use the concept of observer-independent absolute time. We asked groups of pre-kindergarteners, school-age children, and adults to compare the durations of an "eventful" and an "uneventful" video, both 1 minute long, with the durations unknown to the subjects. In addition, participants were asked to express the durations of both videos non-verbally with simple hand gestures. Statistical analysis revealed highly polarized temporal biases in each group: pre-kindergarteners judged the eventful video to be longer, whereas school-age children and adults judged the uneventful video to be longer. The tendency to represent temporal durations with a horizontal hand gesture was evident in all three groups, with prevalence increasing with age. These results support the hypothesis that pre-kindergarten-age children use heuristics to estimate time and that they shift from availability to sampling heuristics between pre-kindergarten and school age.