1
Akella S, Ledochowitsch P, Siegle JH, Belski H, Denman DD, Buice MA, Durand S, Koch C, Olsen SR, Jia X. Deciphering neuronal variability across states reveals dynamic sensory encoding. Nat Commun 2025; 16:1768. PMID: 39971911; PMCID: PMC11839951; DOI: 10.1038/s41467-025-56733-w.
Abstract
Influenced by non-stationary factors such as brain states and behavior, neurons exhibit substantial response variability even to identical stimuli. However, it remains unclear how the relative impact of these factors on neuronal variability evolves over time. To address this question, we designed an encoding model conditioned on latent states to partition variability in the mouse visual cortex across internal brain dynamics, behavior, and external visual stimulus. Applying a hidden Markov model to local field potentials, we consistently identified three distinct oscillation states, each with a unique variability profile. Regression models within each state revealed a dynamic composition of factors influencing spiking variability, with the dominant factor switching within seconds. The state-conditioned regression model uncovered extensive diversity in source contributions across units, varying in accordance with anatomical hierarchy and internal state. This heterogeneity in encoding underscores the importance of partitioning variability over time, particularly when considering the influence of non-stationary factors on sensory processing.
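A minimal sketch of the two-step approach described above (not the authors' code): label brain states with a hidden Markov model fit to LFP band-power features, then fit a separate encoding model within each state. It assumes hmmlearn and scikit-learn are available; all data and variable names are hypothetical placeholders.

```python
# Illustrative sketch (hypothetical data): HMM-based state labeling followed by
# state-conditioned regression of spike counts on stimulus and behavior.
import numpy as np
from hmmlearn.hmm import GaussianHMM              # assumed dependency
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)
n_bins = 5000
lfp_power = rng.gamma(2.0, 1.0, size=(n_bins, 3))  # hypothetical band-power features
stimulus  = rng.normal(size=(n_bins, 4))           # hypothetical stimulus regressors
running   = rng.normal(size=(n_bins, 1))           # hypothetical behavioral regressor
spikes    = rng.poisson(2.0, size=n_bins)          # hypothetical spike counts, one unit

# 1) Identify discrete oscillation states from the LFP features (three, as in the study).
hmm = GaussianHMM(n_components=3, covariance_type="diag", n_iter=200, random_state=0)
states = hmm.fit(lfp_power).predict(lfp_power)

# 2) Fit a separate encoding model within each state and compare fit quality.
X = np.hstack([stimulus, running])
for s in range(3):
    idx = states == s
    model = PoissonRegressor(alpha=1e-3).fit(X[idx], spikes[idx])
    print(f"state {s}: deviance-based pseudo-R^2 = {model.score(X[idx], spikes[idx]):.3f}")
```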
Affiliation(s)
- Daniel D Denman
- Allen Institute, Seattle, WA, USA
- Anschutz Medical Campus School of Medicine, University of Colorado, Aurora, CO, USA
- Xiaoxuan Jia
- School of Life Science, Tsinghua University, Beijing, China.
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China.
2
Papadopoulos L, Jo S, Zumwalt K, Wehr M, McCormick DA, Mazzucato L. Modulation of metastable ensemble dynamics explains optimal coding at moderate arousal in auditory cortex. bioRxiv 2024:2024.04.04.588209. PMID: 38617286; PMCID: PMC11014582; DOI: 10.1101/2024.04.04.588209.
Abstract
Performance during perceptual decision-making exhibits an inverted-U relationship with arousal, but the underlying network mechanisms remain unclear. Here, we recorded from auditory cortex (A1) of behaving mice during passive tone presentation, while tracking arousal via pupillometry. We found that tone discriminability in A1 ensembles was optimal at intermediate arousal, revealing a population-level neural correlate of the inverted-U relationship. We explained this arousal-dependent coding using a spiking network model with a clustered architecture. Specifically, we show that optimal stimulus discriminability is achieved near a transition between a multi-attractor phase with metastable cluster dynamics (low arousal) and a single-attractor phase (high arousal). Additional signatures of this transition include arousal-induced reductions of overall neural variability and the extent of stimulus-induced variability quenching, which we observed in the empirical data. Altogether, this study elucidates computational principles underlying interactions between pupil-linked arousal, sensory processing, and neural variability, and suggests a role for phase transitions in explaining nonlinear modulations of cortical computations.
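The clustered architecture invoked above can be illustrated with a toy rate model (a sketch only, not the authors' spiking network): within-cluster couplings are potentiated, a scalar baseline input stands in for arousal, and switching of the dominant cluster serves as a crude index of metastability. All parameters are hypothetical.

```python
# Toy clustered rate network (illustration only): potentiated within-cluster couplings
# produce metastable switching; a scalar baseline input plays the role of arousal.
import numpy as np

rng = np.random.default_rng(1)
n_clusters, per_cluster = 8, 25
N = n_clusters * per_cluster
labels = np.repeat(np.arange(n_clusters), per_cluster)

J = rng.normal(0.0, 0.5 / np.sqrt(N), size=(N, N))   # weak random background coupling
same = labels[:, None] == labels[None, :]
J[same] += 1.8 / per_cluster                          # potentiated within-cluster coupling

def simulate(arousal, T=4000, dt=0.1, tau=1.0):
    """Noisy rate dynamics tau*dx/dt = -x + J*tanh(x) + arousal."""
    x = rng.normal(0.0, 0.1, size=N)
    cluster_rates = np.empty((T, n_clusters))
    for t in range(T):
        r = np.tanh(x)
        x += dt / tau * (-x + J @ r + arousal) + np.sqrt(dt) * 0.3 * rng.normal(size=N)
        cluster_rates[t] = [r[labels == c].mean() for c in range(n_clusters)]
    return cluster_rates

low, high = simulate(arousal=0.0), simulate(arousal=1.5)
# Crude metastability index: how often the identity of the most active cluster changes.
print("switch rate, low baseline :", np.mean(np.diff(low.argmax(axis=1)) != 0))
print("switch rate, high baseline:", np.mean(np.diff(high.argmax(axis=1)) != 0))
```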
Affiliation(s)
- Suhyun Jo
- Institute of Neuroscience, University of Oregon, Eugene, Oregon
- Kevin Zumwalt
- Institute of Neuroscience, University of Oregon, Eugene, Oregon
- Michael Wehr
- Institute of Neuroscience, University of Oregon, Eugene, Oregon and Department of Psychology, University of Oregon, Eugene, Oregon
- David A McCormick
- Institute of Neuroscience, University of Oregon, Eugene, Oregon and Department of Biology, University of Oregon, Eugene, Oregon
- Luca Mazzucato
- Institute of Neuroscience, University of Oregon, Eugene, Oregon
- Department of Biology, University of Oregon, Eugene, Oregon
- Department of Mathematics, University of Oregon, Eugene, Oregon and Department of Physics, University of Oregon, Eugene, Oregon
3
Hulsey D, Zumwalt K, Mazzucato L, McCormick DA, Jaramillo S. Decision-making dynamics are predicted by arousal and uninstructed movements. Cell Rep 2024; 43:113709. PMID: 38280196; PMCID: PMC11016285; DOI: 10.1016/j.celrep.2024.113709.
Abstract
During sensory-guided behavior, an animal's decision-making dynamics unfold through sequences of distinct performance states, even while stimulus-reward contingencies remain static. Little is known about the factors that underlie these changes in task performance. We hypothesize that these decision-making dynamics can be predicted by externally observable measures, such as uninstructed movements and changes in arousal. Here, using computational modeling of visual and auditory task performance data from mice, we uncovered lawful relationships between transitions in strategic task performance states and an animal's arousal and uninstructed movements. Using hidden Markov models applied to behavioral choices during sensory discrimination tasks, we find that animals fluctuate between minutes-long optimal, sub-optimal, and disengaged performance states. Optimal state epochs are predicted by intermediate levels, and reduced variability, of pupil diameter and movement. Our results demonstrate that externally observable uninstructed behaviors can predict optimal performance states and suggest that mice regulate their arousal during optimal performance.
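A bare-bones illustration of segmenting choice behavior into discrete performance states with a hidden Markov model follows; the study's models additionally condition choices on the stimulus (a GLM-HMM style analysis), which is omitted here. hmmlearn is an assumed dependency and the trial outcomes are simulated placeholders.

```python
# Illustrative sketch (simulated trials): segment behavioral outcomes into discrete
# performance states with a categorical HMM.
import numpy as np
from hmmlearn.hmm import CategoricalHMM   # assumes hmmlearn >= 0.2.8

rng = np.random.default_rng(2)
# Hypothetical trial outcomes: 0 = correct, 1 = error, 2 = no response.
optimal    = rng.choice(3, size=400, p=[0.85, 0.12, 0.03])
suboptimal = rng.choice(3, size=300, p=[0.55, 0.40, 0.05])
disengaged = rng.choice(3, size=300, p=[0.20, 0.20, 0.60])
outcomes = np.concatenate([optimal, suboptimal, disengaged]).reshape(-1, 1)

hmm = CategoricalHMM(n_components=3, n_iter=200, random_state=0)
states = hmm.fit(outcomes).predict(outcomes)

# Report each inferred state's outcome probabilities and expected dwell time (in trials).
for s in range(3):
    dwell = 1.0 / max(1e-9, 1.0 - hmm.transmat_[s, s])
    print(f"state {s}: P(outcome) = {np.round(hmm.emissionprob_[s], 2)}, dwell ~ {dwell:.0f} trials")
```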
Affiliation(s)
- Daniel Hulsey
- Institute of Neuroscience, University of Oregon, Eugene, OR 97405, USA
- Kevin Zumwalt
- Institute of Neuroscience, University of Oregon, Eugene, OR 97405, USA
- Luca Mazzucato
- Institute of Neuroscience, University of Oregon, Eugene, OR 97405, USA; Department of Biology, University of Oregon, Eugene, OR 97405, USA; Departments of Physics and Mathematics, University of Oregon, Eugene, OR 97405, USA
- David A McCormick
- Institute of Neuroscience, University of Oregon, Eugene, OR 97405, USA; Department of Biology, University of Oregon, Eugene, OR 97405, USA
- Santiago Jaramillo
- Institute of Neuroscience, University of Oregon, Eugene, OR 97405, USA; Department of Biology, University of Oregon, Eugene, OR 97405, USA
4
Stern M, Istrate N, Mazzucato L. A reservoir of timescales emerges in recurrent circuits with heterogeneous neural assemblies. eLife 2023; 12:e86552. PMID: 38084779; PMCID: PMC10810607; DOI: 10.7554/elife.86552.
Abstract
The temporal activity of many physical and biological systems, from complex networks to neural circuits, exhibits fluctuations simultaneously varying over a large range of timescales. Long-tailed distributions of intrinsic timescales have been observed across neurons simultaneously recorded within the same cortical circuit. The mechanisms leading to this striking temporal heterogeneity are yet unknown. Here, we show that neural circuits, endowed with heterogeneous neural assemblies of different sizes, naturally generate multiple timescales of activity spanning several orders of magnitude. We develop an analytical theory using rate networks, supported by simulations of spiking networks with cell-type specific connectivity, to explain how neural timescales depend on assembly size and show that our model can naturally explain the long-tailed timescale distribution observed in the awake primate cortex. When driving recurrent networks of heterogeneous neural assemblies by a time-dependent broadband input, we found that large and small assemblies preferentially entrain slow and fast spectral components of the input, respectively. Our results suggest that heterogeneous assemblies can provide a biologically plausible mechanism for neural circuits to demix complex temporal input signals by transforming temporal into spatial neural codes via frequency-selective neural assemblies.
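The core idea, that assembly size sets the intrinsic timescale, can be sketched with a toy rate network (not the paper's model): assemblies of different sizes receive within-assembly coupling whose effective strength approaches one as size grows, so larger assemblies decay more slowly. All sizes and parameters below are illustrative.

```python
# Toy rate network with heterogeneous assemblies (illustration only): the effective
# within-assembly coupling approaches one as assembly size grows, yielding slower decay.
import numpy as np

rng = np.random.default_rng(3)
sizes = np.array([10, 20, 40, 80, 160])              # hypothetical assembly sizes
labels = np.repeat(np.arange(sizes.size), sizes)
N = labels.size

J = rng.normal(0.0, 0.4 / np.sqrt(N), size=(N, N))   # weak random background coupling
for c, sz in enumerate(sizes):
    idx = labels == c
    J[np.ix_(idx, idx)] += (1.0 - 2.0 / sz) / sz     # effective self-coupling -> 1 with size

def simulate(T=10000, dt=0.1, tau=1.0, noise=0.5):
    x = np.zeros(N)
    means = np.empty((T, sizes.size))
    for t in range(T):
        x += dt / tau * (-x + J @ np.tanh(x)) + np.sqrt(dt) * noise * rng.normal(size=N)
        means[t] = [x[labels == c].mean() for c in range(sizes.size)]
    return means

m = simulate()
for c, sz in enumerate(sizes):
    z = m[:, c] - m[:, c].mean()
    ac = np.correlate(z, z, mode="full")[z.size - 1:]
    ac /= ac[0]
    tau_c = np.argmax(ac < np.exp(-1)) * 0.1          # first lag below 1/e, in time units
    print(f"assembly of {sz:3d} units: autocorrelation timescale ~ {tau_c:.1f}")
```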
Affiliation(s)
- Merav Stern
- Institute of Neuroscience, University of Oregon, Eugene, United States
- Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem, Israel
- Nicolae Istrate
- Institute of Neuroscience, University of Oregon, Eugene, United States
- Departments of Physics, University of Oregon, Eugene, United States
- Luca Mazzucato
- Institute of Neuroscience, University of Oregon, Eugene, United States
- Departments of Physics, Mathematics and Biology, University of Oregon, Eugene, United States
5
Ogawa S, Fumarola F, Mazzucato L. Multitasking via baseline control in recurrent neural networks. Proc Natl Acad Sci U S A 2023; 120:e2304394120. PMID: 37549275; PMCID: PMC10437433; DOI: 10.1073/pnas.2304394120.
Abstract
Changes in behavioral state, such as arousal and movements, strongly affect neural activity in sensory areas, and can be modeled as long-range projections regulating the mean and variance of baseline input currents. What are the computational benefits of these baseline modulations? We investigate this question within a brain-inspired framework for reservoir computing, where we vary the quenched baseline inputs to a recurrent neural network with random couplings. We found that baseline modulations control the dynamical phase of the reservoir network, unlocking a vast repertoire of network phases. We uncovered a number of bistable phases exhibiting the simultaneous coexistence of fixed points and chaos, of two fixed points, and of weak and strong chaos. We identified several phenomena, including noise-driven enhancement of chaos, ergodicity breaking, and neural hysteresis, whereby transitions across a phase boundary retain the memory of the preceding phase. In each bistable phase, the reservoir performs a different binary decision-making task. Fast switching between different tasks can be controlled by adjusting the baseline input mean and variance. Moreover, we found that the reservoir network achieves optimal memory performance at any first-order phase boundary. In summary, baseline control enables multitasking without any optimization of the network couplings, opening directions for brain-inspired artificial intelligence and providing an interpretation for the ubiquitously observed behavioral modulations of cortical activity.
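A toy reservoir sketch of baseline control follows (an illustration of the idea, not the paper's analysis): quenched baseline inputs with mean mu and standard deviation sigma are applied to a random recurrent network, and the dynamical regime is probed by the divergence of nearby trajectories. All values are placeholders.

```python
# Toy reservoir sketch (illustration only): vary the mean and variance of quenched
# baseline inputs and probe the dynamical regime via divergence of nearby trajectories.
import numpy as np

rng = np.random.default_rng(4)
N, g, dt, tau = 300, 2.0, 0.1, 1.0
J = g * rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))   # random recurrent couplings

def expansion_rate(mu, sigma, T=2000, eps=1e-6):
    b = mu + sigma * rng.normal(size=N)                   # quenched baseline input
    x = rng.normal(size=N)
    y = x + eps * rng.normal(size=N)                      # slightly perturbed copy
    for _ in range(T):
        x = x + dt / tau * (-x + J @ np.tanh(x) + b)
        y = y + dt / tau * (-y + J @ np.tanh(y) + b)
    d = max(np.linalg.norm(x - y), 1e-300)
    return np.log(d / eps) / (T * dt)                     # crude chaoticity estimate

for mu, sigma in [(0.0, 0.0), (0.0, 1.0), (2.0, 0.0), (2.0, 1.0)]:
    print(f"baseline mean {mu:.1f}, std {sigma:.1f}: expansion rate ~ {expansion_rate(mu, sigma):+.3f}")
```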
Affiliation(s)
- Shun Ogawa
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, Wako, Saitama 351-0198, Japan
- Francesco Fumarola
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, Wako, Saitama 351-0198, Japan
- Luca Mazzucato
- Department of Biology, Institute of Neuroscience, University of Oregon, Eugene, OR 97403
- Department of Mathematics, Institute of Neuroscience, University of Oregon, Eugene, OR 97403
6
Hulsey D, Zumwalt K, Mazzucato L, McCormick DA, Jaramillo S. Decision-making dynamics are predicted by arousal and uninstructed movements. bioRxiv 2023:2023.03.02.530651. PMID: 37034793; PMCID: PMC10081205; DOI: 10.1101/2023.03.02.530651.
Abstract
During sensory-guided behavior, an animal's decision-making dynamics unfold through sequences of distinct performance states, even while stimulus-reward contingencies remain static. Little is known about the factors that underlie these changes in task performance. We hypothesize that these decision-making dynamics can be predicted by externally observable measures, such as uninstructed movements and changes in arousal. Here, combining behavioral experiments in mice with computational modeling, we uncovered lawful relationships between transitions in strategic task performance states and an animal's arousal and uninstructed movements. Using hidden Markov models applied to behavioral choices during sensory discrimination tasks, we found that animals fluctuate between minutes-long optimal, sub-optimal and disengaged performance states. Optimal state epochs were predicted by intermediate levels, and reduced variability, of pupil diameter, along with reduced variability in face movements and locomotion. Our results demonstrate that externally observable uninstructed behaviors can predict optimal performance states, and suggest mice regulate their arousal during optimal performance.
Affiliation(s)
- Daniel Hulsey
- Institute of Neuroscience, University of Oregon, Eugene, OR, USA
- Kevin Zumwalt
- Institute of Neuroscience, University of Oregon, Eugene, OR, USA
- Luca Mazzucato
- Institute of Neuroscience, University of Oregon, Eugene, OR, USA
- Department of Biology, University of Oregon, Eugene, OR, USA
- Departments of Physics and Mathematics, University of Oregon, Eugene, OR, USA
- David A. McCormick
- Institute of Neuroscience, University of Oregon, Eugene, OR, USA
- Department of Biology, University of Oregon, Eugene, OR, USA
- Santiago Jaramillo
- Institute of Neuroscience, University of Oregon, Eugene, OR, USA
- Department of Biology, University of Oregon, Eugene, OR, USA
7
Schmitt FJ, Rostami V, Nawrot MP. Efficient parameter calibration and real-time simulation of large-scale spiking neural networks with GeNN and NEST. Front Neuroinform 2023; 17:941696. PMID: 36844916; PMCID: PMC9950635; DOI: 10.3389/fninf.2023.941696.
Abstract
Spiking neural networks (SNNs) represent the state-of-the-art approach to the biologically realistic modeling of nervous system function. The systematic calibration for multiple free model parameters is necessary to achieve robust network function and demands high computing power and large memory resources. Special requirements arise from closed-loop model simulation in virtual environments and from real-time simulation in robotic applications. Here, we compare two complementary approaches to efficient large-scale and real-time SNN simulation. The widely used NEural Simulation Tool (NEST) parallelizes simulation across multiple CPU cores. The GPU-enhanced Neural Network (GeNN) simulator uses the highly parallel GPU-based architecture to gain simulation speed. We quantify fixed and variable simulation costs on single machines with different hardware configurations. As a benchmark model, we use a spiking cortical attractor network with a topology of densely connected excitatory and inhibitory neuron clusters with homogeneous or distributed synaptic time constants, in comparison to a random balanced network. We show that simulation time scales linearly with the simulated biological model time and, for large networks, approximately linearly with the model size as dominated by the number of synaptic connections. Additional fixed costs with GeNN are almost independent of model size, while fixed costs with NEST increase linearly with model size. We demonstrate how GeNN can be used for simulating networks with up to 3.5 · 10⁶ neurons (>3 · 10¹² synapses) on a high-end GPU, and up to 250,000 neurons (25 · 10⁹ synapses) on a low-cost GPU. Real-time simulation was achieved for networks with 100,000 neurons. Network calibration and parameter grid search can be efficiently achieved using batch processing. We discuss the advantages and disadvantages of both approaches for different use cases.
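For orientation, a minimal PyNEST timing sketch in the spirit of these benchmarks is shown below. It assumes a NEST 3.x installation; the network parameters are placeholders rather than the paper's benchmark model, and the analogous PyGeNN setup is omitted.

```python
# Minimal PyNEST timing sketch (assumes NEST 3.x; placeholder parameters, not the
# paper's benchmark model). GeNN offers an analogous workflow through PyGeNN.
import time
import nest

nest.ResetKernel()
nest.SetKernelStatus({"local_num_threads": 4})       # CPU-thread parallelism

n_exc, n_inh, indegree = 4000, 1000, 500
exc = nest.Create("iaf_psc_exp", n_exc)
inh = nest.Create("iaf_psc_exp", n_inh)
noise = nest.Create("poisson_generator", params={"rate": 8000.0})

conn = {"rule": "fixed_indegree", "indegree": indegree}
nest.Connect(exc, exc + inh, conn, {"weight": 0.1, "delay": 1.5})    # excitatory synapses
nest.Connect(inh, exc + inh, conn, {"weight": -0.5, "delay": 1.5})   # inhibitory synapses
nest.Connect(noise, exc + inh, syn_spec={"weight": 5.0, "delay": 1.5})

t0 = time.perf_counter()
nest.Simulate(1000.0)                                # 1 s of biological time
wall = time.perf_counter() - t0
print(f"wall-clock / biological time = {wall:.2f}")  # values below 1 mean real time
```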
Affiliation(s)
- Martin Paul Nawrot
- Computational Systems Neuroscience, Institute of Zoology, University of Cologne, Cologne, Germany
8
Mazzucato L. Neural mechanisms underlying the temporal organization of naturalistic animal behavior. eLife 2022; 11:e76577. PMID: 35792884; PMCID: PMC9259028; DOI: 10.7554/elife.76577.
Abstract
Naturalistic animal behavior exhibits a strikingly complex organization in the temporal domain, with variability arising from at least three sources: hierarchical, contextual, and stochastic. What neural mechanisms and computational principles underlie such intricate temporal features? In this review, we provide a critical assessment of the existing behavioral and neurophysiological evidence for these sources of temporal variability in naturalistic behavior. Recent research converges on an emergent mechanistic theory of temporal variability based on attractor neural networks and metastable dynamics, arising via coordinated interactions between mesoscopic neural circuits. We highlight the crucial role played by structural heterogeneities as well as noise from mesoscopic feedback loops in regulating flexible behavior. We assess the shortcomings and missing links in the current theoretical and experimental literature and propose new directions of investigation to fill these gaps.
Affiliation(s)
- Luca Mazzucato
- Institute of Neuroscience, Departments of Biology, Mathematics and Physics, University of Oregon, Eugene, United States
9
Metastable attractors explain the variable timing of stable behavioral action sequences. Neuron 2022; 110:139-153.e9. PMID: 34717794; PMCID: PMC9194601; DOI: 10.1016/j.neuron.2021.10.011.
Abstract
The timing of self-initiated actions shows large variability even when they are executed in stable, well-learned sequences. Could this mix of reliability and stochasticity arise within the same neural circuit? We trained rats to perform a stereotyped sequence of self-initiated actions and recorded neural ensemble activity in secondary motor cortex (M2), which is known to reflect trial-by-trial action-timing fluctuations. Using hidden Markov models, we established a dictionary between activity patterns and actions. We then showed that metastable attractors, representing activity patterns with a reliable sequential structure and large transition timing variability, could be produced by reciprocally coupling a high-dimensional recurrent network and a low-dimensional feedforward one. Transitions between attractors relied on correlated variability in this mesoscale feedback loop, predicting a specific structure of low-dimensional correlations that were empirically verified in M2 recordings. Our results suggest a novel mesoscale network motif based on correlated variability supporting naturalistic animal behavior.
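The mesoscale motif described above, a high-dimensional recurrent module reciprocally coupled to a low-dimensional one, can be caricatured with a toy rate simulation (not the paper's fitted model). The sketch shows how the feedback loop injects low-rank correlated variability into the recurrent population; all couplings and parameters are hypothetical.

```python
# Toy caricature of the mesoscale motif (not the fitted model): a high-dimensional
# recurrent module reciprocally coupled to a low-dimensional noisy module, whose
# feedback injects low-rank correlated variability into the recurrent population.
import numpy as np

rng = np.random.default_rng(5)
N, K, dt, tau = 400, 3, 0.1, 1.0                              # N recurrent units, K low-dim units
J     = 1.5 * rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))  # recurrent couplings
W_out = rng.normal(0.0, 1.0 / np.sqrt(N), size=(K, N))        # projection to low-dim module
W_in  = rng.normal(0.0, 1.0, size=(N, K))                     # feedback into recurrent module

T = 5000
x = rng.normal(size=N)
z = np.zeros(K)
zs = np.empty((T, K))
for t in range(T):
    r = np.tanh(x)
    # low-dimensional module: driven by a readout of the recurrent module plus private noise
    z += dt / tau * (-z + W_out @ r) + np.sqrt(dt) * 0.5 * rng.normal(size=K)
    # recurrent module: its own dynamics plus the low-dimensional feedback
    x += dt / tau * (-x + J @ r + W_in @ z)
    zs[t] = z

# The feedback term W_in @ z has covariance of rank at most K across the population.
cov = np.cov(W_in @ zs.T)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
print("leading eigenvalues of the feedback covariance:", np.round(eigvals[:5], 3))
```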