1. Microstimulation reveals anesthetic state-dependent effective connectivity of neurons in cerebral cortex. bioRxiv [Preprint] 2024. PMID: 38746366; PMCID: PMC11092428; DOI: 10.1101/2024.04.29.591664.
Abstract
Complex neuronal interactions underlie cortical information processing that can be compromised in altered states of consciousness. Here, intracortical microstimulation was applied to investigate the state-dependent effective connectivity of neurons in rat visual cortex in vivo. Extracellular activity was recorded at 32 sites in layers 5/6 while stimulating with charge-balanced discrete pulses at each electrode in random order. The same stimulation pattern was applied at three levels of desflurane anesthesia and in wakefulness. Spikes were sorted and classified by their waveform features as putative excitatory and inhibitory neurons. Microstimulation caused an early (<10 ms) increase followed by a prolonged (11-100 ms) decrease in spiking of all neurons throughout the electrode array. The early response of excitatory but not inhibitory neurons decayed rapidly with distance from the stimulation site over 1 mm. Effective connectivity of neurons with a significant stimulus response was dense in wakefulness and sparse under anesthesia. Network motifs were identified in graphs of effective connectivity constructed from monosynaptic cross-correlograms. The number of motifs, especially those of higher order, increased rapidly as the anesthesia was withdrawn, indicating a substantial increase in network connectivity as the animals woke up. The results illuminate the impact of anesthesia on the functional integrity of local circuits affecting the state of consciousness.
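The motif-counting step described here can be sketched directly from a binary effective-connectivity matrix. A minimal illustration, not the paper's actual pipeline: the function name and the two motif types counted (reciprocal pairs and directed 3-cycles) are illustrative choices.

```python
import numpy as np

def count_motifs(A):
    """Count two simple directed motifs in a binary adjacency matrix A,
    where A[i, j] = 1 means an effective connection i -> j (zero diagonal):
    reciprocal pairs (i <-> j) and directed 3-cycles (i -> j -> k -> i)."""
    A = np.asarray(A, dtype=int)
    recip = int(np.triu(A & A.T, 1).sum())        # each i <-> j pair once
    # trace(A^3) counts each directed 3-cycle three times (once per start node)
    cycles = int(np.trace(A @ A @ A)) // 3
    return {"reciprocal": recip, "three_cycle": cycles}

# Toy graph: one 3-cycle among neurons 0, 1, 2 and one reciprocal pair 3 <-> 4
A = np.zeros((5, 5), dtype=int)
A[0, 1] = A[1, 2] = A[2, 0] = 1
A[3, 4] = A[4, 3] = 1
counts = count_motifs(A)
```

With a zero diagonal, closed length-3 walks cannot pass through a reciprocal pair, so the trace isolates genuine 3-cycles; higher-order motifs would need subgraph enumeration rather than matrix powers.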
2. Directed and acyclic synaptic connectivity in the human layer 2-3 cortical microcircuit. Science 2024; 384:338-343. PMID: 38635709; DOI: 10.1126/science.adg8828.
Abstract
The computational capabilities of neuronal networks are fundamentally constrained by their specific connectivity. Previous studies of cortical connectivity have mostly been carried out in rodents; whether the principles established therein also apply to the evolutionarily expanded human cortex is unclear. We studied network properties within the human temporal cortex using samples obtained from brain surgery. We analyzed multineuron patch-clamp recordings in layer 2-3 pyramidal neurons and identified substantial differences compared with rodents. Reciprocity showed a random distribution, synaptic strength was independent of connection probability, and connectivity of the supragranular temporal cortex followed a directed and mostly acyclic graph topology. Application of these principles in neuronal models increased the dimensionality of network dynamics, suggesting a critical role of these properties in cortical computation.
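Whether a measured directed connectivity graph is acyclic can be checked with a topological sort. A small sketch using Kahn's algorithm, assuming the convention A[i, j] = 1 for a synapse from neuron i to neuron j (this is not the authors' analysis code):

```python
import numpy as np
from collections import deque

def is_acyclic(A):
    """Kahn's algorithm: a directed graph is acyclic iff every node can be
    removed in topological order. A[i, j] = 1 means a synapse i -> j."""
    A = np.asarray(A, dtype=int)
    n = A.shape[0]
    indeg = A.sum(axis=0)                       # in-degree of each node
    queue = deque(np.flatnonzero(indeg == 0))   # start from source nodes
    seen = 0
    while queue:
        i = queue.popleft()
        seen += 1
        for j in np.flatnonzero(A[i]):
            indeg[j] -= 1
            if indeg[j] == 0:
                queue.append(j)
    return seen == n                            # all nodes removable: acyclic

# A feedforward chain is acyclic; one feedback edge creates a cycle
chain = np.triu(np.ones((4, 4), dtype=int), 1)
cyclic = chain.copy()
cyclic[3, 0] = 1
```

The "mostly acyclic" finding would correspond to graphs that become acyclic after removing a small fraction of edges, which the same routine can probe by deleting candidate feedback edges.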
3. Assistive sensory-motor perturbations influence learned neural representations. bioRxiv [Preprint] 2024. PMID: 38562772; PMCID: PMC10983972; DOI: 10.1101/2024.03.20.585972.
Abstract
Task errors are used to learn and refine motor skills. We investigated how task assistance influences learned neural representations using brain-computer interfaces (BCIs), which map neural activity into movement via a decoder. We analyzed motor cortex activity as monkeys practiced BCI control with a decoder that adapted to improve or maintain performance over days. Population dimensionality remained constant or increased with learning, counter to trends with non-adaptive BCIs. Yet, over time, task information was contained in a smaller subset of neurons or population modes. Moreover, task information was ultimately stored in neural modes that occupied a small fraction of the population variance. An artificial neural network model suggests that the adaptive decoders contribute to forming these compact neural representations. Our findings show that assistive decoders manipulate the error information used for long-term learning computations, like credit assignment, which informs our understanding of motor learning and has implications for designing real-world BCIs.
4. Does the brain behave like a (complex) network? I. Dynamics. Phys Life Rev 2024; 48:47-98. PMID: 38145591; DOI: 10.1016/j.plrev.2023.12.006.
Abstract
Graph theory is now becoming a standard tool in system-level neuroscience. However, endowing observed brain anatomy and dynamics with a complex network structure does not entail that the brain actually works as a network. Asking whether the brain behaves as a network means asking whether network properties count. From the viewpoint of neurophysiology and, possibly, of brain physics, the most substantial issues a network structure may be instrumental in addressing relate to the influence of network properties on brain dynamics and to whether these properties ultimately explain some aspects of brain function. Here, we address the dynamical implications of complex network structure, examining which aspects and scales of brain activity may be understood to genuinely behave as a network. To do so, we first define the meaning of networkness and analyse some of its implications. We then examine ways in which brain anatomy and dynamics can be endowed with a network structure and discuss possible ways in which network structure may be shown to represent a genuine organisational principle of brain activity, rather than just a convenient description of its anatomy and dynamics.
5. Disentangling Fact from Grid Cell Fiction in Trained Deep Path Integrators. arXiv [Preprint] 2023: arXiv:2312.03954v3. PMID: 38106458; PMCID: PMC10723537.
Abstract
Work on deep learning-based models of grid cells suggests that grid cells generically and robustly arise from optimizing networks to path integrate, i.e., track one's spatial position by integrating self-velocity signals. In previous work [27], we challenged this path integration hypothesis by showing that deep neural networks trained to path integrate almost always do so, but almost never learn grid-like tuning unless separately inserted by researchers via mechanisms unrelated to path integration. In this work, we restate the key evidence substantiating these insights, then address a response to [27] by authors of one of the path integration hypothesis papers [32]. First, we show that the response misinterprets our work, indirectly confirming our points. Second, we evaluate the response's preferred "unified theory for the origin of grid cells" in trained deep path integrators [31, 33, 34] and show that it is at best "occasionally suggestive," not exact or comprehensive. We finish by considering why assessing model quality through prediction of biological neural activity by regression of activity in deep networks [23] can lead to the wrong conclusions.
6. Transformation of valence signaling in a striatopallidal circuit. bioRxiv [Preprint] 2023. PMID: 37577586; PMCID: PMC10418236; DOI: 10.1101/2023.08.01.551547.
Abstract
How sensory stimuli acquire motivational valence through association with other stimuli is one of the simplest forms of learning. Though we have identified many brain nuclei that play various roles in reward processing, a significant gap remains in understanding how valence encoding transforms through the layers of sensory processing. To address this gap, we carried out a comparative investigation of the olfactory tubercle (OT) and the ventral pallidum (VP), two connected nuclei of the basal ganglia that have both been implicated in reward processing. First, using anterograde and retrograde tracing, we show that both D1 and D2 neurons of the OT project primarily to the VP and minimally elsewhere. Using two-photon calcium imaging, we then investigated how the identity of an odor and its reward contingency are differently encoded by neurons in either structure during a classical conditioning paradigm. We find that VP neurons robustly encode reward contingency, but not identity, in low-dimensional space. In contrast, OT neurons primarily encode odor identity in high-dimensional space. Though D1 OT neurons showed larger response vectors to rewarded odors than to other odors, we propose that this is better interpreted as identity encoding with enhanced contrast rather than as valence encoding. Finally, using a novel conditioning paradigm that decouples reward contingency and licking vigor, we show that both features are encoded by non-overlapping VP neurons. These results provide a novel framework for the striatopallidal circuit in which a high-dimensional encoding of stimulus identity is collapsed onto a low-dimensional encoding of motivational valence.
7. Signatures of task learning in neural representations. Curr Opin Neurobiol 2023; 83:102759. PMID: 37708653; DOI: 10.1016/j.conb.2023.102759.
Abstract
While neural plasticity has long been studied as the basis of learning, the growth of large-scale neural recording techniques provides a unique opportunity to study how learning-induced activity changes are coordinated across neurons within the same circuit. These distributed changes can be understood through an evolution of the geometry of neural manifolds and latent dynamics underlying new computations. In parallel, studies of multi-task and continual learning in artificial neural networks hint at a tradeoff between non-interference and compositionality as guiding principles to understand how neural circuits flexibly support multiple behaviors. In this review, we highlight recent findings from both biological and artificial circuits that together form a new framework for understanding task learning at the population level.
8. A manifold neural population code for space in hippocampal coactivity dynamics independent of place fields. Cell Rep 2023; 42:113142. PMID: 37742193; PMCID: PMC10842170; DOI: 10.1016/j.celrep.2023.113142.
Abstract
Hippocampal place cell discharge is temporally unreliable across seconds and days, and place fields are multimodal, suggesting an "ensemble cofiring" spatial coding hypothesis with manifold dynamics that does not require reliable spatial tuning, in contrast to hypotheses based on place field (spatial tuning) stability. We imaged mouse CA1 (cornu ammonis 1) ensembles in two environments across three weeks to evaluate these coding hypotheses. While place fields "remap," being more distinct between than within environments, coactivity relationships generally change less. Decoding location and environment from 1-s ensemble location-specific activity is effective and improves with experience. Decoding environment from cell-pair coactivity relationships is also effective and improves with experience, even after removing place tuning. Discriminating environments from 1-s ensemble coactivity relies crucially on the cells with the most anti-coactive cell-pair relationships, because activity is internally organized on a low-dimensional manifold of non-linear coactivity relationships that intermittently re-registers to environments according to the activity of the anti-cofiring subpopulation.
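The core idea of decoding environment identity from cell-pair coactivity (rather than from spatial tuning) can be illustrated with simulated data. This is a hedged sketch under invented assumptions: the window length, firing rates, subgroup structure, and nearest-template decoder below are illustrative stand-ins, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(4)
n, T = 30, 200                  # neurons, time bins per activity window

def window(group):
    """One activity window; neurons in `group` share a common drive."""
    x = rng.poisson(1.0, (n, T)).astype(float)
    x[group] += rng.poisson(2.0, T)   # shared drive correlates the subgroup
    return x

def coactivity(x):
    """Cell-pair coactivity vector: upper-triangle pairwise correlations."""
    C = np.corrcoef(x)
    return C[np.triu_indices(n, 1)]

# Two "environments" differ only in which subgroup is co-driven
groups = {"A": slice(0, 10), "B": slice(20, 30)}
templates = {env: np.mean([coactivity(window(g)) for _ in range(10)], axis=0)
             for env, g in groups.items()}

def decode(x):
    """Assign a window to the environment with the best-matching template."""
    v = coactivity(x)
    return max(templates, key=lambda env: float(np.dot(v, templates[env])))
```

The decoder never sees single-cell tuning, only pairwise relationships, which is the sense in which coactivity alone can carry environment identity.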
9. Automated customization of large-scale spiking network models to neuronal population activity. bioRxiv [Preprint] 2023. PMID: 37790533; PMCID: PMC10542160; DOI: 10.1101/2023.09.21.558920.
Abstract
Understanding brain function is facilitated by constructing computational models that accurately reproduce aspects of brain activity. Networks of spiking neurons capture the underlying biophysics of neuronal circuits, yet the dependence of their activity on model parameters is notoriously complex. As a result, heuristic methods have been used to configure spiking network models, which can lead to an inability to discover activity regimes complex enough to match large-scale neuronal recordings. Here we propose an automatic procedure, Spiking Network Optimization using Population Statistics (SNOPS), to customize spiking network models that reproduce the population-wide covariability of large-scale neuronal recordings. We first confirmed that SNOPS accurately recovers simulated neural activity statistics. Then, we applied SNOPS to recordings in macaque visual and prefrontal cortices and discovered previously unknown limitations of spiking network models. Taken together, SNOPS can guide the development of network models and thereby enable deeper insight into how networks of neurons give rise to brain function.
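The general recipe, adjusting model parameters until a summary statistic of simulated activity matches a target, can be sketched for a far simpler case than SNOPS: tuning the coupling strength of a linear rate network so its effective (PCA) dimensionality matches a target value. This only illustrates the fitting loop; SNOPS itself optimizes spiking networks against multiple population statistics, and the target value and grid search here are arbitrary stand-ins.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(5)
N = 200
W = rng.standard_normal((N, N)) / np.sqrt(N)   # fixed random base connectivity

def eff_dim(g):
    """Effective (participation-ratio) dimensionality of the stationary
    covariance of dx/dt = -(I - g W) x + white noise."""
    A = -(np.eye(N) - g * W)
    C = solve_continuous_lyapunov(A, -2.0 * np.eye(N))  # A C + C A^T = -2 I
    lam = np.linalg.eigvalsh(C)
    return lam.sum() ** 2 / (lam ** 2).sum()

target = 0.4 * N                      # hypothetical target statistic
grid = np.linspace(0.1, 0.8, 8)       # candidate coupling strengths (stable)
best_g = min(grid, key=lambda g: abs(eff_dim(g) - target))
```

In a spiking-network setting the inner evaluation is a stochastic simulation rather than a Lyapunov solve, which is why sample-efficient (e.g., Bayesian) optimization becomes necessary.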
10. Dimension of Activity in Random Neural Networks. Phys Rev Lett 2023; 131:118401. PMID: 37774280; DOI: 10.1103/physrevlett.131.118401.
Abstract
Neural networks are high-dimensional nonlinear dynamical systems that process information through the coordinated activity of many connected units. Understanding how biological and machine-learning networks function and learn requires knowledge of the structure of this coordinated activity, information contained, for example, in cross covariances between units. Self-consistent dynamical mean field theory (DMFT) has elucidated several features of random neural networks-in particular, that they can generate chaotic activity-however, a calculation of cross covariances using this approach has not been provided. Here, we calculate cross covariances self-consistently via a two-site cavity DMFT. We use this theory to probe spatiotemporal features of activity coordination in a classic random-network model with independent and identically distributed (i.i.d.) couplings, showing an extensive but fractionally low effective dimension of activity and a long population-level timescale. Our formulas apply to a wide range of single-unit dynamics and generalize to non-i.i.d. couplings. As an example of the latter, we analyze the case of partially symmetric couplings.
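The "effective dimension of activity" referred to here is commonly quantified by the participation ratio of the covariance eigenvalues, D_eff = (Σᵢ λᵢ)² / Σᵢ λᵢ². A brief sketch; the shared-plus-private Gaussian activity below is an illustrative stand-in for network activity, not the paper's model.

```python
import numpy as np

def participation_ratio(C):
    """Effective dimension from covariance eigenvalues:
    D_eff = (sum of lam)^2 / sum of lam^2."""
    lam = np.linalg.eigvalsh(C)
    return lam.sum() ** 2 / (lam ** 2).sum()

rng = np.random.default_rng(0)
N, T = 200, 5000
shared = rng.standard_normal(T)                 # one global fluctuation
x = 0.8 * shared + 0.6 * rng.standard_normal((N, T))  # shared + private noise
d = participation_ratio(np.cov(x))
```

For isotropic activity (C proportional to the identity) the participation ratio equals the number of units; a single dominant shared mode, as above, pulls it down to order one, matching the intuition of an extensive but fractionally low dimension.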
11. Dynamic structure of motor cortical neuron coactivity carries behaviorally relevant information. Netw Neurosci 2023; 7:661-678. PMID: 37397877; PMCID: PMC10312288; DOI: 10.1162/netn_a_00298.
Abstract
Skillful, voluntary movements are underpinned by computations performed by networks of interconnected neurons in the primary motor cortex (M1). Computations are reflected by patterns of coactivity between neurons. Using pairwise spike time statistics, coactivity can be summarized as a functional network (FN). Here, we show that the structure of FNs constructed from an instructed-delay reach task in nonhuman primates is behaviorally specific: low-dimensional embedding and graph alignment scores show that FNs constructed from closer target reach directions are also closer in network space. Using short intervals across a trial, we constructed temporal FNs and found that temporal FNs traverse a low-dimensional subspace in a reach-specific trajectory. Alignment scores show that FNs become separable, and correspondingly decodable, shortly after the instruction cue. Finally, we observe that reciprocal connections in FNs transiently decrease following the instruction cue, consistent with the hypothesis that information external to the recorded population temporarily alters the structure of the network at this moment.
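Building a functional network from pairwise statistics of binned spiking can be sketched in a few lines. This is a simplified, undirected stand-in: the paper constructs weighted, directed FNs from spike-time statistics, whereas this sketch thresholds zero-lag correlations, and the threshold value is arbitrary.

```python
import numpy as np

def functional_network(spikes, thresh=0.2):
    """Build a binary functional network from binned spike counts.

    spikes: (n_neurons, n_bins) array of spike counts.
    Edges connect pairs whose zero-lag correlation magnitude exceeds
    `thresh`; the diagonal (self-connections) is excluded."""
    C = np.corrcoef(spikes)
    np.fill_diagonal(C, 0.0)
    return (np.abs(C) > thresh).astype(int)

rng = np.random.default_rng(3)
n, T = 20, 2000
drive = rng.poisson(2.0, T)              # shared drive correlates a subgroup
spikes = rng.poisson(1.0, (n, T))
spikes[:5] += drive                      # first 5 neurons share the drive
A = functional_network(spikes)
```

In this toy network the co-driven subgroup forms a fully connected clique while uncorrelated pairs stay unconnected; trial-by-trial or interval-by-interval versions of such matrices are what a "temporal FN" trajectory would be built from.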
12. The centrality of population-level factors to network computation is demonstrated by a versatile approach for training spiking networks. Neuron 2023; 111:631-649.e10. PMID: 36630961; PMCID: PMC10118067; DOI: 10.1016/j.neuron.2022.12.007.
Abstract
Neural activity is often described in terms of population-level factors extracted from the responses of many neurons. Factors provide a lower-dimensional description with the aim of shedding light on network computations. Yet, mechanistically, computations are performed not by continuously valued factors but by interactions among neurons that spike discretely and variably. Models provide a means of bridging these levels of description. We developed a general method for training model networks of spiking neurons by leveraging factors extracted from either data or firing-rate-based networks. In addition to providing a useful model-building framework, this formalism illustrates how reliable and continuously valued factors can arise from seemingly stochastic spiking. Our framework establishes procedures for embedding this property in network models with different levels of realism. The relationship between spikes and factors in such networks provides a foundation for interpreting (and subtly redefining) commonly used quantities such as firing rates.
13. Multitask representations in the human cortex transform along a sensory-to-motor hierarchy. Nat Neurosci 2023; 26:306-315. PMID: 36536240; DOI: 10.1038/s41593-022-01224-0.
Abstract
Human cognition recruits distributed neural processes, yet the organizing computational and functional architectures remain unclear. Here, we characterized the geometry and topography of multitask representations across the human cortex using functional magnetic resonance imaging during 26 cognitive tasks in the same individuals. We measured the representational similarity across tasks within a region and the alignment of representations between regions. Representational alignment varied in a graded manner along the sensory-association-motor axis. Multitask dimensionality exhibited compression then expansion along this gradient. To investigate computational principles of multitask representations, we trained multilayer neural network models to transform empirical visual-to-motor representations. Compression-then-expansion organization in models emerged exclusively in a rich training regime, which is associated with learning optimized representations that are robust to noise. This regime produces hierarchically structured representations similar to empirical cortical patterns. Together, these results reveal computational principles that organize multitask representations across the human cortex to support multitask cognition.
14. Relating local connectivity and global dynamics in recurrent excitatory-inhibitory networks. PLoS Comput Biol 2023; 19:e1010855. PMID: 36689488; PMCID: PMC9894562; DOI: 10.1371/journal.pcbi.1010855.
Abstract
How the connectivity of cortical networks determines the neural dynamics and the resulting computations is one of the key questions in neuroscience. Previous works have pursued two complementary approaches to quantify the structure in connectivity. One approach starts from the perspective of biological experiments where only the local statistics of connectivity motifs between small groups of neurons are accessible. Another approach is based instead on the perspective of artificial neural networks where the global connectivity matrix is known, and in particular its low-rank structure can be used to determine the resulting low-dimensional dynamics. A direct relationship between these two approaches is however currently missing. Specifically, it remains to be clarified how local connectivity statistics and the global low-rank connectivity structure are inter-related and shape the low-dimensional activity. To bridge this gap, here we develop a method for mapping local connectivity statistics onto an approximate global low-rank structure. Our method rests on approximating the global connectivity matrix using dominant eigenvectors, which we compute using perturbation theory for random matrices. We demonstrate that multi-population networks defined from local connectivity statistics for which the central limit theorem holds can be approximated by low-rank connectivity with Gaussian-mixture statistics. We specifically apply this method to excitatory-inhibitory networks with reciprocal motifs, and show that it yields reliable predictions for both the low-dimensional dynamics, and statistics of population activity. Importantly, it analytically accounts for the activity heterogeneity of individual neurons in specific realizations of local connectivity. 
Altogether, our approach allows us to disentangle the effects of mean connectivity and reciprocal motifs on the global recurrent feedback, and provides an intuitive picture of how local connectivity shapes global network dynamics.
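The mapping from local statistics to global low-rank structure can be illustrated in its simplest instance: when connection weights have population-dependent means, the mean part of the connectivity matrix is exactly rank one, and its single nonzero eigenvalue predicts an outlier eigenvalue of the full random matrix. A sketch; the population sizes and weight statistics are arbitrary illustrative values, and this is far simpler than the paper's perturbative construction.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 400
Ne, Ni = 320, 80                         # excitatory / inhibitory counts
mu_e, mu_i, g = 2.0 / N, -4.0 / N, 0.5   # mean weights, random strength

# Full connectivity = rank-one mean structure + zero-mean random part
m = np.r_[np.full(Ne, mu_e), np.full(Ni, mu_i)]   # common row of means
M = np.outer(np.ones(N), m)                        # every row equals m: rank one
J = M + g * rng.standard_normal((N, N)) / np.sqrt(N)

lam_mean = m.sum()                       # the single nonzero eigenvalue of M
outlier = np.abs(np.linalg.eigvals(J)).max()
```

Because the random part contributes only a bulk of eigenvalues inside a disk of radius g, the low-rank mean structure alone determines the dominant mode, which is the intuition behind approximating multi-population networks by low-rank connectivity.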
15. Cognition and the single neuron: How cell types construct the dynamic computations of frontal cortex. Curr Opin Neurobiol 2022; 77:102630. PMID: 36209695; PMCID: PMC10375540; DOI: 10.1016/j.conb.2022.102630.
Abstract
Frontal cortex is thought to underlie many advanced cognitive capacities, from self-control to long-term planning. Reflecting these diverse demands, frontal neural activity is notoriously idiosyncratic, with tuning properties that are correlated with endless numbers of behavioral and task features. This menagerie of tuning has made it difficult to extract organizing principles that govern frontal neural activity. Here, we contrast two successful yet seemingly incompatible approaches that have begun to address this challenge. Inspired by the indecipherability of single-neuron tuning, the first approach casts frontal computations as dynamical trajectories traversed by arbitrary mixtures of neurons. The second approach, by contrast, attempts to explain the functional diversity of frontal activity with the biological diversity of cortical cell types. Motivated by the recent discovery of functional clusters in frontal neurons, we propose a consilience between these population and cell-type-specific approaches to neural computation, advancing the conjecture that evolutionarily inherited cell-type constraints create the scaffold within which frontal population dynamics must operate.
16. The geometry of representational drift in natural and artificial neural networks. PLoS Comput Biol 2022; 18:e1010716. PMID: 36441762; PMCID: PMC9731438; DOI: 10.1371/journal.pcbi.1010716.
Abstract
Neurons in sensory areas encode and represent stimuli. Surprisingly, recent studies have suggested that, even during persistent performance, these representations are not stable and change over the course of days and weeks. We examine stimulus representations from fluorescence recordings across hundreds of neurons in the visual cortex using in vivo two-photon calcium imaging, and we corroborate previous studies finding that such representations change as experimental trials are repeated across days. This phenomenon has been termed "representational drift". In this study we geometrically characterize the properties of representational drift in the primary visual cortex of mice in two open datasets from the Allen Institute and propose a potential mechanism behind such drift. We observe representational drift both for passively presented stimuli and for stimuli which are behaviorally relevant. Across experiments, the drift differs from in-session variance and most often occurs along directions that have the most in-class variance, leading to a significant turnover in the neurons used for a given representation. Interestingly, despite this significant change due to drift, linear classifiers trained to distinguish neuronal representations show little to no degradation in performance across days. The features we observe in the neural data are similar to properties of artificial neural networks in which representations are updated by continual learning in the presence of dropout, i.e., a random masking of nodes or weights, but not other types of noise. We therefore conclude that representational drift in biological networks may be driven by an underlying dropout-like noise during continual learning, and that such a mechanism may be computationally advantageous for the brain in the same way it is for artificial neural networks, e.g., by preventing overfitting.
17. Non-Separability of Physical Systems as a Foundation of Consciousness. Entropy (Basel) 2022; 24:1539. PMID: 36359629; PMCID: PMC9689906; DOI: 10.3390/e24111539.
Abstract
A hypothesis is presented that non-separability of degrees of freedom is the fundamental property underlying consciousness in physical systems. The amount of consciousness in a system is determined by the extent of non-separability and the number of degrees of freedom involved. Non-interacting and feedforward systems have zero consciousness, whereas most systems of interacting particles appear to have low non-separability and consciousness. By contrast, brain circuits exhibit high complexity and weak but tightly coordinated interactions, which appear to support high non-separability and therefore a high amount of consciousness. The hypothesis applies to both classical and quantum cases, and we highlight the formalism employing the Wigner function (which in the classical limit becomes the Liouville density function) as a potentially fruitful framework for characterizing non-separability and, thus, the amount of consciousness in a system. The hypothesis appears to be consistent with both the Integrated Information Theory and the Orchestrated Objective Reduction theory and may help reconcile the two. It offers a natural explanation for the physical properties underlying the amount of consciousness and points to methods of estimating non-separability as promising ways of characterizing the amount of consciousness.
18. The spectrum of covariance matrices of randomly connected recurrent neuronal networks with linear dynamics. PLoS Comput Biol 2022; 18:e1010327. PMID: 35862445; PMCID: PMC9345493; DOI: 10.1371/journal.pcbi.1010327.
Abstract
A key question in theoretical neuroscience is the relation between the connectivity structure and the collective dynamics of a network of neurons. Here we study the connectivity-dynamics relation as reflected in the distribution of eigenvalues of the covariance matrix of the dynamic fluctuations of the neuronal activities, which is closely related to principal component analysis (PCA) of the network dynamics and the associated effective dimensionality. We consider the spontaneous fluctuations around a steady state in a randomly connected recurrent network of stochastic neurons. An exact analytical expression for the covariance eigenvalue distribution in the large-network limit can be obtained using results from random matrices. The distribution has a finitely supported smooth bulk spectrum and exhibits an approximate power-law tail for coupling matrices near the critical edge. We generalize the results to include second-order connectivity motifs and discuss extensions to excitatory-inhibitory networks. The theoretical results are compared with those from finite-size networks, and the effects of temporal and spatial sampling are studied. A preliminary application to whole-brain imaging data is presented. Using simple connectivity models, our work provides theoretical predictions for the covariance spectrum, a fundamental property of recurrent neuronal dynamics, that can be compared with experimental data.

Author summary: Here we study the distribution of eigenvalues, or spectrum, of the neuron-to-neuron covariance matrix in recurrently connected neuronal networks. The covariance spectrum is an important global feature of neuron population dynamics that requires simultaneous recordings of neurons. The spectrum is essential to the widely used principal component analysis (PCA) and generalizes the dimensionality measure of population dynamics.
We use a simple model to emulate the complex connections between neurons, where all pairs of neurons interact linearly at a strength specified randomly and independently. We derive a closed-form expression of the covariance spectrum, revealing an interesting long tail of large eigenvalues following a power law as the connection strength increases. To incorporate connectivity features important to biological neural circuits, we generalize the result to networks with an additional low-rank connectivity component that could come from learning and networks consisting of sparsely connected excitatory and inhibitory neurons. To facilitate comparing the theoretical results to experimental data, we derive the precise modifications needed to account for the effect of limited time samples and having unobserved neurons. Preliminary applications to large-scale calcium imaging data suggest our model can well capture the high dimensional population activity of neurons.
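For the linear-dynamics case the stationary covariance can be computed exactly from a Lyapunov equation, which gives a quick numerical check of how the covariance spectrum spreads as the coupling strength approaches the critical edge. A sketch, assuming unit-strength white noise input; the specific network size and coupling value are illustrative.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(1)
N, g = 300, 0.7                  # network size, coupling strength (g < 1: stable)
J = g * rng.standard_normal((N, N)) / np.sqrt(N)
A = -(np.eye(N) - J)             # linear dynamics dx/dt = A x + white noise

# Stationary covariance solves the Lyapunov equation A C + C A^T = -2 I
C = solve_continuous_lyapunov(A, -2.0 * np.eye(N))
lam = np.sort(np.linalg.eigvalsh(C))[::-1]     # covariance spectrum, descending
rel_dim = (lam.sum() ** 2 / (lam ** 2).sum()) / N   # relative PCA dimensionality
```

At g = 0 the spectrum collapses to a point (all eigenvalues equal); increasing g stretches the upper tail and drives the relative dimensionality well below one, the qualitative behavior the analytical spectrum describes.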
Collapse
|
19
|
Global organization of neuronal activity only requires unstructured local connectivity. eLife 2022; 11:e68422. [PMID: 35049496 PMCID: PMC8776256 DOI: 10.7554/elife.68422] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2021] [Accepted: 11/18/2021] [Indexed: 11/13/2022] Open
Abstract
Modern electrophysiological recordings simultaneously capture single-unit spiking activities of hundreds of neurons spread across large cortical distances. Yet this parallel activity is often confined to relatively low-dimensional manifolds, implying strong coordination even among neurons that are most likely not directly connected. Here, we combine in vivo recordings with network models and theory to characterize the nature of mesoscopic coordination patterns in macaque motor cortex and to expose their origin: we find that heterogeneity in local connectivity supports network states with complex long-range cooperation between neurons that arises from multi-synaptic, short-range connections. Our theory explains the experimentally observed spatial organization of covariances in resting-state recordings as well as the behaviorally related modulation of covariance patterns during a reach-to-grasp task. The ubiquity of heterogeneity in local cortical circuits suggests that the brain uses the described mechanism to flexibly adapt neuronal coordination to momentary demands.
Collapse
|
20
|
Skilled independent control of individual motor units via a non-invasive neuromuscular-machine interface. J Neural Eng 2021; 18. [PMID: 34727532 DOI: 10.1088/1741-2552/ac35ac] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2021] [Accepted: 11/02/2021] [Indexed: 11/11/2022]
Abstract
Objective. Brain-machine interfaces (BMIs) have the potential to augment human functions and restore independence in people with disabilities, yet a compromise between non-invasiveness and performance limits their relevance. Approach. Here, we hypothesized that a non-invasive neuromuscular-machine interface providing real-time neurofeedback of individual motor units within a muscle could enable independent motor unit control to an extent suitable for high-performance BMI applications. Main results. Over 6 days of training, eight participants progressively learned to skillfully and independently control three biceps brachii motor units to complete a 2D center-out task. We show that neurofeedback enabled motor unit activity that largely violated the recruitment constraints observed during ramp-and-hold isometric contractions, which were thought to limit individual motor unit controllability. Finally, participants demonstrated the suitability of individual motor units for powering general applications through a spelling task. Significance. These results illustrate the flexibility of the sensorimotor system and highlight individual motor units as a promising source of control for BMI applications.
Collapse
|
21
|
Low-Dimensional Dynamics of Brain Activity Associated with Manual Acupuncture in Healthy Subjects. SENSORS 2021; 21:s21227432. [PMID: 34833508 PMCID: PMC8619579 DOI: 10.3390/s21227432] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/24/2021] [Revised: 11/03/2021] [Accepted: 11/06/2021] [Indexed: 11/24/2022]
Abstract
Acupuncture is one of the oldest traditional medical treatments in Asian countries. However, a scientific explanation for the therapeutic effect of acupuncture is still lacking. A much-discussed hypothesis is that acupuncture’s effects are mediated via autonomic neural networks; nevertheless, the dynamic brain activity involved in the acupuncture response has not yet been elucidated. In this work, we hypothesized that there exists a lower-dimensional subspace of dynamic brain activity across subjects, underpinning the brain’s response to manual acupuncture stimulation. To this end, we employed a variational auto-encoder to probe the latent variables of multichannel EEG signals associated with acupuncture stimulation at the ST36 acupoint. The experimental results demonstrate that manual acupuncture stimuli can reduce the dimensionality of brain activity, which results from the enhancement of oscillatory activity in the delta and alpha frequency bands induced by acupuncture. Moreover, we found that large-scale brain activity could be constrained within a low-dimensional neural subspace spanned by the “acupuncture mode”. In each neural subspace, the steady-state dynamics of the brain in response to acupuncture stimuli converge to topologically similar elliptic-shaped attractors across subjects, and the attractor morphology is closely related to the frequency of the acupuncture stimulation. These results shed light on the large-scale brain response to manual acupuncture stimuli.
Collapse
|
22
|
Model Reduction Captures Stochastic Gamma Oscillations on Low-Dimensional Manifolds. Front Comput Neurosci 2021; 15:678688. [PMID: 34489666 PMCID: PMC8418102 DOI: 10.3389/fncom.2021.678688] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2021] [Accepted: 07/23/2021] [Indexed: 12/02/2022] Open
Abstract
Gamma frequency oscillations (25–140 Hz), observed in the neural activities within many brain regions, have long been regarded as a physiological basis underlying many brain functions, such as memory and attention. Among numerous theoretical and computational modeling studies, gamma oscillations have been found in biologically realistic spiking network models of the primary visual cortex. However, due to its high dimensionality and strong non-linearity, it is generally difficult to perform detailed theoretical analysis of the emergent gamma dynamics. Here we propose a suite of Markovian model reduction methods with varying levels of complexity and apply them to spiking network models exhibiting heterogeneous dynamical regimes, ranging from nearly homogeneous firing to strong synchrony in the gamma band. The reduced models not only successfully reproduce gamma oscillations in the full model but also exhibit the same dynamical features as parameters are varied. Most remarkably, the invariant measure of the coarse-grained Markov process reveals a two-dimensional surface in state space on which the gamma dynamics mainly reside. Our results suggest that the statistical features of gamma oscillations strongly depend on the subthreshold neuronal distributions. Because of the generality of the Markovian assumptions, our dimensional reduction methods offer a powerful toolbox for theoretical examinations of other complex cortical spatio-temporal behaviors observed in both neurophysiological experiments and numerical simulations.
Collapse
|
23
|
Bridging neuronal correlations and dimensionality reduction. Neuron 2021; 109:2740-2754.e12. [PMID: 34293295 PMCID: PMC8505167 DOI: 10.1016/j.neuron.2021.06.028] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2020] [Revised: 05/05/2021] [Accepted: 06/25/2021] [Indexed: 01/01/2023]
Abstract
Two commonly used approaches to study interactions among neurons are spike count correlation, which describes pairs of neurons, and dimensionality reduction, applied to a population of neurons. Although both approaches have been used to study trial-to-trial neuronal variability correlated among neurons, they are often used in isolation and have not been directly related. We first established concrete mathematical and empirical relationships between pairwise correlation and metrics of population-wide covariability based on dimensionality reduction. Applying these insights to macaque V4 population recordings, we found that the previously reported decrease in mean pairwise correlation associated with attention stemmed from three distinct changes in population-wide covariability. Overall, our work builds the intuition and formalism to bridge between pairwise correlation and population-wide covariability and presents a cautionary tale about the inferences one can make about population activity by using a single statistic, whether it be mean pairwise correlation or dimensionality.
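A minimal sketch of the bridge described above: the same shared variability shows up both as mean pairwise correlation and as population-wide covariability along a dominant mode. (Synthetic counts with a single shared latent; all parameters are illustrative, not taken from the V4 data.)

```python
import numpy as np

# Synthetic trial-to-trial activity: one shared covariability mode plus private noise.
rng = np.random.default_rng(1)
n_neurons, n_trials = 50, 2000
latent = rng.normal(size=n_trials)                    # shared covariability mode
loadings = rng.uniform(0.5, 1.0, size=n_neurons)      # per-neuron coupling to the mode
counts = np.outer(latent, loadings) + rng.normal(size=(n_trials, n_neurons))

# Pairwise view: mean spike-count correlation across all neuron pairs.
R = np.corrcoef(counts.T)
mean_rsc = R[np.triu_indices(n_neurons, k=1)].mean()

# Population view: fraction of variance along the dominant covariance eigenmode.
eigs = np.linalg.eigvalsh(np.cov(counts.T))[::-1]
shared_frac = eigs[0] / eigs.sum()
print(f"mean pairwise correlation {mean_rsc:.2f}; top-mode variance fraction {shared_frac:.2f}")
```

Scaling the loadings up or down moves both statistics together, while changing only the spread of the loadings can dissociate them, which is the kind of distinction the paper formalizes.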
Collapse
|
24
|
Novelty and imitation within the brain: a Darwinian neurodynamic approach to combinatorial problems. Sci Rep 2021; 11:12513. [PMID: 34131159 PMCID: PMC8206098 DOI: 10.1038/s41598-021-91489-5] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2021] [Accepted: 05/21/2021] [Indexed: 02/05/2023] Open
Abstract
Efficient search in vast combinatorial spaces, such as those of possible action sequences, linguistic structures, or causal explanations, is an essential component of intelligence. Is there any computational domain that is flexible enough to provide solutions to such diverse problems and can be robustly implemented over neural substrates? Based on previous accounts, we propose that a Darwinian process, operating over sequential cycles of imperfect copying and selection of neural informational patterns, is a promising candidate. Here we implement imperfect information copying through one reservoir computing unit teaching another. Teacher and learner roles are assigned dynamically based on evaluation of the readout signal. We demonstrate that the emerging Darwinian population of readout activity patterns is capable of maintaining and continually improving upon existing solutions over rugged combinatorial reward landscapes. We also demonstrate the existence of a sharp error threshold, a neural noise level beyond which information accumulated by an evolutionary process cannot be maintained. We introduce a novel analysis method, neural phylogenies, that displays the unfolding of the neural-evolutionary process.
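The core loop above (imperfect copying plus selection, with an error threshold in the copying noise) can be caricatured in a few lines. This is a hypothetical numeric landscape for illustration only; the paper's actual implementation uses coupled reservoir-computing units.

```python
import numpy as np

# Toy Darwinian process: a population of solution patterns imperfectly copies the
# currently best pattern each generation; fitness is distance to a hidden target.
rng = np.random.default_rng(2)

def evolve(copy_noise, n_gen=300, pop=20, dim=16):
    target = rng.normal(size=dim)                  # optimum of the toy landscape
    population = rng.normal(size=(pop, dim))
    for _ in range(n_gen):
        fitness = -np.linalg.norm(population - target, axis=1)
        best = population[np.argmax(fitness)]
        # all individuals imperfectly copy the currently best pattern
        population = best + copy_noise * rng.normal(size=(pop, dim))
    return -np.linalg.norm(population - target, axis=1).min()  # best final fitness

low_noise, high_noise = evolve(0.05), evolve(2.0)  # below vs above the error threshold
print(f"best final fitness: low noise {low_noise:.2f}, high noise {high_noise:.2f}")
```

Below the error threshold the population accumulates and retains improvements; above it, each copying step destroys more information than selection recovers.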
Collapse
|
25
|
Abstract
Cognition can be defined as computation over meaningful representations in the brain to produce adaptive behaviour. There are two views on the relationship between cognition and the brain that are largely implicit in the literature. The Sherringtonian view seeks to explain cognition as the result of operations on signals performed at nodes in a network and passed between them that are implemented by specific neurons and their connections in circuits in the brain. The contrasting Hopfieldian view explains cognition as the result of transformations between or movement within representational spaces that are implemented by neural populations. Thus, the Hopfieldian view relegates details regarding the identity of and connections between specific neurons to the status of secondary explainers. Only the Hopfieldian approach has the representational and computational resources needed to develop novel neurofunctional objects that can serve as primary explainers of cognition.
Collapse
|
26
|
Neural manifold under plasticity in a goal driven learning behaviour. PLoS Comput Biol 2021; 17:e1008621. [PMID: 33544700 PMCID: PMC7864452 DOI: 10.1371/journal.pcbi.1008621] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2020] [Accepted: 12/08/2020] [Indexed: 11/19/2022] Open
Abstract
Neural activity is often low dimensional and dominated by only a few prominent neural covariation patterns. It has been hypothesised that these covariation patterns could form the building blocks used for fast and flexible motor control. Supporting this idea, recent experiments have shown that monkeys can learn to adapt their neural activity in motor cortex on a timescale of minutes, given that the change lies within the original low-dimensional subspace, also called neural manifold. However, the neural mechanism underlying this within-manifold adaptation remains unknown. Here, we show in a computational model that modification of recurrent weights, driven by a learned feedback signal, can account for the observed behavioural difference between within- and outside-manifold learning. Our findings give a new perspective, showing that recurrent weight changes do not necessarily lead to change in the neural manifold. On the contrary, successful learning is naturally constrained to a common subspace.
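The within- versus outside-manifold distinction above has a simple operational form: estimate the manifold as the top principal components of recorded activity, then measure how much of a target pattern lies inside that subspace. The sketch below uses synthetic low-rank activity with illustrative sizes, not the experimental recordings.

```python
import numpy as np

# Synthetic activity generated from a few latent factors plus noise.
rng = np.random.default_rng(6)
n_neurons, n_samples, n_latent = 40, 500, 5
latents = rng.normal(size=(n_samples, n_latent))
mixing = rng.normal(size=(n_latent, n_neurons))
activity = latents @ mixing + 0.1 * rng.normal(size=(n_samples, n_neurons))

# Neural manifold = top principal components of the centered activity.
_, _, Vt = np.linalg.svd(activity - activity.mean(0), full_matrices=False)
manifold = Vt[:n_latent]                           # orthonormal basis (n_latent x n_neurons)

def in_manifold_fraction(pattern):
    proj = manifold @ pattern
    return float(proj @ proj / (pattern @ pattern))  # variance captured by the manifold

within = in_manifold_fraction(mixing.T @ rng.normal(size=n_latent))  # within-manifold target
outside = in_manifold_fraction(rng.normal(size=n_neurons))           # random direction
print(f"within-manifold fraction {within:.2f}; random-direction fraction {outside:.2f}")
```

A perturbation or BMI decoder whose required patterns score high on this fraction corresponds to within-manifold learning; a low score marks an outside-manifold target.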
Collapse
|
27
|
Measuring and modeling whole-brain neural dynamics in Caenorhabditis elegans. Curr Opin Neurobiol 2020; 65:167-175. [PMID: 33279794 PMCID: PMC7801769 DOI: 10.1016/j.conb.2020.11.001] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2020] [Revised: 11/06/2020] [Accepted: 11/07/2020] [Indexed: 11/19/2022]
Abstract
The compact nervous system of the nematode Caenorhabditis elegans makes it a powerful playground to study how neural dynamics constrained by neuroanatomy generate neural function and behavior. The ability to record neural activity from the whole brain simultaneously in this worm has opened several research avenues and is providing insights into brain-wide neural coding of locomotion, sleep, and other behaviors. We review these findings and the development of new methods, including new microscopes, new genetic tools, and new modeling approaches. We conclude with a discussion of the role of theory in interpreting or driving new experiments in C. elegans and potential paths forward.
Collapse
|
28
|
Abstract
Our decisions often depend on multiple sensory experiences separated by time delays. The brain can remember these experiences and, simultaneously, estimate the timing between events. To understand the mechanisms underlying working memory and time encoding, we analyze neural activity recorded during delays in four experiments on nonhuman primates. To disambiguate potential mechanisms, we propose two analyses, namely, decoding the passage of time from neural data and computing the cumulative dimensionality of the neural trajectory over time. Time can be decoded with high precision in tasks where timing information is relevant and with lower precision when irrelevant for performing the task. Neural trajectories are always observed to be low-dimensional. In addition, our results further constrain the mechanisms underlying time encoding as we find that the linear "ramping" component of each neuron's firing rate strongly contributes to the slow timescale variations that make decoding time possible. These constraints rule out working memory models that rely on constant, sustained activity and neural networks with high-dimensional trajectories, like reservoir networks. Instead, recurrent networks trained with backpropagation capture the time-encoding properties and the dimensionality observed in the data.
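One of the two analyses above, the dimensionality of the neural trajectory, can be sketched with the participation ratio, contrasting a ramping trajectory with a reservoir-like unstructured one. (Synthetic data; parameters are illustrative.)

```python
import numpy as np

rng = np.random.default_rng(3)
T, N = 200, 30
time = np.linspace(0.0, 1.0, T)[:, None]

def participation_ratio(X):
    """Effective dimensionality (sum lambda)^2 / sum lambda^2 of a trajectory X (T x N)."""
    eigs = np.clip(np.linalg.eigvalsh(np.cov(X.T)), 0.0, None)
    return eigs.sum() ** 2 / (eigs ** 2).sum()

# A linear "ramping" trajectory: one slow component shared across neurons, plus noise.
ramp_traj = time @ rng.normal(size=(1, N)) + 0.05 * rng.normal(size=(T, N))
# A reservoir-like trajectory: unstructured, high-dimensional fluctuations.
random_traj = rng.normal(size=(T, N))

d_ramp, d_rand = participation_ratio(ramp_traj), participation_ratio(random_traj)
print(f"ramping trajectory dimensionality {d_ramp:.1f}; random trajectory {d_rand:.1f}")
```

The low dimensionality of the ramping trajectory mirrors the constraint the paper uses to rule out high-dimensional reservoir-style mechanisms.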
Collapse
|
29
|
Engineering recurrent neural networks from task-relevant manifolds and dynamics. PLoS Comput Biol 2020; 16:e1008128. [PMID: 32785228 PMCID: PMC7446915 DOI: 10.1371/journal.pcbi.1008128] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2019] [Revised: 08/24/2020] [Accepted: 07/08/2020] [Indexed: 12/11/2022] Open
Abstract
Many cognitive processes involve transformations of distributed representations in neural populations, creating a need for population-level models. Recurrent neural network models fulfill this need, but there are many open questions about how their connectivity gives rise to dynamics that solve a task. Here, we present a method for finding the connectivity of networks for which the dynamics are specified to solve a task in an interpretable way. We apply our method to a working memory task by synthesizing a network that implements a drift-diffusion process over a ring-shaped manifold. We also use our method to demonstrate how inputs can be used to control network dynamics for cognitive flexibility and explore the relationship between representation geometry and network capacity. Our work fits within the broader context of understanding neural computations as dynamics over relatively low-dimensional manifolds formed by correlated patterns of neurons.
Collapse
|
30
|
Impact of higher order network structure on emergent cortical activity. Netw Neurosci 2020; 4:292-314. [PMID: 32181420 PMCID: PMC7069066 DOI: 10.1162/netn_a_00124] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2019] [Accepted: 12/23/2019] [Indexed: 11/04/2022] Open
Abstract
Synaptic connectivity between neocortical neurons is highly structured. The network structure of synaptic connectivity includes first-order properties that can be described by pairwise statistics, such as strengths of connections between different neuron types and distance-dependent connectivity, and higher order properties, such as an abundance of cliques of all-to-all connected neurons. The relative impact of first- and higher order structure on emergent cortical network activity is unknown. Here, we compare network structure and emergent activity in two neocortical microcircuit models with different synaptic connectivity. Both models have a similar first-order structure, but only one model includes higher order structure arising from morphological diversity within neuronal types. We find that such morphological diversity leads to more heterogeneous degree distributions, increases the number of cliques, and contributes to a small-world topology. The increase in higher order network structure is accompanied by more nuanced changes in neuronal firing patterns, such as an increased dependence of pairwise correlations on the positions of neurons in cliques. Our study shows that circuit models with very similar first-order structure of synaptic connectivity can have a drastically different higher order network structure, and suggests that the higher order structure imposed by morphological diversity within neuronal types has an impact on emergent cortical activity.
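One higher-order statistic discussed above, the abundance of all-to-all connected triplets, can be counted directly from an adjacency matrix. The sketch below uses an Erdos-Renyi-style random directed graph with illustrative size and connection probability, just to show the counting and the chance-level baseline.

```python
import numpy as np

rng = np.random.default_rng(4)
N, p = 300, 0.1
A = (rng.random((N, N)) < p).astype(int)       # directed adjacency matrix
np.fill_diagonal(A, 0)                          # no autapses

B = A & A.T                                     # reciprocal (bidirectional) edges
# Closed triangles of reciprocal edges = all-to-all connected triplets;
# trace(B^3) counts each such triangle 6 times.
n_cliques = int(np.trace(np.linalg.matrix_power(B, 3)) // 6)
expected = (N * (N - 1) * (N - 2) / 6) * (p ** 2) ** 3  # chance-level expectation
print(f"all-to-all triplets: observed {n_cliques}, chance expectation {expected:.1f}")
```

In the cortical models compared above, morphological diversity pushes the observed count well above this chance baseline; the same trace-based count applied to their connectomes quantifies that excess.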
Collapse
|
31
|
Bridging Single Neuron Dynamics to Global Brain States. Front Syst Neurosci 2019; 13:75. [PMID: 31866837 PMCID: PMC6908479 DOI: 10.3389/fnsys.2019.00075] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2019] [Accepted: 11/19/2019] [Indexed: 11/13/2022] Open
Abstract
Biological neural networks produce information backgrounds of multi-scale spontaneous activity that become more complex in brain states displaying higher capacities for cognition, for instance, attentive awake versus asleep or anesthetized states. Here, we review brain state-dependent mechanisms spanning ion channel currents (microscale) to the dynamics of brain-wide, distributed, transient functional assemblies (macroscale). Not unlike how microscopic interactions between molecules underlie structures formed in macroscopic states of matter, using statistical physics, the dynamics of microscopic neural phenomena can be linked to macroscopic brain dynamics through mesoscopic scales. Beyond spontaneous dynamics, stimuli are observed to evoke collapses of complexity, most remarkably over the high-dimensional, asynchronous, irregular background dynamics of conscious states. In contrast, complexity may not be collapsed much further beyond the synchrony and regularity characteristic of unconscious spontaneous activity. We propose that the increased dimensionality of spontaneous dynamics during conscious states supports responsiveness, enhancing neural networks' emergent capacity to robustly encode information over multiple scales.
Collapse
|
32
|
Re-evaluating Circuit Mechanisms Underlying Pattern Separation. Neuron 2019; 101:584-602. [PMID: 30790539 PMCID: PMC7028396 DOI: 10.1016/j.neuron.2019.01.044] [Citation(s) in RCA: 118] [Impact Index Per Article: 23.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2018] [Revised: 01/07/2019] [Accepted: 01/18/2019] [Indexed: 11/22/2022]
Abstract
When animals interact with complex environments, their neural circuits must separate overlapping patterns of activity that represent sensory and motor information. Pattern separation is thought to be a key function of several brain regions, including the cerebellar cortex, insect mushroom body, and dentate gyrus. However, recent findings have questioned long-held ideas on how these circuits perform this fundamental computation. Here, we re-evaluate the functional and structural mechanisms underlying pattern separation. We argue that the dimensionality of the space available for population codes representing sensory and motor information provides a common framework for understanding pattern separation. We then discuss how these three circuits use different strategies to separate activity patterns and facilitate associative learning in the presence of trial-to-trial variability.
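One separation strategy shared by the circuits discussed above is expansion recoding with thresholding: project overlapping patterns into a much larger population and sparsify, which drives similar inputs apart. The sketch below uses cerebellum-style random divergence with illustrative sizes and threshold, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n_in, n_out = 50, 1000
x1 = rng.normal(size=n_in)
x2 = x1 + 0.3 * rng.normal(size=n_in)               # an overlapping, similar input pattern

W = rng.normal(size=(n_out, n_in)) / np.sqrt(n_in)  # random expansion weights

def expand(x, theta=1.0):
    return np.maximum(W @ x - theta, 0.0)           # thresholding yields a sparse code

def overlap(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

in_overlap = overlap(x1, x2)
out_overlap = overlap(expand(x1), expand(x2))
print(f"input overlap {in_overlap:.2f} -> expanded, sparse overlap {out_overlap:.2f}")
```

The drop in overlap after expansion illustrates the increase in coding-space dimensionality that the review proposes as the common framework for pattern separation.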
Collapse
|