1
An operating principle of the cerebral cortex, and a cellular mechanism for attentional trial-and-error pattern learning and useful classification extraction. Front Neural Circuits 2024; 18:1280604. PMID: 38505865; PMCID: PMC10950307; DOI: 10.3389/fncir.2024.1280604.
Abstract
A feature of the brains of intelligent animals is the ability to learn to respond to an ensemble of active neuronal inputs with a behaviorally appropriate ensemble of active neuronal outputs. Previously, a hypothesis was proposed for how this mechanism is implemented at the cellular level within the neocortical pyramidal neuron: the apical tuft or perisomatic inputs initiate "guess" neuron firings, while the basal dendrites identify input patterns based on excited synaptic clusters, with the cluster excitation strength adjusted based on reward feedback. This simple mechanism allows neurons to learn to classify their inputs in a surprisingly intelligent manner. Here, we revise and extend this hypothesis. We modify synaptic plasticity rules to align with behavioral time scale synaptic plasticity (BTSP) observed in hippocampal area CA1, making the framework more biophysically and behaviorally plausible. The neurons for the guess firings are selected in a voluntary manner via feedback connections to apical tufts in neocortical layer 1, leading to dendritic Ca2+ spikes with burst firing, which are postulated to be neural correlates of attentional, aware processing. Once learned, the neuronal input classification is executed without voluntary or conscious control, enabling hierarchical incremental learning of classifications that is effective in our inherently classifiable world. Beyond voluntary bursts, we propose that pyramidal neuron burst firing can also be involuntary, likewise initiated via apical tuft inputs, drawing attention toward important cues such as novelty and noxious stimuli. We classify the excitations of neocortical pyramidal neurons into four categories based on their excitation pathway: attentional versus automatic and voluntary/acquired versus involuntary.
Additionally, we hypothesize that dendrites within pyramidal neuron minicolumn bundles are coupled via depolarization cross-induction, enabling minicolumn functions such as the creation of powerful hierarchical "hyperneurons" and the internal representation of the external world. We suggest building blocks to extend the microcircuit theory to network-level processing, which, interestingly, yields variants resembling the artificial neural networks currently in use. On a more speculative note, we conjecture that principles of intelligence in universes governed by certain types of physical laws might resemble ours.
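The guess-then-adjust mechanism described above can be caricatured in a few lines (a toy illustration of the hypothesis under invented parameters, not the authors' biophysical model): a unit fires on an input pattern either because its learned drive is strong or as a random "guess", and reward feedback then scales the weights of whichever inputs were active.

```python
import numpy as np

def train_guess_and_adjust(patterns, rewards, n_epochs=100, lr=0.3,
                           guess_p=0.3, seed=0):
    """Toy reward-gated learning: the unit fires on a pattern either via its
    learned drive or via a random 'guess'; the ensuing reward feedback then
    strengthens or weakens the weights of the inputs that were active."""
    rng = np.random.default_rng(seed)
    w = np.zeros(patterns.shape[1])
    for _ in range(n_epochs):
        for x, r in zip(patterns, rewards):
            fired = (w @ x > 0.5) or (rng.random() < guess_p)
            if fired:
                w += lr * r * x   # reward-gated weight change on active inputs
    return w

patterns = np.array([[1, 1, 0, 0],
                     [0, 0, 1, 1]], dtype=float)
rewards = np.array([1.0, -1.0])   # firing is rewarded only for pattern 0
w = train_guess_and_adjust(patterns, rewards)
```

After training, the unit fires reliably for the rewarded pattern and stays silent for the punished one, despite starting from zero weights and purely random initial guesses.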
2
Neural heterogeneity controls computations in spiking neural networks. Proc Natl Acad Sci U S A 2024; 121:e2311885121. PMID: 38198531; PMCID: PMC10801870; DOI: 10.1073/pnas.2311885121.
Abstract
The brain is composed of complex networks of interacting neurons that express considerable heterogeneity in their physiology and spiking characteristics. How does this neural heterogeneity influence macroscopic neural dynamics, and how might it contribute to neural computation? In this work, we use a mean-field model to investigate computation in heterogeneous neural networks, by studying how the heterogeneity of cell spiking thresholds affects three key computational functions of a neural population: the gating, encoding, and decoding of neural signals. Our results suggest that heterogeneity serves different computational functions in different cell types. In inhibitory interneurons, varying the degree of spike threshold heterogeneity allows them to gate the propagation of neural signals in a reciprocally coupled excitatory population. Whereas homogeneous interneurons impose synchronized dynamics that narrow the dynamic repertoire of the excitatory neurons, heterogeneous interneurons act as an inhibitory offset while preserving excitatory neuron function. Spike threshold heterogeneity also controls the entrainment properties of neural networks to periodic input, thus affecting the temporal gating of synaptic inputs. Among excitatory neurons, heterogeneity increases the dimensionality of neural dynamics, improving the network's capacity to perform decoding tasks. Conversely, homogeneous networks suffer in their capacity for function generation, but excel at encoding signals via multistable dynamic regimes. Drawing from these findings, we propose intra-cell-type heterogeneity as a mechanism for sculpting the computational properties of local circuits of excitatory and inhibitory spiking neurons, permitting the same canonical microcircuit to be tuned for diverse computational tasks.
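One way heterogeneity widens a population's operating range can be shown with a deliberately minimal sketch (not the paper's mean-field model, and with assumed threshold statistics): neurons with identical thresholds produce a step-like population activation curve, while dispersed thresholds smooth it into a sigmoid, broadening the range of inputs over which the output varies.

```python
import numpy as np

def population_activation(drive, thresholds):
    """Fraction of neurons whose input drive exceeds their spike threshold."""
    return (drive[:, None] >= thresholds[None, :]).mean(axis=1)

def dynamic_range(act, drive):
    """Span of drives over which activation climbs from 10% to 90%."""
    lo = drive[np.searchsorted(act, 0.1)]
    hi = drive[np.searchsorted(act, 0.9)]
    return hi - lo

n_neurons = 1000
rng = np.random.default_rng(0)
homogeneous = np.full(n_neurons, 1.0)             # identical thresholds
heterogeneous = rng.normal(1.0, 0.3, n_neurons)   # dispersed thresholds

drive = np.linspace(0.0, 2.0, 201)
act_hom = population_activation(drive, homogeneous)
act_het = population_activation(drive, heterogeneous)
```

Here the homogeneous population's 10-to-90% transition collapses to a single grid step, while the heterogeneous population responds gradually over a wide band of drives, the graded behavior the abstract attributes to threshold-heterogeneous interneurons.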
3
Excitation creates a distributed pattern of cortical suppression due to varied recurrent input. Neuron 2023; 111:4086-4101.e5. PMID: 37865083; PMCID: PMC10872553; DOI: 10.1016/j.neuron.2023.09.010.
Abstract
Dense local, recurrent connections are a major feature of cortical circuits, yet how they affect neurons' responses has been unclear, with some studies reporting weak recurrent effects, some reporting amplification, and others indicating local suppression. Here, we show that optogenetic input to mouse V1 excitatory neurons generates salt-and-pepper patterns of both excitation and suppression. Responses in individual neurons are not strongly predicted by that neuron's direct input. A balanced-state network model reconciles a set of diverse observations: the observed dynamics, suppressed responses, decoupling of input and output, and long tail of excited responses. The model shows recurrent excitatory-excitatory connections are strong and also variable across neurons. Together, these results demonstrate that excitatory recurrent connections can have major effects on cortical computations by shaping and changing neurons' responses to input.
4
A thermodynamical model of non-deterministic computation in cortical neural networks. Phys Biol 2023; 21:016003. PMID: 38078366; DOI: 10.1088/1478-3975/ad0f2d.
Abstract
Neuronal populations in the cerebral cortex engage in probabilistic coding, effectively encoding the state of the surrounding environment with high accuracy and extraordinary energy efficiency. A new approach models the inherently probabilistic nature of cortical neuron signaling outcomes as a thermodynamic process of non-deterministic computation. A mean field approach is used, with the trial Hamiltonian maximizing available free energy and minimizing the net quantity of entropy, compared with a reference Hamiltonian. Thermodynamic quantities are always conserved during the computation; free energy must be expended to produce information, and free energy is released during information compression, as correlations are identified between the encoding system and its surrounding environment. Due to the relationship between the Gibbs free energy equation and the Nernst equation, any increase in free energy is paired with a local decrease in membrane potential. As a result, this process of thermodynamic computation adjusts the likelihood of each neuron firing an action potential. This model shows that non-deterministic signaling outcomes can be achieved by noisy cortical neurons, through an energy-efficient computational process that involves optimally redistributing a Hamiltonian over some time evolution. Calculations demonstrate that the energy efficiency of the human brain is consistent with this model of non-deterministic computation, with net entropy production far too low to retain the assumptions of a classical system.
5
A neural mechanism for terminating decisions. Neuron 2023; 111:2601-2613.e5. PMID: 37352857; PMCID: PMC10565788; DOI: 10.1016/j.neuron.2023.05.028.
Abstract
The brain makes decisions by accumulating evidence until there is enough to stop and choose. Neural mechanisms of evidence accumulation are established in association cortex, but the site and mechanism of termination are unknown. Here, we show that the superior colliculus (SC) plays a causal role in terminating decisions, and we provide evidence for a mechanism by which this occurs. We recorded simultaneously from neurons in the lateral intraparietal area (LIP) and SC while monkeys made perceptual decisions. Despite similar trial-averaged activity, we found distinct single-trial dynamics in the two areas: LIP displayed drift-diffusion dynamics and SC displayed bursting dynamics. We hypothesized that the bursts manifest a threshold mechanism applied to signals represented in LIP to terminate the decision. Consistent with this hypothesis, SC inactivation produced behavioral effects diagnostic of an impaired threshold sensor and prolonged the buildup of activity in LIP. The results reveal the transformation from deliberation to commitment.
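The accumulate-to-bound account can be sketched in a few lines (a generic drift-diffusion simulation with assumed parameters, not the authors' fitted model): noisy evidence accumulates until a downstream threshold detector terminates the trial, and raising that threshold, loosely analogous to impairing the SC threshold sensor, prolongs the buildup before commitment.

```python
import numpy as np

def simulate_ddm(drift, bound, n_trials, dt=1e-3, noise=1.0, seed=0):
    """Accumulate noisy evidence until it crosses +/-bound (a threshold
    applied downstream of the accumulator); return decision times (s)."""
    rng = np.random.default_rng(seed)
    rts = []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < bound:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t)
    return np.array(rts)

rt_normal = simulate_ddm(drift=1.0, bound=1.0, n_trials=200)
rt_raised = simulate_ddm(drift=1.0, bound=1.5, n_trials=200)
```

With the termination threshold raised, mean decision time grows, mirroring the prolonged LIP buildup reported after SC inactivation.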
6
Abstract
How neurons detect the direction of motion is a prime example of neural computation: Motion vision is found in the visual systems of virtually all sighted animals, it is important for survival, and it requires interesting computations with well-defined linear and nonlinear processing steps-yet the whole process is of moderate complexity. The genetic methods available in the fruit fly Drosophila and the charting of a connectome of its visual system have led to rapid progress and unprecedented detail in our understanding of how neurons compute the direction of motion in this organism. The picture that emerged incorporates not only the identity, morphology, and synaptic connectivity of each neuron involved but also its neurotransmitters, its receptors, and their subcellular localization. Together with the neurons' membrane potential responses to visual stimulation, this information provides the basis for a biophysically realistic model of the circuit that computes the direction of visual motion.
7
Object-centered population coding in CA1 of the hippocampus. Neuron 2023; 111:2091-2104.e14. PMID: 37148872; DOI: 10.1016/j.neuron.2023.04.008.
Abstract
Objects and landmarks are crucial for guiding navigation and must be integrated into the cognitive map of space. Studies of object coding in the hippocampus have primarily focused on activity of single cells. Here, we record simultaneously from large numbers of hippocampal CA1 neurons to determine how the presence of a salient object in the environment alters single-neuron and neural-population activity of the area. The majority of the cells showed some change in their spatial firing patterns when the object was introduced. At the neural-population level, these changes were systematically organized according to the animal's distance from the object. This organization was widely distributed across the cell sample, suggesting that some features of cognitive maps-including object representation-are best understood as emergent properties of neural populations.
8
Amplified cortical neural responses as animals learn to use novel activity patterns. Curr Biol 2023; 33:2163-2174.e4. PMID: 37148876; DOI: 10.1016/j.cub.2023.04.032.
Abstract
Cerebral cortex supports representations of the world in patterns of neural activity, used by the brain to make decisions and guide behavior. Past work has found diverse, or limited, changes in the primary sensory cortex in response to learning, suggesting that the key computations might occur in downstream regions. Alternatively, sensory cortical changes may be central to learning. We studied cortical learning by using controlled inputs we insert: we trained mice to recognize entirely novel, non-sensory patterns of cortical activity in the primary visual cortex (V1) created by optogenetic stimulation. As animals learned to use these novel patterns, we found that their detection abilities improved by an order of magnitude or more. The behavioral change was accompanied by large increases in V1 neural responses to fixed optogenetic input. Neural response amplification to novel optogenetic inputs had little effect on existing visual sensory responses. A recurrent cortical model shows that this amplification can be achieved by a small mean shift in recurrent network synaptic strength. Amplification would seem to be desirable to improve decision-making in a detection task; therefore, these results suggest that adult recurrent cortical plasticity plays a significant role in improving behavioral performance during learning.
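The claim that a small mean shift in recurrent strength yields large amplification can be illustrated with a linear rate network (a sketch with assumed numbers, not the paper's recurrent cortical model): at steady state r = W r + h, the gain on a stimulated pattern scales roughly as 1/(1 - w), so nudging the mean recurrent weight w from 0.5 to 0.9 multiplies the gain about fivefold.

```python
import numpy as np

def pattern_gain(n, mean_w, seed=0):
    """Steady-state gain ||r|| / ||h|| of a linear recurrent network
    r = W r + h, driven by a fixed uniform 'optogenetic' input pattern."""
    rng = np.random.default_rng(seed)
    # Random recurrent weights with a controllable mean strength.
    W = rng.normal(mean_w / n, 0.1 / np.sqrt(n), size=(n, n))
    h = np.ones(n) / np.sqrt(n)                # unit-norm stimulated ensemble
    r = np.linalg.solve(np.eye(n) - W, h)      # solve (I - W) r = h
    return np.linalg.norm(r)

gain_before = pattern_gain(n=200, mean_w=0.5)  # roughly 1 / (1 - 0.5)
gain_after = pattern_gain(n=200, mean_w=0.9)   # roughly 1 / (1 - 0.9)
```

A modest change in mean synaptic strength thus produces a disproportionate response amplification, the order-of-magnitude improvement in detectability the abstract describes.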
9
Attention along the cortical hierarchy: Development matters. Wiley Interdiscip Rev Cogn Sci 2023; 14:e1575. PMID: 34480779; DOI: 10.1002/wcs.1575.
Abstract
We build on the existing biased competition view to argue that attention is an emergent property of neural computations within and across hierarchically embedded and structurally connected cortical pathways. Critically then, one must ask, what is attention emergent from? Within this framework, developmental changes in the quality of sensory input and feedforward-feedback information flow shape the emergence and efficiency of attention. Several gradients of developing structural and functional cortical architecture across the caudal-to-rostral axis provide the substrate for attention to emerge. Neural activity within visual areas depends on neuronal density, receptive field size, tuning properties of neurons, and the location of and competition between features and objects in the visual field. These visual cortical properties highlight the information processing bottleneck attention needs to resolve. Recurrent feedforward and feedback connections convey sensory information through a series of steps at each level of the cortical hierarchy, integrating sensory information across the entire extent of the cortical hierarchy and linking sensory processing to higher-order brain regions. Higher-order regions concurrently provide input conveying behavioral context and goals. Thus, attention reflects the output of a series of complex biased competition neural computations that occur within and across hierarchically embedded cortical regions. Cortical development proceeds along the caudal-to-rostral axis, mirroring the flow in sensory information from caudal to rostral regions, and visual processing continues to develop into childhood. Examining both typical and atypical development will offer critical mechanistic insight not otherwise available in the adult stable state. This article is categorized under: Psychology > Attention.
10
Systematic reduction of the dimensionality of natural scenes allows accurate predictions of retinal ganglion cell spike outputs. Proc Natl Acad Sci U S A 2022; 119:e2121744119. PMID: 36343230; PMCID: PMC9674269; DOI: 10.1073/pnas.2121744119.
Abstract
The mammalian retina engages a broad array of linear and nonlinear circuit mechanisms to convert natural scenes into retinal ganglion cell (RGC) spike outputs. Although many individual integration mechanisms are well understood, we know less about how multiple mechanisms interact to encode the complex spatial features present in natural inputs. Here, we identified key spatial features in natural scenes that shape encoding by primate parasol RGCs. Our approach identified simplifications in the spatial structure of natural scenes that minimally altered RGC spike responses. We observed that reducing natural movies into 16 linearly integrated regions described ∼80% of the structure of parasol RGC spike responses; this performance depended on the number of regions but not their precise spatial locations. We used simplified stimuli to design high-dimensional metamers that recapitulated responses to naturalistic movies. Finally, we modeled the retinal computations that convert flashed natural images into one-dimensional spike counts.
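The reported reduction can be mimicked schematically (a toy linear-nonlinear sketch; the region layout, weights, and rectification here are assumptions, not the fitted model): average the image within each of 16 regions, integrate the regional averages linearly, and rectify to obtain a predicted response.

```python
import numpy as np

def region_reduced_prediction(image, region_labels, weights, threshold=0.0):
    """Predict a response from an image reduced to per-region averages,
    followed by linear weighting and output rectification (an LN sketch)."""
    k = weights.size
    means = np.array([image[region_labels == i].mean() for i in range(k)])
    drive = weights @ means
    return max(drive - threshold, 0.0)

rng = np.random.default_rng(1)
image = rng.random((40, 40))
# Tile the 40x40 image into a 4x4 grid of 16 linearly integrated regions.
rows, cols = np.indices(image.shape)
region_labels = (rows // 10) * 4 + (cols // 10)
weights = rng.normal(size=16)
rate = region_reduced_prediction(image, region_labels, weights)
```

The point of the simplification is that a 1,600-pixel image collapses to 16 numbers before any nonlinearity, which in the paper's hands still captured most of the structure of parasol RGC responses.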
11
Contrast polarity-specific mapping improves efficiency of neuronal computation for collision detection. eLife 2022; 11:e79772. PMID: 36314775; PMCID: PMC9674337; DOI: 10.7554/elife.79772.
Abstract
Neurons receive information through their synaptic inputs, but the functional significance of how those inputs are mapped on to a cell's dendrites remains unclear. We studied this question in a grasshopper visual neuron that tracks approaching objects and triggers escape behavior before an impending collision. In response to black approaching objects, the neuron receives OFF excitatory inputs that form a retinotopic map of the visual field onto compartmentalized, distal dendrites. Subsequent processing of these OFF inputs by active membrane conductances allows the neuron to discriminate the spatial coherence of such stimuli. In contrast, we show that ON excitatory synaptic inputs activated by white approaching objects map in a random manner onto a more proximal dendritic field of the same neuron. The lack of retinotopic synaptic arrangement results in the neuron's inability to discriminate the coherence of white approaching stimuli. Yet, the neuron retains the ability to discriminate stimulus coherence for checkered stimuli of mixed ON/OFF polarity. The coarser mapping and processing of ON stimuli thus has a minimal impact, while reducing the total energetic cost of the circuit. Further, we show that these differences in ON/OFF neuronal processing are behaviorally relevant, being tightly correlated with the animal's escape behavior to light and dark stimuli of variable coherence. Our results show that the synaptic mapping of excitatory inputs affects the fine stimulus discrimination ability of single neurons and document the resulting functional impact on behavior.
12
BK channels at dendritic spines: A mechanism for coupling morphology, plasticity and information storage? J Physiol 2022; 600:3399-3401. PMID: 35748599; DOI: 10.1113/jp283232.
13
Learning to represent continuous variables in heterogeneous neural networks. Cell Rep 2022; 39:110612. PMID: 35385721; DOI: 10.1016/j.celrep.2022.110612.
Abstract
Animals must monitor continuous variables such as position or head direction. Manifold attractor networks-which enable a continuum of persistent neuronal states-provide a key framework to explain this monitoring ability. Neural networks with symmetric synaptic connectivity dominate this framework but are inconsistent with the diverse synaptic connectivity and neuronal representations observed in experiments. Here, we developed a theory for manifold attractors in trained neural networks, which approximates a continuum of persistent states, without assuming unrealistic symmetry. We exploit the theory to predict how asymmetries in the representation and heterogeneity in the connectivity affect the formation of the manifold via training, shape network response to stimulus, and govern mechanisms that possibly lead to destabilization of the manifold. Our work suggests that the functional properties of manifold attractors in the brain can be inferred from the overlooked asymmetries in connectivity and in the low-dimensional representation of the encoded variable.
14
Parallel processing in active dendrites during periods of intense spiking activity. Cell Rep 2022; 38:110412. PMID: 35196499; DOI: 10.1016/j.celrep.2022.110412.
Abstract
A neuron's ability to perform parallel computations throughout its dendritic arbor substantially improves its computational capacity. However, during natural patterns of activity, the degree to which computations remain compartmentalized, especially in neurons with active dendritic trees, is not clear. Here, we examine how the direction of moving objects is computed across the bistratified dendritic arbors of ON-OFF direction-selective ganglion cells (DSGCs) in the mouse retina. We find that although local synaptic signals propagate efficiently throughout their dendritic trees, direction-selective computations in one part of the dendritic arbor have little effect on those being made elsewhere. Independent dendritic processing allows DSGCs to compute the direction of moving objects multiple times as they traverse their receptive fields, enabling them to rapidly detect changes in motion direction on a sub-receptive-field basis. These results demonstrate that the parallel processing capacity of neurons can be maintained even during periods of intense synaptic activity.
15
Dissociating Value-Based Neurocomputation from Subsequent Selection-Related Activations in Human Decision-Making. Cereb Cortex 2022; 32:4141-4155. PMID: 35024797; DOI: 10.1093/cercor/bhab471.
Abstract
Human decision-making requires the brain to compute benefit and risk and, on that basis, to select between options. It remains unclear how value-based neural computation and subsequent brain activity evolve to reach a final decision, and which process is modulated by irrational factors. We adopted a sequential risk-taking task that asked participants to decide successively whether or not to open a box with potential reward/punishment in an eight-box trial. With time-resolved multivariate pattern analyses, we decoded electroencephalography and magnetoencephalography responses to two successive low- and high-risk boxes before the open-box action. Taking the decoding-accuracy peak as a marker of the completion of first-stage processing, we used it to demarcate the neural time course of decision-making into valuation and selection stages. Behavioral hierarchical drift diffusion modeling confirmed that the two stages process different information: the valuation stage was related to the drift rate of evidence accumulation, while the selection stage was related to the nondecision time spent producing a response. We further observed that the medial orbitofrontal cortex participated in the valuation stage, while the superior frontal gyrus engaged in the selection stage of irrational open-box decisions. Finally, we found that irrational factors influenced decision-making through the selection stage rather than the valuation stage.
16
Abstract
Significant experimental, computational, and theoretical work has identified rich structure within the coordinated activity of interconnected neural populations. An emerging challenge now is to uncover the nature of the associated computations, how they are implemented, and what role they play in driving behavior. We term this computation through neural population dynamics. If successful, this framework will reveal general motifs of neural population activity and quantitatively describe how neural population dynamics implement computations necessary for driving goal-directed behavior. Here, we start with a mathematical primer on dynamical systems theory and analytical tools necessary to apply this perspective to experimental data. Next, we highlight some recent discoveries resulting from successful application of dynamical systems. We focus on studies spanning motor control, timing, decision-making, and working memory. Finally, we briefly discuss promising recent lines of investigation and future directions for the computation through neural population dynamics framework.
17
Mind Causality: A Computational Neuroscience Approach. Front Comput Neurosci 2021; 15:706505. PMID: 34305562; PMCID: PMC8295486; DOI: 10.3389/fncom.2021.706505.
Abstract
A neuroscience-based approach has recently been proposed for the relation between the mind and the brain. The proposal is that events at the sub-neuronal, neuronal, and neuronal network levels take place simultaneously to perform a computation that can be described at a high level as a mental state, with content about the world. It is argued that as the processes at the different levels of explanation take place at the same time, they are linked by a non-causal supervenient relationship: causality can best be described in brains as operating within but not between levels. This mind-brain theory allows mental events to be different in kind from the mechanistic events that underlie them; but does not lead one to argue that mental events cause brain events, or vice versa: they are different levels of explanation of the operation of the computational system. Here, some implications are developed. It is proposed that causality, at least as it applies to the brain, should satisfy three conditions. First, interventionist tests for causality must be satisfied. Second, the causally related events should be at the same level of explanation. Third, a temporal order condition must be satisfied, with a suitable time scale in the order of 10 ms (to exclude application to quantum physics; and a cause cannot follow an effect). Next, although it may be useful for different purposes to describe causality involving the mind and brain at the mental level, or at the brain level, it is argued that the brain level may sometimes be more accurate, for sometimes causal accounts at the mental level may arise from confabulation by the mentalee, whereas understanding exactly what computations have occurred in the brain that result in a choice or action will provide the correct causal account for why a choice or action was made. Next, it is argued that possible cases of "downward causation" can be accounted for by a within-levels-of-explanation account of causality. 
This computational neuroscience approach provides an opportunity to proceed beyond Cartesian dualism and physical reductionism in considering the relations between the mind and the brain.
18
Antagonistic Center-Surround Mechanisms for Direction Selectivity in the Retina. Cell Rep 2020; 31:107608. PMID: 32375036; PMCID: PMC7221349; DOI: 10.1016/j.celrep.2020.107608.
Abstract
An antagonistic center-surround receptive field is a key feature in sensory processing, but how it contributes to specific computations such as direction selectivity is often unknown. Retinal On-starburst amacrine cells (SACs), which mediate direction selectivity in direction-selective ganglion cells (DSGCs), exhibit antagonistic receptive field organization: depolarizing to light increments and decrements in their center and surround, respectively. We find that a repetitive stimulation exhausts SAC center and enhances its surround and use it to study how center-surround responses contribute to direction selectivity. Center, but not surround, activation induces direction-selective responses in SACs. Nevertheless, both SAC center and surround elicited direction-selective responses in DSGCs, but to opposite directions. Physiological and modeling data suggest that the opposing direction selectivity can result from inverted temporal balance between excitation and inhibition in DSGCs, implying that SAC's response timing dictates direction selectivity. Our findings reveal antagonistic center-surround mechanisms for direction selectivity and demonstrate how context-dependent receptive field reorganization enables flexible computations.
19
Communication consumes 35 times more energy than computation in the human cortex, but both costs are needed to predict synapse number. Proc Natl Acad Sci U S A 2021; 118:e2008173118. PMID: 33906943; PMCID: PMC8106317; DOI: 10.1073/pnas.2008173118.
Abstract
Darwinian evolution tends to produce energy-efficient outcomes. On the other hand, energy limits computation, be it neural and probabilistic or digital and logical. Taking a particular energy-efficient viewpoint, we define neural computation and make use of an energy-constrained computational function. This function can be optimized over a variable that is proportional to the number of synapses per neuron. This function also implies a specific distinction between adenosine triphosphate (ATP)-consuming processes, especially computation per se vs. the communication processes of action potentials and transmitter release. Thus, to apply this mathematical function requires an energy audit with a particular partitioning of energy consumption that differs from earlier work. The audit points out that, rather than the oft-quoted 20 W of glucose available to the human brain, the fraction partitioned to cortical computation is only 0.1 W of ATP [L. Sokoloff, Handb. Physiol. Sect. I Neurophysiol. 3, 1843-1864 (1960)] and [J. Sawada, D. S. Modha, "Synapse: Scalable energy-efficient neurosynaptic computing" in Application of Concurrency to System Design (ACSD) (2013), pp. 14-15]. On the other hand, long-distance communication costs are 35-fold greater, 3.5 W. Other findings include 1) a [Formula: see text]-fold discrepancy between biological and lowest possible values of a neuron's computational efficiency and 2) two predictions of N, the number of synaptic transmissions needed to fire a neuron (2,500 vs. 2,000).
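The headline partition is simple arithmetic worth making explicit (the numbers below are taken from the abstract itself): 3.5 W for long-distance communication versus 0.1 W for computation gives the 35-fold ratio, and the two together account for only a small fraction of the oft-quoted 20 W whole-brain budget.

```python
# Power figures as quoted in the abstract (watts of ATP).
compute_w = 0.1        # cortical computation per se
communicate_w = 3.5    # action potentials and transmitter release
brain_budget_w = 20.0  # oft-quoted whole-brain glucose supply

ratio = communicate_w / compute_w                         # 35-fold
audited_fraction = (compute_w + communicate_w) / brain_budget_w
```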
|
20
|
Multimodal integration across spatiotemporal scales to guide invertebrate locomotion. Integr Comp Biol 2021; 61:842-853. [PMID: 34009312 DOI: 10.1093/icb/icab041] [Indexed: 01/05/2023]
Abstract
Locomotion is a hallmark of organisms that has enabled adaptive radiation into an extraordinarily diverse range of ecological niches, and allows animals to move across vast distances. Sampling from multiple sensory modalities enables animals to acquire rich information to guide locomotion. Locomotion without sensory feedback is haphazard; therefore, sensory and motor systems have evolved complex interactions to generate adaptive behavior. Notably, sensory-guided locomotion acts over broad spatial and temporal scales to permit goal-seeking behavior, whether to localize food by tracking an attractive odor plume or to search for a potential mate. How does the brain integrate multimodal stimuli over different temporal and spatial scales to effectively control behavior? In this review, we classify locomotion into three hierarchical layers that act over distinct spatiotemporal scales: stabilization, motor primitives, and higher-order tasks. We discuss how these layers present unique challenges and opportunities for sensorimotor integration. We focus on recent advances in invertebrate locomotion because their neural and mechanical signals are accessible from the whole brain, limbs, and sensors. Throughout, we emphasize neural-level descriptions of computations for multimodal integration in genetic model systems, including the fruit fly, Drosophila melanogaster, and the yellow fever mosquito, Aedes aegypti. We identify that summation (e.g., gating) and weighting, which are inherent computations of spiking neurons, underlie multimodal integration across spatial and temporal scales, suggesting collective strategies to guide locomotion.
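The summation-and-weighting computation highlighted above can be sketched with a minimal leaky integrate-and-fire neuron that pools two weighted sensory streams; the weights, time constant, and threshold here are illustrative, not fitted to any species:

```python
import numpy as np

# A minimal leaky integrate-and-fire sketch of multimodal integration:
# two sensory streams are weighted and summed into one membrane potential.
# Weights, time constant, and threshold are illustrative assumptions.
def lif_multimodal(visual, wind, w_visual=0.6, w_wind=0.4,
                   tau=10.0, dt=1.0, threshold=1.0):
    v, spikes = 0.0, []
    for t, (x_v, x_w) in enumerate(zip(visual, wind)):
        drive = w_visual * x_v + w_wind * x_w  # weighted summation of modalities
        v += dt / tau * (drive - v)            # leaky integration
        if v >= threshold:                     # fire and reset
            spikes.append(t)
            v = 0.0
    return spikes

rng = np.random.default_rng(0)
visual = rng.uniform(0.0, 2.0, 500)
wind = rng.uniform(0.0, 2.0, 500)
both_cues = lif_multimodal(visual, wind)
visual_only = lif_multimodal(visual, np.zeros(500))
```

With both streams active, the summed drive crosses threshold far more often than with a single modality, a toy version of multimodal cues jointly gating behavior.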
|
21
|
Abstract
Neurons in the brain represent information in their collective activity. The fidelity of this neural population code depends on whether and how variability in the response of one neuron is shared with other neurons. Two decades of studies have investigated the influence of these noise correlations on the properties of neural coding. We provide an overview of the theoretical developments on the topic. Using simple, qualitative, and general arguments, we discuss, categorize, and relate the various published results. We emphasize the relevance of the fine structure of noise correlation, and we present a new approach to the issue. Throughout this review, we emphasize a geometrical picture of how noise correlations impact the neural code.
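The geometrical picture emphasized above can be made concrete with linear Fisher information, I = f'ᵀ Σ⁻¹ f', where f' is the vector of tuning-curve slopes (the signal direction) and Σ the noise covariance; a didactic two-neuron sketch with illustrative numbers:

```python
import numpy as np

# The same correlation strength helps or hurts depending on its alignment
# with the signal direction f'. Values are illustrative.
f_prime = np.array([1.0, 1.0])  # both neurons' rates increase with the stimulus

def fisher_info(sigma):
    return float(f_prime @ np.linalg.inv(sigma) @ f_prime)

c = 0.8  # magnitude of the noise correlation
I_independent = fisher_info(np.eye(2))                        # no correlation
I_aligned = fisher_info(np.array([[1.0, c], [c, 1.0]]))       # noise along f'
I_orthogonal = fisher_info(np.array([[1.0, -c], [-c, 1.0]]))  # noise ⟂ f'
```

Here I_aligned < I_independent < I_orthogonal: shared variability along the signal direction limits the code, while the same-strength correlation orthogonal to it is beneficial, illustrating why the fine structure of noise correlations, not just their magnitude, matters.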
|
22
|
A Neuroscience Levels of Explanation Approach to the Mind and the Brain. Front Comput Neurosci 2021; 15:649679. [PMID: 33897396 PMCID: PMC8058374 DOI: 10.3389/fncom.2021.649679] [Received: 01/06/2021] [Accepted: 03/15/2021] [Indexed: 12/28/2022]
Abstract
The relation between mental states and brain states is important in computational neuroscience, and in psychiatry in which interventions with medication are made on brain states to alter mental states. The relation between the brain and the mind has puzzled philosophers for centuries. Here a neuroscience approach is proposed in which events at the sub-neuronal, neuronal, and neuronal network levels take place simultaneously to perform a computation that can be described at a high level as a mental state, with content about the world. It is argued that as the processes at the different levels of explanation take place at the same time, they are linked by a non-causal supervenient relationship: causality can best be described in brains as operating within but not between levels. This allows the supervenient (e.g., mental) properties to be emergent, though once understood at the mechanistic levels they may seem less emergent, and expected. This mind-brain theory allows mental events to be different in kind from the mechanistic events that underlie them; but does not lead one to argue that mental events cause brain events, or vice versa: they are different levels of explanation of the operation of the computational system. This approach may provide a way of thinking about brains and minds that is different from dualism and from reductive physicalism, and which is rooted in the computational processes that are fundamental to understanding brain and mental events, and that mean that the mental and mechanistic levels are linked by the computational process being performed. Explanations at the different levels of operation may be useful in different ways. For example, if we wish to understand how arithmetic is performed in the brain, description at the mental level of the algorithm being computed will be useful. 
But if brain operation results in mental disorders, then understanding the mechanism at the neural processing level may be more useful in, for example, the treatment of psychiatric disorders.
|
23
|
Abstract
Cognitive control allows us to think and behave flexibly based on our context and goals. At the heart of theories of cognitive control is a control representation that enables the same input to produce different outputs contingent on contextual factors. In this review, we focus on an important property of the control representation's neural code: its representational dimensionality. Dimensionality of a neural representation balances a basic separability/generalizability trade-off in neural computation. We will discuss the implications of this trade-off for cognitive control. We will then briefly review current neuroscience findings regarding the dimensionality of control representations in the brain, particularly the prefrontal cortex. We conclude by highlighting open questions and crucial directions for future research.
|
24
|
Abstract
Sensorimotor coordination is thought to rely on cerebellar-based internal models for state estimation, but the underlying neural mechanisms and the specific contributions of the cerebellar components remain unknown. A central aspect of any inferential process is the representation of uncertainty (or, conversely, precision) characterizing the ensuing estimates. Here, we discuss the possible contribution of inhibition to the encoding of precision of neural representations in the granular layer of the cerebellar cortex. Within this layer, Golgi cells influence excitatory granule cells, and their action is critical in shaping information transmission downstream to Purkinje cells. In this review, we equate the ensuing excitation-inhibition balance in the granular layer with the outcome of a precision-weighted inferential process, and highlight the physiological characteristics of Golgi cell inhibition that are consistent with such computations.
|
25
|
Abstract
Neural circuits are structured with layers of converging and diverging connectivity and selectivity-inducing nonlinearities at neurons and synapses. These components have the potential to hamper an accurate encoding of the circuit inputs. Past computational studies have optimized the nonlinearities of single neurons, or connection weights in networks, to maximize encoded information, but have not grappled with the simultaneous impact of convergent circuit structure and nonlinear response functions for efficient coding. Our approach is to compare model circuits with different combinations of convergence, divergence, and nonlinear neurons to discover how interactions between these components affect coding efficiency. We find that a convergent circuit with divergent parallel pathways can encode more information with nonlinear subunits than with linear subunits, despite the compressive loss induced by the convergence and the nonlinearities when considered separately.
|
26
|
Criticality, Connectivity, and Neural Disorder: A Multifaceted Approach to Neural Computation. Front Comput Neurosci 2021; 15:611183. [PMID: 33643017 PMCID: PMC7902700 DOI: 10.3389/fncom.2021.611183] [Received: 09/28/2020] [Accepted: 01/18/2021] [Indexed: 01/03/2023]
Abstract
It has been hypothesized that the brain optimizes its capacity for computation by self-organizing to a critical point. The dynamical state of criticality is achieved by striking a balance such that activity can effectively spread through the network without overwhelming it and is commonly identified in neuronal networks by observing the behavior of cascades of network activity termed "neuronal avalanches." The dynamic activity that occurs in neuronal networks is closely intertwined with how the elements of the network are connected and how they influence each other's functional activity. In this review, we highlight how studying criticality with a broad perspective that integrates concepts from physics, experimental and theoretical neuroscience, and computer science can provide a greater understanding of the mechanisms that drive networks to criticality and how their disruption may manifest in different disorders. First, integrating graph theory into experimental studies on criticality, as is becoming more common in theoretical and modeling studies, would provide insight into the kinds of network structures that support criticality in networks of biological neurons. Furthermore, plasticity mechanisms play a crucial role in shaping these neural structures, both in terms of homeostatic maintenance and learning. Both network structures and plasticity have been studied fairly extensively in theoretical models, but much work remains to bridge the gap between theoretical and experimental findings. Finally, information theoretical approaches can tie in more concrete evidence of a network's computational capabilities. Approaching neural dynamics with all these facets in mind has the potential to provide a greater understanding of what goes wrong in neural disorders. Criticality analysis therefore holds potential to identify disruptions to healthy dynamics, granted that robust methods and approaches are considered.
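The avalanche picture described above is often modeled as a Galton-Watson branching process: each active neuron activates a Poisson-distributed number of others, and the branching ratio sigma = 1 marks the critical point where avalanche sizes become heavy-tailed. A sketch with illustrative parameters:

```python
import numpy as np

# Branching-process sketch of neuronal avalanches. sigma is the branching
# ratio; sizes are capped, and all parameters are illustrative.
def avalanche_size(sigma, rng, max_size=10_000):
    active, size = 1, 0
    while active and size < max_size:
        size += active
        active = int(rng.poisson(sigma, active).sum())  # descendants of each unit
    return min(size, max_size)

rng = np.random.default_rng(42)
critical = [avalanche_size(1.0, rng) for _ in range(2000)]     # at criticality
subcritical = [avalanche_size(0.5, rng) for _ in range(2000)]  # activity dies out
```

At sigma = 1 activity spreads without either dying out immediately or exploding on average, producing occasional very large cascades; at sigma = 0.5 avalanches stay small, the quiescent regime the review contrasts with criticality.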
|
27
|
Small-world EEG network analysis of functional connectivity in developmental dyslexia after visual training intervention. J Integr Neurosci 2020; 19:601-618. [PMID: 33378835 DOI: 10.31083/j.jin.2020.04.193] [Received: 06/16/2020] [Revised: 11/07/2020] [Accepted: 11/12/2020] [Indexed: 11/06/2022]
Abstract
Aberrations in functional connectivity in children with developmental dyslexia have been found in electroencephalographic studies using graph analysis. How training with visual tasks can modify the functional semantic network in developmental dyslexia remains unclear. We investigate local and global topological properties of functional networks in multiple EEG frequency ranges based on a small-world propensity method in controls and in pre- and post-training dyslexic children during visual word/pseudoword processing. Results indicated that, for three graph measures, the EEG network topology in dyslexics before the training was more integrated than in controls, and after training became more segregated and similar to that of the controls in the theta (θ: 4-8 Hz), alpha (α: 8-13 Hz), beta (β1: 13-20 Hz; β2: 20-30 Hz), and gamma (γ1: 30-48 Hz; γ2: 52-70 Hz) bands. The pre-training dyslexics exhibited reduced strength and betweenness centrality of the left anterior temporal and parietal regions in the θ, α, β1, and γ1 bands compared to the controls. The simultaneous appearance of hubs in the left hemisphere (or both hemispheres) at temporal and parietal (α-word/γ-pseudoword discrimination), temporal and middle frontal cortex (θ, α-word), parietal and middle frontal cortex (β1-word), and parietal and occipitotemporal cortices (θ-pseudoword), identified in the EEG-based functional networks of normally developing children, was not present in the networks of dyslexics. After training, the hub distribution for dyslexics in the θ, α, and β1 bands became similar to that of the controls. This is the first study after remediation training to contrast the less efficient network configuration (longer characteristic path length) in dyslexics with the more optimal global organization in the controls.
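The integration measure behind these results, characteristic path length, can be sketched in pure Python: rewiring a ring lattice adds shortcuts that shorten the average shortest path (a more integrated topology). Network sizes and the rewiring probability are illustrative:

```python
import random
from collections import deque

# Small-world sketch: BFS-based characteristic path length on a ring lattice
# before and after random rewiring. Parameters are illustrative.
def ring_lattice(n, k):
    # each node links to its k nearest neighbors on either side
    return {i: {(i + j) % n for j in range(1, k + 1)}
               | {(i - j) % n for j in range(1, k + 1)} for i in range(n)}

def rewire(adj, p, rng):
    n = len(adj)
    new = {i: set(neigh) for i, neigh in adj.items()}
    for i in adj:
        for j in adj[i]:
            if j > i and rng.random() < p:  # rewire each edge with probability p
                new[i].discard(j)
                new[j].discard(i)
                m = rng.randrange(n)
                while m == i or m in new[i]:
                    m = rng.randrange(n)
                new[i].add(m)
                new[m].add(i)
    return new

def char_path_length(adj):
    # average shortest-path length over reachable pairs (BFS from each node)
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

rng = random.Random(7)
lattice = ring_lattice(100, 4)
L_lattice = char_path_length(lattice)
L_rewired = char_path_length(rewire(lattice, 0.1, rng))
```

A shorter characteristic path length after rewiring corresponds to the "more integrated" topology reported for pre-training dyslexics; small-world propensity additionally normalizes such measures against clustering.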
|
28
|
Central Vestibular Tuning Arises from Patterned Convergence of Otolith Afferents. Neuron 2020; 108:748-762.e4. [PMID: 32937099 DOI: 10.1016/j.neuron.2020.08.019] [Received: 02/14/2020] [Revised: 07/09/2020] [Accepted: 08/19/2020] [Indexed: 01/31/2023]
Abstract
As sensory information moves through the brain, higher-order areas exhibit more complex tuning than lower areas. Though models predict that complexity arises via convergent inputs from neurons with diverse response properties, in most vertebrate systems, convergence has only been inferred rather than tested directly. Here, we measure sensory computations in zebrafish vestibular neurons across multiple axes in vivo. We establish that whole-cell physiological recordings reveal tuning of individual vestibular afferent inputs and their postsynaptic targets. Strong, sparse synaptic inputs can be distinguished by their amplitudes, permitting analysis of afferent convergence in vivo. An independent approach, serial-section electron microscopy, supports the inferred connectivity. We find that afferents with similar or differing preferred directions converge on central vestibular neurons, conferring more simple or complex tuning, respectively. Together, these results provide a direct, quantifiable demonstration of feedforward input convergence in vivo.
|
29
|
Abstract
Spiking neural P systems (SNP systems) are a class of distributed and parallel computation models inspired by the way neurons process information through spikes, where the integrate-and-fire behavior of neurons and the distribution of produced spikes are achieved by spiking rules. In this work, a novel mechanism for separately describing the integrate-and-fire behavior of neurons and the distribution of produced spikes, and a corresponding novel variant of SNP systems, named evolution-communication SNP (ECSNP) systems, are proposed. More precisely, the integrate-and-fire behavior of neurons is achieved by spike-evolution rules, and the distribution of produced spikes is achieved by spike-communication rules. Then, the computational power of ECSNP systems is examined. It is demonstrated that ECSNP systems are Turing universal as number-generating devices. Furthermore, the computational power of ECSNP systems with a restricted form, i.e., where the quantity of spikes in each neuron throughout a computation does not exceed some constant, is also investigated, and it is shown that such restricted ECSNP systems can only characterize the family of semilinear number sets. These results manifest that the capacity of neurons for information storage (i.e., the quantity of spikes) has a critical impact on the ability of ECSNP systems to achieve a desired computational power.
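As a toy illustration of the separation between the two rule types (invented for illustration; not the paper's formal construction), a two-neuron system can compute ⌊n/2⌋ by pairing a spike-communication rule with a spike-evolution rule:

```python
# Toy two-neuron ECSNP-style system (an invented example, not the paper's
# formal definition). Neuron s1 starts with n spikes.
#   spike-communication rule: a^2 -> a  (consume 2 spikes, send 1 to s2)
#   spike-evolution rule:     a -> λ    (a leftover spike is erased internally)
def ecsnp_halve(n):
    s1, s2 = n, 0
    while s1 >= 2:
        s1 -= 2  # communication rule consumes two spikes in s1...
        s2 += 1  # ...and distributes one spike to s2
    s1 = 0       # evolution rule erases any residual spike without emitting it
    return s2    # s2 ends up holding floor(n / 2) spikes
```

The point mirrored here is the paper's separation of concerns: evolution rules change a neuron's internal spike content, while communication rules are the only way spikes move between neurons.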
|
30
|
Decoding network-mediated retinal response to electrical stimulation: implications for fidelity of prosthetic vision. J Neural Eng 2020; 17. [PMID: 33108781 PMCID: PMC8284336 DOI: 10.1088/1741-2552/abc535] [Received: 07/03/2020] [Accepted: 10/27/2020] [Indexed: 02/07/2023]
Abstract
Objective. Patients with photovoltaic subretinal implant PRIMA demonstrated letter acuity ∼0.1 logMAR worse than the sampling limit for 100 μm pixels (1.3 logMAR) and performed slower than healthy subjects tested with equivalently pixelated images. To explore the underlying differences between natural and prosthetic vision, we compare the fidelity of retinal responses to visual and subretinal electrical stimulation through single-cell modeling and ensemble decoding. Approach. Responses of retinal ganglion cells (RGCs) to optical or electrical white noise stimulation in healthy and degenerate rat retinas were recorded via multi-electrode array. Each RGC was fit with linear–nonlinear and convolutional neural network models. To characterize RGC noise, we compared statistics of spike-triggered averages (STAs) in RGCs responding to electrical or visual stimulation of healthy and degenerate retinas. At the population level, we constructed a linear decoder to determine the accuracy of the ensemble of RGCs on N-way discrimination tasks. Main results. Although computational models can match natural visual responses well (correlation ∼0.6), they fit significantly worse to spike timings elicited by electrical stimulation of the healthy retina (correlation ∼0.15). In the degenerate retina, responses to electrical stimulation fit equally poorly. The signal-to-noise ratio of electrical STAs in degenerate retinas matched that of the natural responses when 78 ± 6.5% of the spikes were replaced with random timing. However, the noise in RGC responses contributed minimally to errors in ensemble decoding. The determining factor in decoding accuracy was the number of responding cells. To compensate for fewer responding cells under electrical stimulation than in natural vision, more presentations of the same stimulus are required to deliver sufficient information for image decoding. Significance.
Slower-than-natural pattern identification by patients with the PRIMA implant may be explained by the smaller number of electrically activated cells than in natural vision, compensated for by a larger number of stimulus presentations.
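The STA analysis described above can be sketched with a toy linear-threshold cell: averaging the white-noise stimulus preceding each spike recovers the cell's hidden filter, and replacing most spikes with random times (as the study does to quantify noise) degrades the estimate. Kernel, threshold, and the 80% replacement fraction are illustrative:

```python
import numpy as np

# Spike-triggered average (STA) sketch with an illustrative hidden kernel.
rng = np.random.default_rng(1)
kernel = np.array([0.1, 0.3, 0.6, 1.0, 0.6])  # hidden temporal filter
stim = rng.standard_normal(20_000)
# correlate the stimulus with the kernel; spike when the drive crosses threshold
drive = np.convolve(stim, kernel[::-1], mode="valid")
spike_times = np.nonzero(drive > 2.0)[0] + len(kernel) - 1

def sta(spikes, stimulus, width=5):
    # average the stimulus window that precedes (and includes) each spike
    return np.mean([stimulus[t - width + 1:t + 1] for t in spikes], axis=0)

clean_sta = sta(spike_times, stim)

# Crude noise model: keep 20% of spikes, replace the rest with random times.
n_keep = len(spike_times) // 5
noisy_times = np.concatenate([spike_times[:n_keep],
                              rng.integers(4, len(stim), len(spike_times) - n_keep)])
noisy_sta = sta(noisy_times, stim)
```

For Gaussian white noise, the clean STA is proportional to the hidden kernel (its peak lands on the kernel's peak), while the mostly-randomized spike train yields a much shallower STA, the signal-to-noise degradation the paper quantifies.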
|
31
|
Editorial: The Embodied Brain: Computational Mechanisms of Integrated Sensorimotor Interactions With a Dynamic Environment. Front Comput Neurosci 2020; 14:53. [PMID: 32625074 PMCID: PMC7314992 DOI: 10.3389/fncom.2020.00053] [Received: 05/09/2020] [Accepted: 05/15/2020] [Indexed: 11/13/2022]
|
32
|
Neural Trajectories in the Supplementary Motor Area and Motor Cortex Exhibit Distinct Geometries, Compatible with Different Classes of Computation. Neuron 2020; 107:745-758.e6. [PMID: 32516573 DOI: 10.1016/j.neuron.2020.05.020] [Received: 05/28/2019] [Revised: 12/25/2019] [Accepted: 05/11/2020] [Indexed: 12/21/2022]
Abstract
The supplementary motor area (SMA) is believed to contribute to higher order aspects of motor control. We considered a key higher order role: tracking progress throughout an action. We propose that doing so requires population activity to display low "trajectory divergence": situations with different future motor outputs should be distinct, even when present motor output is identical. We examined neural activity in SMA and primary motor cortex (M1) as monkeys cycled various distances through a virtual environment. SMA exhibited multiple response features that were absent in M1. At the single-neuron level, these included ramping firing rates and cycle-specific responses. At the population level, they included a helical population-trajectory geometry with shifts in the occupied subspace as movement unfolded. These diverse features all served to reduce trajectory divergence, which was much lower in SMA versus M1. Analogous population-trajectory geometry, also with low divergence, naturally arose in networks trained to internally guide multi-cycle movement.
|
33
|
Abstract
The first patch-clamp recordings from the dendrites of human neocortical neurons have recently been reported by Beaulieu-Laroche et al. and Gidon et al. These studies have shown that human dendrites are electrically excitable, exhibiting backpropagating action potentials and fast dendritic calcium spikes. This new frontier highlights the potential for interspecies differences in the biophysics of dendritic computation.
|
34
|
Electrical Compartmentalization in Neurons. Cell Rep 2020; 26:1759-1773.e7. [PMID: 30759388 DOI: 10.1016/j.celrep.2019.01.074] [Received: 06/20/2018] [Revised: 10/03/2018] [Accepted: 01/17/2019] [Indexed: 12/31/2022]
Abstract
The dendritic tree of neurons plays an important role in information processing in the brain. While it is thought that dendrites require independent subunits to perform most of their computations, it is still not understood how they compartmentalize into functional subunits. Here, we show how these subunits can be deduced from the properties of dendrites. We devised a formalism that links the dendritic arborization to an impedance-based tree graph and show how the topology of this graph reveals independent subunits. This analysis reveals that cooperativity between synapses decreases slowly with increasing electrical separation and thus that few independent subunits coexist. We nevertheless find that balanced inputs or shunting inhibition can modify this topology and increase the number and size of the subunits in a context-dependent manner. We also find that this dynamic recompartmentalization can enable branch-specific learning of stimulus features. Analysis of dendritic patch-clamp recording experiments confirmed our theoretical predictions.
|
35
|
Abstract
Auditory cortex (AC) is necessary for the detection of brief gaps in ongoing sounds, but not for the detection of longer gaps or other stimuli such as tones or noise. It remains unclear why this is so, and what is special about brief gaps in particular. Here, we used both optogenetic suppression and conventional lesions to show that the cortical dependence of brief gap detection hinges specifically on gap termination. We then identified a cortico-collicular gap detection circuit that amplifies cortical gap termination responses before projecting to inferior colliculus (IC) to impact behavior. We found that gaps evoked off-responses and on-responses in cortical neurons, which temporally overlapped for brief gaps, but not long gaps. This overlap specifically enhanced cortical responses to brief gaps, whereas IC neurons preferred longer gaps. Optogenetic suppression of AC reduced collicular responses specifically to brief gaps, indicating that under normal conditions, the enhanced cortical representation of brief gaps amplifies collicular gap responses. Together these mechanisms explain how and why AC contributes to the behavioral detection of brief gaps, which are critical cues for speech perception, perceptual grouping, and auditory scene analysis.
|
36
|
Emergence of an Invariant Representation of Texture in Primate Somatosensory Cortex. Cereb Cortex 2019; 30:3228-3239. [PMID: 31813989 PMCID: PMC7197205 DOI: 10.1093/cercor/bhz305] [Received: 08/20/2019] [Revised: 11/08/2019] [Accepted: 11/12/2019] [Indexed: 01/13/2023]
Abstract
A major function of sensory processing is to achieve neural representations of objects that are stable across changes in context and perspective. Small changes in exploratory behavior can lead to large changes in signals at the sensory periphery, thus resulting in ambiguous neural representations of objects. Overcoming this ambiguity is a hallmark of human object recognition across sensory modalities. Here, we investigate how the perception of tactile texture remains stable across exploratory movements of the hand, including changes in scanning speed, despite the concomitant changes in afferent responses. To this end, we scanned a wide range of everyday textures across the fingertips of rhesus macaques at multiple speeds and recorded the responses evoked in tactile nerve fibers and somatosensory cortical neurons (from Brodmann areas 3b, 1, and 2). We found that individual cortical neurons exhibit a wider range of speed-sensitivities than do nerve fibers. The resulting representations of speed and texture in cortex are more independent than are their counterparts in the nerve and account for speed-invariant perception of texture. We demonstrate that this separation of speed and texture information is a natural consequence of previously described cortical computations.
|
37
|
Abstract
Recurrent neural networks (RNNs) are increasingly being used to model complex cognitive and motor tasks performed by behaving animals. RNNs are trained to reproduce animal behavior while also capturing key statistics of empirically recorded neural activity. In this manner, the RNN can be viewed as an in silico circuit whose computational elements share similar motifs with the cortical area it is modeling. Furthermore, because the RNN's governing equations and parameters are fully known, they can be analyzed to propose hypotheses for how neural populations compute. In this context, we present important considerations when using RNNs to model motor behavior in a delayed reach task. First, by varying the network's nonlinear activation and rate regularization, we show that RNNs reproducing single-neuron firing rate motifs may not adequately capture important population motifs. Second, we find that even when RNNs reproduce key neurophysiological features on both the single neuron and population levels, they can do so through distinctly different dynamical mechanisms. To distinguish between these mechanisms, we show that an RNN consistent with a previously proposed dynamical mechanism is more robust to input noise. Finally, we show that these dynamics are sufficient for the RNN to generalize to tasks it was not trained on. Together, these results emphasize important considerations when using RNN models to probe neural dynamics.
NEW & NOTEWORTHY Artificial neurons in a recurrent neural network (RNN) may resemble empirical single-unit activity but not adequately capture important features on the neural population level. Dynamics of RNNs can be visualized in low-dimensional projections to provide insight into the RNN's dynamical mechanism. RNNs trained in different ways may reproduce neurophysiological motifs but do so with distinctly different mechanisms.
RNNs trained to only perform a delayed reach task can generalize to perform tasks where the target is switched or the target location is changed.
|
38
|
An Adaptive-Threshold Mechanism for Odor Sensation and Animal Navigation. Neuron 2019; 105:534-548.e13. [PMID: 31761709 DOI: 10.1016/j.neuron.2019.10.034] [Received: 06/14/2018] [Revised: 05/31/2019] [Accepted: 10/27/2019] [Indexed: 01/01/2023]
Abstract
Identifying the environmental information and computations that drive sensory detection is key for understanding animal behavior. Using experimental and theoretical analysis of AWC(ON), a well-described olfactory neuron in C. elegans, here we derive a general and broadly useful model that matches stimulus history to odor sensation and behavioral responses. We show that AWC(ON) sensory activity is regulated by an absolute signal threshold that continuously adapts to odor history, allowing animals to compare present and past odor concentrations. The model predicts sensory activity and probabilistic behavior during animal navigation in different odor gradients and across a broad stimulus regime. Genetic studies demonstrate that the cGMP-dependent protein kinase EGL-4 determines the timescale of threshold adaptation, defining a molecular basis for a critical model feature. The adaptive threshold model efficiently filters stimulus noise, allowing reliable sensation in fluctuating environments, and represents a feedforward sensory mechanism with implications for other sensory systems.
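The core idea, a threshold that adapts to stimulus history so the neuron reports present concentration relative to the recent past, can be sketched in a few lines. The exponential-average form, time constant, and the odor-OFF response convention below are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

# Adaptive-threshold sketch: the threshold is a slow running average of odor
# history; the toy cell (odor-OFF, like AWC(ON)) responds when concentration
# falls below the adapted threshold. tau is illustrative; in the paper the
# adaptation timescale is set by the kinase EGL-4.
def adaptive_threshold_response(odor, tau=50.0):
    threshold, response = float(odor[0]), []
    for c in odor:
        response.append(max(0.0, threshold - c))  # depolarize on odor downsteps
        threshold += (c - threshold) / tau        # threshold adapts to history
    return np.array(response)

step_low = np.concatenate([np.full(200, 1.0), np.full(200, 0.5)])
step_high = np.concatenate([np.full(200, 2.0), np.full(200, 1.5)])
resp_low = adaptive_threshold_response(step_low)
resp_high = adaptive_threshold_response(step_high)
```

Because the threshold is absolute and fully adapted before the step, the same 0.5-unit downstep evokes the same transient at either baseline, and the response decays as the threshold re-adapts, a comparison of present against past concentration.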
|
39
|
Independent representations of ipsilateral and contralateral limbs in primary motor cortex. eLife 2019; 8:e48190. [PMID: 31625506 PMCID: PMC6824843 DOI: 10.7554/elife.48190] [Received: 05/03/2019] [Accepted: 10/17/2019] [Indexed: 02/04/2023]
Abstract
Several lines of research demonstrate that primary motor cortex (M1) is principally involved in controlling the contralateral side of the body. However, M1 activity has been correlated with both contralateral and ipsilateral limb movements. Why does ipsilaterally-related activity not cause contralateral motor output? To address this question, we trained monkeys to counter mechanical loads applied to their right and left limbs. We found >50% of M1 neurons had load-related activity for both limbs. Contralateral loads evoked changes in activity ~10 ms sooner than ipsilateral loads. We also found corresponding population activities were distinct, with contralateral activity residing in a subspace that was orthogonal to the ipsilateral activity. Thus, neural responses for the contralateral limb can be extracted without interference from the activity for the ipsilateral limb, and vice versa. Our results show that M1 activity unrelated to downstream motor targets can be segregated from activity related to the downstream motor output.
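The orthogonal-subspace result can be illustrated with a toy population: if contralateral-related activity lies in a subspace orthogonal to the ipsilateral one, a linear readout aligned with either subspace extracts its signal with zero interference from the other. The 4-neuron axes and signals below are invented for illustration:

```python
import numpy as np

# Orthogonal subspaces in a toy 4-neuron population (illustrative values).
contra_axis = np.array([1.0, 1.0, 0.0, 0.0]) / np.sqrt(2)
ipsi_axis = np.array([0.0, 0.0, 1.0, -1.0]) / np.sqrt(2)

time = np.linspace(0, 2 * np.pi, 200)
contra_signal = np.sin(time)    # load response for the contralateral limb
ipsi_signal = np.cos(3 * time)  # unrelated ipsilateral response

# Population activity is the sum of the two signals along their axes.
activity = (np.outer(contra_signal, contra_axis)
            + np.outer(ipsi_signal, ipsi_axis))

contra_readout = activity @ contra_axis  # recovers the contralateral signal
ipsi_readout = activity @ ipsi_axis      # recovers the ipsilateral signal
```

Because the axes are orthonormal, each projection returns its signal exactly; this is the geometry by which ipsilaterally-related activity can coexist in M1 without driving contralateral output.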
|
40
|
Automatic eye blink artifact removal for EEG based on a sparse coding technique for assessing major mental disorders. J Integr Neurosci 2019; 18:217-229. [PMID: 31601069 DOI: 10.31083/j.jin.2019.03.164] [Received: 05/16/2019] [Accepted: 07/23/2019] [Indexed: 11/06/2022]
Abstract
In electroencephalography, recorded data are often confounded with artifacts, especially eye blinks. Different methods for artifact detection and removal are discussed in the literature, including automatic detection and removal. Here, an automatic method of eye blink detection and correction is proposed in which sparse coding is applied to an electroencephalogram dataset. In this method, a hybrid dictionary based on a ridgelet transformation is used to capture prominent features by analyzing independent components extracted from different numbers of electroencephalogram channels. In this study, the proposed method has been tested and validated with five different datasets for artifact detection and correction. Results show that the proposed technique is promising, as it successfully extracted the exact locations of eye blinking artifacts. The accuracy of the method (automatic detection) is 89.6%, a better estimate than that obtained by an extreme learning machine classifier.
|
41
|
Cracking the Function of Layers in the Sensory Cortex. Neuron 2018; 100:1028-1043. [PMID: 30521778] [DOI: 10.1016/j.neuron.2018.10.032] [Citation(s) in RCA: 36] [Impact Index Per Article: 7.2] [Received: 05/10/2018] [Revised: 08/08/2018] [Accepted: 10/18/2018] [Indexed: 12/24/2022]
Abstract
Understanding how cortical activity generates sensory perceptions requires a detailed dissection of the function of cortical layers. Despite our relatively extensive knowledge of their anatomy and wiring, we have a limited grasp of what each layer contributes to cortical computation. We need to develop a theory of cortical function that is rooted solidly in each layer's component cell types and fine circuit architecture and produces predictions that can be validated by specific perturbations. Here we briefly review the progress toward such a theory and suggest an experimental road map toward this goal. We discuss new methods for the all-optical interrogation of cortical layers, for correlating in vivo function with precise identification of transcriptional cell type, and for mapping local and long-range activity in vivo with synaptic resolution. The new technologies that can crack the function of cortical layers are finally on the immediate horizon.
|
42
|
Stellate Cells in the Medial Entorhinal Cortex Are Required for Spatial Learning. Cell Rep 2018; 22:1313-1324. [PMID: 29386117] [PMCID: PMC5809635] [DOI: 10.1016/j.celrep.2018.01.005] [Citation(s) in RCA: 44] [Impact Index Per Article: 8.8] [Received: 08/07/2017] [Revised: 12/05/2017] [Accepted: 01/02/2018] [Indexed: 11/24/2022]
Abstract
Spatial learning requires estimates of location that may be obtained by path integration or from positional cues. Grid and other spatial firing patterns of neurons in the superficial medial entorhinal cortex (MEC) suggest roles in behavioral estimation of location. However, distinguishing the contributions of path integration and cue-based signals to spatial behaviors is challenging, and the roles of identified MEC neurons are unclear. We use virtual reality to dissociate linear path integration from other strategies for behavioral estimation of location. We find that mice learn to path integrate using motor-related self-motion signals, with accuracy that decreases steeply as a function of distance. We show that inactivation of stellate cells in superficial MEC impairs spatial learning in virtual reality and in a real world object location recognition task. Our results quantify contributions of path integration to behavior and corroborate key predictions of models in which stellate cells contribute to location estimation. Highlights: (1) mice learn to estimate location by path integration and cue-based strategies; (2) motor-related self-motion signals are used for path integration; (3) accuracy of path integration decreases with distance; (4) stellate cells in medial entorhinal cortex are required for spatial learning.
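The distance-dependent accuracy of path integration has a simple statistical core: integrating noisy self-motion samples accumulates error, so the spread of the position estimate grows with distance travelled. A minimal sketch (synthetic speed samples; step size, noise level, and trial count are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Path integration sketch: estimate distance travelled by summing noisy
# self-motion (speed) samples. Because each step contributes independent
# noise, the standard deviation of the estimate grows as sqrt(distance).
def integrate_path(true_distance, step=1.0, noise=0.2, n_trials=2000):
    n_steps = int(true_distance / step)
    steps = step + noise * rng.standard_normal((n_trials, n_steps))
    return steps.sum(axis=1)  # per-trial position estimates

short = integrate_path(10.0)
long = integrate_path(100.0)
print(f"std of estimate at 10 units:  {short.std():.2f}")
print(f"std of estimate at 100 units: {long.std():.2f}")
```

This pure accumulation model predicts unbounded error growth, which is one reason cue-based corrections (the other strategy dissociated in the paper) become increasingly important over long distances.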
|
43
|
Single-Cell Membrane Potential Fluctuations Evince Network Scale-Freeness and Quasicriticality. J Neurosci 2019; 39:4738-4759. [PMID: 30952810] [DOI: 10.1523/jneurosci.3163-18.2019] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Received: 12/17/2018] [Revised: 03/01/2019] [Accepted: 03/25/2019] [Indexed: 11/21/2022]
Abstract
What information single neurons receive about general neural circuit activity is a fundamental question for neuroscience. Somatic membrane potential (Vm) fluctuations are driven by the convergence of synaptic inputs from a diverse cross-section of upstream neurons. Furthermore, neural activity is often scale-free, implying that some measurements should be the same, whether taken at large or small scales. Together, convergence and scale-freeness support the hypothesis that single Vm recordings carry useful information about high-dimensional cortical activity. Conveniently, the theory of "critical branching networks" (one purported explanation for scale-freeness) provides testable predictions about scale-free measurements that are readily applied to Vm fluctuations. To investigate, we obtained whole-cell current-clamp recordings of pyramidal neurons in visual cortex of turtles with unknown genders. We isolated fluctuations in Vm below the firing threshold and analyzed them by adapting the definition of "neuronal avalanches" (i.e., spurts of population spiking). The Vm fluctuations which we analyzed were scale-free and consistent with critical branching. These findings recapitulated results from large-scale cortical population data obtained separately in complementary experiments using microelectrode arrays described previously (Shew et al., 2015). Simultaneously recorded single-unit local field potential did not provide a good match, demonstrating the specific utility of Vm. Modeling shows that estimation of dynamical network properties from neuronal inputs is most accurate when networks are structured as critical branching networks. In conclusion, these findings extend evidence of critical phenomena while also establishing subthreshold pyramidal neuron Vm fluctuations as an informative gauge of high-dimensional cortical population activity.
SIGNIFICANCE STATEMENT: The relationship between membrane potential (Vm) dynamics of single neurons and population dynamics is indispensable to understanding cortical circuits. Just as important to the biophysics of computation are emergent properties such as scale-freeness, where critical branching networks offer insight. This report makes progress on both fronts by comparing statistics from single-neuron whole-cell recordings with population statistics obtained with microelectrode arrays. Not only are fluctuations of somatic Vm scale-free, they match fluctuations of population activity. Thus, our results demonstrate appropriation of the brain's own subsampling method (convergence of synaptic inputs) while extending the range of fundamental evidence for critical phenomena in neural systems from the previously observed mesoscale (fMRI, LFP, population spiking) to the microscale, namely, Vm fluctuations.
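The avalanche adaptation described above can be sketched in a few lines: threshold the fluctuating signal, treat each contiguous supra-threshold excursion as one "avalanche," and record its size and duration (synthetic data here; the smoothing window and median threshold are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

rng = np.random.default_rng(3)

# Avalanche-style analysis adapted from the "neuronal avalanche" definition:
# an avalanche is a contiguous excursion of the rectified, smoothed signal
# above a threshold; its size is the summed amplitude of the excursion.
raw = rng.standard_normal(20000)
signal = np.abs(np.convolve(raw, np.ones(20) / 20, mode="same"))

above = signal > np.median(signal)
above[0] = above[-1] = False  # force clean run boundaries
d = np.diff(above.astype(int))
starts = np.flatnonzero(d == 1) + 1
ends = np.flatnonzero(d == -1) + 1

sizes = np.array([signal[s:e].sum() for s, e in zip(starts, ends)])
durations = ends - starts
print(f"{sizes.size} avalanches; mean size {sizes.mean():.2f}, "
      f"mean duration {durations.mean():.1f} samples")
```

In a critical branching network, the resulting size distribution is expected to follow a power law with exponent near -3/2; testing that claim requires maximum-likelihood power-law fitting and surrogate comparisons, which are omitted from this sketch.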
|
44
|
Understanding Sensory Information Processing Through Simultaneous Multi-area Population Recordings. Front Neural Circuits 2019; 12:115. [PMID: 30687020] [PMCID: PMC6333685] [DOI: 10.3389/fncir.2018.00115] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Received: 08/27/2018] [Accepted: 12/13/2018] [Indexed: 12/20/2022]
Abstract
The goal of sensory neuroscience is to understand how the brain creates its myriad of representations of the world, and uses these representations to produce perception and behavior. Circuits of neurons in spatially segregated regions of brain tissue have distinct functional specializations, and these regions are connected to form a functional processing hierarchy. Advances in technology for recording neuronal activity from multiple sites in multiple cortical areas mean that we are now able to collect data that reflects how information is transformed within and between connected members of this hierarchy. This advance is an important step in understanding the brain because, after the sensory organs have transduced a physical signal, every processing stage takes the activity of other neurons as its input, not measurements of the physical world. However, as we explore the potential of studying how populations of neurons in multiple areas respond in concert, we must also expand both the analytical tools that we use to make sense of these data and the scope of the theories that we attempt to define. In this article, we present an overview of some of the most promising analytical approaches for making inferences from population recordings in multiple brain areas, such as dimensionality reduction and measuring changes in correlated variability, and examine how they may be used to address longstanding questions in sensory neuroscience.
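Two of the analytical approaches named above — measuring correlated variability and dimensionality reduction — can be sketched together on synthetic spike counts (the trial counts, loadings, and one-dimensional shared fluctuation are assumptions for illustration, not claims about real data):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic spike counts (trials x neurons) with a single shared fluctuation:
# (1) mean pairwise noise correlation, (2) fraction of trial-to-trial
# variability captured by the top principal component.
n_trials, n_neurons = 400, 30
shared = rng.standard_normal((n_trials, 1))           # one shared fluctuation
loadings = rng.uniform(0.5, 1.5, size=(1, n_neurons))
counts = 5 + shared @ loadings + 0.5 * rng.standard_normal((n_trials, n_neurons))

resid = counts - counts.mean(axis=0)                  # trial-to-trial "noise"
corr = np.corrcoef(resid.T)
mean_noise_corr = corr[np.triu_indices(n_neurons, k=1)].mean()

eigvals = np.linalg.eigvalsh(np.cov(resid.T))[::-1]
top_share = eigvals[0] / eigvals.sum()                # variance in the top PC
print(f"mean noise correlation: {mean_noise_corr:.2f}")
print(f"fraction of shared variance in PC1: {top_share:.2f}")
```

Applied across simultaneously recorded areas, the same two quantities let one ask how shared variability is transformed between stages of the processing hierarchy.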
|
45
|
Precise Synaptic Balance in the Zebrafish Homolog of Olfactory Cortex. Neuron 2018; 100:669-683.e5. [PMID: 30318416] [DOI: 10.1016/j.neuron.2018.09.013] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Received: 03/28/2018] [Revised: 07/04/2018] [Accepted: 09/06/2018] [Indexed: 01/04/2023]
Abstract
Neuronal computations critically depend on the connectivity rules that govern the convergence of excitatory and inhibitory synaptic signals onto individual neurons. To examine the functional synaptic organization of a distributed memory network, we performed voltage clamp recordings in telencephalic area Dp of adult zebrafish, the homolog of olfactory cortex. In neurons of posterior Dp, odor stimulation evoked large, recurrent excitatory and inhibitory inputs that established a transient state of high conductance and synaptic balance. Excitation and inhibition in individual neurons were co-tuned to different odors and correlated on slow and fast timescales. This precise synaptic balance implies specific connectivity among Dp neurons, despite the absence of an obvious topography. Precise synaptic balance stabilizes activity patterns in different directions of coding space and in time while preserving high bandwidth. The coordinated connectivity of excitatory and inhibitory subnetworks in Dp therefore supports fast recurrent memory operations.
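The signature of precise balance — inhibition tracking excitation with a short lag so the two conductance traces are tightly correlated — can be sketched on synthetic traces (the lag, scaling, and noise levels are illustrative assumptions, not the zebrafish measurements):

```python
import numpy as np

rng = np.random.default_rng(7)

# Sketch of "precise synaptic balance": inhibitory conductance tracks
# excitatory conductance with a short lag, producing a high E-I correlation.
dt = 0.001
t = np.arange(0, 2.0, dt)
# Common presynaptic drive: smoothed, rectified noise.
drive = np.convolve(rng.standard_normal(t.size) ** 2,
                    np.ones(50) / 50, mode="same")

g_exc = drive + 0.05 * rng.standard_normal(t.size)
lag = int(0.005 / dt)                     # inhibition lags by ~5 ms
# np.roll wraps at the edges; acceptable shortcut for this illustration.
g_inh = 1.5 * np.roll(drive, lag) + 0.05 * rng.standard_normal(t.size)

r = np.corrcoef(g_exc, g_inh)[0, 1]
print(f"E-I correlation: {r:.2f}")
```

In the recordings, this correlation holds not just in time but across odors (co-tuning), which is the stronger constraint implying specific excitatory-inhibitory connectivity.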
|
46
|
Range, routing and kinetics of rod signaling in primate retina. eLife 2018; 7:e38281. [PMID: 30299254] [PMCID: PMC6218188] [DOI: 10.7554/elife.38281] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Received: 05/11/2018] [Accepted: 09/22/2018] [Indexed: 11/29/2022]
Abstract
Stimulus- or context-dependent routing of neural signals through parallel pathways can permit flexible processing of diverse inputs. For example, work in mouse shows that rod photoreceptor signals are routed through several retinal pathways, each specialized for different light levels. This light-level-dependent routing of rod signals has been invoked to explain several human perceptual results, but it has not been tested in primate retina. Here, we show, surprisingly, that rod signals traverse the primate retina almost exclusively through a single pathway – the dedicated rod bipolar pathway. Identical experiments in mouse and primate reveal substantial differences in how rod signals traverse the retina. These results require reevaluating human perceptual results in terms of flexible computation within this single pathway. This includes a prominent speeding of rod signals with light level – which we show is inherited directly from the rod photoreceptors themselves rather than from different pathways with distinct kinetics.
|
47
|
Editorial: Neural Computation in Embodied Closed-Loop Systems for the Generation of Complex Behavior: From Biology to Technology. Front Neurorobot 2018; 12:53. [PMID: 30214405] [PMCID: PMC6125336] [DOI: 10.3389/fnbot.2018.00053] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Received: 06/14/2018] [Accepted: 08/09/2018] [Indexed: 11/13/2022]
|
48
|
Abstract
Millions of neurons drive the activity of hundreds of muscles, meaning many different neural population activity patterns could generate the same movement. Studies have suggested that these redundant (i.e., behaviorally equivalent) activity patterns may be beneficial for neural computation. However, it is unknown what constraints may limit the selection of different redundant activity patterns. We leveraged a brain-computer interface, allowing us to define precisely which neural activity patterns were redundant. Rhesus monkeys made cursor movements by modulating neural activity in primary motor cortex. We attempted to predict the observed distribution of redundant neural activity. Principles inspired by work on muscular redundancy did not accurately predict these distributions. Surprisingly, the distributions of redundant neural activity and task-relevant activity were coupled, which enabled accurate predictions of the distributions of redundant activity. This suggests limits on the extent to which redundancy may be exploited by the brain for computation.
When you swing a tennis racket, muscles in your arm contract in a specific sequence. For this to happen, millions of neurons in your brain and spinal cord must fire to make those muscles contract. If you swing the racket a second time, the same muscles in your arm will contract again. But the firing pattern of the underlying neurons will probably be different. This phenomenon, in which different patterns of neural activity generate the same outcome, is called neural redundancy. Neural redundancy allows a set of neurons to perform multiple tasks at once. For example, the same neurons may drive an arm movement while simultaneously planning the next activity. But does performing a given task constrain how often different patterns of neural activity can be produced? If so, this would limit whether other tasks could be carried out at the same time. To address this, Hennig et al. trained macaque monkeys to use a brain-computer interface (BCI). This is a device that reads out electrical brain activity and converts it into signals that can be used to control another device. The key advantage of a BCI is that the redundant activity patterns are precisely known. The monkeys learned to use their brain activity, via the BCI, to move a cursor on a computer screen in different directions. The results revealed that monkeys could only produce a limited number of different patterns of brain activity for a given BCI cursor movement. This suggests that the ability of a group of neurons to multitask is restricted. For example, if the same set of neurons is involved in both planning and performing movements, then an animal's ability to plan a future movement will depend on the one it is currently performing. BCIs can help patients who have suffered stroke or paralysis. They enable patients to use their brain activity to control a computer or even robotic limbs. Understanding how the brain controls BCIs will help us improve their performance and deepen our knowledge of how the brain plans and performs movements. This might include designing BCIs that allow users to multitask more effectively.
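Why a BCI makes redundancy "precisely known" follows from linear algebra: a linear decoder maps high-dimensional neural activity to low-dimensional cursor velocity, so any activity change lying in the decoder's null space leaves the cursor movement unchanged. A minimal sketch (the 10-neuron/2-D decoder dimensions are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

# BCI redundancy sketch: a linear decoder maps 10-D neural activity to 2-D
# cursor velocity. Patterns differing only within the decoder's null space
# are redundant: they produce identical cursor output.
D = rng.standard_normal((2, 10))  # decoder: velocity = D @ activity

# Orthonormal basis for the null space of D (rows 2.. of V^T from the SVD).
_, _, Vt = np.linalg.svd(D)
null_basis = Vt[2:].T             # shape (10, 8)

activity = rng.standard_normal(10)
redundant = activity + null_basis @ rng.standard_normal(8)  # same readout

v1 = D @ activity
v2 = D @ redundant
print("cursor velocities equal:", np.allclose(v1, v2))
```

The empirical question in the study is then which of these mathematically equivalent patterns the brain actually produces, and the finding is that the observed distribution of null-space activity is coupled to the task-relevant activity rather than free.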
|
49
|
Linear Summation Underlies Direction Selectivity in Drosophila. Neuron 2018; 99:680-688.e4. [PMID: 30057202] [DOI: 10.1016/j.neuron.2018.07.005] [Citation(s) in RCA: 27] [Impact Index Per Article: 4.5] [Received: 12/05/2017] [Revised: 05/24/2018] [Accepted: 07/02/2018] [Indexed: 11/28/2022]
Abstract
While linear mechanisms lay the foundations of feature selectivity in many brain areas, direction selectivity in the elementary motion detector (EMD) of the fly has become a paradigm of nonlinear neuronal computation. We have bridged this divide by demonstrating that linear spatial summation can generate direction selectivity in the fruit fly Drosophila. Using linear systems analysis and two-photon imaging of a genetically encoded voltage indicator, we measure the emergence of direction-selective (DS) voltage signals in the Drosophila OFF pathway. Our study is a direct, quantitative investigation of the algorithm underlying directional signals, with the striking finding that linear spatial summation is sufficient for the emergence of direction selectivity. A linear stage of the fly EMD strongly resembles similar computations in vertebrate visual cortex, demands a reappraisal of the role of upstream nonlinearities, and implicates the voltage-to-calcium transformation in the refinement of feature selectivity in this system. VIDEO ABSTRACT.
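How purely linear summation can yield a direction-selective voltage signal is illustrated by the classic textbook arrangement, sketched below (this is a generic spatiotemporal-offset model with assumed alpha-function filters and delays, not the measured Drosophila filters): two spatially offset inputs pass through temporal filters with different delays and are summed; motion in one direction aligns the delayed and prompt responses, producing a larger peak than the opposite direction.

```python
import numpy as np

# Linear direction selectivity from spatial summation: location A feeds a
# slow temporal filter, location B a fast one. A stimulus sweeping A -> B
# (preferred) aligns the two responses in time; B -> A (null) disperses them.
dt = 0.001
t = np.arange(0, 0.5, dt)

def alpha_filter(tau):
    # Normalized alpha function: peaks at value 1 at t = tau.
    return (t / tau) * np.exp(1 - t / tau)

slow, fast = alpha_filter(0.06), alpha_filter(0.02)

def stimulus(onset):
    s = np.zeros_like(t)
    s[int(onset / dt)] = 1.0  # brief flash at one spatial location
    return s

delay = 0.04  # time for the stimulus to travel between the two locations

def response(first_filter, second_filter):
    # The first location is hit at t=0, the second after `delay`;
    # the membrane voltage is their purely linear sum.
    rA = np.convolve(stimulus(0.0), first_filter)[: t.size]
    rB = np.convolve(stimulus(delay), second_filter)[: t.size]
    return rA + rB

preferred = response(slow, fast)  # slow arm first: peaks coincide
null = response(fast, slow)       # reversed motion: peaks disperse
print(f"peak (preferred): {preferred.max():.2f}")
print(f"peak (null):      {null.max():.2f}")
```

The linear stage alone thus carries a directional amplitude difference; a later nonlinearity, such as the voltage-to-calcium transformation the paper implicates, would only sharpen this preexisting selectivity.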
|
50
|
A Tutorial for Information Theory in Neuroscience. eNeuro 2018; 5:ENEURO.0052-18.2018. [PMID: 30211307] [PMCID: PMC6131830] [DOI: 10.1523/eneuro.0052-18.2018] [Citation(s) in RCA: 92] [Impact Index Per Article: 15.3] [Received: 01/19/2018] [Revised: 04/10/2018] [Accepted: 05/30/2018] [Indexed: 11/21/2022]
Abstract
Understanding how neural systems integrate, encode, and compute information is central to understanding brain function. Frequently, data from neuroscience experiments are multivariate, the interactions between the variables are nonlinear, and the landscape of hypothesized or possible interactions between variables is extremely broad. Information theory is well suited to address these types of data, as it possesses multivariate analysis tools, it can be applied to many different types of data, it can capture nonlinear interactions, and it does not require assumptions about the structure of the underlying data (i.e., it is model independent). In this article, we walk through the mathematics of information theory along with common logistical problems associated with data type, data binning, data quantity requirements, bias, and significance testing. Next, we analyze models inspired by canonical neuroscience experiments to improve understanding and demonstrate the strengths of information theory analyses. To facilitate the use of information theory analyses, and an understanding of how these analyses are implemented, we also provide a free MATLAB software package that can be applied to a wide range of data from neuroscience experiments, as well as from other fields of study.
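The basic quantity such a tutorial builds on, mutual information between a stimulus and a binned response, can be estimated with a simple histogram ("plug-in") calculation. A minimal sketch in Python rather than the tutorial's MATLAB toolbox, with synthetic data and no bias correction or significance testing (both of which the toolbox handles and this sketch deliberately omits):

```python
import numpy as np

rng = np.random.default_rng(6)

# Plug-in estimate of mutual information I(S; R) in bits between a discrete
# stimulus S and a continuous response R binned into n_bins levels.
def mutual_information(s, r, n_bins=8):
    r_binned = np.digitize(r, np.histogram_bin_edges(r, bins=n_bins)[1:-1])
    s_vals = {v: i for i, v in enumerate(np.unique(s))}
    joint = np.zeros((len(s_vals), n_bins))
    for si, ri in zip(s, r_binned):
        joint[s_vals[si], ri] += 1
    p = joint / joint.sum()
    ps = p.sum(axis=1, keepdims=True)   # marginal over responses
    pr = p.sum(axis=0, keepdims=True)   # marginal over stimuli
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (ps @ pr)[nz])).sum())

# A stimulus-driven response carries information; shuffling destroys it.
s = rng.integers(0, 2, 5000)                   # binary stimulus
r = s * 2.0 + 0.5 * rng.standard_normal(5000)  # informative response
mi = mutual_information(s, r)
mi_shuffled = mutual_information(s, rng.permutation(r))
print(f"I(S;R) = {mi:.2f} bits; shuffled = {mi_shuffled:.2f} bits")
```

The shuffled control illustrates why bias matters: with finite data the plug-in estimate of a true zero is slightly positive, which is exactly the kind of issue the tutorial's sections on binning, data quantity, and bias correction address.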
|