1
Macauley M, Youngs N. The Case for Algebraic Biology: From Research to Education. Bull Math Biol 2020; 82:115. PMID: 32816124. DOI: 10.1007/s11538-020-00789-w. Received 02/29/2020; accepted 08/03/2020.
Abstract
Though it goes without saying that linear algebra is fundamental to mathematical biology, polynomial algebra is less visible. In this article, we will give a brief tour of four diverse biological problems where multivariate polynomials play a central role, in a subfield that is sometimes called algebraic biology. Namely, these topics include biochemical reaction networks, Boolean models of gene regulatory networks, algebraic statistics and genomics, and place fields in neuroscience. After that, we will summarize the history of discrete and algebraic structures in mathematical biology, from their early appearances in the late 1960s to the current day. Finally, we will discuss the role of algebraic biology in the modern classroom and curriculum, including resources in the literature and relevant software. Our goal is to make this article widely accessible, reaching the mathematical biologist who knows no algebra, the algebraist who knows no biology, and especially the interested student who is curious about the synergy between these two seemingly unrelated fields.
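To make the polynomial viewpoint concrete: a Boolean gene-regulatory rule can always be rewritten as a polynomial over the two-element field F2 (AND becomes multiplication, OR becomes x + y + xy, NOT becomes 1 + x). The three-gene network below is invented for illustration and is not taken from the article.

```python
from itertools import product

# Toy 3-gene Boolean network written as polynomials over F2 = {0, 1}.
# The update rules are invented for illustration.
# AND -> x*y; OR -> x + y + x*y; NOT -> 1 + x (all arithmetic mod 2).

def f1(x1, x2, x3):
    return (x2 * (1 + x3)) % 2        # x1' = x2 AND (NOT x3)

def f2(x1, x2, x3):
    return (x1 + x3 + x1 * x3) % 2    # x2' = x1 OR x3

def f3(x1, x2, x3):
    return (1 + x1) % 2               # x3' = NOT x1

def step(state):
    """One synchronous update of the whole network."""
    return (f1(*state), f2(*state), f3(*state))

def fixed_points():
    """Steady states, found by exhaustive search over all 2^3 states."""
    return [s for s in product((0, 1), repeat=3) if step(s) == s]
```

Here `fixed_points()` recovers the network's steady states, the same objects algebraic methods find by solving the system fi(x) = xi over F2.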
Affiliation(s)
- Matthew Macauley
- School of Mathematical and Statistical Sciences, Clemson University, Clemson, SC, 29634, USA.
- Nora Youngs
- Department of Mathematics and Statistics, Colby College, Waterville, ME, 04901, USA
2
Johnston WJ, Palmer SE, Freedman DJ. Nonlinear mixed selectivity supports reliable neural computation. PLoS Comput Biol 2020; 16:e1007544. PMID: 32069273. PMCID: PMC7048320. DOI: 10.1371/journal.pcbi.1007544. Received 03/20/2019; revised 02/28/2020; accepted 11/12/2019.
Abstract
Neuronal activity in the brain is variable, yet both perception and behavior are generally reliable. How does the brain achieve this? Here, we show that the conjunctive coding of multiple stimulus features, commonly known as nonlinear mixed selectivity, may be used by the brain to support reliable information transmission using unreliable neurons. Nonlinearly mixed feature representations have been observed throughout primary sensory, decision-making, and motor brain areas. In these areas, different features are almost always nonlinearly mixed to some degree, rather than represented separately or with only additive (linear) mixing, which we refer to as pure selectivity. Mixed selectivity has been previously shown to support flexible linear decoding for complex behavioral tasks. Here, we show that it has another important benefit: in many cases, it makes orders of magnitude fewer decoding errors than pure selectivity even when both forms of selectivity use the same number of spikes. This benefit holds for sensory, motor, and more abstract, cognitive representations. Further, we show experimental evidence that mixed selectivity exists in the brain even when it does not enable behaviorally useful linear decoding. This suggests that nonlinear mixed selectivity may be a general coding scheme exploited by the brain for reliable and efficient neural computation.

Neurons in the brain are unreliable, while both perception and behavior are generally reliable. In this work, we study how the neural population response to sensory, motor, and cognitive features can produce this reliability. Across the brain, single neurons have been shown to respond to particular conjunctions of multiple features, termed nonlinear mixed selectivity. In this work, we show that populations of these mixed selective neurons lead to many fewer decoding errors than populations without mixed selectivity, even when both neural codes are given the same number of spikes.
We show that the reliability benefits from mixed selectivity are quite general, holding under different assumptions about metabolic costs and neural noise as well as for both categorical and sensory errors. Further, previous theoretical work has shown that mixed selectivity enables the learning of complex behaviors with simple decoders. Through the analysis of neural data, we show that the brain implements mixed selectivity even when it would not serve this purpose. Thus, we argue that the brain also implements mixed selectivity to exploit its general benefits for reliable and efficient neural computation.
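A minimal sketch of why conjunctive coding can reduce decoding errors: under an equal firing-rate budget, a one-hot conjunctive (mixed) code places its codewords farther apart than a code with separate one-hot populations per feature, so noise must be larger to cause confusion. The rate budget, population sizes, and distance criterion below are illustrative assumptions, not the paper's analysis.

```python
from itertools import product
import math

T = 10.0  # total firing-rate budget per stimulus (arbitrary units)

def pure_code():
    """Separate one-hot populations: neurons 0-1 code feature a,
    neurons 2-3 code feature b, each active cell firing at T/2."""
    code = {}
    for a, b in product((0, 1), repeat=2):
        r = [0.0] * 4
        r[a] = T / 2
        r[2 + b] = T / 2
        code[(a, b)] = tuple(r)
    return code

def mixed_code():
    """One conjunctive (nonlinearly mixed) neuron per (a, b) pair,
    firing at the full budget T."""
    code = {}
    for i, (a, b) in enumerate(product((0, 1), repeat=2)):
        r = [0.0] * 4
        r[i] = T
        code[(a, b)] = tuple(r)
    return code

def min_distance(code):
    """Smallest Euclidean distance between any two mean codewords;
    larger separation means fewer noise-induced decoding errors."""
    words = list(code.values())
    return min(math.dist(u, v)
               for i, u in enumerate(words) for v in words[i + 1:])
```

Both codes spend T spikes per stimulus, yet the mixed code's minimum codeword separation is twice the pure code's (T*sqrt(2) versus T/sqrt(2)).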
Affiliation(s)
- W. Jeffrey Johnston
- Graduate Program in Computational Neuroscience, The University of Chicago, Chicago, Illinois, United States of America
- Department of Neurobiology, The University of Chicago, Chicago, Illinois, United States of America
- Stephanie E. Palmer
- Graduate Program in Computational Neuroscience, The University of Chicago, Chicago, Illinois, United States of America
- Grossman Institute for Neuroscience, Quantitative Biology, and Human Behavior, The University of Chicago, Chicago, Illinois, United States of America
- Department of Organismal Biology and Anatomy, The University of Chicago, Chicago, Illinois, United States of America
- Department of Physics, The University of Chicago, Chicago, Illinois, United States of America
- David J. Freedman
- Graduate Program in Computational Neuroscience, The University of Chicago, Chicago, Illinois, United States of America
- Department of Neurobiology, The University of Chicago, Chicago, Illinois, United States of America
- Grossman Institute for Neuroscience, Quantitative Biology, and Human Behavior, The University of Chicago, Chicago, Illinois, United States of America
3
Curto C, Gross E, Jeffries J, Morrison K, Rosen Z, Shiu A, Youngs N. Algebraic signatures of convex and non-convex codes. J Pure Appl Algebra 2019; 223:3919-3940. PMID: 31534273. PMCID: PMC6750060. DOI: 10.1016/j.jpaa.2018.12.012.
Abstract
A convex code is a binary code generated by the pattern of intersections of a collection of open convex sets in some Euclidean space. Convex codes are relevant to neuroscience as they arise from the activity of neurons that have convex receptive fields. In this paper, we use algebraic methods to determine if a code is convex. Specifically, we use the neural ideal of a code, which is a generalization of the Stanley-Reisner ideal. Using the neural ideal together with its standard generating set, the canonical form, we provide algebraic signatures of certain families of codes that are non-convex. We connect these signatures to the precise conditions on the arrangement of sets that prevent the codes from being convex. Finally, we also provide algebraic signatures for some families of codes that are convex, including the class of intersection-complete codes. These results allow us to detect convexity and non-convexity in a variety of situations, and point to some interesting open questions.
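As a small illustration of one family mentioned above, a code is intersection-complete when the intersection of any two codewords is again a codeword; this property can be checked directly. The toy codes below are invented examples, not taken from the paper.

```python
from itertools import combinations

def is_intersection_complete(code):
    """code: a set of codewords, each a frozenset of active neurons.
    Checks that the intersection of every pair of codewords is
    itself a codeword."""
    return all((u & v) in code for u, v in combinations(code, 2))

# Invented toy codes on neurons {1, 2, 3}.
good = {frozenset(), frozenset({1}), frozenset({2}),
        frozenset({1, 2}), frozenset({1, 3})}
bad = {frozenset({1, 2}), frozenset({2, 3})}   # {1,2} & {2,3} = {2} is missing
```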
Affiliation(s)
- Carina Curto
- Department of Mathematics, The Pennsylvania State University, University Park, PA 16802
- Elizabeth Gross
- Department of Mathematics, University of Hawai’i at Mānoa, Honolulu, HI 96822
- Jack Jeffries
- Department of Mathematics, University of Michigan, Ann Arbor, MI 48109
- Katherine Morrison
- School of Mathematical Sciences, University of Northern Colorado, Greeley, CO 80639
- Zvi Rosen
- Department of Mathematical Sciences, Florida Atlantic University, Boca Raton, FL 33431
- Anne Shiu
- Department of Mathematics, Texas A&M University, College Station, TX 77843
- Nora Youngs
- Department of Mathematics and Statistics, Colby College, Waterville, Maine 04901
4
Schwartz DM, Koyluoglu OO. On the Organization of Grid and Place Cells: Neural Denoising via Subspace Learning. Neural Comput 2019; 31:1519-1550. PMID: 31260389. DOI: 10.1162/neco_a_01208.
Abstract
Place cells in the hippocampus (HC) are active when an animal visits a certain location (referred to as a place field) within an environment. Grid cells in the medial entorhinal cortex (MEC) respond at multiple locations, with firing fields that form a periodic and hexagonal tiling of the environment. The joint activity of grid and place cell populations, as a function of location, forms a neural code for space. In this article, we develop an understanding of the relationships between coding-theoretically relevant properties of the combined activity of these populations and how these properties limit the robustness of this representation to noise-induced interference. These relationships are revisited by measuring the performance of biologically realizable algorithms implemented by networks of place and grid cell populations, as well as constraint neurons, which perform denoising operations. Contributions of this work include the investigation of coding-theoretic limitations of the mammalian neural code for location and how communication between grid and place cell networks may improve the accuracy of each population's representation. Simulations demonstrate that the denoising mechanisms analyzed here can significantly improve the fidelity of this neural representation of space. Furthermore, patterns observed in the connectivity of each population of simulated cells predict that anti-Hebbian learning drives decreases in inter-HC-MEC connectivity along the dorsoventral axis.
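One coding-theoretic intuition behind grid codes (a standard simplification, not this paper's subspace-learning model) is that modules with coprime periods act like a residue number system: each module reports position modulo its period, and the combination identifies position over a much larger range.

```python
from math import prod

# Idealized grid modules with pairwise-coprime spatial periods
# (integer periods chosen for illustration; real grid periods are not integers).
PERIODS = (3, 4, 5)

def grid_encode(x):
    """Each module reports the animal's position modulo its period."""
    return tuple(x % p for p in PERIODS)

def grid_decode(residues):
    """Chinese-remainder-style decoding by brute force over the
    unambiguous range prod(PERIODS) = 60 positions."""
    for x in range(prod(PERIODS)):
        if grid_encode(x) == tuple(residues):
            return x
    raise ValueError("inconsistent residues")
```

Inconsistent residue tuples (ones no position produces) are detectable, which is the hook for the denoising perspective the paper develops.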
Affiliation(s)
- David M Schwartz
- Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ 85719, U.S.A.
- O Ozan Koyluoglu
- Department of Electrical Engineering and Computer Science, University of California, Berkeley, Berkeley, CA 94720, U.S.A.
5
Hillar CJ, Tran NM. Robust Exponential Memory in Hopfield Networks. J Math Neurosci 2018; 8:1. PMID: 29340803. PMCID: PMC5770423. DOI: 10.1186/s13408-017-0056-2. Received 01/20/2017; accepted 11/22/2017.
Abstract
The Hopfield recurrent neural network is a classical auto-associative model of memory, in which collections of symmetrically coupled McCulloch-Pitts binary neurons interact to perform emergent computation. Although previous researchers have explored the potential of this network to solve combinatorial optimization problems or store reoccurring activity patterns as attractors of its deterministic dynamics, a basic open problem is to design a family of Hopfield networks with a number of noise-tolerant memories that grows exponentially with neural population size. Here, we discover such networks by minimizing probability flow, a recently proposed objective for estimating parameters in discrete maximum entropy models. By descending the gradient of the convex probability flow, our networks adapt synaptic weights to achieve robust exponential storage, even when presented with vanishingly small numbers of training patterns. In addition to providing a new set of low-density error-correcting codes that achieve Shannon's noisy channel bound, these networks also efficiently solve a variant of the hidden clique problem in computer science, opening new avenues for real-world applications of computational models originating from biology.
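For context, the classical Hopfield setup the paper builds on can be sketched as follows. Note that this uses the textbook Hebbian outer-product rule rather than the probability-flow minimization the authors actually employ, and the stored patterns are illustrative.

```python
import numpy as np

def train_hebbian(patterns):
    """Textbook outer-product (Hebbian) weights; the paper instead
    fits weights by minimizing probability flow."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)    # no self-coupling
    return W / n

def recall(W, state, steps=20):
    """Synchronous +/-1 updates until a fixed point (attractor)."""
    s = state.copy()
    for _ in range(steps):
        new = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(new, s):
            break
        s = new
    return s

# Two orthogonal memories on 8 McCulloch-Pitts neurons (illustrative).
PATTERNS = np.array([[1, 1, 1, 1, -1, -1, -1, -1],
                     [1, -1, 1, -1, 1, -1, 1, -1]])
W = train_hebbian(PATTERNS)
probe = PATTERNS[0].copy()
probe[0] *= -1                  # corrupt one bit
restored = recall(W, probe)     # dynamics fall back into the stored attractor
```

The Hebbian rule stores only on the order of n/log(n) patterns; the exponential-capacity result above comes from the very different probability-flow objective.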
6
Loback A, Prentice J, Ioffe M, Berry II M. Noise-Robust Modes of the Retinal Population Code Have the Geometry of "Ridges" and Correspond to Neuronal Communities. Neural Comput 2017; 29:3119-3180. PMID: 28957022. DOI: 10.1162/neco_a_01011.
Abstract
An appealing new principle for neural population codes is that correlations among neurons organize neural activity patterns into a discrete set of clusters, which can each be viewed as a noise-robust population codeword. Previous studies assumed that these codewords corresponded geometrically with local peaks in the probability landscape of neural population responses. Here, we analyze multiple data sets of the responses of approximately 150 retinal ganglion cells and show that local probability peaks are absent under broad, nonrepeated stimulus ensembles, which are characteristic of natural behavior. However, we find that neural activity still forms noise-robust clusters in this regime, albeit clusters with a different geometry. We start by defining a soft local maximum, which is a local probability maximum when constrained to a fixed spike count. Next, we show that soft local maxima are robustly present and can, moreover, be linked across different spike count levels in the probability landscape to form a ridge. We found that these ridges comprise combinations of spiking and silence in the neural population such that all of the spiking neurons are members of the same neuronal community, a notion from network theory. We argue that a neuronal community shares many of the properties of Donald Hebb's classic cell assembly and show that a simple, biologically plausible decoding algorithm can recognize the presence of a specific neuronal community.
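The notion of a soft local maximum can be made concrete with a toy probability landscape: a pattern is a soft local maximum if it is more probable than every pattern obtained by relocating a single spike, which preserves the spike count. The probabilities below are invented for illustration.

```python
# Invented probability landscape over binary patterns of 3 neurons.
PROB = {(1, 1, 0): 0.30, (1, 0, 1): 0.10, (0, 1, 1): 0.05,
        (1, 0, 0): 0.20, (0, 1, 0): 0.15, (0, 0, 1): 0.18}

def moves(p):
    """Patterns reachable by relocating one spike (same spike count)."""
    out = []
    for i in range(len(p)):
        for j in range(len(p)):
            if p[i] == 1 and p[j] == 0:
                q = list(p)
                q[i], q[j] = 0, 1
                out.append(tuple(q))
    return out

def soft_local_maxima(prob):
    """Patterns strictly more probable than every same-count neighbor."""
    return [p for p in prob
            if all(prob.get(q, 0.0) < prob[p] for q in moves(p))]
```

Linking such maxima across adjacent spike-count levels is what yields the "ridges" of the paper; that step is omitted here.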
Affiliation(s)
- Adrianna Loback
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, U.S.A.
- Jason Prentice
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, U.S.A.
- Mark Ioffe
- Physics Department, Princeton University, Princeton, NJ 08544, U.S.A.
- Michael Berry II
- Princeton Neuroscience Institute and Molecular Biology Department, Princeton University, Princeton, NJ 08544, U.S.A.
7
Severa W, Parekh O, James CD, Aimone JB. A Combinatorial Model for Dentate Gyrus Sparse Coding. Neural Comput 2017; 29:94-117. DOI: 10.1162/neco_a_00905.
Abstract
The dentate gyrus (DG) forms a critical link between the entorhinal cortex and CA3 by providing a sparse version of the signal. Concurrent with this increase in sparsity, a widely accepted theory suggests the dentate gyrus performs pattern separation: similar inputs yield decorrelated outputs. Although an active area of study and theory, few logically rigorous arguments detail the DG's coding. We suggest a theoretically tractable, combinatorial model for this action. The model provides formal methods for generating a highly redundant, arbitrarily sparse, and decorrelated output signal. To explore the value of this model framework, we assess how suitable it is for two notable aspects of DG coding: how it can handle the highly structured grid cell representation in the input entorhinal cortex region and the presence of adult neurogenesis, which has been proposed to produce a heterogeneous code in the DG. We find that tailoring the model to grid cell input yields expansion parameters consistent with the literature. In addition, the heterogeneous coding reflects the activity gradation observed experimentally. Finally, we connect this approach with more conventional binary threshold neural circuit models via a formal embedding.
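A common cartoon of this expansion-and-sparsification step (an illustrative stand-in, not the paper's combinatorial construction) is a random projection from entorhinal cortex onto a much larger granule-cell layer, followed by winner-take-all selection of the most strongly driven cells:

```python
import random

random.seed(0)                     # reproducible toy connectivity
N_EC, N_DG, K = 20, 200, 4         # input cells, granule cells, winners

# Sparse random binary synapses from entorhinal cortex to dentate gyrus.
SYNAPSES = [[1 if random.random() < 0.2 else 0 for _ in range(N_EC)]
            for _ in range(N_DG)]

def dg_encode(ec_activity):
    """Expansion plus winner-take-all: only the K most strongly
    driven granule cells fire, giving a 2% sparse output code."""
    drive = [sum(w * x for w, x in zip(row, ec_activity)) for row in SYNAPSES]
    winners = sorted(range(N_DG), key=lambda i: drive[i], reverse=True)[:K]
    return set(winners)
```

All sizes and the 20% connection probability are arbitrary choices for the sketch.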
Affiliation(s)
- William Severa
- Center for Computing Research, Sandia National Laboratories, Albuquerque, NM 87185, U.S.A
- Ojas Parekh
- Center for Computing Research, Sandia National Laboratories, Albuquerque, NM 87185, U.S.A
- Conrad D. James
- Center for Computing Research, Sandia National Laboratories, Albuquerque, NM 87185, U.S.A
- James B. Aimone
- Center for Computing Research, Sandia National Laboratories, Albuquerque, NM 87185, U.S.A
8
Curto C, Morrison K. Pattern Completion in Symmetric Threshold-Linear Networks. Neural Comput 2016; 28:2825-2852. PMID: 27391688. DOI: 10.1162/neco_a_00869.
Abstract
Threshold-linear networks are a common class of firing rate models that describe recurrent interactions among neurons. Unlike their linear counterparts, these networks generically possess multiple stable fixed points (steady states), making them viable candidates for memory encoding and retrieval. In this work, we characterize stable fixed points of general threshold-linear networks with constant external drive and discover constraints on the coexistence of fixed points involving different subsets of active neurons. In the case of symmetric networks, we prove the following antichain property: if a set of neurons σ is the support of a stable fixed point, then no proper subset or superset of σ can support a stable fixed point. Symmetric threshold-linear networks thus appear to be well suited for pattern completion, since the dynamics are guaranteed not to get stuck in a subset or superset of a stored pattern. We also show that for any graph G, we can construct a network whose stable fixed points correspond precisely to the maximal cliques of G. As an application, we design network decoders for place field codes and demonstrate their efficacy for error correction and pattern completion. The proofs of our main results build on the theory of permitted sets in threshold-linear networks, including recently developed connections to classical distance geometry.
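The graph result above is easy to illustrate: for a small graph, the predicted stable fixed-point supports are its maximal cliques, which also form an antichain (no clique contains another). The example graph below is invented; constructing the actual network weights follows the paper's recipe, which is not reproduced here.

```python
from itertools import combinations

# Invented graph: two triangles sharing vertex 2.
EDGES = {frozenset(e) for e in [(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4)]}
VERTICES = range(5)

def is_clique(S):
    """Every pair of vertices in S is joined by an edge."""
    return all(frozenset((u, v)) in EDGES for u, v in combinations(S, 2))

def maximal_cliques():
    """Brute-force enumeration; per the paper, these sets are exactly
    the stable fixed-point supports of the constructed network, and
    they form an antichain (none contains another)."""
    cliques = [set(S) for r in range(1, len(VERTICES) + 1)
               for S in combinations(VERTICES, r) if is_clique(S)]
    return [C for C in cliques if not any(C < D for D in cliques)]
```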
Affiliation(s)
- Carina Curto
- Department of Mathematics, Pennsylvania State University, University Park, PA 16802, U.S.A.
- Katherine Morrison
- Department of Mathematics, Pennsylvania State University, University Park, PA 16802, U.S.A., and School of Mathematical Sciences, University of Northern Colorado, Greeley, CO 80639, U.S.A.
9
Ganmor E, Segev R, Schneidman E. A thesaurus for a neural population code. eLife 2015; 4. PMID: 26347983. PMCID: PMC4562117. DOI: 10.7554/elife.06134. Received 12/22/2014; accepted 08/02/2015.
Abstract
Information is carried in the brain by the joint spiking patterns of large groups of noisy, unreliable neurons. This noise limits the capacity of the neural code and determines how information can be transmitted and read out. To accurately decode, the brain must overcome this noise and identify which patterns are semantically similar. We use models of network encoding noise to learn a thesaurus for populations of neurons in the vertebrate retina responding to artificial and natural videos, measuring the similarity between population responses to visual stimuli based on the information they carry. This thesaurus reveals that the code is organized in clusters of synonymous activity patterns that are similar in meaning but may differ considerably in their structure. This organization is highly reminiscent of the design of engineered codes. We suggest that the brain may use this structure and show how it allows accurate decoding of novel stimuli from novel spiking patterns.

Our ability to perceive the world is dependent on information from our senses being passed between different parts of the brain. The information is encoded as patterns of electrical pulses or 'spikes', which other brain regions must be able to decipher. Cracking this code would thus enable us to predict the patterns of nerve impulses that would occur in response to specific stimuli, and 'decode' which stimuli had produced particular patterns of impulses. This task is challenging in part because of its scale: vast numbers of stimuli are encoded by huge numbers of neurons that can send their spikes in many different combinations. Furthermore, neurons are inherently noisy and their response to identical stimuli may vary considerably in the number of spikes and their timing. This means that the brain cannot simply link a single unchanging pattern of firing with each stimulus, because these firing patterns are often distorted by biophysical noise.
Ganmor et al. have now modeled the effects of noise in a network of neurons in the retina (found at the back of the eye), and, in doing so, have provided insights into how the brain solves this problem. This has brought us a step closer to cracking the neural code. First, 10-second video clips of natural scenes and artificial stimuli were played on a loop to a sample of retina taken from a salamander, and the responses of nearly 100 neurons in the sample were recorded for two hours. Dividing the 10-second clip into short segments provided a series of 500 stimuli, which the network had been exposed to more than 600 times. Ganmor et al. analyzed the responses of groups of 20 cells to each stimulus and found that physically similar firing patterns were not particularly likely to encode the same stimulus. This can be likened to the way that words such as 'light' and 'night' have similar structures but different meanings. Instead, the model reveals that each stimulus was represented by a cluster of firing patterns that bore little physical resemblance to one another, but which nevertheless conveyed the same meaning. To continue with the previous example, this is similar to the way that 'light' and 'illumination' have the same meaning but different structures. Ganmor et al. use these new data to map the organization of the 'vocabulary' of populations of cells in the retina, and put together a kind of 'thesaurus' that enables new activity patterns of the retina to be decoded and could be used to crack the neural code. Furthermore, the organization of 'synonyms' is strikingly similar to codes that are favored in many forms of telecommunication. In these man-made codes, codewords that represent different items are chosen to be so distinct from each other that, even if they were corrupted by noise, they could still be correctly deciphered.
Correspondingly, in the retina, patterns that carry the same meaning occupy a distinct area, and new patterns can be interpreted based on their proximity to these clusters.
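The core idea of the thesaurus, that 'meaning' is a posterior over stimuli rather than a pattern's physical shape, can be sketched with an invented two-stimulus noise model: patterns at Hamming distance 2 can be synonyms while patterns at distance 1 are not.

```python
# Invented noise model P(response | stimulus) for two stimuli and
# four response patterns of three neurons (not the paper's fitted model).
LIKELIHOOD = {
    "s1": {(1, 1, 0): 0.50, (0, 1, 1): 0.40, (1, 0, 0): 0.05, (0, 0, 1): 0.05},
    "s2": {(1, 1, 0): 0.05, (0, 1, 1): 0.05, (1, 0, 0): 0.50, (0, 0, 1): 0.40},
}

def posterior(r):
    """P(stimulus | response) under a uniform prior over stimuli."""
    joint = {s: LIKELIHOOD[s][r] for s in LIKELIHOOD}
    z = sum(joint.values())
    return {s: p / z for s, p in joint.items()}

def are_synonyms(r1, r2, tol=0.1):
    """Two patterns mean the same thing if their posteriors nearly agree."""
    p1, p2 = posterior(r1), posterior(r2)
    return max(abs(p1[s] - p2[s]) for s in p1) < tol
```

Here (1, 1, 0) and (0, 1, 1) differ in two neurons yet are synonyms, while (1, 1, 0) and (1, 0, 0) differ in only one neuron yet carry different meanings.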
Affiliation(s)
- Elad Ganmor
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
- Ronen Segev
- Department of Life Sciences, Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Elad Schneidman
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
10
Curto C, Degeratu A, Itskov V. Encoding binary neural codes in networks of threshold-linear neurons. Neural Comput 2013; 25:2858-2903. PMID: 23895048. DOI: 10.1162/neco_a_00504.
Abstract
Networks of neurons in the brain encode preferred patterns of neural activity via their synaptic connections. Despite receiving considerable attention, the precise relationship between network connectivity and encoded patterns is still poorly understood. Here we consider this problem for networks of threshold-linear neurons whose computational function is to learn and store a set of binary patterns (e.g., a neural code) as "permitted sets" of the network. We introduce a simple encoding rule that selectively turns "on" synapses between neurons that coappear in one or more patterns. The rule uses synapses that are binary, in the sense of having only two states ("on" or "off"), but also heterogeneous, with weights drawn from an underlying synaptic strength matrix S. Our main results precisely describe the stored patterns that result from the encoding rule, including unintended "spurious" states, and give an explicit characterization of the dependence on S. In particular, we find that binary patterns are successfully stored in these networks when the excitatory connections between neurons are geometrically balanced, i.e., they satisfy a set of geometric constraints. Furthermore, we find that certain types of neural codes are natural in the context of these networks, meaning that the full code can be accurately learned from a highly undersampled set of patterns. Interestingly, many commonly observed neural codes in cortical and hippocampal areas are natural in this sense. As an application, we construct networks that encode hippocampal place field codes nearly exactly, following presentation of only a small fraction of patterns. To obtain our results, we prove new theorems using classical ideas from convex and distance geometry, such as Cayley-Menger determinants, revealing a novel connection between these areas of mathematics and coding properties of neural networks.
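The encoding rule itself is simple to state in code: a synapse (i, j) is switched "on" exactly when neurons i and j co-appear in some stored pattern, with its weight drawn from the strength matrix S. The particular S and patterns below are illustrative choices, not from the paper.

```python
N = 4
PATTERNS = [{0, 1, 2}, {2, 3}]          # stored binary codewords, as supports
S = [[0.0, 0.7, 0.2, 0.9],              # illustrative symmetric strength matrix
     [0.7, 0.0, 0.5, 0.3],
     [0.2, 0.5, 0.0, 0.6],
     [0.9, 0.3, 0.6, 0.0]]

def encode(patterns):
    """W[i][j] = S[i][j] if neurons i and j co-appear in some pattern,
    and 0 ("off") otherwise: binary in state, heterogeneous in weight."""
    W = [[0.0] * N for _ in range(N)]
    for p in patterns:
        for i in p:
            for j in p:
                if i != j:
                    W[i][j] = S[i][j]
    return W
```

Neurons 0 and 3 never co-appear, so W[0][3] stays off even though S[0][3] is the largest strength.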
Affiliation(s)
- Carina Curto
- Department of Mathematics, University of Nebraska-Lincoln, Lincoln, NE 68588, U.S.A.