1
Ratzon A, Derdikman D, Barak O. Representational drift as a result of implicit regularization. eLife 2024; 12:RP90069. [PMID: 38695551] [PMCID: PMC11065423] [DOI: 10.7554/elife.90069]
Abstract
Recent studies show that, even in constant environments, the tuning of single neurons changes over time in a variety of brain regions. This representational drift has been suggested to be a consequence of continuous learning under noise, but its properties are still not fully understood. To investigate the underlying mechanism, we trained an artificial network on a simplified navigational task. The network quickly reached a state of high performance, and many units exhibited spatial tuning. We then continued training the network and noticed that the activity became sparser with time. Initial learning was orders of magnitude faster than ensuing sparsification. This sparsification is consistent with recent results in machine learning, in which networks slowly move within their solution space until they reach a flat area of the loss function. We analyzed four datasets from different labs, all demonstrating that CA1 neurons become sparser and more spatially informative with exposure to the same environment. We conclude that learning is divided into three overlapping phases: (i) Fast familiarity with the environment; (ii) slow implicit regularization; and (iii) a steady state of null drift. The variability in drift dynamics opens the possibility of inferring learning algorithms from observations of drift statistics.
Affiliation(s)
- Aviv Ratzon
- Rappaport Faculty of Medicine, Technion - Israel Institute of Technology, Haifa, Israel
- Network Biology Research Laboratory, Technion - Israel Institute of Technology, Haifa, Israel
- Dori Derdikman
- Rappaport Faculty of Medicine, Technion - Israel Institute of Technology, Haifa, Israel
- Omri Barak
- Rappaport Faculty of Medicine, Technion - Israel Institute of Technology, Haifa, Israel
- Network Biology Research Laboratory, Technion - Israel Institute of Technology, Haifa, Israel
2
Naud R, Longtin A. Connecting levels of analysis in the computational era. J Physiol 2024; 602:417-420. [PMID: 38071740] [DOI: 10.1113/jp286013]
Affiliation(s)
- Richard Naud
- Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON, Canada
- Department of Physics, University of Ottawa, Ottawa, ON, Canada
- Center for Neural Dynamics, University of Ottawa, Ottawa, ON, Canada
- Brain and Mind Research Institute, University of Ottawa, Ottawa, ON, Canada
- André Longtin
- Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON, Canada
- Department of Physics, University of Ottawa, Ottawa, ON, Canada
- Center for Neural Dynamics, University of Ottawa, Ottawa, ON, Canada
- Brain and Mind Research Institute, University of Ottawa, Ottawa, ON, Canada
3
Erratum: Covariance properties under natural image transformations for the generalised Gaussian derivative model for visual receptive fields. Front Comput Neurosci 2023; 17:1282093. [PMID: 37727152] [PMCID: PMC10505710] [DOI: 10.3389/fncom.2023.1282093]
Abstract
[This corrects the article DOI: 10.3389/fncom.2023.1189949.].
4
Lindeberg T. Covariance properties under natural image transformations for the generalised Gaussian derivative model for visual receptive fields. Front Comput Neurosci 2023; 17:1189949. [PMID: 37398936] [PMCID: PMC10311448] [DOI: 10.3389/fncom.2023.1189949]
Abstract
The property of covariance, also referred to as equivariance, means that an image operator is well-behaved under image transformations: applying the operator to a transformed input image gives essentially the same result as applying the image transformation to the output of the operator on the original image. This paper presents a theory of geometric covariance properties in vision, developed for a generalised Gaussian derivative model of receptive fields in the primary visual cortex and the lateral geniculate nucleus, which, in turn, enables geometric invariance properties at higher levels in the visual hierarchy. It is shown how the studied generalised Gaussian derivative model for visual receptive fields obeys true covariance properties under spatial scaling transformations, spatial affine transformations, Galilean transformations and temporal scaling transformations. These covariance properties imply that a vision system, based on image and video measurements in terms of the receptive fields according to the generalised Gaussian derivative model, can, to first order of approximation, handle the image and video deformations between multiple views of objects delimited by smooth surfaces, as well as between multiple views of spatio-temporal events, under varying relative motions between the objects and events in the world and the observer. We conclude by describing implications of the presented theory for biological vision, regarding connections between the variabilities of the shapes of biological visual receptive fields and the variabilities of spatial and spatio-temporal image structures under natural image transformations.
Specifically, we formulate experimentally testable biological hypotheses as well as needs for measuring population statistics of receptive field characteristics, originating from predictions from the presented theory, concerning the extent to which the shapes of the biological receptive fields in the primary visual cortex span the variabilities of spatial and spatio-temporal image structures induced by natural image transformations, based on geometric covariance properties.
Collapse
Affiliation(s)
- Tony Lindeberg
- Computational Brain Science Lab, Division of Computational Science and Technology, KTH Royal Institute of Technology, Stockholm, Sweden
5
DiTullio RW, Parthiban C, Piasini E, Chaudhari P, Balasubramanian V, Cohen YE. Time as a supervisor: temporal regularity and auditory object learning. Front Comput Neurosci 2023; 17:1150300. [PMID: 37216064] [PMCID: PMC10192587] [DOI: 10.3389/fncom.2023.1150300]
Abstract
Sensory systems appear to learn to transform incoming sensory information into perceptual representations, or "objects," that can inform and guide behavior with minimal explicit supervision. Here, we propose that the auditory system can achieve this goal by using time as a supervisor, i.e., by learning features of a stimulus that are temporally regular. We will show that this procedure generates a feature space sufficient to support fundamental computations of auditory perception. In detail, we consider the problem of discriminating between instances of a prototypical class of natural auditory objects, i.e., rhesus macaque vocalizations. We test discrimination in two ethologically relevant tasks: discrimination in a cluttered acoustic background and generalization to discriminate between novel exemplars. We show that an algorithm that learns these temporally regular features affords better or equivalent discrimination and generalization than conventional feature-selection algorithms, i.e., principal component analysis and independent component analysis. Our findings suggest that the slow temporal features of auditory stimuli may be sufficient for parsing auditory scenes and that the auditory brain could utilize these slowly changing temporal features.
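The slowness principle summarized in this abstract can be illustrated with a minimal linear sketch. The code below is generic slow feature analysis, not the authors' algorithm, and the toy signal and mixing matrix are invented for illustration: it whitens a multivariate signal and returns the projection whose output varies most slowly in time.

```python
import numpy as np

def slowest_feature(x):
    """Return the unit-variance linear projection of x (time x dims) whose
    output changes most slowly in time (smallest mean squared derivative)."""
    x = x - x.mean(axis=0)
    # Whiten so that every projection has unit variance.
    evals, evecs = np.linalg.eigh(np.cov(x, rowvar=False))
    white = x @ evecs @ np.diag(1.0 / np.sqrt(evals))
    # The eigenvector of the derivative covariance with the smallest
    # eigenvalue gives the most temporally regular feature.
    dvals, dvecs = np.linalg.eigh(np.cov(np.diff(white, axis=0), rowvar=False))
    return white @ dvecs[:, 0]

# Toy demo: a slow sinusoid mixed with a fast one; the slow source is recovered.
t = np.linspace(0, 10, 2000)
sources = np.stack([np.sin(2 * np.pi * 0.2 * t), np.sin(2 * np.pi * 40 * t)], axis=1)
mixed = sources @ np.random.default_rng(0).normal(size=(2, 2))
y = slowest_feature(mixed)
print(abs(np.corrcoef(y, sources[:, 0])[0, 1]))  # close to 1
```

Principal component analysis, by contrast, would select the direction of maximal variance, which here is set by the arbitrary mixing matrix rather than by temporal regularity.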
Affiliation(s)
- Ronald W. DiTullio
- David Rittenhouse Laboratory, Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA, United States
- Neuroscience Graduate Group, University of Pennsylvania, Philadelphia, PA, United States
- Computational Neuroscience Initiative, University of Pennsylvania, Philadelphia, PA, United States
- Chetan Parthiban
- David Rittenhouse Laboratory, Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA, United States
- Eugenio Piasini
- Computational Neuroscience Initiative, University of Pennsylvania, Philadelphia, PA, United States
- Scuola Internazionale Superiore di Studi Avanzati (SISSA), Trieste, Italy
- Pratik Chaudhari
- Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA, United States
- Vijay Balasubramanian
- David Rittenhouse Laboratory, Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA, United States
- Computational Neuroscience Initiative, University of Pennsylvania, Philadelphia, PA, United States
- Santa Fe Institute, Santa Fe, NM, United States
- Yale E. Cohen
- Departments of Otorhinolaryngology, Neuroscience, and Bioengineering, University of Pennsylvania, Philadelphia, PA, United States
6
Zajzon B, Dahmen D, Morrison A, Duarte R. Signal denoising through topographic modularity of neural circuits. eLife 2023; 12:77009. [PMID: 36700545] [PMCID: PMC9981157] [DOI: 10.7554/elife.77009]
Abstract
Information from the sensory periphery is conveyed to the cortex via structured projection pathways that spatially segregate stimulus features, providing a robust and efficient encoding strategy. Beyond sensory encoding, this prominent anatomical feature extends throughout the neocortex. However, the extent to which it influences cortical processing is unclear. In this study, we combine cortical circuit modeling with network theory to demonstrate that the sharpness of topographic projections acts as a bifurcation parameter, controlling the macroscopic dynamics and representational precision across a modular network. By shifting the balance of excitation and inhibition, topographic modularity gradually increases task performance and improves the signal-to-noise ratio across the system. We demonstrate that in biologically constrained networks, such a denoising behavior is contingent on recurrent inhibition. We show that this is a robust and generic structural feature that enables a broad range of behaviorally relevant operating regimes, and provide an in-depth theoretical analysis unraveling the dynamical principles underlying the mechanism.
Affiliation(s)
- Barna Zajzon
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, RWTH Aachen University, Aachen, Germany
- David Dahmen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Abigail Morrison
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Department of Computer Science 3 - Software Engineering, RWTH Aachen University, Aachen, Germany
- Renato Duarte
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Donders Institute for Brain, Cognition and Behavior, Radboud University Nijmegen, Nijmegen, Netherlands
7
Bittner SR, Palmigiano A, Piet AT, Duan CA, Brody CD, Miller KD, Cunningham J. Interrogating theoretical models of neural computation with emergent property inference. eLife 2021; 10:e56265. [PMID: 34323690] [PMCID: PMC8321557] [DOI: 10.7554/elife.56265]
Abstract
A cornerstone of theoretical neuroscience is the circuit model: a system of equations that captures a hypothesized neural mechanism. Such models are valuable when they give rise to an experimentally observed phenomenon -- whether behavioral or a pattern of neural activity -- and thus can offer insights into neural computation. The operation of these circuits, like all models, critically depends on the choice of model parameters. A key step is then to identify the model parameters consistent with observed phenomena: to solve the inverse problem. In this work, we present a novel technique, emergent property inference (EPI), that brings the modern probabilistic modeling toolkit to theoretical neuroscience. When theorizing circuit models, theoreticians predominantly focus on reproducing computational properties rather than a particular dataset. Our method uses deep neural networks to learn parameter distributions with these computational properties. This methodology is introduced through a motivational example of parameter inference in the stomatogastric ganglion. EPI is then shown to allow precise control over the behavior of inferred parameters and to scale in parameter dimension better than alternative techniques. In the remainder of this work, we present novel theoretical findings in models of primary visual cortex and superior colliculus, which were gained through the examination of complex parametric structure captured by EPI. Beyond its scientific contribution, this work illustrates the variety of analyses possible once deep learning is harnessed towards solving theoretical inverse problems.
Affiliation(s)
- Sean R Bittner
- Department of Neuroscience, Columbia University, New York, United States
- Alex T Piet
- Princeton Neuroscience Institute, Princeton, United States
- Princeton University, Princeton, United States
- Allen Institute for Brain Science, Seattle, United States
- Chunyu A Duan
- Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
- Carlos D Brody
- Princeton Neuroscience Institute, Princeton, United States
- Princeton University, Princeton, United States
- Howard Hughes Medical Institute, Chevy Chase, United States
- Kenneth D Miller
- Department of Neuroscience, Columbia University, New York, United States
- John Cunningham
- Department of Statistics, Columbia University, New York, United States
8
Fagerholm ED, Foulkes WMC, Gallero-Salas Y, Helmchen F, Friston KJ, Leech R, Moran RJ. Neural Systems Under Change of Scale. Front Comput Neurosci 2021; 15:643148. [PMID: 33967728] [PMCID: PMC8099030] [DOI: 10.3389/fncom.2021.643148]
Abstract
We derive a theoretical construct that allows for the characterisation of both scalable and scale free systems within the dynamic causal modelling (DCM) framework. We define a dynamical system to be "scalable" if the same equation of motion continues to apply as the system changes in size. As an example of such a system, we simulate planetary orbits varying in size and show that our proposed methodology can be used to recover Kepler's third law from the timeseries. In contrast, a "scale free" system is one in which there is no characteristic length scale, meaning that images of such a system are statistically unchanged at different levels of magnification. As an example of such a system, we use calcium imaging collected in murine cortex and show that the dynamical critical exponent, as defined in renormalization group theory, can be estimated in an empirical biological setting. We find that a task-relevant region of the cortex is associated with higher dynamical critical exponents in task vs. spontaneous states and vice versa for a task-irrelevant region.
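The notion of a scalable system can be illustrated with the paper's own example, Kepler's third law. The following toy check is not the DCM-based estimation procedure used in the study; it only verifies, in arbitrary units, that the period-radius exponent 3/2 can be recovered from simulated circular orbits of varying size.

```python
import numpy as np

# Illustrative check of Kepler's third law (T^2 proportional to a^3) from
# simulated orbits, in the spirit of the scalability test described above.
G_M = 1.0  # gravitational parameter, arbitrary units
radii = np.array([1.0, 2.0, 4.0, 8.0])
periods = 2 * np.pi * np.sqrt(radii**3 / G_M)  # circular-orbit periods

# Recover the exponent from the (radius, period) pairs on log-log axes.
slope = np.polyfit(np.log(radii), np.log(periods), 1)[0]
print(round(slope, 3))  # 1.5
```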
Affiliation(s)
- Erik D. Fagerholm
- Department of Neuroimaging, King's College London, London, United Kingdom
- W. M. C. Foulkes
- Department of Physics, Imperial College London, London, United Kingdom
- Yasir Gallero-Salas
- Brain Research Institute, University of Zürich, Zurich, Switzerland
- Neuroscience Center Zurich, Zurich, Switzerland
- Fritjof Helmchen
- Brain Research Institute, University of Zürich, Zurich, Switzerland
- Neuroscience Center Zurich, Zurich, Switzerland
- Karl J. Friston
- Wellcome Centre for Human Neuroimaging, University College London, London, United Kingdom
- Robert Leech
- Department of Neuroimaging, King's College London, London, United Kingdom
- Rosalyn J. Moran
- Department of Neuroimaging, King's College London, London, United Kingdom
9
Agmon H, Burak Y. A theory of joint attractor dynamics in the hippocampus and the entorhinal cortex accounts for artificial remapping and grid cell field-to-field variability. eLife 2020; 9:56894. [PMID: 32779570] [PMCID: PMC7447444] [DOI: 10.7554/elife.56894]
Abstract
The representation of position in the mammalian brain is distributed across multiple neural populations. Grid cell modules in the medial entorhinal cortex (MEC) express activity patterns that span a low-dimensional manifold which remains stable across different environments. In contrast, the activity patterns of hippocampal place cells span distinct low-dimensional manifolds in different environments. It is unknown how these multiple representations of position are coordinated. Here, we develop a theory of joint attractor dynamics in the hippocampus and the MEC. We show that the system exhibits a coordinated, joint representation of position across multiple environments, consistent with global remapping in place cells and grid cells. In addition, our model accounts for recent experimental observations that lack a mechanistic explanation: variability in the firing rate of single grid cells across firing fields, and artificial remapping of place cells under depolarization, but not under hyperpolarization, of layer II stellate cells of the MEC.
Affiliation(s)
- Haggai Agmon
- Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Yoram Burak
- Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Racah Institute of Physics, The Hebrew University of Jerusalem, Jerusalem, Israel
10
Mahrach A, Chen G, Li N, van Vreeswijk C, Hansel D. Mechanisms underlying the response of mouse cortical networks to optogenetic manipulation. eLife 2020; 9:e49967. [PMID: 31951197] [PMCID: PMC7012611] [DOI: 10.7554/elife.49967]
Abstract
GABAergic interneurons can be subdivided into three subclasses: parvalbumin positive (PV), somatostatin positive (SOM) and serotonin positive neurons. Together with principal cells (PCs), they form complex networks. We examine PC and PV responses in mouse anterior lateral motor cortex (ALM) and barrel cortex (S1) upon PV photostimulation in vivo. In ALM layer 5 and S1, the PV response is paradoxical: photoexcitation reduces their activity. This is not the case in ALM layer 2/3. We combine analytical calculations and numerical simulations to investigate how these results constrain the circuit architecture. Two-population models cannot explain the results. Four-population networks with V1-like architecture account for the data in ALM layer 2/3 and layer 5. Our data in S1 can be explained if SOM neurons receive inputs only from PCs and PV neurons. In both four-population models, the paradoxical effect implies that recurrent excitation is not too strong; it is not evidence for stabilization by inhibition.
Affiliation(s)
- Alexandre Mahrach
- CNRS-UMR 8002, Integrative Neuroscience and Cognition Center, Paris, France
- Guang Chen
- Department of Neuroscience, Baylor College of Medicine, Houston, United States
- Nuo Li
- Department of Neuroscience, Baylor College of Medicine, Houston, United States
- David Hansel
- CNRS-UMR 8002, Integrative Neuroscience and Cognition Center, Paris, France
11
Hennequin G, Ahmadian Y, Rubin DB, Lengyel M, Miller KD. The Dynamical Regime of Sensory Cortex: Stable Dynamics around a Single Stimulus-Tuned Attractor Account for Patterns of Noise Variability. Neuron 2019; 98:846-860.e5. [PMID: 29772203] [PMCID: PMC5971207] [DOI: 10.1016/j.neuron.2018.04.017]
Abstract
Correlated variability in cortical activity is ubiquitously quenched following stimulus onset, in a stimulus-dependent manner. These modulations have been attributed to circuit dynamics involving either multiple stable states (“attractors”) or chaotic activity. Here we show that a qualitatively different dynamical regime, involving fluctuations about a single, stimulus-driven attractor in a loosely balanced excitatory-inhibitory network (the stochastic “stabilized supralinear network”), best explains these modulations. Given the supralinear input/output functions of cortical neurons, increased stimulus drive strengthens effective network connectivity. This shifts the balance from interactions that amplify variability to suppressive inhibitory feedback, quenching correlated variability around more strongly driven steady states. Comparing to previously published and original data analyses, we show that this mechanism, unlike previous proposals, uniquely accounts for the spatial patterns and fast temporal dynamics of variability suppression. Specifying the cortical operating regime is key to understanding the computations underlying perception.
Highlights
- A simple network model explains stimulus-tuning of cortical variability suppression
- Inhibition stabilizes recurrently interacting neurons with supralinear I/O functions
- Stimuli strengthen inhibitory stabilization around a stable state, quenching variability
- Single-trial V1 data are compatible with this model and rule out competing proposals
Affiliation(s)
- Guillaume Hennequin
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, UK
- Yashar Ahmadian
- Center for Theoretical Neuroscience, College of Physicians and Surgeons, Columbia University, New York, NY 10032, USA; Department of Neuroscience, Swartz Program in Theoretical Neuroscience, Kavli Institute for Brain Science, College of Physicians and Surgeons, Columbia University, New York, NY 10032, USA; Centre de Neurophysique, Physiologie, et Pathologie, CNRS, 75270 Paris Cedex 06, France; Institute of Neuroscience, Department of Biology and Mathematics, University of Oregon, Eugene, OR 97403, USA
- Daniel B Rubin
- Center for Theoretical Neuroscience, College of Physicians and Surgeons, Columbia University, New York, NY 10032, USA; Department of Neurology, Massachusetts General Hospital and Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02115, USA
- Máté Lengyel
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, UK; Department of Cognitive Science, Central European University, 1051 Budapest, Hungary
- Kenneth D Miller
- Center for Theoretical Neuroscience, College of Physicians and Surgeons, Columbia University, New York, NY 10032, USA; Department of Neuroscience, Swartz Program in Theoretical Neuroscience, Kavli Institute for Brain Science, College of Physicians and Surgeons, Columbia University, New York, NY 10032, USA
12
Abstract
Upon encountering a novel environment, an animal must construct a consistent environmental map, as well as an internal estimate of its position within that map, by combining information from two distinct sources: self-motion cues and sensory landmark cues. How do known aspects of neural circuit dynamics and synaptic plasticity conspire to accomplish this feat? Here we show analytically how a neural attractor model that combines path integration of self-motion cues with Hebbian plasticity in synaptic weights from landmark cells can self-organize a consistent map of space as the animal explores an environment. Intriguingly, the emergence of this map can be understood as an elastic relaxation process between landmark cells mediated by the attractor network. Moreover, our model makes several experimentally testable predictions, including (i) systematic path-dependent shifts in the firing fields of grid cells toward the most recently encountered landmark, even in a fully learned environment; (ii) systematic deformations in the firing fields of grid cells in irregular environments, akin to elastic deformations of solids forced into irregular containers; and (iii) the creation of topological defects in grid cell firing patterns through specific environmental manipulations. Taken together, our results conceptually link known aspects of neurons and synapses to an emergent solution of a fundamental computational problem in navigation, while providing a unified account of disparate experimental observations.
13
Abstract
During foraging, animals decide how long to stay at a patch and harvest reward, and then, they move with certain vigor to another location. How does the brain decide when to leave, and how does it determine the speed of the ensuing movement? Here, we considered the possibility that both the decision-making and the motor control problems aimed to maximize a single normative utility: the sum of all rewards acquired minus all efforts expended divided by total time. This optimization could be achieved if the brain compared a local measure of utility with its history. To test the theory, we examined behavior of people as they gazed at images: they chose how long to look at the image (harvesting information) and then moved their eyes to another image, controlling saccade speed. We varied reward via image content and effort via image eccentricity, and then, we measured how these changes affected decision making (gaze duration) and motor control (saccade speed). After a history of low rewards, people increased gaze duration and decreased saccade speed. In anticipation of future effort, they lowered saccade speed and increased gaze duration. After a history of high effort, they elevated their saccade speed and increased gaze duration. Therefore, the theory presented a principled way with which the brain may control two aspects of behavior: movement speed and harvest duration. Our experiments confirmed many (but not all) of the predictions, suggesting that harvest duration and movement speed, fundamental aspects of behavior during foraging, may be governed by a shared principle of control.
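The normative quantity described in this abstract, the sum of rewards minus efforts divided by total time, can be sketched numerically. The reward and effort functions below are illustrative assumptions, not fitted to the study's data; the sketch only shows that raising the effort term shifts the rate-maximizing stay duration upward, matching the reported direction of the behavioral effect.

```python
import numpy as np

def capture_rate(stay, reward_rate=1.0, effort=0.2, travel=1.0):
    """Utility rate: (rewards acquired - efforts expended) / total time.
    Reward saturates with gaze duration (diminishing information returns);
    effort and travel cost are hypothetical constants."""
    reward = 1.0 - np.exp(-reward_rate * stay)
    return (reward - effort) / (stay + travel)

stays = np.linspace(0.05, 5.0, 1000)
best = stays[np.argmax(capture_rate(stays))]
# Raising the effort term lowers the achievable rate, so the optimum shifts
# toward longer harvests (longer gaze durations), as observed behaviorally.
best_high_effort = stays[np.argmax(capture_rate(stays, effort=0.4))]
print(best < best_high_effort)  # True
```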
14
Kass RE, Amari SI, Arai K, Brown EN, Diekman CO, Diesmann M, Doiron B, Eden UT, Fairhall AL, Fiddyment GM, Fukai T, Grün S, Harrison MT, Helias M, Nakahara H, Teramae JN, Thomas PJ, Reimers M, Rodu J, Rotstein HG, Shea-Brown E, Shimazaki H, Shinomoto S, Yu BM, Kramer MA. Computational Neuroscience: Mathematical and Statistical Perspectives. Annu Rev Stat Appl 2018; 5:183-214. [PMID: 30976604] [PMCID: PMC6454918] [DOI: 10.1146/annurev-statistics-041715-033733]
Abstract
Mathematical and statistical models have played important roles in neuroscience, especially by describing the electrical activity of neurons recorded individually, or collectively across large networks. As the field moves forward rapidly, new challenges are emerging. For maximal effectiveness, those working to advance computational neuroscience will need to appreciate and exploit the complementary strengths of mechanistic theory and the statistical paradigm.
Collapse
Affiliation(s)
- Robert E Kass
- Carnegie Mellon University, Pittsburgh, PA, USA, 15213
- Shun-Ichi Amari
- RIKEN Brain Science Institute, Wako, Saitama Prefecture, Japan, 351-0198
- Emery N Brown
- Massachusetts Institute of Technology, Cambridge, MA, USA, 02139
- Harvard Medical School, Boston, MA, USA, 02115
- Markus Diesmann
- Jülich Research Centre, Jülich, Germany, 52428
- RWTH Aachen University, Aachen, Germany, 52062
- Brent Doiron
- University of Pittsburgh, Pittsburgh, PA, USA, 15260
- Uri T Eden
- Boston University, Boston, MA, USA, 02215
- Tomoki Fukai
- RIKEN Brain Science Institute, Wako, Saitama Prefecture, Japan, 351-0198
- Sonja Grün
- Jülich Research Centre, Jülich, Germany, 52428
- RWTH Aachen University, Aachen, Germany, 52062
- Moritz Helias
- Jülich Research Centre, Jülich, Germany, 52428
- RWTH Aachen University, Aachen, Germany, 52062
- Hiroyuki Nakahara
- RIKEN Brain Science Institute, Wako, Saitama Prefecture, Japan, 351-0198
- Peter J Thomas
- Case Western Reserve University, Cleveland, OH, USA, 44106
- Mark Reimers
- Michigan State University, East Lansing, MI, USA, 48824
- Jordan Rodu
- Carnegie Mellon University, Pittsburgh, PA, USA, 15213
- Hideaki Shimazaki
- Honda Research Institute Japan, Wako, Saitama Prefecture, Japan, 351-0188
- Kyoto University, Kyoto, Kyoto Prefecture, Japan, 606-8502
- Byron M Yu
- Carnegie Mellon University, Pittsburgh, PA, USA, 15213
15
Abstract
Nerve conduction in unmyelinated fibers has long been described with the equivalent circuit model and cable theory. However, without a change in the ionic concentration gradient across the membrane, the action potential could be neither generated nor propagated. Based on this concept, we employ a new conductive model focusing on the distribution of voltage-gated sodium ion channels and the Coulomb force between electrolytes. Under this model, propagation of nerve conduction is suggested to begin well before the generation of the action potential at each channel. We theoretically showed that propagation of the action potential, enabled by the increasing Coulomb force produced by inflowing sodium ions, from one sodium channel to the next would be inversely proportional to the density of sodium channels on the axon membrane. Because the longitudinal number of sodium channels would be proportional to the square root of channel density, the conduction velocity of unmyelinated nerves is theoretically shown to be proportional to the square root of channel density. In addition, from the viewpoint of an equilibrium between channel importation and degradation, channel density was suggested to be proportional to axonal diameter. On this basis, conduction velocity in unmyelinated nerves was theoretically shown to be proportional to the square root of axonal diameter. This new model should also provide a more accurate and intuitive view of the phenomena in unmyelinated nerves, complementing the conventional electric circuit model and cable theory.
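The chain of proportionalities in this abstract (velocity ∝ √density, density ∝ diameter, hence velocity ∝ √diameter) can be sketched numerically. A minimal illustration, in which the function name and the proportionality constant `k` are assumptions for demonstration and not values from the paper:

```python
import math

def conduction_velocity(diameter, k=1.0):
    """Illustrative scaling law from the abstract: channel density is
    taken proportional to axon diameter, and conduction velocity
    proportional to the square root of channel density, giving
    v = k * sqrt(diameter). k is an arbitrary constant, not a fitted
    physiological value."""
    channel_density = diameter             # density ∝ diameter (model assumption)
    return k * math.sqrt(channel_density)  # velocity ∝ sqrt(density)

# Quadrupling the diameter should double the predicted velocity.
print(conduction_velocity(4.0) / conduction_velocity(1.0))  # → 2.0
```

The square-root dependence is the testable content: any pair of diameters in a 4:1 ratio should yield velocities in a 2:1 ratio under this model.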
Affiliation(s)
- Tetsuya Akaishi
- Department of Neurology, Tohoku University Graduate School of Medicine, Sendai, Japan; Department of Neurology, National Hospital Organization Yonezawa Hospital, Yonezawa, Japan
16
Abstract
Brain function involves the activity of neuronal populations. Much recent effort has been devoted to measuring the activity of neuronal populations in different parts of the brain under various experimental conditions. Population activity patterns contain rich structure, yet many studies have focused on measuring pairwise relationships between members of a larger population, termed noise correlations. Here we review recent progress in understanding how these correlations affect population information, how information should be quantified, and what mechanisms may give rise to correlations. As population coding theory has improved, it has made clear that some forms of correlation are more important for information than others. We argue that this is a critical lesson for those interested in neuronal population responses more generally: descriptions of population responses should be motivated by and linked to a well-specified function. Within this context, we offer suggestions of where current theoretical frameworks fall short.
Affiliation(s)
- Adam Kohn
- Dominick Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York 10461; Department of Ophthalmology and Visual Sciences, Albert Einstein College of Medicine, Bronx, New York 10461
- Ruben Coen-Cagli
- Department of Basic Neuroscience, University of Geneva, CH-1211 Geneva, Switzerland
- Ingmar Kanitscheider
- Department of Basic Neuroscience, University of Geneva, CH-1211 Geneva, Switzerland; Center of Learning and Memory, The University of Texas at Austin, Austin, Texas 78712; Department of Neuroscience, The University of Texas at Austin, Austin, Texas 78712
- Alexandre Pouget
- Department of Basic Neuroscience, University of Geneva, CH-1211 Geneva, Switzerland; Department of Brain and Cognitive Sciences, University of Rochester, Rochester, New York 14627; Gatsby Computational Neuroscience Unit, University College London, W1T 4JG London, United Kingdom
17
Abstract
Spatial navigation in mammals is based on building a mental representation of the environment, a cognitive map. However, both the nature of this cognitive map and its underpinning in neural structures and activity remain vague. A key difficulty is that these maps are collective, emergent phenomena that cannot be reduced to a simple combination of the inputs provided by individual neurons. In this paper we suggest computational frameworks for integrating the spiking signals of individual cells into a spatial map, which we call schemas. We provide examples of four schemas defined by different types of topological relations that may be neurophysiologically encoded in the brain, and demonstrate that each schema provides its own large-scale characteristics of the environment: the schema integrals. Moreover, we find that, in all cases, these integrals are learned at a rate faster than the rate of complete training of neural networks. Thus, the proposed schema framework differentiates between the cognitive aspect of spatial learning and the physiological aspect at the neural network level.
Affiliation(s)
- Andrey Babichev
- Department of Pediatrics Neurology, Baylor College of Medicine, Jan and Dan Duncan Neurological Research Institute, Houston, TX, USA; Department of Computational and Applied Mathematics, Rice University, Houston, TX, USA
- Sen Cheng
- Mercator Research Group "Structure of Memory" and Department of Psychology, Ruhr-University Bochum, Bochum, Germany
- Yuri A Dabaghian
- Department of Pediatrics Neurology, Baylor College of Medicine, Jan and Dan Duncan Neurological Research Institute, Houston, TX, USA; Department of Computational and Applied Mathematics, Rice University, Houston, TX, USA
18
Willshaw DJ, Dayan P, Morris RGM. Memory, modelling and Marr: a commentary on Marr (1971) 'Simple memory: a theory of archicortex'. Philos Trans R Soc Lond B Biol Sci 2015; 370:rstb.2014.0383. [PMID: 25750246] [PMCID: PMC4360131] [DOI: 10.1098/rstb.2014.0383]
Abstract
David Marr's theory of the archicortex, a brain structure now more commonly known as the hippocampus and hippocampal formation, is an epochal contribution to theoretical neuroscience. Addressing the problem of how information about 10 000 events could be stored in the archicortex during the day so that it could be retrieved using partial information and then transferred to the neocortex overnight, the paper presages a whole wealth of later empirical and theoretical work, proving impressively prescient. Despite this success, Marr later apparently grew dissatisfied with this style of modelling, but he went on to make seminal suggestions that continue to resonate loudly throughout the field of theoretical neuroscience. We describe Marr's theory of the archicortex and his theory of theories, setting them into their original and a contemporary context, and assessing their impact. This commentary was written to celebrate the 350th anniversary of the journal Philosophical Transactions of the Royal Society.
Affiliation(s)
- D J Willshaw
- School of Informatics, University of Edinburgh, Edinburgh EH8 9LE, UK
- P Dayan
- Gatsby Computational Neuroscience Unit, University College London, London WC1N 3AR, UK
- R G M Morris
- Centre for Cognitive and Neural Systems, University of Edinburgh, Edinburgh EH8 9JZ, UK
19
Wei XX, Prentice J, Balasubramanian V. A principle of economy predicts the functional architecture of grid cells. eLife 2015; 4:e08362. [PMID: 26335200] [PMCID: PMC4616244] [DOI: 10.7554/elife.08362]
Abstract
Grid cells in the brain respond when an animal occupies a periodic lattice of ‘grid fields’ during navigation. Grids are organized in modules with different periodicity. We propose that the grid system implements a hierarchical code for space that economizes the number of neurons required to encode location with a given resolution across a range equal to the largest period. This theory predicts that (i) grid fields should lie on a triangular lattice, (ii) grid scales should follow a geometric progression, (iii) the ratio between adjacent grid scales should be √e for idealized neurons and lie between 1.4 and 1.7 for realistic neurons, and (iv) the scale ratio should vary modestly within and between animals. These results explain the measured grid structure in rodents. We also predict the optimal organization in one and three dimensions, the number of modules, and, with added assumptions, the ratio between grid periods and field widths.

In the 1930s, neuroscientists studying how rodents find their way through a maze proposed that the animals could construct an internal map of the maze inside their heads. The map was thought to enable the animals to navigate between familiar locations and also to identify shortcuts and alternative routes whenever familiar ones were blocked. In the 1960s, recordings of electrical activity in the rat brain provided the first clues as to which nerve cells form this spatial map. In a region of the brain called the hippocampus, nerve cells called ‘place cells’ are active whenever the rat finds itself in a specific location. However, place cells alone are not able to support all types of navigation. Some spatial tasks also require cells in a region of the brain called the medial entorhinal cortex (MEC), which supplies most of the information that the hippocampus receives. Cells in the MEC called ‘grid cells’ represent two-dimensional space as a repeating grid of triangles. A given grid cell is activated if the animal is located at a particular distance and angle away from the center of any of these triangles. The size of the triangles in these grids varies systematically throughout the MEC; individual grid cells at one end of the structure encode space in finer detail than grid cells at the opposite end. Wei et al. have now used mathematical modeling to explore how grid cells are organized. The model assumes that the brain seeks to encode space at whatever resolution an animal requires using as few nerve cells as possible. The model successfully reproduces several known features of grid cells, including the triangular shape of the grid and the fact that the size of the triangles increases in steps of a specific size across the MEC. In addition to providing a mathematical basis for the way that grid cells are organized in the brain, the model makes a number of testable predictions, including the number of grid cells in the rat brain and the pattern that grid cells adopt in three dimensions, a question currently being studied in bats. The findings suggest that the code used by the grid to represent space is an analog of a decimal number system, except that space is subdivided not by factors of 10 to form decimal ‘digits’ but by a quantity related to Euler's number, a famous mathematical constant.
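Prediction (iii) is easy to verify numerically: the idealized ratio √e does fall inside the 1.4-1.7 range quoted for realistic neurons. A minimal check:

```python
import math

# Predicted ratio between adjacent grid scales for idealized neurons.
ratio = math.sqrt(math.e)
print(round(ratio, 4))  # → 1.6487

# The idealized value sits inside the 1.4-1.7 range the theory
# predicts for realistic neurons.
print(1.4 < ratio < 1.7)  # → True
```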
Affiliation(s)
- Xue-Xin Wei
- Department of Psychology, University of Pennsylvania, Philadelphia, United States
- Jason Prentice
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Vijay Balasubramanian
- Department of Physics, University of Pennsylvania, Philadelphia, United States; Department of Neuroscience, University of Pennsylvania, Philadelphia, United States
20
Abstract
In natural scenes, objects generally appear together with other objects. Yet, theoretical studies of neural population coding typically focus on the encoding of single objects in isolation. Experimental studies suggest that neural responses to multiple objects are well described by linear or nonlinear combinations of the responses to constituent objects, a phenomenon we call stimulus mixing. Here, we present a theoretical analysis of the consequences of common forms of stimulus mixing observed in cortical responses. We show that some of these mixing rules can severely compromise the brain's ability to decode the individual objects. This cost is usually greater than the cost incurred by even large reductions in the gain or large increases in neural variability, explaining why the benefits of attention can be understood primarily in terms of a stimulus selection, or demixing, mechanism rather than purely as a gain increase or noise reduction mechanism. The cost of stimulus mixing becomes even higher when the number of encoded objects increases, suggesting a novel mechanism that might contribute to set size effects observed in myriad psychophysical tasks. We further show that a specific form of neural correlation and heterogeneity in stimulus mixing among the neurons can partially alleviate the harmful effects of stimulus mixing. Finally, we derive simple conditions that must be satisfied for unharmful mixing of stimuli.
21
Hall PA, Fong GT. Temporal self-regulation theory: a neurobiologically informed model for physical activity behavior. Front Hum Neurosci 2015; 9:117. [PMID: 25859196] [PMCID: PMC4373277] [DOI: 10.3389/fnhum.2015.00117]
Abstract
Dominant explanatory models for physical activity behavior are limited by the exclusion of several important components, including temporal dynamics, ecological forces, and neurobiological factors. The latter may be a critical omission, given the relevance of several aspects of cognitive function for the self-regulatory processes that are likely required for consistent implementation of physical activity behavior in everyday life. This narrative review introduces temporal self-regulation theory (TST; Hall and Fong, 2007, 2013) as a new explanatory model for physical activity behavior. Important features of the model include consideration of the default status of the physical activity behavior, as well as the disproportionate influence of temporally proximal behavioral contingencies. Most importantly, the TST model proposes positive feedback loops linking executive function (EF) and the performance of physical activity behavior. Specifically, those with relatively stronger executive control (and optimized brain structures supporting it, such as the dorsolateral prefrontal cortex (PFC)) are able to implement physical activity with more consistency than others, which in turn serves to strengthen the executive control network itself. The TST model has the potential to explain everyday variants of incidental physical activity, sport-related excellence via capacity for deliberate practice, and variability in the propensity to schedule and implement exercise routines.
Affiliation(s)
- Peter A Hall
- Faculty of Applied Health Sciences, University of Waterloo, Waterloo, ON, Canada
- Geoffrey T Fong
- Department of Psychology, University of Waterloo, Waterloo, ON, Canada
22
Probst D, Petrovici MA, Bytschok I, Bill J, Pecevski D, Schemmel J, Meier K. Probabilistic inference in discrete spaces can be implemented into networks of LIF neurons. Front Comput Neurosci 2015; 9:13. [PMID: 25729361] [PMCID: PMC4325917] [DOI: 10.3389/fncom.2015.00013]
Abstract
The means by which cortical neural networks are able to efficiently solve inference problems remains an open question in computational neuroscience. Recently, abstract models of Bayesian computation in neural circuits have been proposed, but they lack a mechanistic interpretation at the single-cell level. In this article, we describe a complete theoretical framework for building networks of leaky integrate-and-fire neurons that can sample from arbitrary probability distributions over binary random variables. We test our framework for a model inference task based on a psychophysical phenomenon (the Knill-Kersten optical illusion) and further assess its performance when applied to randomly generated distributions. As the local computations performed by the network strongly depend on the interaction between neurons, we compare several types of couplings mediated by either single synapses or interneuron chains. Due to its robustness to substrate imperfections such as parameter noise and background noise correlations, our model is particularly interesting for implementation on novel, neuro-inspired computing architectures, which can thereby serve as a fast, low-power substrate for solving real-world inference problems.
Affiliation(s)
- Dimitri Probst
- Kirchhoff Institute for Physics, University of Heidelberg, Heidelberg, Germany
- Mihai A. Petrovici
- Kirchhoff Institute for Physics, University of Heidelberg, Heidelberg, Germany
- Ilja Bytschok
- Kirchhoff Institute for Physics, University of Heidelberg, Heidelberg, Germany
- Johannes Bill
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Dejan Pecevski
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Johannes Schemmel
- Kirchhoff Institute for Physics, University of Heidelberg, Heidelberg, Germany
- Karlheinz Meier
- Kirchhoff Institute for Physics, University of Heidelberg, Heidelberg, Germany
23
Bekolay T, Bergstra J, Hunsberger E, Dewolf T, Stewart TC, Rasmussen D, Choo X, Voelker AR, Eliasmith C. Nengo: a Python tool for building large-scale functional brain models. Front Neuroinform 2014; 7:48. [PMID: 24431999] [PMCID: PMC3880998] [DOI: 10.3389/fninf.2013.00048]
Abstract
Neuroscience currently lacks a comprehensive theory of how cognitive processes can be implemented in a biological substrate. The Neural Engineering Framework (NEF) proposes one such theory, but has not yet gathered significant empirical support, partly due to the technical challenge of building and simulating large-scale models with the NEF. Nengo is a software tool that can be used to build and simulate large-scale models based on the NEF; currently, it is the primary resource for both teaching how the NEF is used, and for doing research that generates specific NEF models to explain experimental data. Nengo 1.4, which was implemented in Java, was used to create Spaun, the world's largest functional brain model (Eliasmith et al., 2012). Simulating Spaun highlighted limitations in Nengo 1.4's ability to support model construction with simple syntax, to simulate large models quickly, and to collect large amounts of data for subsequent analysis. This paper describes Nengo 2.0, which is implemented in Python and overcomes these limitations. It uses simple and extendable syntax, simulates a benchmark model on the scale of Spaun 50 times faster than Nengo 1.4, and has a flexible mechanism for collecting simulation results.
Affiliation(s)
- Trevor Bekolay
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
- James Bergstra
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
- Eric Hunsberger
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
- Travis Dewolf
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
- Terrence C Stewart
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
- Daniel Rasmussen
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
- Xuan Choo
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
- Chris Eliasmith
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
24
Affiliation(s)
- Bernard J Baars
- Theoretical Neurobiology, The Neurosciences Institute, La Jolla, CA, USA
25
Schibli K, D'Angiulli A. The social emotional developmental and cognitive neuroscience of socioeconomic gradients: laboratory, population, cross-cultural and community developmental approaches. Front Hum Neurosci 2013; 7:788. [PMID: 24302907] [PMCID: PMC3831166] [DOI: 10.3389/fnhum.2013.00788]
Affiliation(s)
- Kylie Schibli
- Neuroscience of Imagery Cognition and Emotion Research Lab, Department of Neuroscience, Carleton University, Ottawa, ON, Canada
26
Abstract
A receptive field constitutes a region in the visual field where a visual cell or a visual operator responds to visual stimuli. This paper presents a theory for what types of receptive field profiles can be regarded as natural for an idealized vision system, given a set of structural requirements on the first stages of visual processing that reflect symmetry properties of the surrounding world. These symmetry properties include (i) covariance properties under scale changes, affine image deformations, and Galilean transformations of space-time as occur for real-world image data as well as specific requirements of (ii) temporal causality implying that the future cannot be accessed and (iii) a time-recursive updating mechanism of a limited temporal buffer of the past as is necessary for a genuine real-time system. Fundamental structural requirements are also imposed to ensure (iv) mutual consistency and a proper handling of internal representations at different spatial and temporal scales. It is shown how a set of families of idealized receptive field profiles can be derived by necessity regarding spatial, spatio-chromatic, and spatio-temporal receptive fields in terms of Gaussian kernels, Gaussian derivatives, or closely related operators. Such image filters have been successfully used as a basis for expressing a large number of visual operations in computer vision, regarding feature detection, feature classification, motion estimation, object recognition, spatio-temporal recognition, and shape estimation. Hence, the associated so-called scale-space theory constitutes a both theoretically well-founded and general framework for expressing visual operations. There are very close similarities between receptive field profiles predicted from this scale-space theory and receptive field profiles found by cell recordings in biological vision. 
Among the family of receptive field profiles derived by necessity from the assumptions, idealized models with very good qualitative agreement are obtained for (i) spatial on-center/off-surround and off-center/on-surround receptive fields in the fovea and the LGN, (ii) simple cells with spatial directional preference in V1, (iii) spatio-chromatic double-opponent neurons in V1, (iv) space-time separable spatio-temporal receptive fields in the LGN and V1, and (v) non-separable space-time tilted receptive fields in V1, all within the same unified theory. In addition, the paper presents a more general framework for relating and interpreting these receptive fields conceptually and possibly predicting new receptive field profiles as well as for pre-wiring covariance under scaling, affine, and Galilean transformations into the representations of visual stimuli. This paper describes the basic structure of the necessity results concerning receptive field profiles regarding the mathematical foundation of the theory and outlines how the proposed theory could be used in further studies and modelling of biological vision. It is also shown how receptive field responses can be interpreted physically, as the superposition of relative variations of surface structure and illumination variations, given a logarithmic brightness scale, and how receptive field measurements will be invariant under multiplicative illumination variations and exposure control mechanisms.
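Two of the idealized profile families named above can be sketched directly from Gaussian kernels. In the sketch below, the sampling grid and scale values are arbitrary illustrative choices, and the difference of Gaussians is used only as a simple stand-in for an on-center/off-surround profile, not as the paper's exact derivation:

```python
import math

def gaussian(x, sigma):
    """1D Gaussian kernel at scale sigma, the scale-space primitive."""
    return math.exp(-x * x / (2.0 * sigma * sigma)) / (sigma * math.sqrt(2.0 * math.pi))

def gaussian_dx(x, sigma):
    """First-order Gaussian derivative: an idealized odd-symmetric
    profile of the kind associated with directionally selective
    simple cells."""
    return -x / (sigma * sigma) * gaussian(x, sigma)

# Sample both profiles on a small grid around the origin.
xs = [0.5 * i for i in range(-6, 7)]

# Center-surround stand-in: difference of Gaussians at two scales
# (positive at the center, negative in the surround).
center_surround = [gaussian(x, 1.0) - gaussian(x, 2.0) for x in xs]

# Odd-symmetric (orientation-preferring) profile from the derivative.
odd_profile = [gaussian_dx(x, 1.0) for x in xs]
```

The even symmetry of the kernel and the odd symmetry of its derivative are exactly the qualitative properties the abstract attributes to center-surround cells and simple cells, respectively.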
Affiliation(s)
- Tony Lindeberg
- Department of Computational Biology, School of Computer Science and Communication, KTH Royal Institute of Technology, 100 44 Stockholm, Sweden
27
Boly M, Seth AK, Wilke M, Ingmundson P, Baars B, Laureys S, Edelman DB, Tsuchiya N. Consciousness in humans and non-human animals: recent advances and future directions. Front Psychol 2013; 4:625. [PMID: 24198791] [PMCID: PMC3814086] [DOI: 10.3389/fpsyg.2013.00625]
Abstract
This joint article reflects the authors' personal views regarding noteworthy advances in the neuroscience of consciousness in the last 10 years, and suggests what we feel may be promising future directions. It is based on a small conference at the Samoset Resort in Rockport, Maine, USA, in July of 2012, organized by the Mind Science Foundation of San Antonio, Texas. Here, we summarize recent advances in our understanding of subjectivity in humans and other animals, including empirical, applied, technical, and conceptual insights. These include the evidence for the importance of fronto-parietal connectivity and of “top-down” processes, both of which enable information to travel across distant cortical areas effectively, as well as numerous dissociations between consciousness and cognitive functions, such as attention, in humans. In addition, we describe the development of mental imagery paradigms, which made it possible to identify covert awareness in non-responsive subjects. Non-human animal consciousness research has also witnessed substantial advances on the specific role of cortical areas and higher order thalamus for consciousness, thanks to important technological enhancements. In addition, much progress has been made in the understanding of non-vertebrate cognition relevant to possible conscious states. Finally, major advances have been made in theories of consciousness, and also in their comparison with the available evidence. Along with reviewing these findings, each author suggests future avenues for research in their field of investigation.
Affiliation(s)
- Melanie Boly
- Department of Neurology, University of Wisconsin, Madison, WI, USA; Department of Psychiatry, Center for Sleep and Consciousness, University of Wisconsin, Madison, WI, USA; Coma Science Group, Cyclotron Research Centre and Neurology Department, University of Liege and CHU Sart Tilman Hospital, Liege, Belgium
28
D'Angiulli A, Lipina SJ, Olesinska A. Explicit and implicit issues in the developmental cognitive neuroscience of social inequality. Front Hum Neurosci 2012; 6:254. [PMID: 22973216] [PMCID: PMC3434357] [DOI: 10.3389/fnhum.2012.00254]
Abstract
The appearance of developmental cognitive neuroscience (DCN) in the socioeconomic status (SES) research arena is hugely transformative, but challenging. We review challenges rooted in the implicit and explicit assumptions informing this newborn field. We provide balanced theoretical alternatives on how hypothesized psychological processes map onto the brain (e.g., the problem of localization) and how experimental phenomena at multiple levels of analysis (e.g., behavior, cognition, and the brain) could be related. We therefore examine unclear issues regarding the existing perspectives on poverty and their relationships with low SES, the evidence of low-SES adaptive functioning, historical precedents of the "alternate pathways" (neuroplasticity) interpretation of learning disabilities related to low SES and the notion of deficit, issues of "normativity" and validity in findings of neurocognitive differences between children from different SES, and finally alternative interpretations of the complex relationship between IQ and SES. In particular, we examine the extent to which the available laboratory results may be interpreted as showing that cognitive performance in low-SES children reflects cognitive and behavioral deficits resulting from growing up in specific environmental or cultural contexts, and how the experimental findings should be interpreted for the design of different types of interventions (particularly those related to educational practices) or translated to the public, especially the media. Although a cautionary tone permeates many studies, a potential deficit attribution (i.e., that low SES is associated with cognitive and behavioral developmental deficits) seems an almost inevitable implicit issue with ethical implications. Finally, we sketch the agenda for an ecological DCN, suggesting recommendations to advance the field, specifically to minimize equivocal divulgation and maximize ethically responsible translation.
Affiliation(s)
- Amedeo D'Angiulli
- Department of Neuroscience, Carleton University, Ottawa, ON, Canada
- The Institute of Interdisciplinary Studies, Carleton University, Ottawa, ON, Canada
- Sebastian J. Lipina
- Unidad de Neurobiología Aplicada (UNA, CEMIC-CONICET), Argentina
- Centro de Investigaciones Psicopedagógicas Aplicadas (CIPA-UNSAM), Argentina
- Alice Olesinska
- Department of Neuroscience, Carleton University, Ottawa, ON, Canada
29
Abstract
Nengo (http://nengo.ca) is an open-source neural simulator that has been greatly enhanced by the recent addition of a Python script interface. Nengo provides a wide range of features that are useful for physiological simulations, including unique features that facilitate development of population-coding models using the neural engineering framework (NEF). This framework uses information theory, signal processing, and control theory to formalize the development of large-scale neural circuit models. Notably, it can also be used to determine the synaptic weights that underlie observed network dynamics and transformations of represented variables. Nengo provides rich NEF support, and includes customizable models of spike generation, muscle dynamics, synaptic plasticity, and synaptic integration, as well as an intuitive graphical user interface. All aspects of Nengo models are accessible via the Python interface, allowing for programmatic creation of models, inspection and modification of neural parameters, and automation of model evaluation. Since Nengo combines Python and Java, it can also be integrated with any existing Java or 100% Python code libraries. Current work includes connecting neural models in Nengo with existing symbolic cognitive models, creating hybrid systems that combine detailed neural models of specific brain regions with higher-level models of remaining brain areas. Such hybrid models can provide (1) more realistic boundary conditions for the neural components, and (2) more realistic sub-components for the larger cognitive models.
Affiliation(s)
- Terrence C Stewart
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada