1. Voges N, Lima V, Hausmann J, Brovelli A, Battaglia D. Decomposing Neural Circuit Function into Information Processing Primitives. J Neurosci 2024; 44:e0157232023. PMID: 38050070. PMCID: PMC10866194. DOI: 10.1523/jneurosci.0157-23.2023. Open access.
Abstract
It is challenging to measure how specific aspects of coordinated neural dynamics translate into operations of information processing and, ultimately, cognitive functions. An obstacle is that simple circuit mechanisms, such as self-sustained or propagating activity and nonlinear summation of inputs, do not directly give rise to high-level functions. Nevertheless, they already implement simple manipulations of the information carried by neural activity. Here, we propose that distinct functions, such as stimulus representation, working memory, or selective attention, stem from different combinations and types of low-level manipulations of information, or information processing primitives. To test this hypothesis, we combine approaches from information theory with simulations of multi-scale neural circuits involving interacting brain regions that emulate well-defined cognitive functions. Specifically, we track the information dynamics emergent from patterns of neural dynamics, using quantitative metrics to detect where and when information is actively buffered, transferred, or nonlinearly merged, as possible modes of low-level processing (storage, transfer, and modification). We find that neuronal subsets maintaining representations in working memory or performing attentional gain modulation are signaled by their boosted involvement in operations of information storage or modification, respectively. Thus, information dynamic metrics, beyond detecting which network units participate in cognitive processing, also promise to specify how and when they do it, that is, through which type of primitive computation, a capability that may be exploited for the analysis of experimental recordings.
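The storage and transfer metrics this abstract refers to are commonly estimated as active information storage (mutual information between a unit's own past and its present) and transfer entropy (conditional mutual information from a source's past to a target's present, given the target's past). A minimal plug-in sketch for discrete time series; the function names are ours, and real analyses would use bias-corrected estimators:

```python
import numpy as np
from collections import Counter

def entropy(counter):
    # plug-in Shannon entropy (bits) from a Counter of observed symbols
    p = np.array(list(counter.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log2(p))

def mutual_info(pairs):
    # I(A;B) = H(A) + H(B) - H(A,B), estimated from (a, b) samples
    a = Counter(x for x, _ in pairs)
    b = Counter(y for _, y in pairs)
    ab = Counter(pairs)
    return entropy(a) + entropy(b) - entropy(ab)

def active_info_storage(x, k=1):
    # information "buffered" by a series: I(past k symbols ; present)
    return mutual_info([(tuple(x[t - k:t]), x[t]) for t in range(k, len(x))])

def transfer_entropy(src, dst, k=1):
    # I(dst_t ; src past | dst past), expanded into four joint entropies
    trip = [(dst[t], tuple(dst[t - k:t]), tuple(src[t - k:t]))
            for t in range(k, len(dst))]
    h_tp = entropy(Counter((d, dp) for d, dp, _ in trip))    # H(dst_t, dst past)
    h_ps = entropy(Counter((dp, sp) for _, dp, sp in trip))  # H(dst past, src past)
    h_all = entropy(Counter(trip))                           # H(all three)
    h_p = entropy(Counter(dp for _, dp, _ in trip))          # H(dst past)
    return h_tp + h_ps - h_all - h_p
```

On a perfectly alternating binary sequence the storage estimate is 1 bit, and for a pure copy dst[t] = src[t-1] of a random bit stream the transfer estimate src→dst approaches 1 bit while the reverse direction stays near zero.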
Affiliation(s)
- Nicole Voges
- Institut de Neurosciences de La Timone, UMR 7289, CNRS, Aix-Marseille Université, Marseille 13005, France
- Institute for Language, Communication and the Brain (ILCB), Aix-Marseille Université, Marseille 13005, France
- Vinicius Lima
- Institut de Neurosciences des Systèmes (INS), UMR 1106, Aix-Marseille Université, Marseille 13005, France
- Johannes Hausmann
- R&D Department, Hyland Switzerland Sarl, Corcelles NE 2035, Switzerland
- Andrea Brovelli
- Institut de Neurosciences de La Timone, UMR 7289, CNRS, Aix-Marseille Université, Marseille 13005, France
- Institute for Language, Communication and the Brain (ILCB), Aix-Marseille Université, Marseille 13005, France
- Demian Battaglia
- Institute for Language, Communication and the Brain (ILCB), Aix-Marseille Université, Marseille 13005, France
- Institut de Neurosciences des Systèmes (INS), UMR 1106, Aix-Marseille Université, Marseille 13005, France
- University of Strasbourg Institute for Advanced Studies (USIAS), Strasbourg 67000, France
2. Kay JW, Schulz JM, Phillips WA. A Comparison of Partial Information Decompositions Using Data from Real and Simulated Layer 5b Pyramidal Cells. Entropy 2022; 24:e24081021. PMID: 35893001. PMCID: PMC9394329. DOI: 10.3390/e24081021.
Abstract
Partial information decomposition allows the joint mutual information between an output and a set of inputs to be divided into components that are synergistic or shared or unique to each input. We consider five different decompositions and compare their results using data from layer 5b pyramidal cells in two different studies. The first study was on the amplification of somatic action potential output by apical dendritic input and its regulation by dendritic inhibition. We find that two of the decompositions produce much larger estimates of synergy and shared information than the others, as well as large levels of unique misinformation. When within-neuron differences in the components are examined, the five methods produce more similar results for all but the shared information component, for which two methods produce a different statistical conclusion from the others. The expression of unique information asymmetry differs somewhat among the methods; it is significantly larger, on average, under dendritic inhibition. Three of the methods support a previous conclusion that apical amplification is reduced by dendritic inhibition. The second study used a detailed compartmental model to produce action potentials for many combinations of the numbers of basal and apical synaptic inputs. Decompositions of the entire data set produce differences similar to those in the first study. Two analyses of decompositions are conducted on subsets of the data. In the first, the decompositions reveal a bifurcation in unique information asymmetry. For three of the methods, this suggests that apical drive switches to basal drive as the strength of the basal input increases, while the other two show changing mixtures of information and misinformation. Decompositions produced using the second set of subsets show that all five decompositions provide support, to varying extents, for properties of cooperative context-sensitivity.
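The "misinformation" this abstract mentions arises because local (pointwise) information terms can be negative: an individual outcome can make the wrong alternative look more likely, even though the average over all outcomes, the Shannon mutual information, is non-negative. A small illustration with a hypothetical 2x2 joint distribution of our own choosing:

```python
import numpy as np

# Hypothetical joint distribution over (x, y); rows index x, columns index y.
# The off-diagonal cells are locally misinformative.
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
p_x = p_xy.sum(axis=1)
p_y = p_xy.sum(axis=0)

def local_mi(x, y):
    # pointwise mutual information; negative values are "misinformation"
    return np.log2(p_xy[x, y] / (p_x[x] * p_y[y]))

# averaging the local terms recovers the (non-negative) mutual information
mi = sum(p_xy[x, y] * local_mi(x, y) for x in range(2) for y in range(2))
```

Here local_mi(0, 0) is positive while local_mi(0, 1) is negative, yet their probability-weighted average mi is about 0.28 bits.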
Affiliation(s)
- Jim W. Kay
- School of Mathematics and Statistics, University of Glasgow, Glasgow G12 8QQ, UK
- Correspondence:
- Jan M. Schulz
- Department of Biomedicine, University of Basel, 4001 Basel, Switzerland
3. Newman EL, Varley TF, Parakkattu VK, Sherrill SP, Beggs JM. Revealing the Dynamics of Neural Information Processing with Multivariate Information Decomposition. Entropy 2022; 24:e24070930. PMID: 35885153. PMCID: PMC9319160. DOI: 10.3390/e24070930.
Abstract
The varied cognitive abilities and rich adaptive behaviors enabled by the animal nervous system are often described in terms of information processing. This framing raises the issue of how biological neural circuits actually process information, and some of the most fundamental outstanding questions in neuroscience center on understanding the mechanisms of neural information processing. Classical information theory has long been understood to be a natural framework within which information processing can be understood, and recent advances in the field of multivariate information theory offer new insights into the structure of computation in complex systems. In this review, we provide an introduction to the conceptual and practical issues associated with using multivariate information theory to analyze information processing in neural circuits, as well as discussing recent empirical work in this vein. Specifically, we provide an accessible introduction to the partial information decomposition (PID) framework. PID reveals redundant, unique, and synergistic modes by which neurons integrate information from multiple sources. We focus particularly on the synergistic mode, which quantifies the “higher-order” information carried in the patterns of multiple inputs and is not reducible to input from any single source. Recent work in a variety of model systems has revealed that synergistic dynamics are ubiquitous in neural circuitry and show reliable structure–function relationships, emerging disproportionately in neuronal rich clubs, downstream of recurrent connectivity, and in the convergence of correlated activity. We draw on the existing literature on higher-order information dynamics in neuronal networks to illustrate the insights that have been gained by taking an information decomposition perspective on neural activity. 
Finally, we briefly discuss future promising directions for information decomposition approaches to neuroscience, such as work on behaving animals, multi-target generalizations of PID, and time-resolved local analyses.
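As a concrete sketch of the redundant/unique/synergistic split described in this review, the original I_min redundancy measure of Williams and Beer can be computed directly for small discrete systems. This is only one of several PID measures in the literature, and the helper names below are ours:

```python
import numpy as np
from itertools import product

def pid_williams_beer(p):
    """Two-source PID using the I_min redundancy of Williams & Beer.
    p: dict mapping (s1, s2, t) -> probability (must sum to 1)."""
    def marg(f):
        out = {}
        for k, v in p.items():
            out[f(k)] = out.get(f(k), 0.0) + v
        return out

    def mi(fa, fb):
        pa, pb, pab = marg(fa), marg(fb), marg(lambda k: (fa(k), fb(k)))
        return sum(v * np.log2(v / (pa[a] * pb[b]))
                   for (a, b), v in pab.items() if v > 0)

    pt = marg(lambda k: k[2])

    def i_spec(i, t):
        # specific information that source i carries about target value t
        ps, pst = marg(lambda k: k[i]), marg(lambda k: (k[i], k[2]))
        return sum((pst[(s, t)] / pt[t]) *
                   (np.log2(pst[(s, t)] / ps[s]) - np.log2(pt[t]))
                   for s in ps if pst.get((s, t), 0) > 0)

    red = sum(pt[t] * min(i_spec(0, t), i_spec(1, t)) for t in pt)
    u1 = mi(lambda k: k[0], lambda k: k[2]) - red
    u2 = mi(lambda k: k[1], lambda k: k[2]) - red
    syn = mi(lambda k: (k[0], k[1]), lambda k: k[2]) - red - u1 - u2
    return {"redundant": red, "unique_1": u1, "unique_2": u2, "synergistic": syn}

# XOR target: all of the joint information is synergistic
xor = pid_williams_beer({(a, b, a ^ b): 0.25 for a, b in product((0, 1), repeat=2)})
# AND target: a mixture of redundancy and synergy
and_ = pid_williams_beer({(a, b, a & b): 0.25 for a, b in product((0, 1), repeat=2)})
```

XOR yields 1 bit of pure synergy with no redundant or unique terms, while AND yields about 0.31 bits of redundancy plus 0.5 bits of synergy, the textbook examples of the two modes.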
Affiliation(s)
- Ehren L. Newman
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405, USA;
- Correspondence: (E.L.N.); (T.F.V.)
- Thomas F. Varley
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405, USA;
- Correspondence: (E.L.N.); (T.F.V.)
- Vibin K. Parakkattu
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405, USA;
- John M. Beggs
- Department of Physics, Indiana University, Bloomington, IN 47405, USA;
4. Makkeh A, Chicharro D, Theis DO, Vicente R. MAXENT3D_PID: An Estimator for the Maximum-Entropy Trivariate Partial Information Decomposition. Entropy 2019; 21:862. PMCID: PMC7515392. DOI: 10.3390/e21090862.
Abstract
Partial information decomposition (PID) separates the contributions of sources about a target into unique, redundant, and synergistic components of information. In essence, PID answers the question of “who knows what” in a system of random variables and hence has applications across a wide spectrum of fields, ranging from the social to the biological sciences. The paper presents MaxEnt3D_Pid, an algorithm that computes the PID of three sources, based on a recently proposed maximum entropy measure, using convex optimization (cone programming). We describe the algorithm and the use of its associated software, and report the results of various experiments assessing its accuracy. Moreover, the paper shows that a hierarchy of bivariate and trivariate PIDs makes it possible to obtain the finer quantities of the trivariate partial information measure.
Affiliation(s)
- Abdullah Makkeh
- Institute of Computer Science, University of Tartu, 51014 Tartu, Estonia
- Daniel Chicharro
- Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems@UniTn, Istituto Italiano di Tecnologia, 38068 Rovereto (TN), Italy
- Dirk Oliver Theis
- Institute of Computer Science, University of Tartu, 51014 Tartu, Estonia
- Raul Vicente
- Institute of Computer Science, University of Tartu, 51014 Tartu, Estonia
5. Camino-Pontes B, Diez I, Jimenez-Marin A, Rasero J, Erramuzpe A, Bonifazi P, Stramaglia S, Swinnen S, Cortes JM. Interaction Information Along Lifespan of the Resting Brain Dynamics Reveals a Major Redundant Role of the Default Mode Network. Entropy 2018; 20:e20100742. PMID: 33265831. PMCID: PMC7512305. DOI: 10.3390/e20100742.
Abstract
Interaction Information (II) generalizes the univariate Shannon entropy to triplets of variables, allowing the detection of redundant (R) or synergistic (S) interactions in dynamical networks. Here, we calculated II from functional magnetic resonance imaging data and asked whether R or S vary across brain regions and along the lifespan. Preserved along the lifespan, we found a high overlap between the pattern of high R and the default mode network, whereas high values of S overlapped with different cognitive domains, such as spatial and temporal memory, emotion processing, and motor skills. Moreover, we found a robust balance between R and S among different age intervals, indicating informational compensatory mechanisms in brain networks.
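With the convention II(X;Y;Z) = I(X;Y) - I(X;Y|Z), redundancy-dominated triplets give positive values and synergy-dominated triplets give negative values (some authors use the opposite sign). A minimal plug-in sketch from samples, expanding II into joint entropies:

```python
import numpy as np
from collections import Counter

def H(samples):
    # plug-in Shannon entropy (bits) of a list of symbols
    counts = np.array(list(Counter(samples).values()), dtype=float)
    probs = counts / counts.sum()
    return -np.sum(probs * np.log2(probs))

def interaction_information(x, y, z):
    # II(X;Y;Z) = I(X;Y) - I(X;Y|Z), expanded into joint entropies:
    # H(X)+H(Y)+H(Z) - H(XY)-H(XZ)-H(YZ) + H(XYZ)
    return (H(x) + H(y) + H(z)
            - H(list(zip(x, y))) - H(list(zip(x, z))) - H(list(zip(y, z)))
            + H(list(zip(x, y, z))))

x = [0, 0, 1, 1]
y = [0, 1, 0, 1]
# three copies of the same bit: pure redundancy
redundant = interaction_information(x, x, x)
# XOR triplet: pure synergy
synergistic = interaction_information(x, y, [a ^ b for a, b in zip(x, y)])
```

Three copies of the same fair bit give II = +1 bit, while the XOR triplet gives II = -1 bit, matching the R/S interpretation used in the abstract.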
Affiliation(s)
- Borja Camino-Pontes
- Computational Neuroimaging Lab, Biocruces Health Research Institute, 48903 Barakaldo, Spain
- Ibai Diez
- Functional Neurology Research Group, Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02115, USA
- Gordon Center, Department of Nuclear Medicine, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02115, USA
- Neurotechnology Laboratory, Tecnalia Health Department, 48160 Derio, Spain
- Antonio Jimenez-Marin
- Computational Neuroimaging Lab, Biocruces Health Research Institute, 48903 Barakaldo, Spain
- Javier Rasero
- Computational Neuroimaging Lab, Biocruces Health Research Institute, 48903 Barakaldo, Spain
- Asier Erramuzpe
- Computational Neuroimaging Lab, Biocruces Health Research Institute, 48903 Barakaldo, Spain
- Paolo Bonifazi
- Computational Neuroimaging Lab, Biocruces Health Research Institute, 48903 Barakaldo, Spain
- IKERBASQUE: The Basque Foundation for Science, 48013 Bilbao, Spain
- Stephan Swinnen
- Movement Control and Neuroplasticity Research Group, Department of Movement Sciences, KU Leuven, 3001 Leuven, Belgium
- Leuven Brain Institute (LBI), KU Leuven, 3000 Leuven, Belgium
- Jesus M. Cortes
- Computational Neuroimaging Lab, Biocruces Health Research Institute, 48903 Barakaldo, Spain
- IKERBASQUE: The Basque Foundation for Science, 48013 Bilbao, Spain
- Department of Cell Biology and Histology, University of the Basque Country, 48940 Leioa, Spain
- Correspondence: Tel.: +34-94600600 (ext. 5199)