1
Kumar G P, Panda R, Sharma K, Adarsh A, Annen J, Martial C, Faymonville ME, Laureys S, Sombrun C, Ganesan RA, Vanhaudenhuyse A, Gosseries O. Changes in high-order interaction measures of synergy and redundancy during non-ordinary states of consciousness induced by meditation, hypnosis, and auto-induced cognitive trance. Neuroimage 2024; 293:120623. PMID: 38670442; DOI: 10.1016/j.neuroimage.2024.120623.
Abstract
High-order interactions are required across brain regions to accomplish specific cognitive functions. These functional interdependencies are reflected by synergistic information that can be obtained by combining the information from all the sources considered and redundant information (i.e., common information provided by all the sources). However, electroencephalogram (EEG) functional connectivity is limited to pairwise interactions thereby precluding the estimation of high-order interactions. In this multicentric study, we used measures of synergistic and redundant information to study in parallel the high-order interactions between five EEG electrodes during three non-ordinary states of consciousness (NSCs): Rajyoga meditation (RM), hypnosis, and auto-induced cognitive trance (AICT). We analyzed EEG data from 22 long-term Rajyoga meditators, nine volunteers undergoing hypnosis, and 21 practitioners of AICT. We here report the within-group changes in synergy and redundancy for each NSC in comparison with their respective baseline. During RM, synergy increased at the whole brain level in the delta and theta bands. Redundancy decreased in frontal, right central, and posterior electrodes in delta, and frontal, central, and posterior electrodes in beta1 and beta2 bands. During hypnosis, synergy decreased in mid-frontal, temporal, and mid-centro-parietal electrodes in the delta band. The decrease was also observed in the beta2 band in the left frontal and right parietal electrodes. During AICT, synergy decreased in delta and theta bands in left-frontal, right-frontocentral, and posterior electrodes. The decrease was also observed at the whole brain level in the alpha band. However, redundancy changes during hypnosis and AICT were not significant. The subjective reports of absorption and dissociation during hypnosis and AICT, as well as the mystical experience questionnaires during AICT, showed no correlation with the high-order measures. The proposed study is the first exploratory attempt to utilize the concepts of synergy and redundancy in NSCs. The differences in synergy and redundancy during different NSCs warrant further studies to relate the extracted measures with the phenomenology of the NSCs.
Affiliation(s)
- Pradeep Kumar G
- MILE Lab, Department of Electrical Engineering, Indian Institute of Science, Bengaluru, India
- Rajanikant Panda
- Coma Science Group, GIGA-Consciousness, University of Liege, Liege, Belgium; Sensation & Perception Research Group, GIGA-Consciousness, University of Liege, Liege, Belgium; Centre du Cerveau, University Hospital of Liege, Liege, Belgium
- Kanishka Sharma
- MILE Lab, Department of Electrical Engineering, Indian Institute of Science, Bengaluru, India
- A Adarsh
- MILE Lab, Department of Electrical Engineering, Indian Institute of Science, Bengaluru, India
- Jitka Annen
- Coma Science Group, GIGA-Consciousness, University of Liege, Liege, Belgium; Centre du Cerveau, University Hospital of Liege, Liege, Belgium
- Charlotte Martial
- Coma Science Group, GIGA-Consciousness, University of Liege, Liege, Belgium; Centre du Cerveau, University Hospital of Liege, Liege, Belgium
- Marie-Elisabeth Faymonville
- Sensation & Perception Research Group, GIGA-Consciousness, University of Liege, Liege, Belgium; Arsene Bruny Integrated Oncological Center, University Hospital of Liege, Liege, Belgium
- Steven Laureys
- Coma Science Group, GIGA-Consciousness, University of Liege, Liege, Belgium; Centre du Cerveau, University Hospital of Liege, Liege, Belgium
- Ramakrishnan Angarai Ganesan
- MILE Lab, Department of Electrical Engineering, Indian Institute of Science, Bengaluru, India; Centre for Neuroscience, Indian Institute of Science, Bengaluru, India
- Audrey Vanhaudenhuyse
- Sensation & Perception Research Group, GIGA-Consciousness, University of Liege, Liege, Belgium; Algology Interdisciplinary Center, University Hospital of Liege, Liege, Belgium
- Olivia Gosseries
- Coma Science Group, GIGA-Consciousness, University of Liege, Liege, Belgium; Sensation & Perception Research Group, GIGA-Consciousness, University of Liege, Liege, Belgium; Centre du Cerveau, University Hospital of Liege, Liege, Belgium
2
Granato A, Phillips WA, Schulz JM, Suzuki M, Larkum ME. Dysfunctions of cellular context-sensitivity in neurodevelopmental learning disabilities. Neurosci Biobehav Rev 2024; 161:105688. PMID: 38670298; DOI: 10.1016/j.neubiorev.2024.105688.
Abstract
Pyramidal neurons have a pivotal role in the cognitive capabilities of neocortex. Though they have been predominantly modeled as integrate-and-fire point processors, many of them have another point of input integration in their apical dendrites that is central to mechanisms endowing them with the sensitivity to context that underlies basic cognitive capabilities. Here we review evidence implicating impairments of those mechanisms in three major neurodevelopmental disabilities, fragile X, Down syndrome, and fetal alcohol spectrum disorders. Multiple dysfunctions of the mechanisms by which pyramidal cells are sensitive to context are found to be implicated in all three syndromes. Further deciphering of these cellular mechanisms would lead to the understanding of and therapies for learning disabilities beyond any that are currently available.
Affiliation(s)
- Alberto Granato
- Dept. of Veterinary Sciences, University of Turin, Grugliasco, Turin 10095, Italy
- William A Phillips
- Psychology, Faculty of Natural Sciences, University of Stirling, Scotland FK9 4LA, UK
- Jan M Schulz
- Roche Pharma Research & Early Development, Neuroscience & Rare Diseases Discovery, Roche Innovation Center Basel, F. Hoffmann-La Roche Ltd, Grenzacherstrasse 124, Basel 4070, Switzerland
- Mototaka Suzuki
- Dept. of Cognitive and Systems Neuroscience, Swammerdam Institute for Life Sciences, Faculty of Science, University of Amsterdam, Amsterdam 1098 XH, the Netherlands
- Matthew E Larkum
- Neurocure Center for Excellence, Charité Universitätsmedizin Berlin, Berlin 10117, Germany; Institute of Biology, Humboldt University Berlin, Berlin, Germany
3
Luppi AI, Rosas FE, Mediano PAM, Menon DK, Stamatakis EA. Information decomposition and the informational architecture of the brain. Trends Cogn Sci 2024; 28:352-368. PMID: 38199949; DOI: 10.1016/j.tics.2023.11.005.
Abstract
To explain how the brain orchestrates information-processing for cognition, we must understand information itself. Importantly, information is not a monolithic entity. Information decomposition techniques provide a way to split information into its constituent elements: unique, redundant, and synergistic information. We review how disentangling synergistic and redundant interactions is redefining our understanding of integrative brain function and its neural organisation. To explain how the brain navigates the trade-offs between redundancy and synergy, we review converging evidence integrating the structural, molecular, and functional underpinnings of synergy and redundancy; their roles in cognition and computation; and how they might arise over evolution and development. Overall, disentangling synergistic and redundant information provides a guiding principle for understanding the informational architecture of the brain and cognition.
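As a purely illustrative companion to the decomposition described above, the following Python sketch splits the joint mutual information between two discrete sources and a target into redundant, unique, and synergistic parts using the simple minimum-mutual-information (MMI) redundancy; the toy distribution and function names are invented for the example and do not reflect the authors' own methods.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a probability array."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mutual_info(joint, axes_x, axis_t):
    """I(T; X) from a joint pmf, marginalising over all other axes."""
    keep = tuple(sorted(axes_x + (axis_t,)))
    drop = tuple(a for a in range(joint.ndim) if a not in keep)
    p = joint.sum(axis=drop) if drop else joint
    t_pos = keep.index(axis_t)
    x_pos = tuple(i for i in range(p.ndim) if i != t_pos)
    return entropy(p.sum(axis=x_pos)) + entropy(p.sum(axis=t_pos)) - entropy(p.ravel())

# Toy joint pmf p(x1, x2, t): two independent noisy copies of the same bit.
p = np.zeros((2, 2, 2))
for x1 in (0, 1):
    for x2 in (0, 1):
        for t in (0, 1):
            p[x1, x2, t] = 0.5 * (0.9 if x1 == t else 0.1) * (0.9 if x2 == t else 0.1)

i1 = mutual_info(p, (0,), 2)      # I(T; X1)
i2 = mutual_info(p, (1,), 2)      # I(T; X2)
i12 = mutual_info(p, (0, 1), 2)   # I(T; X1, X2)

red = min(i1, i2)                 # MMI redundancy: what the weaker source already provides
unq1, unq2 = i1 - red, i2 - red   # unique information of each source
syn = i12 - i1 - i2 + red         # synergy: joint information beyond the summed parts
print(f"redundant={red:.3f}  unique={unq1:.3f}/{unq2:.3f}  synergistic={syn:.3f} bits")
```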
Affiliation(s)
- Andrea I Luppi
- Division of Anaesthesia, University of Cambridge, Cambridge, UK; Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK; Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- Fernando E Rosas
- Department of Informatics, University of Sussex, Brighton, UK; Centre for Psychedelic Research, Department of Brain Sciences, Imperial College London, London, UK; Centre for Complexity Science, Imperial College London, London, UK; Centre for Eudaimonia and Human Flourishing, University of Oxford, Oxford, UK
- Pedro A M Mediano
- Department of Computing, Imperial College London, London, UK; Department of Psychology, University of Cambridge, Cambridge, UK
- David K Menon
- Department of Medicine, University of Cambridge, Cambridge, UK; Wolfson Brain Imaging Centre, University of Cambridge, Cambridge, UK
- Emmanuel A Stamatakis
- Division of Anaesthesia, University of Cambridge, Cambridge, UK; Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK
4
Das A, Sheffield AG, Nandy AS, Jadi MP. Brain-state mediated modulation of inter-laminar dependencies in visual cortex. bioRxiv [Preprint] 2024:2023.02.04.527119. PMID: 36945492; PMCID: PMC10028746; DOI: 10.1101/2023.02.04.527119.
Abstract
Spatial attention is a quintessential example of adaptive information processing in the brain and is critical for recognizing behaviorally relevant objects in a cluttered environment. Object recognition is mediated by neural encoding along the ventral visual hierarchy. How the deployment of spatial attention aids these hierarchical computations is unclear. Prior studies point to two distinct mechanisms: an improvement in the efficacy of information directed from one encoding stage to another, and/or a suppression of shared information within encoding stages. To test these proposals, it is crucial to estimate the attentional modulation of unique information flow across and shared information within the encoding stages of the visual hierarchy. We investigated this in the multi-stage laminar network of visual area V4, an area strongly modulated by attention. Using network-based dependency estimation from multivariate data, we quantified the modulation of inter-layer information flow during a change detection task and found that deployment of attention indeed strengthened unique dependencies between the input and superficial layers. Using the partial information decomposition framework, we estimated the modulation of shared dependencies and found that they are reduced specifically in the putative excitatory subpopulations within a layer. Surprisingly, we found a strengthening of unique dependencies within the laminar populations, a finding not previously predicted. Crucially, these modulation patterns were also observed during successful behavioral outcomes (hits) that are thought to be mediated by endogenous brain state fluctuations, and not by experimentally imposed attentive states. Finally, phases of endogenous fluctuations that were optimal for 'hits' were associated with reduced neural excitability. A reduction in neural excitability, potentially mediated by diminished shared inputs, suggests a novel mechanism for enhancing unique information transmission during optimal states. By decomposing the modulation of multivariate information, and combined with prior theoretical work, our results suggest common computations of optimal sensory states that are attained by either task demands or endogenous fluctuations.
5
Martínez M. The Information-Processing Perspective on Categorization. Cogn Sci 2024; 48:e13411. PMID: 38402446; DOI: 10.1111/cogs.13411.
Abstract
Categorization behavior can be fruitfully analyzed in terms of the trade-off between as high as possible faithfulness in the transmission of information about samples of the classes to be categorized, and as low as possible transmission costs for that same information. The kinds of categorization behaviors we associate with conceptual atoms, prototypes, and exemplars emerge naturally as a result of this trade-off, in the presence of certain natural constraints on the probabilistic distribution of samples, and the ways in which we measure faithfulness. Beyond the general structure of categorization in these circumstances, the same information-centered perspective can shed light on other, more concrete properties of human categorization performance, such as the results of certain prominent experiments on supervised categorization.
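The faithfulness-versus-cost trade-off invoked above can be traced numerically with the standard Blahut-Arimoto iteration from rate-distortion theory. The sketch below is a generic illustration of that trade-off rather than the author's specific model; the stimuli, distortion function, and trade-off parameter beta are made up for the example.

```python
import numpy as np

def blahut_arimoto(p_x, dist, beta, n_iter=200):
    """Trace one point of the rate-distortion trade-off.

    p_x  : source distribution over stimuli, shape (nx,)
    dist : distortion d(x, x_hat), shape (nx, nc) -- cost of coding x as category x_hat
    beta : trade-off parameter (large beta favours faithfulness over compression)
    """
    nx, nc = dist.shape
    q_c = np.full(nc, 1.0 / nc)                    # marginal over categories
    for _ in range(n_iter):
        # Optimal stochastic encoder given the current category marginal.
        q_c_given_x = q_c[None, :] * np.exp(-beta * dist)
        q_c_given_x /= q_c_given_x.sum(axis=1, keepdims=True)
        q_c = p_x @ q_c_given_x                    # update the category marginal
    rate = np.sum(p_x[:, None] * q_c_given_x *
                  np.log2(q_c_given_x / q_c[None, :] + 1e-300))
    distortion = np.sum(p_x[:, None] * q_c_given_x * dist)
    return rate, distortion, q_c_given_x

# Four stimuli falling into two natural clusters; squared-difference distortion.
stimuli = np.array([0.0, 0.1, 0.9, 1.0])
prototypes = np.array([0.05, 0.95])
p_x = np.full(4, 0.25)
dist = (stimuli[:, None] - prototypes[None, :]) ** 2

for beta in (1.0, 10.0, 100.0):
    rate, d, _ = blahut_arimoto(p_x, dist, beta)
    print(f"beta={beta:6.1f}  rate={rate:.3f} bits  distortion={d:.4f}")
```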
6
Voges N, Lima V, Hausmann J, Brovelli A, Battaglia D. Decomposing Neural Circuit Function into Information Processing Primitives. J Neurosci 2024; 44:e0157232023. PMID: 38050070; PMCID: PMC10866194; DOI: 10.1523/jneurosci.0157-23.2023.
Abstract
It is challenging to measure how specific aspects of coordinated neural dynamics translate into operations of information processing and, ultimately, cognitive functions. An obstacle is that simple circuit mechanisms, such as self-sustained or propagating activity and nonlinear summation of inputs, do not directly give rise to high-level functions. Nevertheless, they already implement simple manipulations of the information carried by neural activity. Here, we propose that distinct functions, such as stimulus representation, working memory, or selective attention, stem from different combinations and types of low-level manipulations of information or information processing primitives. To test this hypothesis, we combine approaches from information theory with simulations of multi-scale neural circuits involving interacting brain regions that emulate well-defined cognitive functions. Specifically, we track the information dynamics emergent from patterns of neural dynamics, using quantitative metrics to detect where and when information is actively buffered, transferred or nonlinearly merged, as possible modes of low-level processing (storage, transfer and modification). We find that neuronal subsets maintaining representations in working memory or performing attentional gain modulation are signaled by their boosted involvement in operations of information storage or modification, respectively. Thus, information dynamic metrics, beyond detecting which network units participate in cognitive processing, also promise to specify how and when they do it, that is, through which type of primitive computation, a capability that may be exploited for the analysis of experimental recordings.
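As a rough illustration of two of these primitives, the sketch below estimates a storage-like quantity (active information storage, I(X_t; X_{t-1})) and a transfer-like quantity (transfer entropy) with plug-in estimators on synthetic binary series; it is a toy, not the simulation or metric pipeline used in the paper.

```python
import numpy as np
from collections import Counter

def plugin_mi(pairs):
    """Plug-in mutual information (bits) between the two columns of `pairs`."""
    n = len(pairs)
    pxy = Counter(map(tuple, pairs))
    px = Counter(p[0] for p in pairs)
    py = Counter(p[1] for p in pairs)
    return sum(c / n * np.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def active_storage(x):
    """Storage primitive: I(X_t ; X_{t-1})."""
    return plugin_mi(np.column_stack([x[:-1], x[1:]]))

def transfer_entropy(src, dst):
    """Transfer primitive: I(SRC_{t-1} ; DST_t | DST_{t-1}), via the chain rule."""
    joint = np.column_stack([dst[:-1] * 2 + src[:-1], dst[1:]])  # (dst & src past) vs dst now
    self_only = np.column_stack([dst[:-1], dst[1:]])             # dst past vs dst now
    return plugin_mi(joint) - plugin_mi(self_only)

rng = np.random.default_rng(0)
src = rng.integers(0, 2, 5000)
dst = np.zeros(5000, dtype=int)
for t in range(1, 5000):
    # dst follows src with a one-step lag but also keeps its own state,
    # so both the storage and the transfer primitive come out positive.
    dst[t] = src[t - 1] if rng.random() < 0.7 else dst[t - 1]

print(f"storage  I(dst_t ; dst_t-1) = {active_storage(dst):.3f} bits")
print(f"transfer TE(src -> dst)     = {transfer_entropy(src, dst):.3f} bits")
```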
Affiliation(s)
- Nicole Voges
- Institut de Neurosciences de La Timone, UMR 7289, CNRS, Aix-Marseille Université, Marseille 13005, France
- Institute for Language, Communication and the Brain (ILCB), Aix-Marseille Université, Marseille 13005, France
- Vinicius Lima
- Institut de Neurosciences des Systèmes (INS), UMR 1106, Aix-Marseille Université, Marseille 13005, France
- Johannes Hausmann
- R&D Department, Hyland Switzerland Sarl, Corcelles NE 2035, Switzerland
- Andrea Brovelli
- Institut de Neurosciences de La Timone, UMR 7289, CNRS, Aix-Marseille Université, Marseille 13005, France
- Institute for Language, Communication and the Brain (ILCB), Aix-Marseille Université, Marseille 13005, France
- Demian Battaglia
- Institute for Language, Communication and the Brain (ILCB), Aix-Marseille Université, Marseille 13005, France
- Institut de Neurosciences des Systèmes (INS), UMR 1106, Aix-Marseille Université, Marseille 13005, France
- University of Strasbourg Institute for Advanced Studies (USIAS), Strasbourg 67000, France
7
Scagliarini T, Sparacino L, Faes L, Marinazzo D, Stramaglia S. Gradients of O-information highlight synergy and redundancy in physiological applications. Front Netw Physiol 2024; 3:1335808. PMID: 38264338; PMCID: PMC10803408; DOI: 10.3389/fnetp.2023.1335808.
Abstract
The study of high order dependencies in complex systems has recently led to the introduction of statistical synergy, a novel quantity corresponding to a form of emergence in which patterns at large scales are not traceable from lower scales. As a consequence, several works in recent years have dealt with synergy and its counterpart, redundancy. In particular, the O-information is a signed metric that measures the balance between redundant and synergistic statistical dependencies. In spite of its growing use, this metric does not provide insight about the role played by low-order scales in the formation of high order effects. To fill this gap, the framework for the computation of the O-information has been recently expanded by introducing the so-called gradients of this metric, which measure the irreducible contribution of a variable (or a group of variables) to the high order informational circuits of a system. Here, we review the theory behind the O-information and its gradients and present the potential of these concepts in the field of network physiology, showing two new applications relevant to brain functional connectivity probed via functional magnetic resonance imaging and physiological interactions among the variability of heart rate, arterial pressure, respiration and cerebral blood flow.
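A minimal plug-in computation of the O-information and of its first-order gradients on synthetic binary data is sketched below; it is meant only to illustrate the sign convention (positive for redundancy-dominated, negative for synergy-dominated systems) and is not the estimator used for the continuous physiological signals analysed in the paper.

```python
import numpy as np
from collections import Counter

def H(data):
    """Plug-in Shannon entropy (bits) of the rows of a 2-D array."""
    counts = np.array(list(Counter(map(tuple, data)).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def o_information(data):
    """Omega = (n-2) H(X) + sum_i [H(X_i) - H(X without i)]; >0 redundancy, <0 synergy."""
    n = data.shape[1]
    omega = (n - 2) * H(data)
    for i in range(n):
        omega += H(data[:, [i]]) - H(np.delete(data, i, axis=1))
    return omega

def gradient(data, i):
    """First-order gradient: irreducible contribution of variable i to Omega."""
    return o_information(data) - o_information(np.delete(data, i, axis=1))

rng = np.random.default_rng(1)
x = rng.integers(0, 2, (20000, 3))
redundant = np.column_stack([x[:, :1]] * 3 + [x[:, 1:]])  # three copies of one bit + two noise bits
synergistic = np.column_stack([x, x.sum(axis=1) % 2])     # a parity bit creates pure synergy

for name, d in (("redundant set", redundant), ("synergistic set", synergistic)):
    grads = [round(gradient(d, i), 2) for i in range(d.shape[1])]
    print(name, "Omega =", round(o_information(d), 2), "gradients =", grads)
```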
Affiliation(s)
- Tomas Scagliarini
- Dipartimento di Fisica e Astronomia G. Galilei, Università degli Studi di Padova, Padova, Italy
- Laura Sparacino
- Dipartimento di Ingegneria, Università di Palermo, Palermo, Italy
- Luca Faes
- Dipartimento di Ingegneria, Università di Palermo, Palermo, Italy
- Sebastiano Stramaglia
- Dipartimento Interateneo di Fisica, Università degli Studi di Bari Aldo Moro, Bari, Italy
- Center of Innovative Technologies for Signal Detection and Processing (TIRES), Università degli Studi di Bari Aldo Moro, Bari, Italy
8
Koçillari L, Celotto M, Francis NA, Mukherjee S, Babadi B, Kanold PO, Panzeri S. Behavioural relevance of redundant and synergistic stimulus information between functionally connected neurons in mouse auditory cortex. Brain Inform 2023; 10:34. PMID: 38052917; PMCID: PMC10697912; DOI: 10.1186/s40708-023-00212-9.
Abstract
Measures of functional connectivity have played a central role in advancing our understanding of how information is transmitted and processed within the brain. Traditionally, these studies have focused on identifying redundant functional connectivity, which involves determining when activity is similar across different sites or neurons. However, recent research has highlighted the importance of also identifying synergistic connectivity-that is, connectivity that gives rise to information not contained in either site or neuron alone. Here, we measured redundant and synergistic functional connectivity between neurons in the mouse primary auditory cortex during a sound discrimination task. Specifically, we measured directed functional connectivity between neurons simultaneously recorded with calcium imaging. We used Granger Causality as a functional connectivity measure. We then used Partial Information Decomposition to quantify the amount of redundant and synergistic information about the presented sound that is carried by functionally connected or functionally unconnected pairs of neurons. We found that functionally connected pairs present proportionally more redundant information and proportionally less synergistic information about sound than unconnected pairs, suggesting that their functional connectivity is primarily redundant. Further, synergy and redundancy coexisted both when mice made correct or incorrect perceptual discriminations. However, redundancy was much higher (both in absolute terms and in proportion to the total information available in neuron pairs) in correct behavioural choices compared to incorrect ones, whereas synergy was higher in absolute terms but lower in relative terms in correct than in incorrect behavioural choices. Moreover, the proportion of redundancy reliably predicted perceptual discriminations, with the proportion of synergy adding no extra predictive power. These results suggest a crucial contribution of redundancy to correct perceptual discriminations, possibly due to the advantage it offers for information propagation, and also suggest a role of synergy in enhancing information level during correct discriminations.
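For reference, the directed functional-connectivity measure named above (pairwise, linear Granger causality) reduces, in its simplest lag-1 form, to a log-ratio of residual variances; the sketch below runs it on synthetic autoregressive traces and is not the calcium-imaging pipeline of the study.

```python
import numpy as np

def granger_1lag(x, y):
    """Granger causality x -> y with one lag: ln(var_restricted / var_full)."""
    y_now, y_past, x_past = y[1:], y[:-1], x[:-1]
    # Restricted model: y_t ~ y_{t-1}
    A = np.column_stack([np.ones_like(y_past), y_past])
    res_r = y_now - A @ np.linalg.lstsq(A, y_now, rcond=None)[0]
    # Full model: y_t ~ y_{t-1} + x_{t-1}
    B = np.column_stack([A, x_past])
    res_f = y_now - B @ np.linalg.lstsq(B, y_now, rcond=None)[0]
    return np.log(res_r.var() / res_f.var())

rng = np.random.default_rng(2)
n = 5000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()
    y[t] = 0.4 * y[t - 1] + 0.5 * x[t - 1] + rng.standard_normal()  # x drives y

print(f"GC x->y = {granger_1lag(x, y):.3f}")   # clearly positive
print(f"GC y->x = {granger_1lag(y, x):.3f}")   # close to zero
```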
Affiliation(s)
- Loren Koçillari
- Istituto Italiano Di Tecnologia, 38068, Rovereto, Italy
- Department of Excellence for Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), Falkenried 94, 20251, Hamburg, Germany
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf (UKE), 20246, Hamburg, Germany
- Marco Celotto
- Istituto Italiano Di Tecnologia, 38068, Rovereto, Italy
- Department of Excellence for Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), Falkenried 94, 20251, Hamburg, Germany
- Department of Pharmacy and Biotechnology, University of Bologna, 40126, Bologna, Italy
- Nikolas A Francis
- Department of Biology and Brain and Behavior Institute, University of Maryland, College Park, MD, 20742, USA
- Shoutik Mukherjee
- Department of Electrical and Computer Engineering and Institute for Systems Research, University of Maryland, College Park, MD, 20742, USA
- Behtash Babadi
- Department of Electrical and Computer Engineering and Institute for Systems Research, University of Maryland, College Park, MD, 20742, USA
- Patrick O Kanold
- Department of Biomedical Engineering and Kavli Neuroscience Discovery Institute, Johns Hopkins University, Baltimore, MD, 21205, USA
- Stefano Panzeri
- Department of Excellence for Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), Falkenried 94, 20251, Hamburg, Germany
9
Clawson W, Waked B, Madec T, Ghestem A, Quilichini PP, Battaglia D, Bernard C. Perturbed Information Processing Complexity in Experimental Epilepsy. J Neurosci 2023; 43:6573-6587. PMID: 37550052; PMCID: PMC10513075; DOI: 10.1523/jneurosci.0383-23.2023.
Abstract
Comorbidities, such as cognitive deficits, which often accompany epilepsies, constitute a basal state, while seizures are rare and transient events. This suggests that neural dynamics, in particular those supporting cognitive function, are altered in a permanent manner in epilepsy. Here, we test the hypothesis that primitive processes of information processing at the core of cognitive function (i.e., storage and sharing of information) are altered in the hippocampus and the entorhinal cortex in experimental epilepsy in adult, male Wistar rats. We find that information storage and sharing are organized into substates across the stereotypic states of slow and theta oscillations in both epilepsy and control conditions. However, their internal composition and organization through time are disrupted in epilepsy, partially losing brain state selectivity compared with controls, and shifting toward a regimen of disorder. We propose that the alteration of information processing at this algorithmic level of computation, the theoretical intermediate level between structure and function, may be a mechanism behind the emergent and widespread comorbidities associated with epilepsy, and perhaps other disorders.
SIGNIFICANCE STATEMENT: Comorbidities, such as cognitive deficits, which often accompany epilepsies, constitute a basal state, while seizures are rare and transient events. This suggests that neural dynamics, in particular those supporting cognitive function, are altered in a permanent manner in epilepsy. Here, we show that basic processes of information processing at the core of cognitive function (i.e., storage and sharing of information) are altered in the hippocampus and the entorhinal cortex (two regions involved in memory processes) in experimental epilepsy. Such disruption of information processing at the algorithmic level itself could underlie the general performance impairments in epilepsy.
Affiliation(s)
- Wesley Clawson
- Aix Marseille Université, INSERM, INS, Institut de Neurosciences des Systèmes, Marseille, France
- Allen Discovery Center, Tufts University, Medford, Massachusetts
- Benjamin Waked
- Aix Marseille Université, INSERM, INS, Institut de Neurosciences des Systèmes, Marseille, France
- Tanguy Madec
- Aix Marseille Université, INSERM, INS, Institut de Neurosciences des Systèmes, Marseille, France
- Antoine Ghestem
- Aix Marseille Université, INSERM, INS, Institut de Neurosciences des Systèmes, Marseille, France
- Pascale P Quilichini
- Aix Marseille Université, INSERM, INS, Institut de Neurosciences des Systèmes, Marseille, France
- Demian Battaglia
- Aix Marseille Université, INSERM, INS, Institut de Neurosciences des Systèmes, Marseille, France
- University of Strasbourg Institute for Advanced Studies, Strasbourg, France
- Christophe Bernard
- Aix Marseille Université, INSERM, INS, Institut de Neurosciences des Systèmes, Marseille, France
10
Barnett L, Seth AK. Dynamical independence: Discovering emergent macroscopic processes in complex dynamical systems. Phys Rev E 2023; 108:014304. PMID: 37583178; DOI: 10.1103/physreve.108.014304.
Abstract
We introduce a notion of emergence for macroscopic variables associated with highly multivariate microscopic dynamical processes. Dynamical independence instantiates the intuition of an emergent macroscopic process as one possessing the characteristics of a dynamical system "in its own right," with its own dynamical laws distinct from those of the underlying microscopic dynamics. We quantify (departure from) dynamical independence by a transformation-invariant Shannon information-based measure of dynamical dependence. We emphasize the data-driven discovery of dynamically independent macroscopic variables, and introduce the idea of a multiscale "emergence portrait" for complex systems. We show how dynamical dependence may be computed explicitly for linear systems in both time and frequency domains, facilitating discovery of emergent phenomena across spatiotemporal scales, and outline application of the linear operationalization to inference of emergence portraits for neural systems from neurophysiological time-series data. We discuss dynamical independence for discrete- and continuous-time deterministic dynamics, with potential application to Hamiltonian mechanics and classical complex systems such as flocking and cellular automata.
Affiliation(s)
- L Barnett
- Sussex Centre for Consciousness Science, Department of Informatics, University of Sussex, Falmer, Brighton BN1 9QJ, United Kingdom
- A K Seth
- Sussex Centre for Consciousness Science, Department of Informatics, University of Sussex, Falmer, Brighton BN1 9QJ, United Kingdom
- Canadian Institute for Advanced Research, Program on Brain, Mind, and Consciousness, Toronto, Ontario M5G 1M1, Canada
11
van Enk SJ. Pooling probability distributions and partial information decomposition. Phys Rev E 2023; 107:054133. PMID: 37329048; DOI: 10.1103/physreve.107.054133.
Abstract
Notwithstanding various attempts to construct a partial information decomposition (PID) for multiple variables by defining synergistic, redundant, and unique information, there is no consensus on how one ought to precisely define any of these quantities. One aim here is to illustrate how that ambiguity, or, more positively, freedom of choice, may arise. Using the basic idea that information equals the average reduction in uncertainty when going from an initial to a final probability distribution, synergistic information will likewise be defined as a difference between two entropies. One term is uncontroversial and characterizes "the whole" information that source variables carry jointly about a target variable T. The other term then is meant to characterize the information carried by the "sum of its parts." Here we interpret that concept as needing a suitable probability distribution aggregated ("pooled") from multiple marginal distributions (the parts). Ambiguity arises in the definition of the optimum way to pool two (or more) probability distributions. Independent of the exact definition of optimum pooling, the concept of pooling leads to a lattice that differs from the often-used redundancy-based lattice. One can associate not just a number (an average entropy) but also (pooled) probability distributions with each node of the lattice. As an example, one simple and reasonable approach to pooling is presented, which naturally gives rise to the overlap between different probability distributions as being a crucial quantity that characterizes both synergistic and unique information.
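One way to make the "whole minus pooled parts" construction concrete is sketched below for the XOR gate, using a simple (and not necessarily optimal) log-linear pooling of the two conditionals p(T|x1) and p(T|x2); this is an illustration of the general idea only, not the decomposition defined in the paper.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def pool(p1, p2):
    """Log-linear pooling: normalised geometric mean (assumes overlapping support)."""
    g = np.sqrt(p1 * p2)
    return g / g.sum()

# Joint pmf p(x1, x2, t) for an XOR target: each source alone says nothing about T.
p = np.zeros((2, 2, 2))
for x1 in (0, 1):
    for x2 in (0, 1):
        p[x1, x2, x1 ^ x2] = 0.25

p_t = p.sum(axis=(0, 1))
p_x1t, p_x2t = p.sum(axis=1), p.sum(axis=0)        # p(x1, t), p(x2, t)

joint_uncertainty = 0.0   # E[H(T | X1, X2)]
parts_uncertainty = 0.0   # E[H(pooled single-source beliefs about T)]
for x1 in (0, 1):
    for x2 in (0, 1):
        w = p[x1, x2].sum()                         # p(x1, x2)
        joint_uncertainty += w * entropy(p[x1, x2] / w)
        pooled = pool(p_x1t[x1] / p_x1t[x1].sum(), p_x2t[x2] / p_x2t[x2].sum())
        parts_uncertainty += w * entropy(pooled)

whole = entropy(p_t) - joint_uncertainty            # I(T ; X1, X2) = 1 bit
parts = entropy(p_t) - parts_uncertainty            # what the pooled marginals recover = 0
print(f"whole={whole:.2f}  parts={parts:.2f}  synergy={whole - parts:.2f} bits")
```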
Affiliation(s)
- S J van Enk
- Department of Physics, University of Oregon, Eugene, Oregon 97403, USA
12
Jansma A. Higher-Order Interactions and Their Duals Reveal Synergy and Logical Dependence beyond Shannon-Information. Entropy (Basel) 2023; 25:e25040648. PMID: 37190436; PMCID: PMC10137660; DOI: 10.3390/e25040648.
Abstract
Information-theoretic quantities reveal dependencies among variables in the structure of joint, marginal, and conditional entropies while leaving certain fundamentally different systems indistinguishable. Furthermore, there is no consensus on the correct higher-order generalisation of mutual information (MI). In this manuscript, we show that a recently proposed model-free definition of higher-order interactions among binary variables (MFIs) is, like mutual information, a Möbius inversion on a Boolean algebra, but of surprisal instead of entropy. This provides an information-theoretic interpretation of the MFIs, and by extension of Ising interactions. We study the objects dual to mutual information and the MFIs on the order-reversed lattices. We find that dual MI is related to the previously studied differential mutual information, while dual interactions are interactions with respect to a different background state. Unlike (dual) mutual information, interactions and their duals uniquely identify all six 2-input logic gates, the dy- and triadic distributions, and different causal dynamics that are identical in terms of their Shannon information content.
Affiliation(s)
- Abel Jansma
- MRC Human Genetics Unit, Institute of Genetics & Cancer, University of Edinburgh, Edinburgh EH8 9YL, UK
- Higgs Centre for Theoretical Physics, School of Physics & Astronomy, University of Edinburgh, Edinburgh EH8 9YL, UK
- Biomedical AI Lab, School of Informatics, University of Edinburgh, Edinburgh EH8 9YL, UK
13
Niizato T, Murakami H, Musha T. Functional duality in group criticality via ambiguous interactions. PLoS Comput Biol 2023; 19:e1010869. PMID: 36791061; PMCID: PMC9931117; DOI: 10.1371/journal.pcbi.1010869.
Abstract
Critical phenomena are widely observed in living systems. If the system is at criticality, it can quickly transfer information and achieve optimal response to external stimuli. In particular, animal collective behavior has numerous critical properties, which are related to other research areas, such as the brain. Although the critical phenomena influencing collective behavior have been extensively studied, two important aspects require clarification. First, these critical phenomena never occur on a single scale but are instead nested from the micro- to macro-levels (e.g., from a Lévy walk to scale-free correlation). Second, the functional role of group criticality is unclear. To elucidate these aspects, the ambiguous interaction model is constructed in this study; this model has a common framework and is a natural extension of previous representative models (such as the Boids and Vicsek models). We demonstrate that our model can explain the nested criticality of collective behavior across several scales (considering scale-free correlation, super diffusion, Lévy walks, and 1/f fluctuation for relative velocities). Our model can also explain the relationship between scale-free correlation and group turns. To examine this relation, we propose a new method, applying partial information decomposition (PID) to two scale-free induced subgroups. Using PID, we construct information flows between two scale-free induced subgroups and find that coupling of the group morphology (i.e., the velocity distributions) and its fluctuation power (i.e., the fluctuation distributions) likely enables rapid group turning. Thus, the flock morphology may help its internal fluctuation convert to dynamic behavior. Our result sheds new light on the role of group morphology, which has received relatively little attention, while retaining the importance of fluctuation dynamics in group criticality.
Affiliation(s)
- Takayuki Niizato
- Faculty of Engineering, Information and Systems, University of Tsukuba, Tsukuba, Ibaraki, Japan
- Hisashi Murakami
- Faculty of Information and Human Science, Kyoto Institute of Technology, Sakyo-ku, Kyoto city, Kyoto, Japan
- Takuya Musha
- Faculty of Engineering, Information and Systems, University of Tsukuba, Tsukuba, Ibaraki, Japan
14
He Z, Toyoizumi T. Progressive Interpretation Synthesis: Interpreting Task Solving by Quantifying Previously Used and Unused Information. Neural Comput 2022; 35:38-57. PMID: 36417587; DOI: 10.1162/neco_a_01542.
Abstract
A deep neural network is a good task solver, but it is difficult to make sense of its operation. People have different ideas about how to interpret its operation. We look at this problem from a new perspective where the interpretation of task solving is synthesized by quantifying how much and what previously unused information is exploited in addition to the information used to solve previous tasks. First, after learning several tasks, the network acquires several information partitions related to each task. We propose that the network then learns the minimal information partition that supplements previously learned information partitions to more accurately represent the input. This extra partition is associated with unconceptualized information that has not been used in previous tasks. We manage to identify what unconceptualized information is used and quantify the amount. To interpret how the network solves a new task, we quantify as meta-information how much information from each partition is extracted. We implement this framework with the variational information bottleneck technique. We test the framework with the MNIST and the CLEVR data set. The framework is shown to be able to compose information partitions and synthesize experience-dependent interpretation in the form of meta-information. This system progressively improves the resolution of interpretation upon new experience by converting a part of the unconceptualized information partition to a task-related partition. It can also provide a visual interpretation by imaging what is the part of previously unconceptualized information that is needed to solve a new task.
Affiliation(s)
- Zhengqi He
- Lab for Neural Computation and Adaptation, RIKEN Center for Brain Science, Saitama 351-0198, Japan
- Taro Toyoizumi
- Lab for Neural Computation and Adaptation, RIKEN Center for Brain Science, Saitama 351-0198, Japan; Department of Mathematical Informatics, Graduate School of Information Science and Technology, the University of Tokyo, Tokyo 113-8656, Japan
15
Shine JM. Adaptively navigating affordance landscapes: How interactions between the superior colliculus and thalamus coordinate complex, adaptive behaviour. Neurosci Biobehav Rev 2022; 143:104921. DOI: 10.1016/j.neubiorev.2022.104921.
16
High-order functional redundancy in ageing explained via alterations in the connectome in a whole-brain model. PLoS Comput Biol 2022; 18:e1010431. PMID: 36054198; PMCID: PMC9477425; DOI: 10.1371/journal.pcbi.1010431.
Abstract
The human brain generates a rich repertoire of spatio-temporal activity patterns, which support a wide variety of motor and cognitive functions. These patterns of activity change with age in a multi-factorial manner. One of these factors is the variations in the brain's connectomics that occur along the lifespan. However, the precise relationship between high-order functional interactions and connectomics, as well as their variations with age, are largely unknown, in part due to the absence of mechanistic models that can efficiently map brain connectomics to functional connectivity in aging. To investigate this issue, we have built a neurobiologically-realistic whole-brain computational model using both anatomical and functional MRI data from 161 participants ranging from 10 to 80 years old. We show that the differences in high-order functional interactions between age groups can be largely explained by variations in the connectome. Based on this finding, we propose a simple neurodegeneration model that is representative of normal physiological aging. As such, when applied to connectomes of young participants, it reproduces the age-variations that occur in the high-order structure of the functional data. Overall, these results begin to disentangle the mechanisms by which structural changes in the connectome lead to functional differences in the ageing brain. Our model can also serve as a starting point for modeling more complex forms of pathological ageing or cognitive deficits.
Modern neuroimaging techniques allow us to study how the human brain's anatomical architecture (a.k.a. structural connectome) changes under different conditions or interventions. Recently, using functional neuroimaging data, we have shown that complex patterns of interactions between brain areas change along the lifespan, exhibiting increased redundant interactions in the older population. However, the mechanisms that underlie these functional differences are still unclear. Here, we extended this work and hypothesized that the variations of functional patterns can be explained by the dynamics of the brain's anatomical networks, which are known to degenerate as we age. To test this hypothesis, we implemented a whole-brain model of neuronal activity, where different brain regions are anatomically wired using real connectomes from 161 participants with ages ranging from 10 to 80 years old. Analyzing different functional aspects of brain activity when varying the empirical connectomes, we show that the increased redundancy found in the older group can indeed be explained by precise rules affecting anatomical connectivity, thus emphasizing the critical role that the brain connectome plays in shaping complex functional interactions and the efficiency in the global communication of the human brain.
17
Kim SH, Woo J, Choi K, Choi M, Han K. Neural Information Processing and Computations of Two-Input Synapses. Neural Comput 2022; 34:2102-2131. PMID: 36027799; DOI: 10.1162/neco_a_01534.
Abstract
Information processing in artificial neural networks is largely dependent on the nature of neuron models. While commonly used models are designed for linear integration of synaptic inputs, accumulating experimental evidence suggests that biological neurons are capable of nonlinear computations for many converging synaptic inputs via homo- and heterosynaptic mechanisms. This nonlinear neuronal computation may play an important role in complex information processing at the neural circuit level. Here we characterize the dynamics and coding properties of neuron models on synaptic transmissions delivered from two hidden states. The neuronal information processing is influenced by the cooperative and competitive interactions among synapses and the coherence of the hidden states. Furthermore, we demonstrate that neuronal information processing under two-input synaptic transmission can be mapped to linearly nonseparable XOR as well as basic AND/OR operations. In particular, mixtures of linear and nonlinear neuron models outperform networks consisting of only one type on the Fashion-MNIST test. This study provides a computational framework for assessing information processing of neuron and synapse models that may be beneficial for the design of brain-inspired artificial intelligence algorithms and neuromorphic systems.
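The claim that a single linear threshold unit cannot realize XOR, while a unit with a multiplicative (nonlinear) interaction between its two inputs can, is easy to verify numerically; the sketch below is a generic check of that point and not the synapse model studied in the paper.

```python
import numpy as np
from itertools import product

def linear_unit(x1, x2, w1, w2, b):
    return int(w1 * x1 + w2 * x2 + b > 0)

def multiplicative_unit(x1, x2, w1, w2, w12, b):
    # The extra w12 term stands in for a nonlinear (hetero-synaptic) interaction.
    return int(w1 * x1 + w2 * x2 + w12 * x1 * x2 + b > 0)

inputs = list(product((0, 1), repeat=2))
xor_target = [x1 ^ x2 for x1, x2 in inputs]

# Exhaustive search over a small weight grid: no linear unit matches XOR,
# but a unit with the multiplicative term does (e.g. w1=w2=1, w12=-2, b=-0.5).
grid = np.linspace(-2, 2, 9)
linear_ok = any(
    [linear_unit(x1, x2, w1, w2, b) for x1, x2 in inputs] == xor_target
    for w1 in grid for w2 in grid for b in grid)
nonlin = [multiplicative_unit(x1, x2, 1.0, 1.0, -2.0, -0.5) for x1, x2 in inputs]

print("some linear unit solves XOR on this grid:", linear_ok)   # False
print("multiplicative unit output:", nonlin, "target:", xor_target)
```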
Affiliation(s)
- Soon Ho Kim
- Laboratory of Computational Neurophysics, Convergence Research Center for Brain Science, Brain Science Institute, Korea Institute of Science and Technology, Seoul 02792, South Korea
- Junhyuk Woo
- Laboratory of Computational Neurophysics, Convergence Research Center for Brain Science, Brain Science Institute, Korea Institute of Science and Technology, Seoul 02792, South Korea
- Kiri Choi
- School of Computational Sciences, Korea Institute for Advanced Study, Seoul 02455, South Korea
- MooYoung Choi
- Department of Physics and Astronomy and Center for Theoretical Physics, Seoul National University, Seoul 08826, South Korea
- Kyungreem Han
- Laboratory of Computational Neurophysics, Convergence Research Center for Brain Science, Brain Science Institute, Korea Institute of Science and Technology, Seoul 02792, South Korea
18
Kay JW, Schulz JM, Phillips WA. A Comparison of Partial Information Decompositions Using Data from Real and Simulated Layer 5b Pyramidal Cells. Entropy (Basel) 2022; 24:e24081021. PMID: 35893001; PMCID: PMC9394329; DOI: 10.3390/e24081021.
Abstract
Partial information decomposition allows the joint mutual information between an output and a set of inputs to be divided into components that are synergistic or shared or unique to each input. We consider five different decompositions and compare their results using data from layer 5b pyramidal cells in two different studies. The first study was on the amplification of somatic action potential output by apical dendritic input and its regulation by dendritic inhibition. We find that two of the decompositions produce much larger estimates of synergy and shared information than the others, as well as large levels of unique misinformation. When within-neuron differences in the components are examined, the five methods produce more similar results for all but the shared information component, for which two methods produce a different statistical conclusion from the others. There are some differences in the expression of unique information asymmetry among the methods. It is significantly larger, on average, under dendritic inhibition. Three of the methods support a previous conclusion that apical amplification is reduced by dendritic inhibition. The second study used a detailed compartmental model to produce action potentials for many combinations of the numbers of basal and apical synaptic inputs. Decompositions of the entire data set produce similar differences to those in the first study. Two analyses of decompositions are conducted on subsets of the data. In the first, the decompositions reveal a bifurcation in unique information asymmetry. For three of the methods, this suggests that apical drive switches to basal drive as the strength of the basal input increases, while the other two show changing mixtures of information and misinformation. Decompositions produced using the second set of subsets show that all five decompositions provide support for properties of cooperative context-sensitivity—to varying extents.
Affiliation(s)
- Jim W. Kay
- School of Mathematics and Statistics, University of Glasgow, Glasgow G12 8QQ, UK
- Jan M. Schulz
- Department of Biomedicine, University of Basel, 4001 Basel, Switzerland
19
Mediano PAM, Rosas FE, Luppi AI, Jensen HJ, Seth AK, Barrett AB, Carhart-Harris RL, Bor D. Greater than the parts: a review of the information decomposition approach to causal emergence. Philos Trans A Math Phys Eng Sci 2022; 380:20210246. PMID: 35599558; PMCID: PMC9125226; DOI: 10.1098/rsta.2021.0246.
Abstract
Emergence is a profound subject that straddles many scientific disciplines, including the formation of galaxies and how consciousness arises from the collective activity of neurons. Despite the broad interest that exists on this concept, the study of emergence has suffered from a lack of formalisms that could be used to guide discussions and advance theories. Here, we summarize, elaborate on, and extend a recent formal theory of causal emergence based on information decomposition, which is quantifiable and amenable to empirical testing. This theory relates emergence with information about a system's temporal evolution that cannot be obtained from the parts of the system separately. This article provides an accessible but rigorous introduction to the framework, discussing the merits of the approach in various scenarios of interest. We also discuss several interpretation issues and potential misunderstandings, while highlighting the distinctive benefits of this formalism. This article is part of the theme issue 'Emergent phenomena in complex physical and socio-technical systems: from cells to societies'.
Affiliation(s)
- Pedro A M Mediano
- Department of Psychology, University of Cambridge, Cambridge, UK
- Department of Psychology, Queen Mary University of London, London, UK
- Fernando E Rosas
- Centre for Psychedelic Research, Imperial College London, London, UK
- Data Science Institute, Imperial College London, London, UK
- Centre for Complexity Science, Imperial College London, London, UK
- Andrea I Luppi
- University Division of Anaesthesia, University of Cambridge, Cambridge, UK
- Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK
- Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, UK
- The Alan Turing Institute, London, UK
- Henrik J Jensen
- Centre for Complexity Science, Imperial College London, London, UK
- Department of Mathematics, Imperial College London, London, UK
- Institute of Innovative Research, Tokyo Institute of Technology, Tokyo, Japan
- Anil K Seth
- Sackler Centre for Consciousness Science, University of Sussex, Brighton, UK
- CIFAR Program on Brain, Mind, and Consciousness, Toronto, Canada
- Adam B Barrett
- Sackler Centre for Consciousness Science, University of Sussex, Brighton, UK
- The Data Intensive Science Centre, Department of Informatics, University of Sussex, Brighton, UK
- Robin L Carhart-Harris
- Centre for Psychedelic Research, Imperial College London, London, UK
- Psychedelics Division, Neuroscape, Department of Neurology, University of California, San Francisco, CA, USA
- Daniel Bor
- Department of Psychology, University of Cambridge, Cambridge, UK
- Department of Psychology, Queen Mary University of London, London, UK
20
Newman EL, Varley TF, Parakkattu VK, Sherrill SP, Beggs JM. Revealing the Dynamics of Neural Information Processing with Multivariate Information Decomposition. Entropy (Basel) 2022; 24:e24070930. PMID: 35885153; PMCID: PMC9319160; DOI: 10.3390/e24070930.
Abstract
The varied cognitive abilities and rich adaptive behaviors enabled by the animal nervous system are often described in terms of information processing. This framing raises the issue of how biological neural circuits actually process information, and some of the most fundamental outstanding questions in neuroscience center on understanding the mechanisms of neural information processing. Classical information theory has long been understood to be a natural framework within which information processing can be understood, and recent advances in the field of multivariate information theory offer new insights into the structure of computation in complex systems. In this review, we provide an introduction to the conceptual and practical issues associated with using multivariate information theory to analyze information processing in neural circuits, as well as discussing recent empirical work in this vein. Specifically, we provide an accessible introduction to the partial information decomposition (PID) framework. PID reveals redundant, unique, and synergistic modes by which neurons integrate information from multiple sources. We focus particularly on the synergistic mode, which quantifies the “higher-order” information carried in the patterns of multiple inputs and is not reducible to input from any single source. Recent work in a variety of model systems has revealed that synergistic dynamics are ubiquitous in neural circuitry and show reliable structure–function relationships, emerging disproportionately in neuronal rich clubs, downstream of recurrent connectivity, and in the convergence of correlated activity. We draw on the existing literature on higher-order information dynamics in neuronal networks to illustrate the insights that have been gained by taking an information decomposition perspective on neural activity. Finally, we briefly discuss future promising directions for information decomposition approaches to neuroscience, such as work on behaving animals, multi-target generalizations of PID, and time-resolved local analyses.
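The synergistic mode described above is conveniently illustrated on small logic gates. The sketch below computes the original Williams-Beer partial information decomposition (I_min redundancy) for an AND gate with uniform inputs, where the pair of sources carries more information than either source holds alone or both share; it is a toy calculation, not one of the estimators reviewed in the paper.

```python
import numpy as np

def specific_info(p_xt, t):
    """I_spec(T = t ; X): expected reduction in surprisal about t from observing X."""
    p_t = p_xt.sum(axis=0)[t]
    p_x = p_xt.sum(axis=1)
    s = 0.0
    for x in range(p_xt.shape[0]):
        if p_xt[x, t] > 0:
            p_x_given_t = p_xt[x, t] / p_t
            p_t_given_x = p_xt[x, t] / p_x[x]
            s += p_x_given_t * (np.log2(p_t_given_x) - np.log2(p_t))
    return s

def imin_pid(p):
    """Williams-Beer two-source PID of p(x1, x2, t) based on I_min redundancy."""
    p_x1t, p_x2t = p.sum(axis=1), p.sum(axis=0)
    p_t = p.sum(axis=(0, 1))
    # Redundancy: expected minimum specific information over the two sources.
    red = sum(p_t[t] * min(specific_info(p_x1t, t), specific_info(p_x2t, t))
              for t in range(p_t.size) if p_t[t] > 0)
    mi = lambda pxt: sum(p_t[t] * specific_info(pxt, t)
                         for t in range(p_t.size) if p_t[t] > 0)
    i1, i2 = mi(p_x1t), mi(p_x2t)
    i12 = mi(p.reshape(-1, p.shape[2]))            # treat (x1, x2) as one source
    return {"redundant": red, "unique_1": i1 - red, "unique_2": i2 - red,
            "synergistic": i12 - i1 - i2 + red}

# AND gate with uniform inputs: mostly synergy plus some redundancy, no unique information.
p = np.zeros((2, 2, 2))
for x1 in (0, 1):
    for x2 in (0, 1):
        p[x1, x2, x1 & x2] = 0.25

print({k: round(v, 3) for k, v in imin_pid(p).items()})
# expected roughly {'redundant': 0.311, 'unique_1': 0.0, 'unique_2': 0.0, 'synergistic': 0.5}
```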
Affiliation(s)
- Ehren L. Newman
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405, USA
- Thomas F. Varley
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405, USA
- Vibin K. Parakkattu
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405, USA
- John M. Beggs
- Department of Physics, Indiana University, Bloomington, IN 47405, USA
21
Mercier MR, Dubarry AS, Tadel F, Avanzini P, Axmacher N, Cellier D, Vecchio MD, Hamilton LS, Hermes D, Kahana MJ, Knight RT, Llorens A, Megevand P, Melloni L, Miller KJ, Piai V, Puce A, Ramsey NF, Schwiedrzik CM, Smith SE, Stolk A, Swann NC, Vansteensel MJ, Voytek B, Wang L, Lachaux JP, Oostenveld R. Advances in human intracranial electroencephalography research, guidelines and good practices. Neuroimage 2022; 260:119438. PMID: 35792291; DOI: 10.1016/j.neuroimage.2022.119438.
Abstract
Since the second half of the twentieth century, intracranial electroencephalography (iEEG), including both electrocorticography (ECoG) and stereo-electroencephalography (sEEG), has provided an intimate view into the human brain. At the interface between fundamental research and the clinic, iEEG provides both high temporal resolution and high spatial specificity but comes with constraints, such as the individual's tailored sparsity of electrode sampling. Over the years, researchers in neuroscience have developed their practices to make the most of the iEEG approach. Here we offer a critical review of iEEG research practices in a didactic framework for newcomers, as well as addressing issues encountered by proficient researchers. The scope is threefold: (i) review common practices in iEEG research, (ii) suggest potential guidelines for working with iEEG data and answer frequently asked questions based on the most widespread practices, and (iii) based on current neurophysiological knowledge and methodologies, pave the way to good practice standards in iEEG research. The organization of this paper follows the steps of iEEG data processing. The first section contextualizes iEEG data collection. The second section focuses on localization of intracranial electrodes. The third section highlights the main pre-processing steps. The fourth section presents iEEG signal analysis methods. The fifth section discusses statistical approaches. The sixth section draws some unique perspectives on iEEG research. Finally, to ensure a consistent nomenclature throughout the manuscript and to align with other guidelines, e.g., Brain Imaging Data Structure (BIDS) and the OHBM Committee on Best Practices in Data Analysis and Sharing (COBIDAS), we provide a glossary to disambiguate terms related to iEEG research.
Collapse
|
22
|
Combrisson E, Allegra M, Basanisi R, Ince RAA, Giordano B, Bastin J, Brovelli A. Group-level inference of information-based measures for the analyses of cognitive brain networks from neurophysiological data. Neuroimage 2022; 258:119347. [PMID: 35660460 DOI: 10.1016/j.neuroimage.2022.119347] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2021] [Revised: 05/24/2022] [Accepted: 05/30/2022] [Indexed: 12/30/2022] Open
Abstract
The reproducibility crisis in neuroimaging, in particular in the case of underpowered studies, has introduced doubts about our ability to reproduce, replicate and generalize findings. As a response, we have seen the emergence of suggested guidelines and principles for neuroscientists known as Good Scientific Practice for conducting more reliable research. Still, every study remains almost unique in its combination of analytical and statistical approaches. While this is understandable considering the diversity of designs and brain data recordings, it also represents a striking obstacle to reproducibility. Here, we propose a non-parametric permutation-based statistical framework, primarily designed for neurophysiological data, in order to perform group-level inferences on non-negative measures of information, encompassing metrics from information theory, machine learning, or measures of distance. The framework supports both fixed- and random-effect models to adapt to inter-individual and inter-session variability. Using numerical simulations, we compared the accuracy of both group models in retrieving ground-truth effects, as well as test- and cluster-wise corrections for multiple comparisons. We then reproduced and extended existing results using both spatially uniform MEG and non-uniform intracranial neurophysiological data. We showed how the framework can be used to extract stereotypical task- and behavior-related effects across the population covering scales from the local level of brain regions and inter-areal functional connectivity to measures summarizing network properties. We also present an open-source Python toolbox called Frites that includes the proposed statistical pipeline using information-theoretic metrics such as single-trial functional connectivity estimations for the extraction of cognitive brain networks. Taken together, we believe that this framework deserves careful attention as its robustness and flexibility could be the starting point toward the uniformization of statistical approaches.
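The statistical logic described here can be sketched in a few lines: because information measures are non-negative, the null distribution is built by permuting the target variable within each subject, recomputing the group statistic, and applying a max-statistic correction across regions. The plug-in binned mutual-information estimator, the data shapes, and the permutation scheme below are simplified illustrative assumptions and are not the Frites implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def binned_mi(x, y, bins=8):
    """Plug-in mutual information (bits) between a continuous x and a discrete y."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1])
    xb = np.digitize(x, edges)
    joint = np.zeros((bins, y.max() + 1))
    for xi, yi in zip(xb, y):
        joint[xi, yi] += 1
    joint /= joint.sum()
    px, py = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

def group_permutation_test(data, labels, n_perm=500):
    """data: list over subjects of (n_trials, n_regions) arrays; labels: list of (n_trials,) int arrays.
    Returns the observed group-mean MI per region and max-statistic corrected p-values."""
    n_regions = data[0].shape[1]
    def group_mi(perm=False):
        mi = np.zeros(n_regions)
        for X, y in zip(data, labels):
            yy = rng.permutation(y) if perm else y   # permute within subject under H0
            mi += [binned_mi(X[:, r], yy) for r in range(n_regions)]
        return mi / len(data)
    obs = group_mi()
    null_max = np.array([group_mi(perm=True).max() for _ in range(n_perm)])
    pvals = (null_max[None, :] >= obs[:, None]).mean(1)
    return obs, pvals

# Toy example: 5 subjects, 3 regions; only region 0 carries information about the labels.
data, labels = [], []
for _ in range(5):
    y = rng.integers(0, 2, 200)
    X = rng.standard_normal((200, 3))
    X[:, 0] += y                       # effect in region 0 only
    data.append(X); labels.append(y)

obs, p = group_permutation_test(data, labels)
print(np.round(obs, 3), np.round(p, 3))   # region 0 should show high MI and a small corrected p-value
```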
Collapse
Affiliation(s)
- Etienne Combrisson
- Institut de Neurosciences de la Timone, Aix Marseille Université, UMR 7289 CNRS, 13005, Marseille, France.
| | - Michele Allegra
- Institut de Neurosciences de la Timone, Aix Marseille Université, UMR 7289 CNRS, 13005, Marseille, France; Dipartimento di Fisica e Astronomia "Galileo Galilei", Università di Padova, via Marzolo 8, 35131 Padova, Italy; Padua Neuroscience Center, Università di Padova, via Orus 2, 35131 Padova, Italy
| | - Ruggero Basanisi
- Institut de Neurosciences de la Timone, Aix Marseille Université, UMR 7289 CNRS, 13005, Marseille, France
| | - Robin A A Ince
- School of Psychology and Neuroscience, University of Glasgow, Glasgow, UK
| | - Bruno Giordano
- Institut de Neurosciences de la Timone, Aix Marseille Université, UMR 7289 CNRS, 13005, Marseille, France
| | - Julien Bastin
- Univ. Grenoble Alpes, Inserm, U1216, Grenoble Institut Neurosciences, 38000 Grenoble, France
| | - Andrea Brovelli
- Institut de Neurosciences de la Timone, Aix Marseille Université, UMR 7289 CNRS, 13005, Marseille, France.
| |
Collapse
|
23
|
Luppi AI, Mediano PAM, Rosas FE, Holland N, Fryer TD, O'Brien JT, Rowe JB, Menon DK, Bor D, Stamatakis EA. A synergistic core for human brain evolution and cognition. Nat Neurosci 2022; 25:771-782. [PMID: 35618951 DOI: 10.1038/s41593-022-01070-0] [Citation(s) in RCA: 55] [Impact Index Per Article: 27.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2020] [Accepted: 03/30/2022] [Indexed: 12/11/2022]
Abstract
How does the organization of neural information processing enable humans' sophisticated cognition? Here we decompose functional interactions between brain regions into synergistic and redundant components, revealing their distinct information-processing roles. Combining functional and structural neuroimaging with meta-analytic results, we demonstrate that redundant interactions are predominantly associated with structurally coupled, modular sensorimotor processing. Synergistic interactions instead support integrative processes and complex cognition across higher-order brain networks. The human brain leverages synergistic information to a greater extent than nonhuman primates, with high-synergy association cortices exhibiting the highest degree of evolutionary cortical expansion. Synaptic density mapping from positron emission tomography and convergent molecular and metabolic evidence demonstrate that synergistic interactions are supported by receptor diversity and human-accelerated genes underpinning synaptic function. This information-resolved approach provides analytic tools to disentangle information integration from coupling, enabling richer, more accurate interpretations of functional connectivity, and illuminating how the human neurocognitive architecture navigates the trade-off between robustness and integration.
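To build intuition for what "redundant" and "synergistic" interactions mean for continuous signals, the sketch below computes a minimum-mutual-information (MMI) style decomposition for three jointly Gaussian variables, where every mutual information has a closed form in terms of covariance determinants. The covariance values and the use of the simple MMI redundancy on static variables, rather than the paper's full integrated information decomposition of time series, are illustrative assumptions.

```python
import numpy as np

def gaussian_mi(cov, idx_a, idx_b):
    """I(A;B) in bits for jointly Gaussian variables with covariance matrix `cov`."""
    det = lambda ix: np.linalg.det(cov[np.ix_(ix, ix)])
    return 0.5 * np.log2(det(idx_a) * det(idx_b) / det(idx_a + idx_b))

def mmi_pid(cov, s1, s2, target):
    """Minimum-mutual-information PID of I(target; s1, s2): (redundancy, unique_1, unique_2, synergy)."""
    i1 = gaussian_mi(cov, [s1], [target])
    i2 = gaussian_mi(cov, [s2], [target])
    i12 = gaussian_mi(cov, [s1, s2], [target])
    red = min(i1, i2)
    return red, i1 - red, i2 - red, i12 - max(i1, i2)

# Case 1: the target is the sum of two nearly independent sources plus noise -> synergy dominates.
rho, noise = 0.1, 0.5
cov_syn = np.array([[1.0,       rho,       1.0 + rho],
                    [rho,       1.0,       1.0 + rho],
                    [1.0 + rho, 1.0 + rho, 2.0 + 2 * rho + noise]])

# Case 2: the two sources are nearly copies of each other and the target follows one of them
# -> redundancy dominates and synergy is negligible.
cov_red = np.array([[1.0,  0.95, 1.0],
                    [0.95, 1.0,  0.95],
                    [1.0,  0.95, 1.5]])

for name, cov in [("synergy-dominated", cov_syn), ("redundancy-dominated", cov_red)]:
    red, u1, u2, syn = mmi_pid(cov, 0, 1, 2)
    print(f"{name}: redundancy={red:.3f}, unique=({u1:.3f}, {u2:.3f}), synergy={syn:.3f} bits")
```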
Collapse
Affiliation(s)
- Andrea I Luppi
- Division of Anaesthesia, School of Clinical Medicine, University of Cambridge, Cambridge, UK; Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK; Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, UK; The Alan Turing Institute, London, UK
| | - Pedro A M Mediano
- Department of Psychology, University of Cambridge, Cambridge, UK; Department of Psychology, Queen Mary University of London, London, UK
| | - Fernando E Rosas
- Center for Psychedelic Research, Department of Brain Science, Imperial College London, London, UK; Data Science Institute, Imperial College London, London, UK; Center for Complexity Science, Imperial College London, London, UK
| | - Negin Holland
- Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK
| | - Tim D Fryer
- Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK; Wolfson Brain Imaging Centre, University of Cambridge, Cambridge, UK
| | - John T O'Brien
- Department of Psychiatry, University of Cambridge, Cambridge, UK; Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK
| | - James B Rowe
- Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK; Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK; MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
| | - David K Menon
- Division of Anaesthesia, School of Clinical Medicine, University of Cambridge, Cambridge, UK; Wolfson Brain Imaging Centre, University of Cambridge, Cambridge, UK
| | - Daniel Bor
- Department of Psychology, University of Cambridge, Cambridge, UK; Department of Psychology, Queen Mary University of London, London, UK
| | - Emmanuel A Stamatakis
- Division of Anaesthesia, School of Clinical Medicine, University of Cambridge, Cambridge, UK; Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK
| |
Collapse
|
24
|
Quantifying Reinforcement-Learning Agent’s Autonomy, Reliance on Memory and Internalisation of the Environment. ENTROPY 2022; 24:e24030401. [PMID: 35327912 PMCID: PMC8947692 DOI: 10.3390/e24030401] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/20/2022] [Revised: 03/08/2022] [Accepted: 03/09/2022] [Indexed: 11/16/2022]
Abstract
Intuitively, the level of autonomy of an agent is related to the degree to which the agent’s goals and behaviour are decoupled from the immediate control by the environment. Here, we capitalise on a recent information-theoretic formulation of autonomy and introduce an algorithm for calculating autonomy in a limiting process of time step approaching infinity. We tackle the question of how the autonomy level of an agent changes during training. In particular, in this work, we use the partial information decomposition (PID) framework to monitor the levels of autonomy and environment internalisation of reinforcement-learning (RL) agents. We performed experiments on two environments: a grid world, in which the agent has to collect food, and a repeating-pattern environment, in which the agent has to learn to imitate a sequence of actions by memorising the sequence. PID also allows us to answer how much the agent relies on its internal memory (versus how much it relies on the observations) when transitioning to its next internal state. The experiments show that specific terms of PID strongly correlate with the obtained reward and with the agent’s behaviour against perturbations in the observations.
Collapse
|
25
|
A Novel Approach to the Partial Information Decomposition. ENTROPY 2022; 24:e24030403. [PMID: 35327914 PMCID: PMC8947370 DOI: 10.3390/e24030403] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/04/2022] [Revised: 02/22/2022] [Accepted: 02/23/2022] [Indexed: 11/16/2022]
Abstract
We consider the “partial information decomposition” (PID) problem, which aims to decompose the information that a set of source random variables provide about a target random variable into separate redundant, synergistic, union, and unique components. In the first part of this paper, we propose a general framework for constructing a multivariate PID. Our framework is defined in terms of a formal analogy with intersection and union from set theory, along with an ordering relation which specifies when one information source is more informative than another. Our definitions are algebraically and axiomatically motivated, and can be generalized to domains beyond Shannon information theory (such as algorithmic information theory and quantum information theory). In the second part of this paper, we use our general framework to define a PID in terms of the well-known Blackwell order, which has a fundamental operational interpretation. We demonstrate our approach on numerous examples and show that it overcomes many drawbacks associated with previous proposals.
Collapse
|
26
|
Aguilera M, Douchamps V, Battaglia D, Goutagny R. How Many Gammas? Redefining Hippocampal Theta-Gamma Dynamic During Spatial Learning. Front Behav Neurosci 2022; 16:811278. [PMID: 35177972 PMCID: PMC8843838 DOI: 10.3389/fnbeh.2022.811278] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2021] [Accepted: 01/03/2022] [Indexed: 01/09/2023] Open
Abstract
The hippocampal formation is one of the brain systems in which the functional roles of coordinated oscillations in information representation and communication are better studied. Within this circuit, neuronal oscillations are conceived as a mechanism to precisely coordinate upstream and downstream neuronal ensembles, underlying dynamic exchange of information. Within a global reference framework provided by theta (θ) oscillations, different gamma-frequency (γ) carriers would temporally segregate information originating from different sources, thereby allowing networks to disambiguate convergent inputs. Two γ sub-bands were thus defined according to their frequency (slow γ, 30–80 Hz; medium γ, 60–120 Hz) and differential power distribution across CA1 dendritic layers. According to this prevalent model, layer-specific γ oscillations in CA1 would reliably identify the temporal dynamics of afferent inputs and may therefore aid in identifying specific memory processes (encoding for medium γ vs. retrieval for slow γ). However, this influential view, derived from time-averages of either specific γ sub-bands or different projection methods, might not capture the complexity of CA1 θ-γ interactions. Recent studies investigating γ oscillations at the θ cycle timescale have revealed a more dynamic and diverse landscape of θ-γ motifs, with many θ cycles containing multiple γ bouts of various frequencies. To properly capture the hippocampal oscillatory complexity, we have argued in this review that we should consider the entirety of the data and its multidimensional complexity. This will call for a revision of the actual model and will require the use of new tools allowing the description of individual γ bouts in their full complexity.
Collapse
Affiliation(s)
- Matthieu Aguilera
- Laboratoire de Neurosciences Cognitives et Adaptatives (LNCA), Faculté de Psychologie, Université de Strasbourg, Strasbourg, France
| | - Vincent Douchamps
- Laboratoire de Neurosciences Cognitives et Adaptatives (LNCA), Faculté de Psychologie, Université de Strasbourg, Strasbourg, France
| | - Demian Battaglia
- Institut de Neurosciences des Systèmes, CNRS, Aix-Marseille Université, Marseille, France
- University of Strasbourg Institute for Advanced Study (USIAS), Strasbourg, France
| | - Romain Goutagny
- Laboratoire de Neurosciences Cognitives et Adaptatives (LNCA), Faculté de Psychologie, Université de Strasbourg, Strasbourg, France
- *Correspondence: Romain Goutagny,
| |
Collapse
|
27
|
Luppi AI, Mediano PAM, Rosas FE, Harrison DJ, Carhart-Harris RL, Bor D, Stamatakis EA. What it is like to be a bit: an integrated information decomposition account of emergent mental phenomena. Neurosci Conscious 2021; 2021:niab027. [PMID: 34804593 PMCID: PMC8600547 DOI: 10.1093/nc/niab027] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2021] [Revised: 06/24/2021] [Accepted: 08/12/2021] [Indexed: 01/08/2023] Open
Abstract
A central question in neuroscience concerns the relationship between consciousness and its physical substrate. Here, we argue that a richer characterization of consciousness can be obtained by viewing it as constituted of distinct information-theoretic elements. In other words, we propose a shift from quantification of consciousness-viewed as integrated information-to its decomposition. Through this approach, termed Integrated Information Decomposition (ΦID), we lay out a formal argument that whether the consciousness of a given system is an emergent phenomenon depends on its information-theoretic composition-providing a principled answer to the long-standing dispute on the relationship between consciousness and emergence. Furthermore, we show that two organisms may attain the same amount of integrated information, yet differ in their information-theoretic composition. Building on ΦID's revised understanding of integrated information, termed ΦR, we also introduce the notion of ΦR-ing ratio to quantify how efficiently an entity uses information for conscious processing. A combination of ΦR and ΦR-ing ratio may provide an important way to compare the neural basis of different aspects of consciousness. Decomposition of consciousness enables us to identify qualitatively different 'modes of consciousness', establishing a common space for mapping the phenomenology of different conscious states. We outline both theoretical and empirical avenues to carry out such mapping between phenomenology and information-theoretic modes, starting from a central feature of everyday consciousness: selfhood. Overall, ΦID yields rich new ways to explore the relationship between information, consciousness, and its emergence from neural dynamics.
Collapse
Affiliation(s)
- Andrea I Luppi
- Division of Anaesthesia, School of Clinical Medicine, University of Cambridge, Cambridge CB2 0QQ, UK
- Department of Clinical Neurosciences, University of Cambridge, Cambridge CB2 0QQ, UK
- Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge CB2 1SB, UK
| | - Pedro A M Mediano
- Department of Psychology, University of Cambridge, Cambridge CB2 3EB, UK
| | - Fernando E Rosas
- Center for Psychedelic Research, Department of Brain Science, Imperial College London, London W12 0NN, UK
- Data Science Institute, Imperial College London, London SW7 2AZ, UK
- Centre for Complexity Science, Imperial College London, London SW7 2AZ, UK
| | - David J Harrison
- Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge CB2 1SB, UK
- Department of History and Philosophy of Science, University of Cambridge, Cambridge CB2 3RH, UK
| | - Robin L Carhart-Harris
- Center for Psychedelic Research, Department of Brain Science, Imperial College London, London W12 0NN, UK
| | - Daniel Bor
- Department of Psychology, University of Cambridge, Cambridge CB2 3EB, UK
| | - Emmanuel A Stamatakis
- Division of Anaesthesia, School of Clinical Medicine, University of Cambridge, Cambridge CB2 0QQ, UK
- Department of Clinical Neurosciences, University of Cambridge, Cambridge CB2 0QQ, UK
| |
Collapse
|
28
|
Sherrill SP, Timme NM, Beggs JM, Newman EL. Partial information decomposition reveals that synergistic neural integration is greater downstream of recurrent information flow in organotypic cortical cultures. PLoS Comput Biol 2021; 17:e1009196. [PMID: 34252081 PMCID: PMC8297941 DOI: 10.1371/journal.pcbi.1009196] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2020] [Revised: 07/22/2021] [Accepted: 06/18/2021] [Indexed: 11/22/2022] Open
Abstract
The directionality of network information flow dictates how networks process information. A central component of information processing in both biological and artificial neural networks is their ability to perform synergistic integration–a type of computation. We established previously that synergistic integration varies directly with the strength of feedforward information flow. However, the relationships between both recurrent and feedback information flow and synergistic integration remain unknown. To address this, we analyzed the spiking activity of hundreds of neurons in organotypic cultures of mouse cortex. We asked how empirically observed synergistic integration–determined from partial information decomposition–varied with local functional network structure that was categorized into motifs with varying recurrent and feedback information flow. We found that synergistic integration was elevated in motifs with greater recurrent information flow beyond that expected from the local feedforward information flow. Feedback information flow was interrelated with feedforward information flow and was associated with decreased synergistic integration. Our results indicate that synergistic integration is distinctly influenced by the directionality of local information flow. Networks compute information. That is, they modify inputs to generate distinct outputs. These computations are an important component of network information processing. Knowing how the routing of information in a network influences computation is therefore crucial. Here we asked how a key form of computation—synergistic integration—is related to the direction of local information flow in networks of spiking cortical neurons. Specifically, we asked how information flow between input neurons (i.e., recurrent information flow) and information flow from output neurons to input neurons (i.e., feedback information flow) was related to the amount of synergistic integration performed by output neurons. We found that greater synergistic integration occurred where there was more recurrent information flow. And, lesser synergistic integration occurred where there was more feedback information flow relative to feedforward information flow. These results show that computation, in the form of synergistic integration, is distinctly influenced by the directionality of local information flow. Such work is valuable for predicting where and how network computation occurs and for designing networks with desired computational abilities.
Collapse
Affiliation(s)
- Samantha P. Sherrill
- Department of Psychological and Brain Sciences & Program in Neuroscience, Indiana University Bloomington, Bloomington, Indiana, United States of America
- * E-mail: (SPS); (ELN)
| | - Nicholas M. Timme
- Department of Psychology, Indiana University-Purdue University Indianapolis, Indianapolis, Indiana, United States of America
| | - John M. Beggs
- Department of Physics & Program in Neuroscience, Indiana University Bloomington, Bloomington, Indiana, United States of America
| | - Ehren L. Newman
- Department of Psychological and Brain Sciences & Program in Neuroscience, Indiana University Bloomington, Bloomington, Indiana, United States of America
- * E-mail: (SPS); (ELN)
| |
Collapse
|
29
|
Gutknecht AJ, Wibral M, Makkeh A. Bits and pieces: understanding information decomposition from part-whole relationships and formal logic. Proc Math Phys Eng Sci 2021; 477:20210110. [PMID: 35197799 PMCID: PMC8261229 DOI: 10.1098/rspa.2021.0110] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2021] [Accepted: 06/10/2021] [Indexed: 11/24/2022] Open
Abstract
Partial information decomposition (PID) seeks to decompose the multivariate mutual information that a set of source variables contains about a target variable into basic pieces, the so-called ‘atoms of information’. Each atom describes a distinct way in which the sources may contain information about the target. For instance, some information may be contained uniquely in a particular source, some information may be shared by multiple sources and some information may only become accessible synergistically if multiple sources are combined. In this paper, we show that the entire theory of PID can be derived, firstly, from considerations of part-whole relationships between information atoms and mutual information terms, and secondly, based on a hierarchy of logical constraints describing how a given information atom can be accessed. In this way, the idea of a PID is developed on the basis of two of the most elementary relationships in nature: the part-whole relationship and the relation of logical implication. This unifying perspective provides insights into pressing questions in the field such as the possibility of constructing a PID based on concepts other than redundant information in the general n-sources case. Additionally, it admits of a particularly accessible exposition of PID theory.
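The 'atoms of information' referred to above correspond to antichains of non-empty subsets of the sources, ordered by the parthood and logical-implication relations the paper formalizes. A brute-force sketch that enumerates these lattice nodes for small numbers of sources is given below; the enumeration strategy is an illustrative assumption and becomes infeasible beyond roughly four sources.

```python
from itertools import combinations

def pid_atoms(n_sources):
    """Enumerate the nodes of the PID redundancy lattice for n sources:
    antichains of non-empty source subsets in which no member contains another."""
    sources = range(n_sources)
    subsets = [frozenset(c) for r in range(1, n_sources + 1)
               for c in combinations(sources, r)]
    atoms = []
    for r in range(1, len(subsets) + 1):
        for collection in combinations(subsets, r):
            if all(not (a < b or b < a) for a, b in combinations(collection, 2)):
                atoms.append(collection)
    return atoms

for n in (2, 3):
    print(f"{n} sources: {len(pid_atoms(n))} atoms")   # 2 sources -> 4 atoms, 3 sources -> 18 atoms
# For two sources the four atoms are {1}{2} (redundant), {1} and {2} (unique), and {12} (synergistic).
```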
Collapse
Affiliation(s)
- A J Gutknecht
- Campus Institute for Dynamics of Biological Networks, Georg-August University, Goettingen, Germany; MEG Unit, Brain Imaging Center, Goethe University, Frankfurt, Germany
| | - M Wibral
- Campus Institute for Dynamics of Biological Networks, Georg-August University, Goettingen, Germany
| | - A Makkeh
- Campus Institute for Dynamics of Biological Networks, Georg-August University, Goettingen, Germany
| |
Collapse
|
30
|
Gatica M, Cofré R, Mediano PAM, Rosas FE, Orio P, Diez I, Swinnen SP, Cortes JM. High-Order Interdependencies in the Aging Brain. Brain Connect 2021; 11:734-744. [PMID: 33858199 DOI: 10.1089/brain.2020.0982] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/13/2023] Open
Abstract
Background: Brain interdependencies can be studied from either a structural/anatomical perspective ("structural connectivity") or by considering statistical interdependencies ("functional connectivity" [FC]). Interestingly, while structural connectivity is by definition pairwise (white-matter fibers project from one region to another), FC is not. However, most FC analyses only focus on pairwise statistics and they neglect higher order interactions. A promising tool to study high-order interdependencies is the recently proposed O-Information, which can quantify the intrinsic statistical synergy and the redundancy in groups of three or more interacting variables. Methods: We analyzed functional magnetic resonance imaging (fMRI) data obtained at rest from 164 healthy subjects with ages ranging in 10 to 80 years and used O-Information to investigate how high-order statistical interdependencies are affected by age. Results: Older participants (from 60 to 80 years old) exhibited a higher predominance of redundant dependencies compared with younger participants, an effect that seems to be pervasive as it is evident for all orders of interaction. In addition, while there is strong heterogeneity across brain regions, we found a "redundancy core" constituted by the prefrontal and motor cortices in which redundancy was evident at all the interaction orders studied. Discussion: High-order interdependencies in fMRI data reveal a dominant redundancy in functions such as working memory, executive, and motor functions. Our methodology can be used for a broad range of applications, and the corresponding code is freely available.
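The O-Information used here has a simple closed form in entropies: O(X1, ..., Xn) = (n - 2) H(X1, ..., Xn) + sum_j [H(Xj) - H(X excluding j)], positive when redundancy dominates and negative when synergy dominates. A small plug-in estimator for discrete data is sketched below; the binary toy systems are illustrative assumptions and ignore the estimation-bias issues that matter for real fMRI data.

```python
import numpy as np

def entropy(cols):
    """Plug-in joint entropy (bits) of the discrete array `cols` (shape: samples x variables)."""
    _, counts = np.unique(cols, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def o_information(X):
    """O-Information of an (n_samples, n_vars) discrete array.
    Positive -> redundancy-dominated; negative -> synergy-dominated."""
    n = X.shape[1]
    total = (n - 2) * entropy(X)
    for j in range(n):
        total += entropy(X[:, [j]]) - entropy(np.delete(X, j, axis=1))
    return total

rng = np.random.default_rng(1)
x1, x2 = rng.integers(0, 2, 10000), rng.integers(0, 2, 10000)

xor_system = np.column_stack([x1, x2, x1 ^ x2])   # parity: a purely synergistic triplet
copy_system = np.column_stack([x1, x1, x1])       # three copies: a purely redundant triplet

print(round(o_information(xor_system), 2))   # close to -1.0 bit
print(round(o_information(copy_system), 2))  # close to +1.0 bit
```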
Collapse
Affiliation(s)
- Marilyn Gatica
- Centro Interdisciplinario de Neurociencia de Valparaíso, Universidad de Valparaíso, Valparaíso, Chile; Biomedical Research Doctorate Program, University of the Basque Country (UPV/EHU), Leioa, Spain
| | - Rodrigo Cofré
- CIMFAV-Ingemat, Facultad de Ingeniería, Universidad de Valparaíso, Valparaíso, Chile
| | - Pedro A M Mediano
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
| | - Fernando E Rosas
- Centre for Psychedelic Research, Department of Brain Science, Imperial College London, London, United Kingdom; Data Science Institute, Imperial College London, London, United Kingdom; Centre for Complexity Science, Imperial College London, London, United Kingdom
| | - Patricio Orio
- Centro Interdisciplinario de Neurociencia de Valparaíso, Universidad de Valparaíso, Valparaíso, Chile; Instituto de Neurociencia, Facultad de Ciencias, Universidad de Valparaíso, Valparaíso, Chile
| | - Ibai Diez
- Department of Radiology, Gordon Center for Medical Imaging, Harvard Medical School, Massachusetts General Hospital, Boston, Massachusetts, USA; Neurology Department, Harvard Medical School, Boston, Massachusetts, USA; Neurotechnology Laboratory, Tecnalia Health Department, Derio, Spain
| | - Stephan P Swinnen
- Research Center for Movement Control and Neuroplasticity, Department of Movement Sciences, KU Leuven, Leuven, Belgium; Leuven Brain Institute (LBI), KU Leuven, Leuven, Belgium
| | - Jesus M Cortes
- Computational Neuroimaging Lab, Biocruces-Bizkaia Health Research Institute, Barakaldo, Spain; IKERBASQUE: The Basque Foundation for Science, Bilbao, Spain; Department of Cell Biology and Histology, University of the Basque Country, Leioa, Spain
| |
Collapse
|
31
|
Makkeh A, Gutknecht AJ, Wibral M. Introducing a differentiable measure of pointwise shared information. Phys Rev E 2021; 103:032149. [PMID: 33862718 DOI: 10.1103/physreve.103.032149] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2020] [Accepted: 02/19/2021] [Indexed: 11/07/2022]
Abstract
Partial information decomposition of the multivariate mutual information describes the distinct ways in which a set of source variables contains information about a target variable. The groundbreaking work of Williams and Beer has shown that this decomposition cannot be determined from classic information theory without making additional assumptions, and several candidate measures have been proposed, often drawing on principles from related fields such as decision theory. None of these measures is differentiable with respect to the underlying probability mass function. We here present a measure that satisfies this property, emerges solely from information-theoretic principles, and has the form of a local mutual information. We show how the measure can be understood from the perspective of exclusions of probability mass, a principle that is foundational to the original definition of mutual information by Fano. Since our measure is well defined for individual realizations of random variables it lends itself, for example, to local learning in artificial neural networks. We also show that it has a meaningful Möbius inversion on a redundancy lattice and obeys a target chain rule. We give an operational interpretation of the measure based on the decisions that an agent should take if given only the shared information.
Collapse
Affiliation(s)
- Abdullah Makkeh
- Campus Institute for Dynamics of Biological Networks, Georg-August University, Goettingen, Germany
| | - Aaron J Gutknecht
- Campus Institute for Dynamics of Biological Networks, Georg-August University, Goettingen, Germany
| | - Michael Wibral
- Campus Institute for Dynamics of Biological Networks, Georg-August University, Goettingen, Germany
| |
Collapse
|
32
|
Pedreschi N, Bernard C, Clawson W, Quilichini P, Barrat A, Battaglia D. Dynamic core-periphery structure of information sharing networks in entorhinal cortex and hippocampus. Netw Neurosci 2021; 4:946-975. [PMID: 33615098 PMCID: PMC7888487 DOI: 10.1162/netn_a_00142] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2019] [Accepted: 04/16/2020] [Indexed: 02/01/2023] Open
Abstract
Neural computation is associated with the emergence, reconfiguration, and dissolution of cell assemblies in the context of varying oscillatory states. Here, we describe the complex spatiotemporal dynamics of cell assemblies through temporal network formalism. We use a sliding window approach to extract sequences of networks of information sharing among single units in hippocampus and entorhinal cortex during anesthesia and study how global and node-wise functional connectivity properties evolve through time and as a function of changing global brain state (theta vs. slow-wave oscillations). First, we find that information sharing networks display, at any time, a core-periphery structure in which an integrated core of more tightly functionally interconnected units links to more loosely connected network leaves. However the units participating to the core or to the periphery substantially change across time windows, with units entering and leaving the core in a smooth way. Second, we find that discrete network states can be defined on top of this continuously ongoing liquid core-periphery reorganization. Switching between network states results in a more abrupt modification of the units belonging to the core and is only loosely linked to transitions between global oscillatory states. Third, we characterize different styles of temporal connectivity that cells can exhibit within each state of the sharing network. While inhibitory cells tend to be central, we show that, otherwise, anatomical localization only poorly influences the patterns of temporal connectivity of the different cells. Furthermore, cells can change temporal connectivity style when the network changes state. Altogether, these findings reveal that the sharing of information mediated by the intrinsic dynamics of hippocampal and entorhinal cortex cell assemblies have a rich spatiotemporal structure, which could not have been identified by more conventional time- or state-averaged analyses of functional connectivity. It is generally thought that computations performed by local brain circuits rely on complex neural processes, associated with the flexible waxing and waning of cell assemblies, that is, an ensemble of cells firing in tight synchrony. Although cell assembly formation is inherently and unavoidably dynamical, it is still common to find studies in which essentially “static” approaches are used to characterize this process. In the present study, we adopt instead a temporal network approach. Avoiding usual time-averaging procedures, we reveal that hub neurons are not hardwired but that cells vary smoothly their degree of integration within the assembly core. Furthermore, our temporal network framework enables the definition of alternative possible styles of “hubness.” Some cells may share information with a multitude of other units but only in an intermittent manner, as “activists” in a flash mob. In contrast, some other cells may share information in a steadier manner, as resolute “lobbyists.” Finally, by avoiding averages over preimposed states, we show that within each global oscillatory state rich switching dynamics can take place between a repertoire of many available network states. We thus show that the temporal network framework provides a natural and effective language to rigorously describe the rich spatiotemporal patterns of information sharing instantiated by cell assembly evolution.
Collapse
Affiliation(s)
- Nicola Pedreschi
- Aix-Marseille University, Université de Toulon, CNRS, CPT, Turing Center for Living Systems, Marseille, France
| | - Christophe Bernard
- Aix-Marseille University, Inserm, INS, Institut de Neurosciences des Systèmes, Marseille, France
| | - Wesley Clawson
- Aix-Marseille University, Inserm, INS, Institut de Neurosciences des Systèmes, Marseille, France
| | - Pascale Quilichini
- Aix-Marseille University, Inserm, INS, Institut de Neurosciences des Systèmes, Marseille, France
| | - Alain Barrat
- Aix-Marseille University, Université de Toulon, CNRS, CPT, Turing Center for Living Systems, Marseille, France
| | - Demian Battaglia
- Aix-Marseille University, Inserm, INS, Institut de Neurosciences des Systèmes, Marseille, France
| |
Collapse
|
33
|
Saetia S, Yoshimura N, Koike Y. Constructing Brain Connectivity Model Using Causal Network Reconstruction Approach. Front Neuroinform 2021; 15:619557. [PMID: 33679363 PMCID: PMC7930222 DOI: 10.3389/fninf.2021.619557] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2020] [Accepted: 01/21/2021] [Indexed: 11/23/2022] Open
Abstract
Studying brain function is a challenging task. In the past, we could only study brain anatomical structures post-mortem, or infer brain functions from clinical data of patients with a brain injury. Nowadays, technologies such as functional magnetic resonance imaging (fMRI) enable non-invasive observation of brain activity. Several approaches have been proposed to interpret brain activity data. The brain connectivity model is a graphical tool that represents the interaction between brain regions during certain states. It depicts how a brain region causes changes in other parts of the brain, which can be interpreted as information flow. This model can be used to help interpret how the brain works. There are several mathematical frameworks that can be used to infer the connectivity model from brain activity signals. Granger causality is one such approach and was one of the first to be applied to brain activity data. However, the assumptions of the framework, such as its reliance on pairwise correlation, combined with limitations of brain activity data, such as the low temporal resolution of fMRI signals, make the interpretation of the connectivity difficult. We therefore propose the application of the Tigramite causal discovery framework to fMRI data. The Tigramite framework uses measures such as causal effect to analyze causal relations in the system. This enables the framework to identify both direct and indirect pathways or connectivities. In this paper, we applied the framework to the Human Connectome Project motor task-fMRI dataset. We then present the results and discuss how the framework improves interpretability of the connectivity model. We hope that, in the future, this framework will help us understand more complex brain functions such as memory, consciousness, or the resting state of the brain.
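To make the pairwise-analysis limitation concrete, here is a minimal lag-based linear Granger-causality sketch in plain NumPy: X "Granger-causes" Y if adding X's past to an autoregressive model of Y reduces the residual variance. The simulated chain X to Z to Y (variable names, lags, and coefficients are illustrative assumptions) shows why a purely pairwise test can report an X-to-Y link that is entirely mediated by Z, the kind of indirect pathway that conditional frameworks such as Tigramite are designed to separate out.

```python
import numpy as np

def granger_gain(x, y, cond=None, lags=2):
    """Reduction in residual variance of y when x's past is added to a linear model that
    already contains y's own past (and, optionally, the past of a conditioning series)."""
    def lagged(v):
        return [v[lags - k:len(v) - k] for k in range(1, lags + 1)]
    Y = y[lags:]
    base = lagged(y) + (lagged(cond) if cond is not None else [])
    def rss(cols):
        A = np.column_stack([np.ones_like(Y)] + cols)
        beta, *_ = np.linalg.lstsq(A, Y, rcond=None)
        return np.sum((Y - A @ beta) ** 2)
    return 1.0 - rss(base + lagged(x)) / rss(base)

rng = np.random.default_rng(2)
n = 5000
x = rng.standard_normal(n)
z, y = np.zeros(n), np.zeros(n)
for t in range(1, n):                          # causal chain: x --(1 step)--> z --(1 step)--> y
    z[t] = 0.8 * x[t - 1] + 0.3 * rng.standard_normal()
    y[t] = 0.8 * z[t - 1] + 0.3 * rng.standard_normal()

print(round(granger_gain(x, y), 3))            # pairwise test: sizeable apparent "x -> y" link via the mediator
print(round(granger_gain(x, y, cond=z), 3))    # conditioned on z: the apparent direct link vanishes (up to noise)
print(round(granger_gain(z, y), 3))            # the genuine direct link z -> y remains large
```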
Collapse
Affiliation(s)
- Supat Saetia
- Department of Information Processing, Interdisciplinary Graduate School of Science and Engineering, Tokyo Institute of Technology, Yokohama, Japan
| | - Natsue Yoshimura
- Precursory Research for Embryonic Science and Technology (PRESTO), Japan Science and Technology Agency (JST), Kawaguchi, Japan
| | - Yasuharu Koike
- Department of Information Processing, Interdisciplinary Graduate School of Science and Engineering, Tokyo Institute of Technology, Yokohama, Japan
| |
Collapse
|
34
|
Kunert-Graf J, Sakhanenko N, Galas D. Partial Information Decomposition and the Information Delta: A Geometric Unification Disentangling Non-Pairwise Information. ENTROPY (BASEL, SWITZERLAND) 2020; 22:E1333. [PMID: 33266517 PMCID: PMC7760044 DOI: 10.3390/e22121333] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/22/2020] [Revised: 11/12/2020] [Accepted: 11/19/2020] [Indexed: 01/01/2023]
Abstract
Information theory provides robust measures of multivariable interdependence, but classically does little to characterize the multivariable relationships it detects. The Partial Information Decomposition (PID) characterizes the mutual information between variables by decomposing it into unique, redundant, and synergistic components. This has been usefully applied, particularly in neuroscience, but there is currently no generally accepted method for its computation. Independently, the Information Delta framework characterizes non-pairwise dependencies in genetic datasets. This framework has developed an intuitive geometric interpretation for how discrete functions encode information, but lacks some important generalizations. This paper shows that the PID and Delta frameworks are largely equivalent. We equate their key expressions, allowing for results in one framework to apply towards open questions in the other. For example, we find that the approach of Bertschinger et al. is useful for the open Information Delta question of how to deal with linkage disequilibrium. We also show how PID solutions can be mapped onto the space of delta measures. Using Bertschinger et al. as an example solution, we identify a specific plane in delta-space on which this approach's optimization is constrained, and compute it for all possible three-variable discrete functions of a three-letter alphabet. This yields a clear geometric picture of how a given solution decomposes information.
Collapse
Affiliation(s)
- James Kunert-Graf
- Pacific Northwest Research Institute, Seattle, WA 98122, USA; (N.S.); (D.G.)
| | | | | |
Collapse
|
35
|
Candadai M, Izquierdo EJ. Sources of predictive information in dynamical neural networks. Sci Rep 2020; 10:16901. [PMID: 33037274 PMCID: PMC7547683 DOI: 10.1038/s41598-020-73380-x] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2020] [Accepted: 09/07/2020] [Indexed: 11/28/2022] Open
Abstract
Behavior involves the ongoing interaction between an organism and its environment. One of the prevailing theories of adaptive behavior is that organisms are constantly making predictions about their future environmental stimuli. However, how they acquire that predictive information is still poorly understood. Two complementary mechanisms have been proposed: predictions are generated from an agent's internal model of the world or predictions are extracted directly from the environmental stimulus. In this work, we demonstrate that predictive information, measured using bivariate mutual information, cannot distinguish between these two kinds of systems. Furthermore, we show that predictive information cannot distinguish between organisms that are adapted to their environments and random dynamical systems exposed to the same environment. To understand the role of predictive information in adaptive behavior, we need to be able to identify where it is generated. To do this, we decompose information transfer across the different components of the organism-environment system and track the flow of information in the system over time. To validate the proposed framework, we examined it on a set of computational models of idealized agent-environment systems. Analysis of the systems revealed three key insights. First, predictive information, when sourced from the environment, can be reflected in any agent irrespective of its ability to perform a task. Second, predictive information, when sourced from the nervous system, requires special dynamics acquired during the process of adapting to the environment. Third, the magnitude of predictive information in a system can be different for the same task if the environmental structure changes.
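As a minimal illustration of the bivariate predictive-information measure discussed above, the sketch below estimates the mutual information between a signal and the future stimulus at a fixed lag with a simple histogram (plug-in) estimator. The sinusoidal stimulus, the lag, the bin count, and the "copying agent" are illustrative assumptions; the paper's point is precisely that such a number alone does not reveal where the prediction is generated.

```python
import numpy as np

def predictive_information(signal, future, lag=1, bins=16):
    """Plug-in mutual information (bits) between signal[t] and future[t + lag]."""
    a, b = signal[:-lag], future[lag:]
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    joint /= joint.sum()
    px, py = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(3)
n = 20000
stimulus = np.sin(np.linspace(0, 400 * np.pi, n)) + 0.2 * rng.standard_normal(n)

# An "agent" that merely copies the stimulus inherits predictive information about the
# future stimulus without doing any computation of its own.
copying_agent = stimulus + 0.2 * rng.standard_normal(n)

print(round(predictive_information(stimulus, stimulus), 2))
print(round(predictive_information(copying_agent, stimulus), 2))
# Both values are substantial; the scalar measure alone cannot say where the prediction is generated.
```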
Collapse
Affiliation(s)
- Madhavun Candadai
- Cognitive Science program, Indiana University, Bloomington, IN, USA
- The Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, USA
| | - Eduardo J Izquierdo
- Cognitive Science program, Indiana University, Bloomington, IN, USA.
- The Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, USA.
| |
Collapse
|
36
|
Aru J, Siclari F, Phillips WA, Storm JF. Apical drive-A cellular mechanism of dreaming? Neurosci Biobehav Rev 2020; 119:440-455. [PMID: 33002561 DOI: 10.1016/j.neubiorev.2020.09.018] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2020] [Revised: 09/08/2020] [Accepted: 09/13/2020] [Indexed: 11/17/2022]
Abstract
Dreams are internally generated experiences that occur independently of current sensory input. Here we argue, based on cortical anatomy and function, that dream experiences are tightly related to the workings of a specific part of cortical pyramidal neurons, the apical integration zone (AIZ). The AIZ receives and processes contextual information from diverse sources and could constitute a major switch point for transitioning from externally to internally generated experiences such as dreams. We propose that during dreams the output of certain pyramidal neurons is mainly driven by input into the AIZ. We call this mode of functioning "apical drive". Our hypothesis is based on the evidence that the cholinergic and adrenergic arousal systems, which show different dynamics between waking, slow wave sleep, and rapid eye movement sleep, have specific effects on the AIZ. We suggest that apical drive may also contribute to waking experiences, such as mental imagery. Future studies, investigating the different modes of apical function and their regulation during sleep and wakefulness are likely to be richly rewarded.
Collapse
Affiliation(s)
- Jaan Aru
- Institute of Computer Science, University of Tartu, Estonia; Institute of Biology, Humboldt University Berlin, Germany.
| | - Francesca Siclari
- Center for Investigation and Research on Sleep, Centre Hospitalier Universitaire Vaudois, Lausanne, Switzerland; Department of Clinical Neurosciences, Centre Hospitalier Universitaire Vaudois, Lausanne, Switzerland; Faculty of Natural Sciences, Psychology, University of Stirling, Stirling, United Kingdom.
| | - William A Phillips
- Faculty of Natural Sciences, Psychology, University of Stirling, Stirling, United Kingdom.
| | - Johan F Storm
- Brain Signalling Group, Section for Physiology, Faculty of Medicine, Domus Medica, University of Oslo, PB 1104 Blindern, 0317 Oslo, Norway.
| |
Collapse
|
37
|
Cramer B, Stöckel D, Kreft M, Wibral M, Schemmel J, Meier K, Priesemann V. Control of criticality and computation in spiking neuromorphic networks with plasticity. Nat Commun 2020; 11:2853. [PMID: 32503982 PMCID: PMC7275091 DOI: 10.1038/s41467-020-16548-3] [Citation(s) in RCA: 36] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2019] [Accepted: 04/23/2020] [Indexed: 11/08/2022] Open
Abstract
The critical state is assumed to be optimal for any computation in recurrent neural networks, because criticality maximizes a number of abstract computational properties. We challenge this assumption by evaluating the performance of a spiking recurrent neural network on a set of tasks of varying complexity at - and away from critical network dynamics. To that end, we developed a plastic spiking network on a neuromorphic chip. We show that the distance to criticality can be easily adapted by changing the input strength, and then demonstrate a clear relation between criticality, task-performance and information-theoretic fingerprint. Whereas the information-theoretic measures all show that network capacity is maximal at criticality, only the complex tasks profit from criticality, whereas simple tasks suffer. Thereby, we challenge the general assumption that criticality would be beneficial for any task, and provide instead an understanding of how the collective network state should be tuned to task requirement.
Collapse
Affiliation(s)
- Benjamin Cramer
- Kirchhoff-Institute for Physics, Heidelberg University, Im Neuenheimer Feld 227, 69120, Heidelberg, Germany.
| | - David Stöckel
- Kirchhoff-Institute for Physics, Heidelberg University, Im Neuenheimer Feld 227, 69120, Heidelberg, Germany
| | - Markus Kreft
- Kirchhoff-Institute for Physics, Heidelberg University, Im Neuenheimer Feld 227, 69120, Heidelberg, Germany
| | - Michael Wibral
- Campus Institute for Dynamics of Biological Networks, Georg-August University, Hermann-Rein-Straße 3, 37075, Göttingen, Germany
| | - Johannes Schemmel
- Kirchhoff-Institute for Physics, Heidelberg University, Im Neuenheimer Feld 227, 69120, Heidelberg, Germany
| | - Karlheinz Meier
- Kirchhoff-Institute for Physics, Heidelberg University, Im Neuenheimer Feld 227, 69120, Heidelberg, Germany
| | - Viola Priesemann
- Max-Planck-Institute for Dynamics and Self-Organization, Am Faßberg 17, 37077, Göttingen, Germany.
- Bernstein Center for Computational Neuroscience, Georg-August University, Am Faßberg 17, 37077, Göttingen, Germany.
- Department of Physics, Georg-August University, Friedrich-Hund-Platz 1, 37077, Göttingen, Germany.
| |
Collapse
|
38
|
Adeel A. Conscious Multisensory Integration: Introducing a Universal Contextual Field in Biological and Deep Artificial Neural Networks. Front Comput Neurosci 2020; 14:15. [PMID: 32508610 PMCID: PMC7248356 DOI: 10.3389/fncom.2020.00015] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2019] [Accepted: 02/07/2020] [Indexed: 11/24/2022] Open
Abstract
Conscious awareness plays a major role in human cognition and adaptive behavior, though its function in multisensory integration is not yet fully understood, hence, questions remain: How does the brain integrate the incoming multisensory signals with respect to different external environments? How are the roles of these multisensory signals defined to adhere to the anticipated behavioral-constraint of the environment? This work seeks to articulate a novel theory on conscious multisensory integration (CMI) that addresses the aforementioned research challenges. Specifically, the well-established contextual field (CF) in pyramidal cells and coherent infomax theory (Kay et al., 1998; Kay and Phillips, 2011) is split into two functionally distinctive integrated input fields: local contextual field (LCF) and universal contextual field (UCF). LCF defines the modulatory sensory signal coming from some other parts of the brain (in principle from anywhere in space-time) and UCF defines the outside environment and anticipated behavior (based on past learning and reasoning). Both LCF and UCF are integrated with the receptive field (RF) to develop a new class of contextually-adaptive neuron (CAN), which adapts to changing environments. The proposed theory is evaluated using human contextual audio-visual (AV) speech modeling. Simulation results provide new insights into contextual modulation and selective multisensory information amplification/suppression. The central hypothesis reviewed here suggests that the pyramidal cell, in addition to the classical excitatory and inhibitory signals, receives LCF and UCF inputs. The UCF (as a steering force or tuner) plays a decisive role in precisely selecting whether to amplify/suppress the transmission of relevant/irrelevant feedforward signals, without changing the content e.g., which information is worth paying more attention to? This, as opposed to, unconditional excitatory and inhibitory activity in existing deep neural networks (DNNs), is called conditional amplification/suppression.
Collapse
Affiliation(s)
- Ahsan Adeel
- Oxford Computational Neuroscience, Nuffield Department of Surgical Sciences, John Radcliffe Hospital, University of Oxford, Oxford, United Kingdom
- School of Mathematics and Computer Science, University of Wolverhampton, Wolverhampton, United Kingdom
| |
Collapse
|
39
|
Abstract
Neural systems are composed of many local processors that generate an output given their many inputs as specified by a transfer function. This paper studies a transfer function that is fundamentally asymmetric and builds on multi-site intracellular recordings indicating that some neocortical pyramidal cells can function as context-sensitive two-point processors in which some inputs modulate the strength with which they transmit information about other inputs. Learning and processing at the level of the local processor can then be guided by the context of activity in the system as a whole without corrupting the message that the local processor transmits. We use a recent advance in the foundations of information theory to compare the properties of this modulatory transfer function with that of the simple arithmetic operators. This advance enables the information transmitted by processors with two distinct inputs to be decomposed into those components unique to each input, that shared between the two inputs, and that which depends on both though it is in neither, i.e., synergy. We show that contextual modulation is fundamentally asymmetric, contrasts with all four simple arithmetic operators, can take various forms, and can occur together with the anatomical asymmetry that defines pyramidal neurons in mammalian neocortex.
Collapse
|
40
|
Brun-Usan M, Thies C, Watson RA. How to fit in: The learning principles of cell differentiation. PLoS Comput Biol 2020; 16:e1006811. [PMID: 32282832 PMCID: PMC7179933 DOI: 10.1371/journal.pcbi.1006811] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2019] [Revised: 04/23/2020] [Accepted: 02/20/2020] [Indexed: 11/18/2022] Open
Abstract
Cell differentiation in multicellular organisms requires cells to respond to complex combinations of extracellular cues, such as morphogen concentrations. Some models of phenotypic plasticity conceptualise the response as a relatively simple function of a single environmental cues (e.g. a linear function of one cue), which facilitates rigorous analysis. Conversely, more mechanistic models such those implementing GRNs allows for a more general class of response functions but makes analysis more difficult. Therefore, a general theory describing how cells integrate multi-dimensional signals is lacking. In this work, we propose a theoretical framework for understanding the relationships between environmental cues (inputs) and phenotypic responses (outputs) underlying cell plasticity. We describe the relationship between environment and cell phenotype using logical functions, making the evolution of cell plasticity equivalent to a simple categorisation learning task. This abstraction allows us to apply principles derived from learning theory to understand the evolution of multi-dimensional plasticity. Our results show that natural selection is capable of discovering adaptive forms of cell plasticity associated with complex logical functions. However, developmental dynamics cause simpler functions to evolve more readily than complex ones. By using conceptual tools derived from learning theory we show that this developmental bias can be interpreted as a learning bias in the acquisition of plasticity functions. Because of that bias, the evolution of plasticity enables cells, under some circumstances, to display appropriate plastic responses to environmental conditions that they have not experienced in their evolutionary past. This is possible when the selective environment mirrors the bias of the developmental dynamics favouring the acquisition of simple plasticity functions–an example of the necessary conditions for generalisation in learning systems. These results illustrate the functional parallelisms between learning in neural networks and the action of natural selection on environmentally sensitive gene regulatory networks. This offers a theoretical framework for the evolution of plastic responses that integrate information from multiple cues, a phenomenon that underpins the evolution of multicellularity and developmental robustness. In organisms composed of many cell types, the differentiation of cells relies on their ability to respond to complex extracellular cues, such as morphogen concentrations, a phenomenon known as cell plasticity. Although cell plasticity plays a crucial role in development and evolution, it is not clear how, and if, cell plasticity can enhance adaptation to a novel environment and/or facilitate robust developmental processes. In some models, the relationships between the environmental cues (inputs) and the phenotypic responses (outputs) are conceptualised as one-to-one (i.e. simple ‘reaction norms’); whereas the phenotype of plastic cells commonly depends on several simultaneous inputs (i.e. many-to-one, multi-dimensional reaction norms). One alternative is the use of a gene-regulatory network (GRN) models that allow for much more general responses; but this can make analysis difficult. In this work we use a theoretical framework based on logical functions and learning theory to characterize such multi-dimensional reaction norms produced by GRNs. 
This allows us to reveal a strong and previously unnoticed bias towards the acquisition of simple forms of cell plasticity, which increases their ability to adapt to novel environments. Recognising this bias helps us to understand when the evolution of cell plasticity will increase the ability of plastic cells to adapt to novel environments, to respond appropriately to complex extracellular cues and to enhance developmental robustness. Since this set of properties are required for the evolution of multicellularity, our approach can also contribute to our understanding of this evolutionary transition.
Collapse
Affiliation(s)
- Miguel Brun-Usan
- Institute for Life Sciences/Electronics and Computer Sciences, University of Southampton, Southampton, (United Kingdom)
| | - Christoph Thies
- Institute for Life Sciences/Electronics and Computer Sciences, University of Southampton, Southampton, (United Kingdom)
| | - Richard A. Watson
- Institute for Life Sciences/Electronics and Computer Sciences, University of Southampton, Southampton, (United Kingdom)
- * E-mail:
| |
Collapse
|
41
|
Finn C, Lizier JT. Generalised Measures of Multivariate Information Content. ENTROPY (BASEL, SWITZERLAND) 2020; 22:E216. [PMID: 33285991 PMCID: PMC7851747 DOI: 10.3390/e22020216] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/11/2019] [Revised: 02/05/2020] [Accepted: 02/12/2020] [Indexed: 12/12/2022]
Abstract
The entropy of a pair of random variables is commonly depicted using a Venn diagram. This representation is potentially misleading, however, since the multivariate mutual information can be negative. This paper presents new measures of multivariate information content that can be accurately depicted using Venn diagrams for any number of random variables. These measures complement the existing measures of multivariate mutual information and are constructed by considering the algebraic structure of information sharing. It is shown that the distinct ways in which a set of marginal observers can share their information with a non-observing third party correspond to the elements of a free distributive lattice. The redundancy lattice from partial information decomposition is then independently derived by combining the algebraic structures of joint and shared information content.
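A standard worked example (ours, not from the paper) of why the Venn picture breaks down: let X and Y be independent fair bits and Z = X XOR Y. Under the convention in which the central region of the three-variable diagram equals I(X;Y) - I(X;Y|Z),

I(X;Y;Z) = I(X;Y) - I(X;Y|Z) = 0 bit - 1 bit = -1 bit,

so the "shared" region would need negative area, which no ordinary Venn diagram can depict.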
Collapse
Affiliation(s)
- Conor Finn
- Centre for Complex Systems, The University of Sydney, Sydney NSW 2006, Australia;
- CSIRO Data61, Marsfield NSW 2122, Australia
| | - Joseph T. Lizier
- Centre for Complex Systems, The University of Sydney, Sydney NSW 2006, Australia;
| |
Collapse
|
42
|
Granada AE, Jiménez A, Stewart-Ornstein J, Blüthgen N, Reber S, Jambhekar A, Lahav G. The effects of proliferation status and cell cycle phase on the responses of single cells to chemotherapy. Mol Biol Cell 2020; 31:845-857. [PMID: 32049575 PMCID: PMC7185964 DOI: 10.1091/mbc.e19-09-0515] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/16/2023] Open
Abstract
DNA-damaging chemotherapeutics are widely used in cancer treatments, but for solid tumors they often leave a residual tumor-cell population. Here we investigated how cellular states might affect the response of individual cells in a clonal population to cisplatin, a DNA-damaging chemotherapeutic agent. Using a live-cell reporter of cell cycle phase and long-term imaging, we monitored single-cell proliferation before, at the time of, and after treatment. We found that in response to cisplatin, cells either arrested or died, and the ratio of these outcomes depended on the dose. While we found that the cell cycle phase at the time of cisplatin addition was not predictive of outcome, the proliferative history of the cell was: highly proliferative cells were more likely to arrest than to die, whereas slowly proliferating cells showed a higher probability of death. Information theory analysis revealed that the dose of cisplatin had the greatest influence on the cells’ decisions to arrest or die, and that the proliferation status interacted with the cisplatin dose to further guide this decision. These results show an unexpected effect of proliferation status in regulating responses to cisplatin and suggest that slowly proliferating cells within tumors may be acutely vulnerable to chemotherapy.
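The kind of comparison described here can be sketched in a few lines of Python (hypothetical data and effect sizes; scikit-learn's mutual_info_score is used for convenience, and this is not the authors' analysis code): estimate how much information the dose and the proliferation status each carry about the arrest-versus-death outcome.

import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)

# Hypothetical single-cell records (not the authors' data): cisplatin dose level,
# pre-treatment proliferation status, and the observed fate of each cell.
n = 1000
dose          = rng.integers(0, 3, n)                        # 0 = low, 1 = mid, 2 = high
proliferation = rng.integers(0, 2, n)                        # 0 = slow, 1 = fast
p_die         = 0.15 + 0.25 * dose - 0.20 * proliferation    # toy dependence on both factors
fate          = (rng.random(n) < np.clip(p_die, 0, 1)).astype(int)  # 0 = arrest, 1 = death

# How much information (in bits) each factor shares with the fate decision.
to_bits = 1 / np.log(2)                                      # mutual_info_score returns nats
print("I(dose; fate)          =", mutual_info_score(dose, fate) * to_bits)
print("I(proliferation; fate) =", mutual_info_score(proliferation, fate) * to_bits)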
Collapse
Affiliation(s)
- Adrián E Granada
- IRI Life Sciences, Humboldt University Berlin, 10115 Berlin, Germany; Department of Systems Biology, Harvard Medical School, Boston, MA 02115
| | - Alba Jiménez
- Department of Systems Biology, Harvard Medical School, Boston, MA 02115
| | - Jacob Stewart-Ornstein
- Department of Systems Biology, Harvard Medical School, Boston, MA 02115; Department of Computational and Systems Biology, University of Pittsburgh Medical School, Pittsburgh, PA 15260
| | - Nils Blüthgen
- IRI Life Sciences, Humboldt University Berlin, 10115 Berlin, Germany; Institute of Pathology, Charité Universitätsmedizin Berlin, 10117 Berlin, Germany; German Cancer Consortium (DKTK), German Cancer Research Center (DKFZ), 69120 Heidelberg, Germany; Berlin Institute of Health (BIH), 10178 Berlin, Germany
| | - Simone Reber
- IRI Life Sciences, Humboldt University Berlin, 10115 Berlin, Germany; University of Applied Sciences Berlin, 13353 Berlin, Germany
| | - Ashwini Jambhekar
- Department of Systems Biology, Harvard Medical School, Boston, MA 02115
| | - Galit Lahav
- Department of Systems Biology, Harvard Medical School, Boston, MA 02115
| |
Collapse
|
43
|
Makkeh A, Chicharro D, Theis DO, Vicente R. MAXENT3D_PID: An Estimator for the Maximum-Entropy Trivariate Partial Information Decomposition. ENTROPY 2019; 21:862. [PMCID: PMC7515392 DOI: 10.3390/e21090862] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/28/2019] [Accepted: 08/27/2019] [Indexed: 07/04/2023]
Abstract
Partial information decomposition (PID) separates the contributions of sources about a target into unique, redundant, and synergistic components of information. In essence, PID answers the question of “who knows what” in a system of random variables and hence has applications to a wide spectrum of fields ranging from the social to the biological sciences. The paper presents MaxEnt3D_Pid, an algorithm that computes the PID of three sources, based on a recently proposed maximum entropy measure, using convex optimization (cone programming). We describe the algorithm and the use of its associated software, and report the results of various experiments assessing its accuracy. Moreover, the paper shows that a hierarchy of bivariate and trivariate PIDs allows one to obtain the finer quantities of the trivariate partial information measure.
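For readers unfamiliar with PID, the bivariate bookkeeping (in the framework of Williams and Beer; notation ours) that the trivariate estimator generalises is:

I(T; X1, X2) = Redundancy + Unique(X1) + Unique(X2) + Synergy
I(T; X1)     = Redundancy + Unique(X1)
I(T; X2)     = Redundancy + Unique(X2)

Once any one component is fixed (here via a maximum-entropy criterion), the remaining components follow from ordinary mutual information terms; with three sources the same idea is applied over a larger redundancy lattice, which is what MaxEnt3D_Pid solves as a convex (cone-programming) problem.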
Collapse
Affiliation(s)
- Abdullah Makkeh
- Institute of Computer Science, University of Tartu, 51014 Tartu, Estonia
| | - Daniel Chicharro
- Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems@UniTn, Istituto Italiano di Tecnologia, 38068 Rovereto (TN), Italy
| | - Dirk Oliver Theis
- Institute of Computer Science, University of Tartu, 51014 Tartu, Estonia
| | - Raul Vicente
- Institute of Computer Science, University of Tartu, 51014 Tartu, Estonia
| |
Collapse
|
44
|
Rosas FE, Mediano PAM, Gastpar M, Jensen HJ. Quantifying high-order interdependencies via multivariate extensions of the mutual information. Phys Rev E 2019; 100:032305. [PMID: 31640038 DOI: 10.1103/physreve.100.032305] [Citation(s) in RCA: 45] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2018] [Indexed: 04/30/2023]
Abstract
This paper introduces a model-agnostic approach to study statistical synergy, a form of emergence in which patterns at large scales are not traceable from lower scales. Our framework leverages various multivariate extensions of Shannon's mutual information, and introduces the O-information as a metric that is capable of characterizing synergy- and redundancy-dominated systems. The O-information is a symmetric quantity, and can assess intrinsic properties of a system without dividing its parts into "predictors" and "targets." We develop key analytical properties of the O-information, and study how it relates to other metrics of high-order interactions from the statistical mechanics and neuroscience literature. Finally, as a proof of concept, we present an exploration on the relevance of statistical synergy in Baroque music scores.
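A minimal numerical sketch of the O-information for discrete variables (the closed-form expression below follows the paper's definition of the O-information as total correlation minus dual total correlation; the code and the toy distributions are our own illustration):

Omega(X1..Xn) = (n - 2) H(X1..Xn) + sum_j [ H(Xj) - H(all variables except Xj) ]

import numpy as np
from itertools import product

def entropy(p):
    # Shannon entropy (in bits) of a joint probability array.
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def o_information(joint):
    # O-information of n discrete variables from their joint distribution,
    # given as an n-dimensional probability array (one axis per variable).
    n = joint.ndim
    omega = (n - 2) * entropy(joint)
    for j in range(n):
        others = tuple(k for k in range(n) if k != j)
        omega += entropy(joint.sum(axis=others)) - entropy(joint.sum(axis=j))
    return omega

# Toy check (illustrative): XOR is purely synergistic, COPY purely redundant.
xor, copy = np.zeros((2, 2, 2)), np.zeros((2, 2, 2))
for x, y in product((0, 1), repeat=2):
    xor[x, y, x ^ y] = 0.25          # Z = X xor Y, with X and Y independent fair bits
for x in (0, 1):
    copy[x, x, x] = 0.5              # X = Y = Z, a single fair bit copied three times
print("O-information, XOR :", o_information(xor))    # -1.0 (synergy-dominated)
print("O-information, COPY:", o_information(copy))   # +1.0 (redundancy-dominated)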
Collapse
Affiliation(s)
- Fernando E Rosas
- Centre of Complexity Science and Department of Mathematics, Imperial College London, London SW7 2AZ, England, United Kingdom
- Department of Electrical and Electronic Engineering, Imperial College London, London SW7 2AZ, England, United Kingdom
| | - Pedro A M Mediano
- Department of Computing, Imperial College London, London SW7 2AZ, England, United Kingdom
| | - Michael Gastpar
- School of Computer and Communication Sciences, École polytechnique fédérale de Lausanne (EPFL), Lausanne 1015, Switzerland
| | - Henrik J Jensen
- Centre of Complexity Science and Department of Mathematics, Imperial College London, London SW7 2AZ, England, United Kingdom
- Institute of Innovative Research, Tokyo Institute of Technology, Yokohama 226-8502, Japan
| |
Collapse
|
45
|
Biswas A. Multivariate information processing characterizes fitness of a cascaded gene-transcription machinery. CHAOS (WOODBURY, N.Y.) 2019; 29:063108. [PMID: 31266314 DOI: 10.1063/1.5092447] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/11/2019] [Accepted: 05/24/2019] [Indexed: 06/09/2023]
Abstract
We report that a genetic two-step activation cascade processes diverse flavors of information, e.g., synergy, redundancy, and unique information. Our computations, which measure reductions in Shannon entropies and reductions in variances, produce differently behaving absolute magnitudes of these informational flavors. We find that the measures become comparable when each term is expressed as a fraction of the corresponding total information. The input signal and the final gene product mostly provide redundant information fractions for predicting each other, whereas they complement one another to provide a synergistic information fraction for predicting the intermediate biochemical species. For an optimally growing signal that maintains a fixed steady-state abundance of activated downstream gene products, the interaction information fractions of this cascade module shift from net redundancy to informational independence.
Collapse
Affiliation(s)
- Ayan Biswas
- Department of Chemistry, Bose Institute, 93/1 A P C Road, Kolkata 700 009, India
| |
Collapse
|
46
|
He B, Astolfi L, Valdés-Sosa PA, Marinazzo D, Palva SO, Bénar CG, Michel CM, Koenig T. Electrophysiological Brain Connectivity: Theory and Implementation. IEEE Trans Biomed Eng 2019; 66:10.1109/TBME.2019.2913928. [PMID: 31071012 PMCID: PMC6834897 DOI: 10.1109/tbme.2019.2913928] [Citation(s) in RCA: 90] [Impact Index Per Article: 18.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
Abstract
We review the theory and algorithms of electrophysiological brain connectivity analysis. This tutorial is aimed at providing an introduction to brain functional connectivity from electrophysiological signals, including electroencephalography (EEG), magnetoencephalography (MEG), electrocorticography (ECoG), and stereoelectroencephalography (SEEG). Various connectivity estimators are discussed, and the corresponding algorithms are introduced. Important issues in estimating and mapping brain functional connectivity with electrophysiology are also addressed.
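As a concrete illustration of what a very simple connectivity estimator looks like in practice, the sketch below computes magnitude-squared coherence between two synthetic "electrode" signals sharing a 10 Hz component (our example with assumed parameters; the review covers many further estimators, including directed and source-space measures).

import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 250                                         # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)                     # 10 s of synthetic data
shared = np.sin(2 * np.pi * 10 * t)              # common 10 Hz (alpha-band) drive
x = shared + 0.5 * rng.standard_normal(t.size)   # synthetic "electrode 1"
y = shared + 0.5 * rng.standard_normal(t.size)   # synthetic "electrode 2"

f, cxy = coherence(x, y, fs=fs, nperseg=2 * fs)  # Welch-based magnitude-squared coherence
alpha = (f >= 8) & (f <= 12)
print("mean alpha-band coherence:", cxy[alpha].mean())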
Collapse
Affiliation(s)
- Bin He
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, USA
| | - Laura Astolfi
- Department of Computer, Control and Management Engineering, University of Rome Sapienza, and with IRCCS Fondazione Santa Lucia, Rome, Italy
| |
Collapse
|
47
|
Faber SP, Timme NM, Beggs JM, Newman EL. Computation is concentrated in rich clubs of local cortical networks. Netw Neurosci 2019; 3:384-404. [PMID: 30793088 PMCID: PMC6370472 DOI: 10.1162/netn_a_00069] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2018] [Accepted: 08/30/2018] [Indexed: 01/08/2023] Open
Abstract
To understand how neural circuits process information, it is essential to identify the relationship between computation and circuit organization. Rich clubs, highly interconnected sets of neurons, are known to propagate a disproportionate amount of information within cortical circuits. Here, we test the hypothesis that rich clubs also perform a disproportionate amount of computation. To do so, we recorded the spiking activity of on average ∼300 well-isolated individual neurons from organotypic cortical cultures. We then constructed weighted, directed networks reflecting the effective connectivity between the neurons. For each neuron, we quantified the amount of computation it performed based on its inputs. We found that rich-club neurons compute ∼160% more information than neurons outside of the rich club. The amount of computation performed in the rich club was proportional to the amount of information propagation by the same neurons. This suggests that in these circuits, information propagation drives computation. In total, our findings indicate that rich-club organization in effective cortical circuits supports not only information propagation but also neural computation.
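For readers unfamiliar with the rich-club concept, the sketch below computes the ordinary topological rich-club coefficient on a random graph with networkx; it is only a simplified stand-in, since the study itself uses weighted, directed effective-connectivity networks and an information-theoretic measure of per-neuron computation.

import networkx as nx

# Simplified illustration: a binarized, undirected random graph stands in for an
# effective-connectivity network of roughly 300 neurons.
G = nx.gnm_random_graph(300, 2400, seed=1)

# phi(k) = density of connections among the nodes whose degree exceeds k.
# (In practice phi(k) is compared against degree-matched randomized graphs;
# values above 1 after such normalization indicate a rich club.)
phi = nx.rich_club_coefficient(G, normalized=False)
for k in sorted(phi)[-5:]:
    print(f"degree > {k}: rich-club coefficient = {phi[k]:.3f}")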
Collapse
Affiliation(s)
- Samantha P. Faber
- Department of Psychological and Brain Sciences, Indiana University Bloomington, Bloomington, IN, USA
| | - Nicholas M. Timme
- Department of Psychology, Indiana University-Purdue University Indianapolis, Indianapolis, IN, USA
| | - John M. Beggs
- Department of Physics, Indiana University Bloomington, Bloomington, IN, USA
| | - Ehren L. Newman
- Department of Psychological and Brain Sciences, Indiana University Bloomington, Bloomington, IN, USA
| |
Collapse
|
48
|
Faber SP, Timme NM, Beggs JM, Newman EL. Computation is concentrated in rich clubs of local cortical networks. Netw Neurosci 2019. [PMID: 30793088 DOI: 10.1101/290981] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/07/2023] Open
Abstract
To understand how neural circuits process information, it is essential to identify the relationship between computation and circuit organization. Rich clubs, highly interconnected sets of neurons, are known to propagate a disproportionate amount of information within cortical circuits. Here, we test the hypothesis that rich clubs also perform a disproportionate amount of computation. To do so, we recorded the spiking activity of on average ∼300 well-isolated individual neurons from organotypic cortical cultures. We then constructed weighted, directed networks reflecting the effective connectivity between the neurons. For each neuron, we quantified the amount of computation it performed based on its inputs. We found that rich-club neurons compute ∼160% more information than neurons outside of the rich club. The amount of computation performed in the rich club was proportional to the amount of information propagation by the same neurons. This suggests that in these circuits, information propagation drives computation. In total, our findings indicate that rich-club organization in effective cortical circuits supports not only information propagation but also neural computation.
Collapse
Affiliation(s)
- Samantha P Faber
- Department of Psychological and Brain Sciences, Indiana University Bloomington, Bloomington, IN, USA
| | - Nicholas M Timme
- Department of Psychology, Indiana University-Purdue University Indianapolis, Indianapolis, IN, USA
| | - John M Beggs
- Department of Physics, Indiana University Bloomington, Bloomington, IN, USA
| | - Ehren L Newman
- Department of Psychological and Brain Sciences, Indiana University Bloomington, Bloomington, IN, USA
| |
Collapse
|
49
|
Affiliation(s)
- WA Phillips
- Faculty of Natural Sciences, University of Stirling, Stirling, UK
| |
Collapse
|
50
|
Biswas A, Banik SK. Interplay of synergy and redundancy in diamond motif. CHAOS (WOODBURY, N.Y.) 2018; 28:103102. [PMID: 30384656 DOI: 10.1063/1.5044606] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/14/2018] [Accepted: 09/13/2018] [Indexed: 06/08/2023]
Abstract
The formalism of partial information decomposition provides a number of independent components which altogether constitute the total information provided by the source variable(s) about the target variable(s). These non-overlapping terms are recognized as unique information, synergistic information, and redundant information. The metric of net synergy, conceived as the difference between synergistic and redundant information, is capable of detecting effective synergy, effective redundancy, and information independence among stochastic variables. The net synergy can be quantified using appropriate combinations of different Shannon mutual information terms. Applying the net synergy to network motifs whose nodes represent different biochemical species involved in information sharing uncovers a rich store of results. In the current study, we use this formalism to obtain a comprehensive understanding of the relative information processing mechanisms in a diamond motif and two of its sub-motifs, namely the bifurcation and integration motifs embedded within the diamond motif. The emerging patterns of effective synergy and effective redundancy, and their contribution toward ensuring high-fidelity information transmission, are compared across the sub-motifs. The metric of net synergy is also investigated in independent bifurcation and integration motifs. In all of these computations, the crucial roles played by various systemic time scales, activation coefficients, and signal integration mechanisms at the output of the network topologies are especially emphasized. Following this plan of action, we become confident that the origin of effective synergy and effective redundancy can be justified architecturally by decomposing the diamond motif into its bifurcation and integration motifs. According to our conjecture, the presence of a common source of fluctuations creates effective redundancy. Our calculations reveal that effective redundancy enhances signal fidelity. Moreover, to achieve this, the input signaling species avoids strong interaction with downstream intermediates. This strategy can make the diamond motif noise-tolerant. Apart from the topological features, our study also highlights the active contribution of additive and multiplicative signal integration mechanisms to nurturing effective redundancy and effective synergy.
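The net-synergy metric referred to here is commonly written (a definition often attributed to Schneidman and colleagues; the notation is ours, with X and Y the sources and Z the target) as

Delta I(Z; X, Y) = I(Z; X, Y) - I(Z; X) - I(Z; Y),

so Delta I > 0 signals effective synergy, Delta I < 0 effective redundancy, and Delta I = 0 informational independence of the two sources about the target.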
Collapse
Affiliation(s)
- Ayan Biswas
- Department of Chemistry, Bose Institute, 93/1 A P C Road, Kolkata 700 009, India
| | - Suman K Banik
- Department of Chemistry, Bose Institute, 93/1 A P C Road, Kolkata 700 009, India
| |
Collapse
|