1
Tian C, Shamai (Shitz) S. Broadcast Channel Cooperative Gain: An Operational Interpretation of Partial Information Decomposition. Entropy (Basel) 2025; 27:310. PMID: 40149234; PMCID: PMC11941614; DOI: 10.3390/e27030310.
Abstract
Partial information decomposition has recently found applications in biological signal processing and machine learning. Despite its impact, the decomposition was introduced through an informal and heuristic route, and its exact operational meaning is unclear. In this work, we fill this gap by connecting partial information decomposition to the capacity of the broadcast channel, which has been well studied in the information theory literature. We show that the synergistic information in the decomposition can be rigorously interpreted as the cooperative gain, or a lower bound on this gain, for the corresponding broadcast channel. This interpretation can help practitioners better explain and expand the applications of the partial information decomposition technique.
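The synergistic information discussed in this abstract can be illustrated with the whole-minus-sum quantity I(X1,X2;Y) − I(X1;Y) − I(X2;Y), a simple signed proxy related to (but not identical to) the PID synergy atom; for the XOR channel it equals one full bit. A minimal sketch, not code from the paper:

```python
from collections import defaultdict
from math import log2

def mutual_info(pmf, a_idx, b_idx):
    """I(A;B) in bits, where A and B are index tuples into joint outcomes."""
    pa, pb, pab = defaultdict(float), defaultdict(float), defaultdict(float)
    for outcome, p in pmf.items():
        a = tuple(outcome[i] for i in a_idx)
        b = tuple(outcome[i] for i in b_idx)
        pa[a] += p; pb[b] += p; pab[(a, b)] += p
    return sum(p * log2(p / (pa[a] * pb[b])) for (a, b), p in pab.items() if p > 0)

# XOR gate: Y = X1 xor X2, with uniform independent inputs.
xor = {(x1, x2, x1 ^ x2): 0.25 for x1 in (0, 1) for x2 in (0, 1)}

i_joint = mutual_info(xor, (0, 1), (2,))   # I(X1,X2 ; Y) = 1 bit
i_1 = mutual_info(xor, (0,), (2,))         # I(X1 ; Y) = 0: no single input is informative
i_2 = mutual_info(xor, (1,), (2,))         # I(X2 ; Y) = 0
wms_synergy = i_joint - i_1 - i_2          # whole-minus-sum = 1 bit, purely synergistic
```

The output bit is visible only in the joint input state, which is exactly the behavior the broadcast-channel interpretation assigns to synergy.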
Affiliation(s)
- Chao Tian
- Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX 77840, USA
- Shlomo Shamai (Shitz)
- Department of Electrical Engineering, Technion - Israel Institute of Technology, Haifa 3200003, Israel
2
Varley TF, Havert D, Fosque L, Alipour A, Weerawongphrom N, Naganobori H, O’Shea L, Pope M, Beggs J. The serotonergic psychedelic N,N-dipropyltryptamine alters information-processing dynamics in in vitro cortical neural circuits. Netw Neurosci 2024; 8:1421-1438. PMID: 39735490; PMCID: PMC11674936; DOI: 10.1162/netn_a_00408.
Abstract
Most of the recent work in psychedelic neuroscience has been done using noninvasive neuroimaging, with data recorded from the brains of adult volunteers under the influence of a variety of drugs. While these data provide holistic insights into the effects of psychedelics on whole-brain dynamics, the effects of psychedelics on the mesoscale dynamics of neuronal circuits remain much less explored. Here, we report the effects of the serotonergic psychedelic N,N-dipropyltryptamine (DPT) on information-processing dynamics in a sample of in vitro organotypic cultures of cortical tissue from postnatal rats. Three hours of spontaneous activity were recorded: an hour of predrug control, an hour of exposure to 10-μM DPT solution, and a final hour of washout, once again under control conditions. We found that DPT reversibly alters information dynamics in multiple ways: First, the DPT condition was associated with a higher entropy of spontaneous firing activity and reduced the amount of time information was stored in individual neurons. Second, DPT also reduced the reversibility of neural activity, increasing the entropy produced and suggesting a drive away from equilibrium. Third, DPT altered the structure of neuronal circuits, decreasing the overall information flow coming into each neuron, but increasing the number of weak connections, creating a dynamic that combines elements of integration and disintegration. Finally, DPT decreased the higher order statistical synergy present in sets of three neurons. Collectively, these results paint a complex picture of how psychedelics regulate information processing in mesoscale neuronal networks in cortical tissue. Implications for existing hypotheses of psychedelic action, such as the entropic brain hypothesis, are discussed.
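The information-dynamics quantities used in studies like this one (e.g., transfer entropy between spike trains) can be estimated with simple plug-in counts on binarized activity. A minimal sketch with history length 1, not the estimator or toolkit used by the authors:

```python
import numpy as np
from collections import Counter
from math import log2

def transfer_entropy(source, target):
    """Plug-in transfer entropy TE(source -> target) in bits, history length 1."""
    triples = Counter(zip(target[1:], target[:-1], source[:-1]))   # (y_t+1, y_t, x_t)
    pairs_ts = Counter(zip(target[:-1], source[:-1]))              # (y_t, x_t)
    pairs_tt = Counter(zip(target[1:], target[:-1]))               # (y_t+1, y_t)
    singles = Counter(target[:-1])
    n = len(target) - 1
    te = 0.0
    for (t1, t0, s0), c in triples.items():
        p = c / n
        p_cond_full = c / pairs_ts[(t0, s0)]                # p(y_t+1 | y_t, x_t)
        p_cond_self = pairs_tt[(t1, t0)] / singles[t0]      # p(y_t+1 | y_t)
        te += p * log2(p_cond_full / p_cond_self)
    return te

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 10000)
y = np.roll(x, 1)   # y copies x with a one-step lag, so x drives y
y[0] = 0
te_xy = transfer_entropy(x.tolist(), y.tolist())   # close to 1 bit
te_yx = transfer_entropy(y.tolist(), x.tolist())   # close to 0 bits
```

For the lagged copy, nearly one full bit flows in the causal direction and essentially none in the reverse direction.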
Affiliation(s)
- Thomas F. Varley
- School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, USA
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
- Vermont Complex Systems Center, University of Vermont, Burlington, VT, USA
- Daniel Havert
- Department of Physics, Indiana University, Bloomington, IN, USA
- Leandro Fosque
- Department of Physics, Indiana University, Bloomington, IN, USA
- Abolfazl Alipour
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
- Program in Neuroscience, Indiana University, Bloomington, IN, USA
- Maria Pope
- School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, USA
- Program in Neuroscience, Indiana University, Bloomington, IN, USA
- John Beggs
- Department of Physics, Indiana University, Bloomington, IN, USA
- Program in Neuroscience, Indiana University, Bloomington, IN, USA
3
Varley TF. A Synergistic Perspective on Multivariate Computation and Causality in Complex Systems. Entropy (Basel) 2024; 26:883. PMID: 39451959; PMCID: PMC11507062; DOI: 10.3390/e26100883.
Abstract
What does it mean for a complex system to "compute" or perform "computations"? Intuitively, we can understand complex "computation" as occurring when a system's state is a function of multiple inputs (potentially including its own past state). Here, we discuss how computational processes in complex systems can be generally studied using the concept of statistical synergy, which is information about an output that can only be learned when the joint state of all inputs is known. Building on prior work, we show that this approach naturally leads to a link between multivariate information theory and topics in causal inference, specifically, the phenomenon of causal colliders. We begin by showing how Berkson's paradox implies a higher-order, synergistic interaction between multidimensional inputs and outputs. We then discuss how causal structure learning can refine and orient analyses of synergies in empirical data, and when empirical synergies meaningfully reflect computation versus when they may be spurious. We end by proposing that this conceptual link between synergy, causal colliders, and computation can serve as a foundation on which to build a mathematically rich general theory of computation in complex systems.
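Berkson's paradox, invoked above, can be verified by direct enumeration: two independent inputs become anticorrelated once we condition on a collider they jointly determine. A small illustrative calculation (an OR gate is used here for concreteness; the paper's own examples may differ):

```python
from itertools import product

# Independent fair bits X1, X2; collider Y = OR(X1, X2).
joint = {(x1, x2, x1 | x2): 0.25 for x1, x2 in product((0, 1), repeat=2)}

def correlation(pmf):
    """Pearson correlation of the first two coordinates under pmf."""
    norm = sum(pmf.values())
    e1 = sum(p * o[0] for o, p in pmf.items()) / norm
    e2 = sum(p * o[1] for o, p in pmf.items()) / norm
    e12 = sum(p * o[0] * o[1] for o, p in pmf.items()) / norm
    v1 = sum(p * o[0] ** 2 for o, p in pmf.items()) / norm - e1 ** 2
    v2 = sum(p * o[1] ** 2 for o, p in pmf.items()) / norm - e2 ** 2
    return (e12 - e1 * e2) / (v1 * v2) ** 0.5

r_all = correlation(joint)                                          # 0: inputs independent
r_sel = correlation({o: p for o, p in joint.items() if o[2] == 1})  # -0.5: selection on the
                                                                    # collider induces dependence
```

Selecting on the output creates a dependence between inputs that does not exist marginally, which is the collider structure the paper links to synergistic computation.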
Affiliation(s)
- Thomas F Varley
- Vermont Complex Systems Center, University of Vermont, Burlington, VT 05405, USA
4
Bardella G, Giuffrida V, Giarrocco F, Brunamonti E, Pani P, Ferraina S. Response inhibition in premotor cortex corresponds to a complex reshuffle of the mesoscopic information network. Netw Neurosci 2024; 8:597-622. PMID: 38952814; PMCID: PMC11168728; DOI: 10.1162/netn_a_00365.
Abstract
Recent studies have explored functional and effective neural networks in animal models; however, the dynamics of information propagation among functional modules under cognitive control remain largely unknown. Here, we addressed the issue using transfer entropy and graph theory methods on mesoscopic neural activities recorded in the dorsal premotor cortex of rhesus monkeys. We focused our study on the decision time of a Stop-signal task, looking for patterns in the network configuration that could influence motor plan maturation when the Stop signal is provided. When comparing trials with successful inhibition to those with generated movement, the nodes of the network were organized into four hierarchically arranged clusters, each distinctly involved in information transfer. Interestingly, the hierarchies and the strength of information transmission between clusters varied throughout the task, distinguishing generated movements from canceled ones and corresponding to measurable levels of network complexity. Our results suggest a putative mechanism for motor inhibition in premotor cortex: a topological reshuffle of the information exchanged among ensembles of neurons.
Affiliation(s)
- Giampiero Bardella
- Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy
- Valentina Giuffrida
- Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy
- Franco Giarrocco
- Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy
- Emiliano Brunamonti
- Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy
- Pierpaolo Pani
- Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy
- Stefano Ferraina
- Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy
5
Donner C, Bartram J, Hornauer P, Kim T, Roqueiro D, Hierlemann A, Obozinski G, Schröter M. Ensemble learning and ground-truth validation of synaptic connectivity inferred from spike trains. PLoS Comput Biol 2024; 20:e1011964. PMID: 38683881; PMCID: PMC11081509; DOI: 10.1371/journal.pcbi.1011964.
Abstract
Probing the architecture of neuronal circuits and the principles that underlie their functional organization remains an important challenge of modern neuroscience. This holds true, in particular, for the inference of neuronal connectivity from large-scale extracellular recordings. Despite the popularity of this approach and a number of elaborate methods to reconstruct networks, the degree to which synaptic connections can be reconstructed from spike-train recordings alone remains controversial. Here, we provide a framework to probe and compare connectivity inference algorithms, using a combination of synthetic ground-truth and in vitro data sets, where the connectivity labels were obtained from simultaneous high-density microelectrode array (HD-MEA) and patch-clamp recordings. We find that reconstruction performance critically depends on the regularity of the recorded spontaneous activity, i.e., its dynamical regime, the type of connectivity, and the amount of available spike-train data. We therefore introduce an ensemble artificial neural network (eANN) to improve connectivity inference. We train the eANN on the validated outputs of six established inference algorithms and show how it improves network reconstruction accuracy and robustness. Overall, the eANN demonstrated strong performance across different dynamical regimes, worked well on smaller datasets, and improved the detection of synaptic connectivity, especially inhibitory connections. Results indicated that the eANN also improved the topological characterization of neuronal networks. The presented methodology contributes to advancing the performance of inference algorithms and facilitates our understanding of how neuronal activity relates to synaptic connectivity.
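As a point of reference for what spike-train-based connectivity inference involves (this is not one of the six algorithms validated in the paper), the simplest signature of a putative synaptic connection is a delayed peak in the spike-train cross-correlogram. A minimal sketch on synthetic binary spike trains:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20000
pre = (rng.random(n) < 0.05).astype(int)      # presynaptic spikes, rate 0.05 per bin
noise = (rng.random(n) < 0.02).astype(int)    # background firing of the postsynaptic cell
follows = (rng.random(n) < 0.8).astype(int)   # 80% transmission reliability

# Postsynaptic neuron fires 3 bins after a presynaptic spike, plus noise.
delay = 3
post = np.zeros(n, dtype=int)
post[delay:] = np.maximum(pre[:-delay] * follows[:-delay], noise[delay:])

# Cross-correlogram over positive lags: count coincidences pre(t) & post(t + lag).
max_lag = 10
lags = np.arange(1, max_lag + 1)
xcorr = np.array([np.sum(pre[:-lag] * post[lag:]) for lag in lags])
inferred_delay = lags[np.argmax(xcorr)]       # peak recovers the true synaptic delay
```

Real pipelines (including those benchmarked in the paper) must additionally control for common drive, bursting, and rate covariations, which is precisely why ensemble approaches such as the eANN are useful.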
Affiliation(s)
- Christian Donner
- Swiss Data Science Center, ETH Zürich & EPFL, Zürich & Lausanne, Switzerland
- Julian Bartram
- Department of Biosystems Science and Engineering, ETH Zürich, Basel, Switzerland
- Philipp Hornauer
- Department of Biosystems Science and Engineering, ETH Zürich, Basel, Switzerland
- Taehoon Kim
- Department of Biosystems Science and Engineering, ETH Zürich, Basel, Switzerland
- Damian Roqueiro
- Department of Biosystems Science and Engineering, ETH Zürich, Basel, Switzerland
- Andreas Hierlemann
- Department of Biosystems Science and Engineering, ETH Zürich, Basel, Switzerland
- Guillaume Obozinski
- Swiss Data Science Center, ETH Zürich & EPFL, Zürich & Lausanne, Switzerland
- Manuel Schröter
- Department of Biosystems Science and Engineering, ETH Zürich, Basel, Switzerland
6
Wollstadt P, Rathbun DL, Usrey WM, Bastos AM, Lindner M, Priesemann V, Wibral M. Information-theoretic analyses of neural data to minimize the effect of researchers' assumptions in predictive coding studies. PLoS Comput Biol 2023; 19:e1011567. PMID: 37976328; PMCID: PMC10703417; DOI: 10.1371/journal.pcbi.1011567.
Abstract
Studies investigating neural information processing often implicitly ask two questions: which processing strategy out of several alternatives is used, and how that strategy is implemented in neural dynamics. A prime example is studies on predictive coding. These often ask whether confirmed predictions about inputs or prediction errors between internal predictions and inputs are passed on in a hierarchical neural system, while at the same time looking for the neural correlates of coding for errors and predictions. If we do not know exactly what a neural system predicts at any given moment, this results in a circular analysis, as has been correctly criticized. To circumvent such circular analysis, we propose to express information processing strategies (such as predictive coding) by local information-theoretic quantities, such that they can be estimated directly from neural data. We demonstrate our approach by investigating two opposing accounts of predictive-coding-like processing strategies, where we quantify the building blocks of predictive coding, namely the predictability of inputs and the transfer of information, by local active information storage and local transfer entropy. We define testable hypotheses on the relationship between both quantities, allowing us to identify which of the assumed strategies was used. We demonstrate our approach on spiking data collected from the retinogeniculate synapse of the cat (N = 16). Applying our local information dynamics framework, we are able to show that the synapse codes for predictable rather than surprising input. To support our findings, we estimate quantities from the partial information decomposition framework, which allow us to differentiate whether the transferred information is primarily bottom-up sensory input or information transferred conditionally on the current state of the synapse. Supporting our local information-theoretic results, we find that the synapse preferentially transfers bottom-up information.
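The local (pointwise) quantities proposed here assign an information value to each individual time step, and their average recovers the usual expected measure. A minimal sketch of local active information storage with history length 1 (illustrative only, not the authors' estimator):

```python
from collections import Counter
from math import log2

def local_ais(series):
    """Pointwise active information storage a(t) = log2 p(x_t|x_t-1)/p(x_t), in bits."""
    n = len(series) - 1
    p_pair = Counter(zip(series[:-1], series[1:]))   # (x_t-1, x_t) counts
    p_past = Counter(series[:-1])
    p_next = Counter(series[1:])
    return [
        log2((p_pair[(x0, x1)] / p_past[x0]) / (p_next[x1] / n))
        for x0, x1 in zip(series[:-1], series[1:])
    ]

seq = [0, 1] * 500          # perfectly predictable alternation
vals = local_ais(seq)       # every step stores ~1 bit: the past fully predicts the present
```

On a predictable sequence every time point contributes about one bit of storage; for surprising transitions the local value can even be negative, which is what makes these quantities useful for testing predictive-coding accounts moment by moment.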
Affiliation(s)
- Patricia Wollstadt
- MEG Unit, Brain Imaging Center, Goethe University, Frankfurt/Main, Germany
- Daniel L. Rathbun
- Center for Neuroscience, University of California, Davis, California, United States of America
- Center for Ophthalmology, University of Tübingen, Tübingen, Germany
- W. Martin Usrey
- Center for Neuroscience, University of California, Davis, California, United States of America
- Department of Neurobiology, Physiology, and Behavior, University of California, Davis, California, United States of America
- André Moraes Bastos
- Department of Psychology and Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee, United States of America
- Michael Lindner
- Campus Institute for Dynamics of Biological Networks, University of Göttingen, Göttingen, Germany
- Viola Priesemann
- Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
- Michael Wibral
- Campus Institute for Dynamics of Biological Networks, University of Göttingen, Göttingen, Germany
7
Matsuda K, Shirakami A, Nakajima R, Akutsu T, Shimono M. Whole-Brain Evaluation of Cortical Microconnectomes. eNeuro 2023; 10:ENEURO.0094-23.2023. PMID: 37903612; PMCID: PMC10616907; DOI: 10.1523/eneuro.0094-23.2023.
Abstract
The brain is an organ that functions as a network of many elements connected in a nonuniform manner. In the brain, the neocortex is evolutionarily newest and is thought to be primarily responsible for the high intelligence of mammals. In the mature mammalian brain, all cortical regions are expected to have some degree of homology but to show variations in local circuitry that support the specific functions performed by individual regions. However, few cellular-level studies have examined how the networks within different cortical regions differ. This study aimed to find rules for systematic changes of connectivity (microconnectomes) across 16 different cortical region groups. We also observed previously unreported trends in basic in vitro parameters, such as firing rate and layer thickness, across brain regions. Results revealed that the frontal group shows unique characteristics, such as dense active neurons, a thick cortex, and strong connections with deeper layers. This suggests that the frontal side of the cortex is inherently capable of driving activity, even in isolation, and that frontal nodes provide the driving force generating a global pattern of spontaneous synchronous activity, such as the default mode network. This finding provides a new hypothesis explaining why disruption in the frontal region has such a large impact on mental health.
Affiliation(s)
- Kouki Matsuda
- Graduate School of Medicine, Kyoto University, 53 Kawaramachi, Shogoin, Sakyo-ku, Kyoto 606-8507, Japan
- Arata Shirakami
- Graduate School of Medicine, Kyoto University, 53 Kawaramachi, Shogoin, Sakyo-ku, Kyoto 606-8507, Japan
- Ryota Nakajima
- Graduate School of Medicine, Kyoto University, 53 Kawaramachi, Shogoin, Sakyo-ku, Kyoto 606-8507, Japan
- Tatsuya Akutsu
- Bioinformatics Center, Institute for Chemical Research, Kyoto University, Gokasho, Uji, Kyoto 611-0011, Japan
- Masanori Shimono
- Graduate School of Medicine, Kyoto University, 53 Kawaramachi, Shogoin, Sakyo-ku, Kyoto 606-8507, Japan
- Graduate School of Information Science and Technology, Osaka University, 1-5 Yamadaoka, Suita-shi, Osaka 565-0871, Japan
8
Varley TF, Pope M, Faskowitz J, Sporns O. Multivariate information theory uncovers synergistic subsystems of the human cerebral cortex. Commun Biol 2023; 6:451. PMID: 37095282; PMCID: PMC10125999; DOI: 10.1038/s42003-023-04843-w.
Abstract
One of the most well-established tools for modeling the brain is the functional connectivity network, which is constructed from pairs of interacting brain regions. While powerful, the network model is limited by the restriction that only pairwise dependencies are considered and potentially higher-order structures are missed. Here, we explore how multivariate information theory reveals higher-order dependencies in the human brain. We begin with a mathematical analysis of the O-information, showing analytically and numerically how it is related to previously established information theoretic measures of complexity. We then apply the O-information to brain data, showing that synergistic subsystems are widespread in the human brain. Highly synergistic subsystems typically sit between canonical functional networks, and may serve an integrative role. We then use simulated annealing to find maximally synergistic subsystems, finding that such systems typically comprise ≈10 brain regions, recruited from multiple canonical brain systems. Though ubiquitous, highly synergistic subsystems are invisible when considering pairwise functional connectivity, suggesting that higher-order dependencies form a kind of shadow structure that has been unrecognized by established network-based analyses. We assert that higher-order interactions in the brain represent an under-explored space that, accessible with tools of multivariate information theory, may offer novel scientific insights.
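The O-information used in this study has a closed form over discrete distributions, O = (n−2)H(X) + Σᵢ [H(Xᵢ) − H(X₋ᵢ)], with negative values indicating synergy-dominated systems and positive values redundancy-dominated ones. A small sketch on two canonical triplets:

```python
from collections import defaultdict
from math import log2

def entropy(pmf):
    return -sum(p * log2(p) for p in pmf.values() if p > 0)

def marginal(pmf, idx):
    out = defaultdict(float)
    for outcome, p in pmf.items():
        out[tuple(outcome[i] for i in idx)] += p
    return out

def o_information(pmf, n):
    """O = (n-2)H(X) + sum_i [H(X_i) - H(X_-i)]; negative => synergy-dominated."""
    total = (n - 2) * entropy(pmf)
    for i in range(n):
        rest = tuple(j for j in range(n) if j != i)
        total += entropy(marginal(pmf, (i,))) - entropy(marginal(pmf, rest))
    return total

xor3 = {(a, b, a ^ b): 0.25 for a in (0, 1) for b in (0, 1)}   # synergistic triplet
copy3 = {(a, a, a): 0.5 for a in (0, 1)}                        # redundant triplet

o_information(xor3, 3)    # -1 bit: synergy dominates
o_information(copy3, 3)   # +1 bit: redundancy dominates
```

The paper applies this sign convention at scale, searching over subsets of brain regions for strongly negative O-information.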
Affiliation(s)
- Thomas F Varley
- School of Informatics, Computing & Engineering, Indiana University, Bloomington, IN 47405, USA
- Department of Psychological & Brain Sciences, Indiana University, Bloomington, IN 47405, USA
- Maria Pope
- School of Informatics, Computing & Engineering, Indiana University, Bloomington, IN 47405, USA
- Program in Neuroscience, Indiana University, Bloomington, IN 47405, USA
- Joshua Faskowitz
- Department of Psychological & Brain Sciences, Indiana University, Bloomington, IN 47405, USA
- Program in Neuroscience, Indiana University, Bloomington, IN 47405, USA
- Olaf Sporns
- School of Informatics, Computing & Engineering, Indiana University, Bloomington, IN 47405, USA
- Department of Psychological & Brain Sciences, Indiana University, Bloomington, IN 47405, USA
- Program in Neuroscience, Indiana University, Bloomington, IN 47405, USA
9
Chiarion G, Sparacino L, Antonacci Y, Faes L, Mesin L. Connectivity Analysis in EEG Data: A Tutorial Review of the State of the Art and Emerging Trends. Bioengineering (Basel) 2023; 10:372. PMID: 36978763; PMCID: PMC10044923; DOI: 10.3390/bioengineering10030372.
Abstract
Understanding how different areas of the human brain communicate with each other is a crucial issue in neuroscience. The concepts of structural, functional and effective connectivity have been widely exploited to describe the human connectome, consisting of brain networks, their structural connections and functional interactions. While high-spatial-resolution imaging techniques such as functional magnetic resonance imaging (fMRI) are widely used to map this complex network of multiple interactions, electroencephalographic (EEG) recordings offer high temporal resolution and are thus perfectly suited to describing both spatially distributed and temporally dynamic patterns of neural activation and connectivity. In this work, we provide a technical account and a categorization of the most-used data-driven approaches to assess brain functional connectivity, intended as the study of the statistical dependencies between the recorded EEG signals. Different pairwise and multivariate, as well as directed and non-directed, connectivity metrics are discussed, weighing their pros and cons, in the time, frequency, and information-theoretic domains. The establishment of conceptual and mathematical relationships between metrics from these three frameworks, and the discussion of novel methodological approaches, allow the reader to go deep into the problem of inferring functional connectivity in complex networks. Furthermore, emerging trends for the description of extended forms of connectivity (e.g., high-order interactions) are also discussed, along with graph-theory tools exploring the topological properties of the network of connections provided by the proposed metrics. Applications to EEG data are reviewed. In addition, the importance of source localization and the impacts of signal acquisition and pre-processing techniques (e.g., filtering, source localization, and artifact rejection) on the connectivity estimates are recognized and discussed. By going through this review, the reader can delve deeply into the entire process of EEG pre-processing and analysis for the study of brain functional connectivity, learning to exploit novel methodologies and approaches to the problem of inferring connectivity within complex networks.
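Among the spectral connectivity metrics reviewed here, magnitude-squared coherence is one of the most common: a shared oscillation produces high coherence at its frequency and near-zero coherence elsewhere. A minimal sketch using SciPy (the signal parameters are illustrative, not from the review):

```python
import numpy as np
from scipy.signal import coherence

fs = 256.0                                      # sampling rate in Hz
t = np.arange(0, 30, 1 / fs)                    # 30 s of data
rng = np.random.default_rng(1)

# Two "channels" sharing a 10 Hz rhythm, each with independent noise.
x = np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 10 * t + 0.7) + rng.standard_normal(t.size)

# Welch-based magnitude-squared coherence, 1 Hz frequency resolution.
f, cxy = coherence(x, y, fs=fs, nperseg=int(fs))
c10 = cxy[np.argmin(np.abs(f - 10))]            # high: shared oscillation
c40 = cxy[np.argmin(np.abs(f - 40))]            # low: only independent noise
```

Note that coherence is non-directed; the directed and information-theoretic metrics surveyed in the review address the questions it cannot.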
Affiliation(s)
- Giovanni Chiarion
- Mathematical Biology and Physiology, Department of Electronics and Telecommunications, Politecnico di Torino, 10129 Turin, Italy
- Laura Sparacino
- Department of Engineering, University of Palermo, 90128 Palermo, Italy
- Yuri Antonacci
- Department of Engineering, University of Palermo, 90128 Palermo, Italy
- Luca Faes
- Department of Engineering, University of Palermo, 90128 Palermo, Italy
- Luca Mesin
- Mathematical Biology and Physiology, Department of Electronics and Telecommunications, Politecnico di Torino, 10129 Turin, Italy
10
Zhang W, Yin M, Jiang M, Dai Q. Partitioned estimation methodology of biological neuronal networks with topology-based module detection. Comput Biol Med 2023; 154:106552. PMID: 36738704; DOI: 10.1016/j.compbiomed.2023.106552.
Abstract
Parameter estimation for neuronal networks is closely related to information-processing mechanisms in neural systems. Estimating synaptic parameters for neuronal networks is a time-consuming task: because of the complex interactions between neurons, the computational efficiency and accuracy of estimation methods are relatively low. Meanwhile, inherent topological properties such as core-periphery and modular structures are not fully considered in estimation. To improve the efficiency and accuracy of estimation, this study proposes a two-stage PartitionMLE method that introduces detected neuronal modules as topological constraints in estimation. The proposed PartitionMLE method first decomposes the system into multiple non-overlapping neuronal modules by performing topology-based module detection. Dynamic parameters, including intra-modular and inter-modular parameters, are then estimated in two stages, using detected hubs to connect the non-overlapping neuronal modules. The contributions of the PartitionMLE method are twofold: it reduces estimation errors and improves model interpretability. Experiments on neuronal networks consisting of Hodgkin-Huxley (HH) and leaky integrate-and-fire (LIF) neurons validated the effectiveness of the PartitionMLE method in comparison with the single-stage MLE method.
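The two-stage idea (detect modules from topology, then estimate intra- and inter-modular parameters separately) can be sketched with an off-the-shelf community detection step; the estimation stages appear only as comments, since the paper's PartitionMLE details are not reproduced here:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy topology: two densely connected modules joined by a single bridging edge.
G = nx.union(nx.complete_graph(range(0, 5)), nx.complete_graph(range(5, 10)))
G.add_edge(4, 5)   # inter-modular link through hub nodes 4 and 5

# Stage 0: topology-based module detection (modularity maximization here).
modules = [set(c) for c in greedy_modularity_communities(G)]

# Stage 1 (sketch): estimate intra-modular dynamic parameters independently
# within each detected module, a much smaller problem than the full network.
# Stage 2 (sketch): estimate inter-modular parameters only on edges that cross
# module boundaries, holding the stage-1 estimates fixed.
cross_edges = [(u, v) for u, v in G.edges
               if not any(u in m and v in m for m in modules)]
```

The payoff mirrors the paper's claim: partitioning turns one large, strongly coupled estimation problem into several small ones plus a thin layer of inter-modular parameters.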
Affiliation(s)
- Wei Zhang
- Zhejiang Sci-Tech University, Second Street 928, Hangzhou, 310018, China
- Muqi Yin
- Institute of Cyber-Systems and Control, Zhejiang University, Zheda Road 38, Hangzhou, 310027, China
- Mingfeng Jiang
- Zhejiang Sci-Tech University, Second Street 928, Hangzhou, 310018, China
- Qi Dai
- Zhejiang Sci-Tech University, Second Street 928, Hangzhou, 310018, China
11
Varley TF, Sporns O, Schaffelhofer S, Scherberger H, Dann B. Information-processing dynamics in neural networks of macaque cerebral cortex reflect cognitive state and behavior. Proc Natl Acad Sci U S A 2023; 120:e2207677120. PMID: 36603032; PMCID: PMC9926243; DOI: 10.1073/pnas.2207677120.
Abstract
One of the essential functions of biological neural networks is the processing of information. This includes everything from processing sensory information to perceive the environment to processing motor information to interact with the environment. Due to methodological limitations, it has been historically unclear how information processing changes during different cognitive or behavioral states and to what extent information is processed within or between networks of neurons in different brain areas. In this study, we leverage recent advances in the calculation of information dynamics to explore neural-level processing within and between the frontoparietal areas AIP, F5, and M1 during a delayed grasping task performed by three macaque monkeys. While information processing was high within all areas during all cognitive and behavioral states of the task, interareal processing varied widely: during visuomotor transformation, AIP and F5 formed a reciprocally connected processing unit, while no processing was present between areas during the memory period. Movement execution was processed globally across all areas, with a predominance of processing in the feedback direction. Furthermore, the fine-scale network structure reconfigured at the neuron level in response to different grasping conditions, despite no differences in the overall amount of information present. These results suggest that areas dynamically form higher-order processing units according to the cognitive or behavioral demand and that the information-processing network is hierarchically organized at the neuron level, with the coarse network structure determining the behavioral state and finer changes reflecting different conditions.
Affiliation(s)
- Thomas F. Varley
- Department of Psychological & Brain Sciences, Indiana University, Bloomington, IN 47405-7007
- School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN 47405-7007
- Olaf Sporns
- Department of Psychological & Brain Sciences, Indiana University, Bloomington, IN 47405-7007
- Stefan Schaffelhofer
- Neurobiology Laboratory, German Primate Center, 37077 Goettingen, Germany
- Faculty of Biology and Psychology, University of Goettingen, 37073 Goettingen, Germany
- Hansjörg Scherberger
- Neurobiology Laboratory, German Primate Center, 37077 Goettingen, Germany
- Faculty of Biology and Psychology, University of Goettingen, 37073 Goettingen, Germany
- Benjamin Dann
- Neurobiology Laboratory, German Primate Center, 37077 Goettingen, Germany
12
Varley TF, Kaminski P. Untangling Synergistic Effects of Intersecting Social Identities with Partial Information Decomposition. Entropy (Basel) 2022; 24:1387. PMID: 37420406; PMCID: PMC9611752; DOI: 10.3390/e24101387.
Abstract
The theory of intersectionality proposes that an individual's experience of society has aspects that are irreducible to the sum of one's various identities considered individually, but are "greater than the sum of their parts". In recent years, this framework has become a frequent topic of discussion both in social sciences and among popular movements for social justice. In this work, we show that the effects of intersectional identities can be statistically observed in empirical data using information theory, particularly the partial information decomposition framework. We show that, when considering the predictive relationship between various identity categories such as race and sex, on outcomes such as income, health and wellness, robust statistical synergies appear. These synergies show that there are joint-effects of identities on outcomes that are irreducible to any identity considered individually and only appear when specific categories are considered together (for example, there is a large, synergistic effect of race and sex considered jointly on income irreducible to either race or sex). Furthermore, these synergies are robust over time, remaining largely constant year-to-year. We then show using synthetic data that the most widely used method of assessing intersectionalities in data (linear regression with multiplicative interaction coefficients) fails to disambiguate between truly synergistic, greater-than-the-sum-of-their-parts interactions, and redundant interactions. We explore the significance of these two distinct types of interactions in the context of making inferences about intersectional relationships in data and the importance of being able to reliably differentiate the two. Finally, we conclude that information theory, as a model-free framework sensitive to nonlinearities and synergies in data, is a natural method by which to explore the space of higher-order social dynamics.
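The regression issue raised here is easy to see on the canonical synergistic case: XOR is fitted exactly by a multiplicative interaction term, so a nonzero interaction coefficient by itself cannot tell us whether the underlying relationship is synergistic or redundant, which is the paper's point. A small sketch (synthetic binary data, not the paper's survey variables):

```python
import numpy as np

# All four combinations of two binary predictors and an XOR-like outcome.
X1 = np.array([0, 0, 1, 1])
X2 = np.array([0, 1, 0, 1])
y = X1 ^ X2

# Ordinary least squares with a multiplicative interaction: [1, x1, x2, x1*x2].
design = np.column_stack([np.ones(4), X1, X2, X1 * X2])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
# Fits exactly: y = x1 + x2 - 2*x1*x2, so the interaction coefficient is -2.
```

The fit is perfect, yet the coefficient alone does not reveal that the XOR relationship is purely synergistic; distinguishing synergy from redundancy requires an information-theoretic decomposition such as PID.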
Affiliation(s)
- Thomas F. Varley: School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN 47405, USA; Department of Psychology & Brain Sciences, Indiana University, Bloomington, IN 47405, USA
- Patrick Kaminski: School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN 47405, USA; Department of Sociology, Indiana University, Bloomington, IN 47405, USA
|
13
|
Varley TF, Hoel E. Emergence as the conversion of information: a unifying theory. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 2022; 380:20210150. PMID: 35599561; PMCID: PMC9131462; DOI: 10.1098/rsta.2021.0150.
Abstract
Is reduction always a good scientific strategy? The existence of the special sciences above physics suggests not. Previous research has shown that dimensionality reduction (macroscales) can increase the dependency between elements of a system (a phenomenon called 'causal emergence'). Here, we provide an umbrella mathematical framework for emergence based on information conversion. We show evidence that coarse-graining can convert information from one 'type' to another. We demonstrate this using the well-understood mutual information measure applied to Boolean networks. Using partial information decomposition, the mutual information can be decomposed into redundant, unique and synergistic information atoms. Then by introducing a novel measure of the synergy bias of a given decomposition, we are able to show that the synergy component of a Boolean network's mutual information can increase at macroscales. This can occur even when there is no difference in the total mutual information between a macroscale and its underlying microscale, proving information conversion. We relate this broad framework to previous work, compare it to other theories, and argue it complexifies any notion of universal reduction in the sciences, since such reduction would likely lead to a loss of synergistic information in scientific models. This article is part of the theme issue 'Emergent phenomena in complex physical and socio-technical systems: from cells to societies'.
Affiliation(s)
- Thomas F. Varley: Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA; School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, USA
- Erik Hoel: Allen Discovery Center, Tufts University, Medford, MA, USA
|
14
|
Newman EL, Varley TF, Parakkattu VK, Sherrill SP, Beggs JM. Revealing the Dynamics of Neural Information Processing with Multivariate Information Decomposition. Entropy (Basel, Switzerland) 2022; 24:930. PMID: 35885153; PMCID: PMC9319160; DOI: 10.3390/e24070930.
Abstract
The varied cognitive abilities and rich adaptive behaviors enabled by the animal nervous system are often described in terms of information processing. This framing raises the issue of how biological neural circuits actually process information, and some of the most fundamental outstanding questions in neuroscience center on understanding the mechanisms of neural information processing. Classical information theory has long been understood to be a natural framework within which information processing can be understood, and recent advances in the field of multivariate information theory offer new insights into the structure of computation in complex systems. In this review, we provide an introduction to the conceptual and practical issues associated with using multivariate information theory to analyze information processing in neural circuits, as well as discussing recent empirical work in this vein. Specifically, we provide an accessible introduction to the partial information decomposition (PID) framework. PID reveals redundant, unique, and synergistic modes by which neurons integrate information from multiple sources. We focus particularly on the synergistic mode, which quantifies the "higher-order" information carried in the patterns of multiple inputs and is not reducible to input from any single source. Recent work in a variety of model systems has revealed that synergistic dynamics are ubiquitous in neural circuitry and show reliable structure-function relationships, emerging disproportionately in neuronal rich clubs, downstream of recurrent connectivity, and in the convergence of correlated activity. We draw on the existing literature on higher-order information dynamics in neuronal networks to illustrate the insights that have been gained by taking an information decomposition perspective on neural activity. Finally, we briefly discuss future promising directions for information decomposition approaches to neuroscience, such as work on behaving animals, multi-target generalizations of PID, and time-resolved local analyses.
Affiliation(s)
- Ehren L. Newman: Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405, USA
- Thomas F. Varley: Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405, USA
- Vibin K. Parakkattu: Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405, USA
- John M. Beggs: Department of Physics, Indiana University, Bloomington, IN 47405, USA
|
15
|
Petkoski S, Jirsa VK. Normalizing the brain connectome for communication through synchronization. Netw Neurosci 2022; 6:722-744. PMID: 36607179; PMCID: PMC9810372; DOI: 10.1162/netn_a_00231.
Abstract
Networks in neuroscience determine how brain function unfolds, and their perturbations lead to psychiatric disorders and brain disease. Brain networks are characterized by their connectomes, which comprise the totality of all connections, and are commonly described by graph theory. This approach is deeply rooted in a particle view of information processing, based on the quantification of informational bits such as firing rates. Oscillations and brain rhythms demand, however, a wave perspective of information processing based on synchronization. We extend traditional graph theory to a dual, particle-wave, perspective, integrate time delays due to finite transmission speeds, and derive a normalization of the connectome. When applied to the database of the Human Connectome Project, it explains the emergence of frequency-specific network cores including the visual and default mode networks. These findings are robust across human subjects (N = 100) and are a fundamental network property within the wave picture. The normalized connectome comprises the particle view in the limit of infinite transmission speeds and opens the applicability of graph theory to a wide range of novel network phenomena, including physiological and pathological brain rhythms. These two perspectives are orthogonal, but not incommensurable, when understood within the novel, here-proposed, generalized framework of structural connectivity.
Affiliation(s)
- Spase Petkoski: Aix-Marseille University, Inserm, INS, Institut de Neurosciences des Systèmes, Marseille, France
- Viktor K. Jirsa: Aix-Marseille University, Inserm, INS, Institut de Neurosciences des Systèmes, Marseille, France
|
16
|
Antonello PC, Varley TF, Beggs J, Porcionatto M, Sporns O, Faber J. Self-organization of in vitro neuronal assemblies drives to complex network topology. eLife 2022; 11:74921. PMID: 35708741; PMCID: PMC9203058; DOI: 10.7554/elife.74921.
Abstract
Activity-dependent self-organization plays an important role in the formation of specific and stereotyped connectivity patterns in neural circuits. By combining neuronal cultures with approaches from network neuroscience and information theory, we can study how complex network topology emerges from local neuronal interactions. We constructed effective connectivity networks using a transfer entropy analysis of spike trains recorded from rat embryo dissociated hippocampal neuron cultures between 6 and 35 days in vitro to investigate how the topology evolves during maturation. The methodology for constructing the networks accounted for synaptic delay and addressed the influence of firing rate and population bursts, as well as spurious effects, on the inference of connections. We found that the number of links in the networks grew over the course of development, shifting from a segregated to a more integrated architecture. As part of this progression, three significant aspects of complex network topology emerged. In agreement with previous in silico and in vitro studies, a small-world architecture was detected, largely due to strong clustering among neurons. Additionally, the networks developed a modular topology, with most modules comprising nearby neurons. Finally, highly active neurons acquired topological characteristics that made them important nodes in the network and integrators of modules. These findings offer new insights into how effective neuronal network topology relates to the self-organization mechanisms of neuronal assemblies.
Affiliation(s)
- Priscila C Antonello: Department of Biochemistry, Escola Paulista de Medicina, Universidade Federal de São Paulo (UNIFESP), São Paulo, Brazil
- Thomas F Varley: Department of Psychological and Brain Sciences, Indiana University, Bloomington, United States; Department of Informatics, Computing, and Engineering, Indiana University, Bloomington, United States
- John Beggs: Department of Physics, Indiana University, Bloomington, United States
- Marimélia Porcionatto: Department of Biochemistry, Escola Paulista de Medicina, Universidade Federal de São Paulo (UNIFESP), São Paulo, Brazil
- Olaf Sporns: Department of Psychological and Brain Sciences, Indiana University, Bloomington, United States
- Jean Faber: Department of Neurology and Neurosurgery, Escola Paulista de Medicina, Universidade Federal de São Paulo (UNIFESP), São Paulo, Brazil
|
17
|
Shorten DP, Priesemann V, Wibral M, Lizier JT. Early lock-in of structured and specialised information flows during neural development. eLife 2022; 11:74651. PMID: 35286256; PMCID: PMC9064303; DOI: 10.7554/elife.74651.
Abstract
The brains of many organisms are capable of complicated distributed computation underpinned by a highly advanced information processing capacity. Although substantial progress has been made towards characterising the information flow component of this capacity in mature brains, there is a distinct lack of work characterising its emergence during neural development. This gap has largely been due to the absence of effective estimators of information processing operations for spiking data. Here, we leverage recent advances in this estimation task in order to quantify the changes in transfer entropy during development. We do so by studying the changes in the intrinsic dynamics of the spontaneous activity of developing dissociated neural cell cultures. We find that the quantity of information flowing across these networks undergoes a dramatic increase across development. Moreover, the spatial structure of these flows exhibits a tendency to lock in at the point when they arise. We also characterise the flow of information during the crucial periods of population bursts. We find that, during these bursts, nodes tend to undertake specialised computational roles as either transmitters, mediators, or receivers of information, with these roles tending to align with their average spike ordering. Further, we find that these roles are regularly locked in when the information flows are established. Finally, we compare these results to information flows in a model network developing according to a spike-timing-dependent plasticity learning rule. Similar temporal patterns in the development of information flows were observed in these networks, hinting at the broader generality of these phenomena.
Affiliation(s)
- David P Shorten: Centre for Complex Systems, Faculty of Engineering, The University of Sydney, Sydney, Australia
- Viola Priesemann: Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
- Michael Wibral: Campus Institute for Dynamics of Biological Networks, Georg August University, Göttingen, Germany
- Joseph T Lizier: Centre for Complex Systems, Faculty of Engineering, The University of Sydney, Sydney, Australia
|
18
|
Pakman A, Nejatbakhsh A, Gilboa D, Makkeh A, Mazzucato L, Wibral M, Schneidman E. Estimating the Unique Information of Continuous Variables. Advances in Neural Information Processing Systems 2021; 34:20295-20307. PMID: 35645551; PMCID: PMC9137417.
Abstract
The integration and transfer of information from multiple sources to multiple targets is a core motive of neural systems. The emerging field of partial information decomposition (PID) provides a novel information-theoretic lens into these mechanisms by identifying synergistic, redundant, and unique contributions to the mutual information between one and several variables. While many works have studied aspects of PID for Gaussian and discrete distributions, the case of general continuous distributions is still uncharted territory. In this work we present a method for estimating the unique information in continuous distributions, for the case of one versus two variables. Our method solves the associated optimization problem over the space of distributions with fixed bivariate marginals by combining copula decompositions and techniques developed to optimize variational autoencoders. We obtain excellent agreement with known analytic results for Gaussians, and illustrate the power of our new approach in several brain-inspired neural models. Our method is capable of recovering the effective connectivity of a chaotic network of rate neurons, and uncovers a complex trade-off between redundancy, synergy and unique information in recurrent networks trained to solve a generalized XOR task.
|
19
|
Nobukawa S, Nishimura H, Wagatsuma N, Ando S, Yamanishi T. Long-Tailed Characteristic of Spiking Pattern Alternation Induced by Log-Normal Excitatory Synaptic Distribution. IEEE Transactions on Neural Networks and Learning Systems 2021; 32:3525-3537. PMID: 32822305; DOI: 10.1109/tnnls.2020.3015208.
Abstract
Studies of structural connectivity at the synaptic level show that, in synaptic connections of the cerebral cortex, the excitatory postsynaptic potential (EPSP) of most synapses exhibits sub-mV values, while a small number of synapses exhibit large EPSPs (>~1.0 mV). This means that the distribution of EPSPs follows a log-normal distribution. Beyond structural connectivity, skewed and long-tailed distributions have also been widely observed in neural activities, such as the occurrence of spiking rates and the size of synchronously spiking populations. Many studies have modeled this long-tailed distribution of neural activity; however, its causal factors remain controversial. This study focused on the long-tailed EPSP distributions and interlateral synaptic connections primarily observed in cortical network structures, and constructed a spiking neural network consistent with these features. Specifically, we constructed two coupled modules of spiking neural networks with excitatory and inhibitory neural populations and a log-normal EPSP distribution. We evaluated the spiking activities for different input frequencies, with and without strong synaptic connections. These coupled modules exhibited intermittent alternation of activity between modules, given a moderate input frequency and the existence of strong synaptic and intermodule connections. Moreover, power analysis, multiscale entropy analysis, and surrogate data analysis revealed that the long-tailed EPSP distribution and intermodule connections enhanced the complexity of spiking activity at large temporal scales and induced nonlinear dynamics and neural activity that followed a long-tailed distribution.
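The log-normal EPSP distribution at the heart of this model is easy to sketch. The parameters below are illustrative assumptions, not values fitted from the paper; they are chosen so that most sampled amplitudes are sub-mV while a small fraction exceed ~1 mV, matching the qualitative picture in the abstract.

```python
import random
import statistics

# Illustrative (not fitted) parameters of log(EPSP amplitude in mV).
MU, SIGMA = -1.5, 1.0

random.seed(42)  # reproducible draw
epsps = [random.lognormvariate(MU, SIGMA) for _ in range(10_000)]

# Hallmarks of a long-tailed distribution: the mean is dragged above
# the median by the rare large-EPSP synapses.
mean_epsp = statistics.mean(epsps)
median_epsp = statistics.median(epsps)
frac_large = sum(w > 1.0 for w in epsps) / len(epsps)
```

With these parameters the median sits around exp(MU) ≈ 0.22 mV, yet a small minority of synapses exceed 1 mV, which is the structural asymmetry the paper feeds into its coupled spiking modules.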
|
20
|
Sherrill SP, Timme NM, Beggs JM, Newman EL. Partial information decomposition reveals that synergistic neural integration is greater downstream of recurrent information flow in organotypic cortical cultures. PLoS Comput Biol 2021; 17:e1009196. PMID: 34252081; PMCID: PMC8297941; DOI: 10.1371/journal.pcbi.1009196.
Abstract
The directionality of network information flow dictates how networks process information. A central component of information processing in both biological and artificial neural networks is their ability to perform synergistic integration, a type of computation. We established previously that synergistic integration varies directly with the strength of feedforward information flow. However, the relationships between both recurrent and feedback information flow and synergistic integration remain unknown. To address this, we analyzed the spiking activity of hundreds of neurons in organotypic cultures of mouse cortex. We asked how empirically observed synergistic integration, determined from partial information decomposition, varied with local functional network structure that was categorized into motifs with varying recurrent and feedback information flow. We found that synergistic integration was elevated in motifs with greater recurrent information flow, beyond that expected from the local feedforward information flow. Feedback information flow was interrelated with feedforward information flow and was associated with decreased synergistic integration. Our results indicate that synergistic integration is distinctly influenced by the directionality of local information flow. Networks compute information: they modify inputs to generate distinct outputs. These computations are an important component of network information processing. Knowing how the routing of information in a network influences computation is therefore crucial. Here we asked how a key form of computation, synergistic integration, is related to the direction of local information flow in networks of spiking cortical neurons. Specifically, we asked how information flow between input neurons (i.e., recurrent information flow) and information flow from output neurons to input neurons (i.e., feedback information flow) were related to the amount of synergistic integration performed by output neurons. We found that greater synergistic integration occurred where there was more recurrent information flow, and lesser synergistic integration occurred where there was more feedback information flow relative to feedforward information flow. These results show that computation, in the form of synergistic integration, is distinctly influenced by the directionality of local information flow. Such work is valuable for predicting where and how network computation occurs and for designing networks with desired computational abilities.
Affiliation(s)
- Samantha P. Sherrill (corresponding author): Department of Psychological and Brain Sciences & Program in Neuroscience, Indiana University Bloomington, Bloomington, Indiana, United States of America
- Nicholas M. Timme: Department of Psychology, Indiana University-Purdue University Indianapolis, Indianapolis, Indiana, United States of America
- John M. Beggs: Department of Physics & Program in Neuroscience, Indiana University Bloomington, Bloomington, Indiana, United States of America
- Ehren L. Newman (corresponding author): Department of Psychological and Brain Sciences & Program in Neuroscience, Indiana University Bloomington, Bloomington, Indiana, United States of America
|
21
|
Inhibitory neurons exhibit high controlling ability in the cortical microconnectome. PLoS Comput Biol 2021; 17:e1008846. PMID: 33831009; PMCID: PMC8031186; DOI: 10.1371/journal.pcbi.1008846.
Abstract
The brain is a network system in which excitatory and inhibitory neurons keep activity balanced within the highly non-random connectivity pattern of the microconnectome. It is well known that the relative percentage of inhibitory neurons is much smaller than that of excitatory neurons in the cortex. How inhibitory neurons can keep the balance with the surrounding excitatory neurons is therefore an important question, about which there is much accumulated knowledge. This study quantitatively evaluated the relatively higher functional contribution of inhibitory neurons, not only in terms of properties of individual neurons, such as firing rate, but also in terms of topological mechanisms and the ability to control other excitatory neurons. We combined simultaneous electrical recording (~2.5 hours) of ~1000 neurons in vitro with quantitative evaluation of neuronal interactions, including excitatory-inhibitory categorization. The anatomical recording targets, such as brain regions and cortical layers, were accurately defined by cross-referencing MRI and immunostaining recordings. The interaction networks enabled us to quantify the topological influence of individual neurons in terms of their ability to control other neurons. In particular, the results indicated that highly influential inhibitory neurons show a higher ability to control other neurons than excitatory neurons do, and are relatively often distributed in deeper layers of the cortex. Furthermore, the neurons with high controlling ability are more effectively limited in number than the central nodes of k-cores, and these neurons also participate in more clustered motifs. In summary, this study suggests that the high controlling ability of inhibitory neurons is a key mechanism for keeping balance with a large number of excitatory neurons, beyond a simply higher firing rate. The method of selecting a limited set of important neurons could also be applied to effectively and selectively stimulate E/I-imbalanced disease states. How a small number of inhibitory neurons functionally keeps balance with a large number of excitatory neurons in the brain by mutual control is a fundamental question. This study quantitatively evaluated a topological mechanism of interaction networks in terms of the ability of individual cortical neurons to control other neurons. The combination of simultaneous electrical recording of ~1000 neurons with a quantitative evaluation method of neuronal interactions, including excitatory-inhibitory categories, enabled us to evaluate the influence of individual neurons not only by firing rate but also by their relative positions in the networks and their ability to control other neurons. The results showed that inhibitory neurons have more controlling ability than excitatory neurons and were more often observed in deep layers. Because the neurons selected by controlling ability are far fewer in number than those selected by centrality measures, and are selected directly by their ability to control other neurons, this selection method will help not only to produce realistic computational models but also to stimulate the brain effectively in treating imbalanced disease states.
|
22
|
Shorten DP, Spinney RE, Lizier JT. Estimating Transfer Entropy in Continuous Time Between Neural Spike Trains or Other Event-Based Data. PLoS Comput Biol 2021; 17:e1008054. PMID: 33872296; PMCID: PMC8084348; DOI: 10.1371/journal.pcbi.1008054.
Abstract
Transfer entropy (TE) is a widely used measure of directed information flows in a number of domains including neuroscience. Many real-world time series for which we are interested in information flows come in the form of (near) instantaneous events occurring over time. Examples include the spiking of biological neurons, trades on stock markets and posts to social media, amongst myriad other systems involving events in continuous time throughout the natural and social sciences. However, there exist severe limitations to the current approach to TE estimation on such event-based data via discretising the time series into time bins: it is not consistent, has high bias, converges slowly and cannot simultaneously capture relationships that occur with very fine time precision as well as those that occur over long time intervals. Building on recent work which derived a theoretical framework for TE in continuous time, we present an estimation framework for TE on event-based data and develop a k-nearest-neighbours estimator within this framework. This estimator is provably consistent, has favourable bias properties and converges orders of magnitude more quickly than the current state-of-the-art in discrete-time estimation on synthetic examples. We demonstrate failures of the traditionally-used source-time-shift method for null surrogate generation. In order to overcome these failures, we develop a local permutation scheme for generating surrogate time series conforming to the appropriate null hypothesis in order to test for the statistical significance of the TE and, as such, test for the conditional independence between the history of one point process and the updates of another. Our approach is shown to be capable of correctly rejecting or accepting the null hypothesis of conditional independence even in the presence of strong pairwise time-directed correlations. This capacity to accurately test for conditional independence is further demonstrated on models of a spiking neural circuit inspired by the pyloric circuit of the crustacean stomatogastric ganglion, succeeding where previous related estimators have failed.
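For context on the discrete-time baseline this paper improves upon, here is a minimal plug-in transfer entropy estimator for binned (discretised) time series with history length one. This is a generic sketch, not the authors' continuous-time estimator; function and variable names are illustrative.

```python
import math
import random
from collections import Counter

def transfer_entropy(source, target):
    """Plug-in transfer entropy TE(source -> target) in bits for
    discrete (binned) time series, with history length k = 1."""
    triples = Counter()                       # (y_next, y_past, x_past)
    for t in range(len(target) - 1):
        triples[(target[t + 1], target[t], source[t])] += 1
    n = sum(triples.values())
    yx, yy, ypast = Counter(), Counter(), Counter()
    for (yn, yp, xp), c in triples.items():
        yx[(yp, xp)] += c
        yy[(yn, yp)] += c
        ypast[yp] += c
    te = 0.0
    for (yn, yp, xp), c in triples.items():
        p_full = c / yx[(yp, xp)]             # p(y_next | y_past, x_past)
        p_self = yy[(yn, yp)] / ypast[yp]     # p(y_next | y_past)
        te += (c / n) * math.log2(p_full / p_self)
    return te

random.seed(0)
x = [random.randint(0, 1) for _ in range(5000)]
y = [0] + x[:-1]   # y copies x with a one-step lag
```

On this toy pair, `transfer_entropy(x, y)` comes out close to 1 bit while `transfer_entropy(y, x)` is close to 0, reflecting the directed x → y coupling. As the abstract stresses, this binned plug-in approach is biased and converges slowly on real event data in continuous time, which is precisely what motivates the paper's k-nearest-neighbours estimator.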
Affiliation(s)
- David P. Shorten: Complex Systems Research Group and Centre for Complex Systems, Faculty of Engineering, The University of Sydney, Sydney, Australia
- Richard E. Spinney: Complex Systems Research Group and Centre for Complex Systems, Faculty of Engineering, The University of Sydney, Sydney, Australia; School of Physics and EMBL Australia Node Single Molecule Science, School of Medical Sciences, The University of New South Wales, Sydney, Australia
- Joseph T. Lizier: Complex Systems Research Group and Centre for Complex Systems, Faculty of Engineering, The University of Sydney, Sydney, Australia
|
23
|
Anagnostopoulou A, Styliadis C, Kartsidis P, Romanopoulou E, Zilidou V, Karali C, Karagianni M, Klados M, Paraskevopoulos E, Bamidis PD. Computerized physical and cognitive training improves the functional architecture of the brain in adults with Down syndrome: A network science EEG study. Netw Neurosci 2021; 5:274-294. PMID: 33688615; PMCID: PMC7935030; DOI: 10.1162/netn_a_00177.
Abstract
Understanding the neuroplastic capacity of people with Down syndrome (PwDS) can potentially reveal the causal relationship between aberrant brain organization and phenotypic characteristics. We used resting-state EEG recordings to identify how a neuroplasticity-triggering training protocol relates to changes in the functional connectivity of the brain's intrinsic cortical networks. Brain activity of 12 PwDS before and after a 10-week protocol of combined physical and cognitive training was statistically compared to quantify changes in directed functional connectivity in conjunction with psychosomatometric assessments. PwDS showed increased connectivity within the left hemisphere and from left-to-right hemisphere, as well as increased physical and cognitive performance. Our findings reveal a strong adaptive neuroplastic reorganization as a result of the training that leads to a less-random network with a more pronounced hierarchical organization. Our results go beyond previous findings by indicating a transition to a healthier, more efficient, and flexible network architecture, with improved integration and segregation abilities in the brain of PwDS. Resting-state electrophysiological brain activity is used here for the first time to display meaningful relationships to underlying Down syndrome processes and outcomes of importance in a translational inquiry. This trial is registered with ClinicalTrials.gov Identifier NCT04390321.
Affiliation(s)
- Alexandra Anagnostopoulou: Medical Physics Laboratory, School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Greece
- Charis Styliadis: Medical Physics Laboratory, School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Greece
- Panagiotis Kartsidis: Medical Physics Laboratory, School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Greece
- Evangelia Romanopoulou: Medical Physics Laboratory, School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Greece
- Vasiliki Zilidou: Medical Physics Laboratory, School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Greece
- Chrysi Karali: School of Biology, Faculty of Science, Aristotle University of Thessaloniki, Greece
- Maria Karagianni: Medical Physics Laboratory, School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Greece
- Manousos Klados: Department of Psychology, The University of Sheffield International Faculty, City College, Thessaloniki, Greece
- Evangelos Paraskevopoulos: Medical Physics Laboratory, School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Greece
- Panagiotis D Bamidis: Medical Physics Laboratory, School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Greece
|
24
|
Varley TF, Sporns O, Puce A, Beggs J. Differential effects of propofol and ketamine on critical brain dynamics. PLoS Comput Biol 2020; 16:e1008418. [PMID: 33347455 PMCID: PMC7785236 DOI: 10.1371/journal.pcbi.1008418] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2020] [Revised: 01/05/2021] [Accepted: 10/05/2020] [Indexed: 11/18/2022] Open
Abstract
Whether the brain operates at a critical "tipping" point is a long-standing scientific question, with evidence from both cellular and systems-scale studies suggesting that the brain does sit in, or near, a critical regime. Neuroimaging studies of humans in altered states of consciousness have prompted the suggestion that maintenance of critical dynamics is necessary for the emergence of consciousness and complex cognition, and that reduced or disorganized consciousness may be associated with deviations from criticality. Unfortunately, many of the cellular-level studies reporting signs of criticality were performed in non-conscious systems (in vitro neuronal cultures) or unconscious animals (e.g. anaesthetized rats). Here we attempted to address this knowledge gap by exploring critical brain dynamics in invasive ECoG recordings from multiple sessions with a single macaque as the animal transitioned from consciousness to unconsciousness under different anaesthetics (ketamine and propofol). We used a previously validated test of criticality, avalanche dynamics, to assess the differences in brain dynamics between normal consciousness and both drug states. Propofol and ketamine were selected due to their differential effects on consciousness (ketamine, but not propofol, is known to induce an unusual state known as "dissociative anaesthesia"). Our analyses indicate that propofol dramatically restricted the size and duration of avalanches, while ketamine allowed for more awake-like dynamics to persist. In addition, propofol, but not ketamine, triggered a large reduction in the complexity of brain dynamics. All states, however, showed some signs of persistent criticality when testing for exponent relations and universal shape-collapse. Together, these results suggest that maintenance of critical brain dynamics may be important for regulation and control of conscious awareness.
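Avalanche analysis of the kind described above begins by binning spiking activity in time and collecting contiguous runs of active bins bounded by silent bins; the run's total event count is the avalanche size. A minimal sketch (the toy raster below is illustrative, not the study's data):

```python
import numpy as np

def avalanche_sizes(raster):
    """Sizes of avalanches in a binary raster of shape (n_units, n_bins).

    An avalanche is a maximal run of time bins with at least one event;
    its size is the total number of events in the run.
    """
    active = raster.sum(axis=0)  # events per time bin
    sizes, current = [], 0
    for count in active:
        if count > 0:
            current += int(count)
        elif current:
            sizes.append(current)
            current = 0
    if current:  # run reaching the end of the recording
        sizes.append(current)
    return sizes

# Two units, seven bins: runs at bins 1-2 (3 events) and bin 5 (2 events).
demo = np.array([[0, 1, 1, 0, 0, 1, 0],
                 [0, 0, 1, 0, 0, 1, 0]])
print(avalanche_sizes(demo))  # [3, 2]
```

Criticality tests then fit power laws to the resulting size and duration distributions and check exponent relations and shape-collapse, which is beyond this sketch.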
Collapse
Affiliation(s)
- Thomas F. Varley
- Psychological & Brain Sciences, Indiana University, Bloomington, Indiana, USA
- School of Informatics, Indiana University, Bloomington, Indiana, USA
| | - Olaf Sporns
- Psychological & Brain Sciences, Indiana University, Bloomington, Indiana, USA
| | - Aina Puce
- Psychological & Brain Sciences, Indiana University, Bloomington, Indiana, USA
| | - John Beggs
- Department of Physics, Indiana University, Bloomington, Indiana, USA
| |
Collapse
|
25
|
Kunert-Graf J, Sakhanenko N, Galas D. Partial Information Decomposition and the Information Delta: A Geometric Unification Disentangling Non-Pairwise Information. ENTROPY (BASEL, SWITZERLAND) 2020; 22:E1333. [PMID: 33266517 PMCID: PMC7760044 DOI: 10.3390/e22121333] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/22/2020] [Revised: 11/12/2020] [Accepted: 11/19/2020] [Indexed: 01/01/2023]
Abstract
Information theory provides robust measures of multivariable interdependence, but classically does little to characterize the multivariable relationships it detects. The Partial Information Decomposition (PID) characterizes the mutual information between variables by decomposing it into unique, redundant, and synergistic components. This has been usefully applied, particularly in neuroscience, but there is currently no generally accepted method for its computation. Independently, the Information Delta framework characterizes non-pairwise dependencies in genetic datasets. This framework has developed an intuitive geometric interpretation for how discrete functions encode information, but lacks some important generalizations. This paper shows that the PID and Delta frameworks are largely equivalent. We equate their key expressions, allowing for results in one framework to apply towards open questions in the other. For example, we find that the approach of Bertschinger et al. is useful for the open Information Delta question of how to deal with linkage disequilibrium. We also show how PID solutions can be mapped onto the space of delta measures. Using Bertschinger et al. as an example solution, we identify a specific plane in delta-space on which this approach's optimization is constrained, and compute it for all possible three-variable discrete functions of a three-letter alphabet. This yields a clear geometric picture of how a given solution decomposes information.
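As a rough illustration of the kind of quantity PID refines, the co-information of the XOR gate can be computed directly from its joint distribution. This signed measure is only a coarse proxy that conflates redundancy and synergy (a proper PID such as the Bertschinger et al. measure requires an optimization over distributions), but for XOR it cleanly exposes one bit of pure synergy. A minimal sketch:

```python
from collections import defaultdict
from math import log2

def mutual_info(pairs):
    """I(A;B) in bits from a dict mapping (a, b) -> probability."""
    pa, pb = defaultdict(float), defaultdict(float)
    for (a, b), p in pairs.items():
        pa[a] += p
        pb[b] += p
    return sum(p * log2(p / (pa[a] * pb[b]))
               for (a, b), p in pairs.items() if p > 0)

def marginal(joint, i, j):
    """Marginalize a dict {(x, y, z): p} onto coordinates i and j."""
    out = defaultdict(float)
    for key, p in joint.items():
        out[(key[i], key[j])] += p
    return dict(out)

# z = x XOR y with x, y independent fair bits: the textbook synergy example.
joint = {(x, y, x ^ y): 0.25 for x in (0, 1) for y in (0, 1)}

i_xz = mutual_info(marginal(joint, 0, 2))   # 0 bits: x alone says nothing
i_yz = mutual_info(marginal(joint, 1, 2))   # 0 bits: y alone says nothing
pair = defaultdict(float)
for (x, y, z), p in joint.items():
    pair[((x, y), z)] += p
i_xy_z = mutual_info(dict(pair))            # 1 bit jointly

co_info = i_xz + i_yz - i_xy_z              # -1 bit: purely synergistic
```

In PID terms, XOR has zero unique and zero redundant information, so the full joint bit is synergy; in general, co-information equals redundancy minus synergy and cannot separate the two.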
Collapse
Affiliation(s)
- James Kunert-Graf
- Pacific Northwest Research Institute, Seattle, WA 98122, USA; (N.S.); (D.G.)
| | | | | |
Collapse
|
26
|
Ju H, Kim JZ, Beggs JM, Bassett DS. Network structure of cascading neural systems predicts stimulus propagation and recovery. J Neural Eng 2020; 17:056045. [PMID: 33036007 PMCID: PMC11191848 DOI: 10.1088/1741-2552/abbff1] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
Objective: Many neural systems display spontaneous, spatiotemporal patterns of neural activity that are crucial for information processing. While these cascading patterns presumably arise from the underlying network of synaptic connections between neurons, the precise contribution of the network's local and global connectivity to these patterns and information processing remains largely unknown. Approach: Here, we demonstrate how network structure supports information processing through network dynamics in empirical and simulated spiking neurons using mathematical tools from linear systems theory, network control theory, and information theory. Main results: In particular, we show that activity, and the information that it contains, travels through cycles in real and simulated networks. Significance: Broadly, our results demonstrate how cascading neural networks could contribute to cognitive faculties that require lasting activation of neuronal patterns, such as working memory or attention.
Collapse
Affiliation(s)
- Harang Ju
- Neuroscience Graduate Group, University of Pennsylvania, Philadelphia, PA 19104, United States of America
| | - Jason Z Kim
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104, United States of America
| | - John M Beggs
- Department of Physics, Indiana University, Bloomington, IN 47405, United States of America
| | - Danielle S Bassett
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104, United States of America
- Department of Electrical & Systems Engineering, University of Pennsylvania, Philadelphia, PA 19104, United States of America
- Department of Physics & Astronomy, University of Pennsylvania, Philadelphia, PA 19104, United States of America
- Department of Neurology, University of Pennsylvania, Philadelphia, PA 19104, United States of America
- Department of Psychiatry, University of Pennsylvania, Philadelphia, PA 19104, United States of America
- Santa Fe Institute, 1399 Hyde Park Rd, Santa Fe, NM 87501, United States of America
| |
Collapse
|
27
|
Srivastava P, Nozari E, Kim JZ, Ju H, Zhou D, Becker C, Pasqualetti F, Pappas GJ, Bassett DS. Models of communication and control for brain networks: distinctions, convergence, and future outlook. Netw Neurosci 2020; 4:1122-1159. [PMID: 33195951 PMCID: PMC7655113 DOI: 10.1162/netn_a_00158] [Citation(s) in RCA: 30] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2020] [Accepted: 07/21/2020] [Indexed: 12/13/2022] Open
Abstract
Recent advances in computational models of signal propagation and routing in the human brain have underscored the critical role of white-matter structure. A complementary approach has utilized the framework of network control theory to better understand how white matter constrains the manner in which a region or set of regions can direct or control the activity of other regions. Despite the potential for both of these approaches to enhance our understanding of the role of network structure in brain function, little work has sought to understand the relations between them. Here, we seek to explicitly bridge computational models of communication and principles of network control in a conceptual review of the current literature. By drawing comparisons between communication and control models in terms of the level of abstraction, the dynamical complexity, the dependence on network attributes, and the interplay of multiple spatiotemporal scales, we highlight the convergence of and distinctions between the two frameworks. Based on the understanding of the intertwined nature of communication and control in human brain networks, this work provides an integrative perspective for the field and outlines exciting directions for future work.
Collapse
Affiliation(s)
- Pragya Srivastava
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA USA
| | - Erfan Nozari
- Department of Electrical & Systems Engineering, University of Pennsylvania, Philadelphia, PA USA
| | - Jason Z. Kim
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA USA
| | - Harang Ju
- Neuroscience Graduate Group, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA USA
| | - Dale Zhou
- Neuroscience Graduate Group, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA USA
| | - Cassiano Becker
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA USA
| | - Fabio Pasqualetti
- Department of Mechanical Engineering, University of California, Riverside, CA USA
| | - George J. Pappas
- Department of Electrical & Systems Engineering, University of Pennsylvania, Philadelphia, PA USA
| | - Danielle S. Bassett
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA USA
- Department of Electrical & Systems Engineering, University of Pennsylvania, Philadelphia, PA USA
- Department of Physics & Astronomy, University of Pennsylvania, Philadelphia, PA USA
- Department of Neurology, University of Pennsylvania, Philadelphia, PA USA
- Department of Psychiatry, University of Pennsylvania, Philadelphia, PA USA
- Santa Fe Institute, Santa Fe, NM USA
| |
Collapse
|
28
|
Sherrill SP, Timme NM, Beggs JM, Newman EL. Correlated activity favors synergistic processing in local cortical networks in vitro at synaptically relevant timescales. Netw Neurosci 2020; 4:678-697. [PMID: 32885121 PMCID: PMC7462423 DOI: 10.1162/netn_a_00141] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2020] [Accepted: 04/06/2020] [Indexed: 11/19/2022] Open
Abstract
Neural information processing is widely understood to depend on correlations in neuronal activity. However, whether correlation is favorable or not is contentious. Here, we sought to determine how correlated activity and information processing are related in cortical circuits. Using recordings of hundreds of spiking neurons in organotypic cultures of mouse neocortex, we asked whether mutual information between neurons that feed into a common third neuron increased synergistic information processing by the receiving neuron. We found that mutual information and synergistic processing were positively related at synaptic timescales (0.05-14 ms), where mutual information values were low. This effect was mediated by the increase in information transmission-of which synergistic processing is a component-that resulted as mutual information grew. However, at extrasynaptic windows (up to 3,000 ms), where mutual information values were high, the relationship between mutual information and synergistic processing became negative. In this regime, greater mutual information resulted in a disproportionate increase in redundancy relative to information transmission. These results indicate that the emergence of synergistic processing from correlated activity differs according to timescale and correlation regime. In a low-correlation regime, synergistic processing increases with greater correlation, and in a high-correlation regime, synergistic processing decreases with greater correlation.
Collapse
Affiliation(s)
- Samantha P. Sherrill
- Department of Psychological and Brain Sciences and Program in Neuroscience, Indiana University Bloomington, Bloomington, IN, USA
| | - Nicholas M. Timme
- Department of Psychology, Indiana University-Purdue University Indianapolis, Indianapolis, IN, USA
| | - John M. Beggs
- Department of Physics & Program in Neuroscience, Indiana University Bloomington, Bloomington, IN, USA
| | - Ehren L. Newman
- Department of Psychological and Brain Sciences and Program in Neuroscience, Indiana University Bloomington, Bloomington, IN, USA
| |
Collapse
|
29
|
A Method to Present and Analyze Ensembles of Information Sources. ENTROPY 2020; 22:e22050580. [PMID: 33286352 PMCID: PMC7517101 DOI: 10.3390/e22050580] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/29/2020] [Accepted: 05/18/2020] [Indexed: 01/22/2023]
Abstract
Information theory is a powerful tool for analyzing complex systems. In many areas of neuroscience, it is now possible to gather data from large ensembles of neural variables (e.g., data from many neurons, genes, or voxels). The individual variables can be analyzed with information theory to provide estimates of information shared between variables (forming a network between variables), or between neural variables and other variables (e.g., behavior or sensory stimuli). However, it can be difficult to (1) evaluate if the ensemble is significantly different from what would be expected in a purely noisy system and (2) determine if two ensembles are different. Herein, we introduce relatively simple methods to address these problems by analyzing ensembles of information sources. We demonstrate how an ensemble built of mutual information connections can be compared to null surrogate data to determine if the ensemble is significantly different from noise. Next, we show how two ensembles can be compared using a randomization process to determine if the sources in one contain more information than the other. All code necessary to carry out these analyses and demonstrations is provided.
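The surrogate-testing idea can be sketched as follows: estimate mutual information for an observed pair, then compare it against a null distribution built by shuffling one variable to destroy the dependence. The binned plug-in estimator and every parameter below are illustrative choices, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(1)

def mi_binned(a, b, bins=8):
    """Plug-in mutual information (bits) from a 2-D histogram."""
    pxy, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

# One coupled pair, tested against shuffle surrogates.
x = rng.standard_normal(5000)
y = x + 0.5 * rng.standard_normal(5000)

observed = mi_binned(x, y)
null = [mi_binned(x, rng.permutation(y)) for _ in range(200)]
p_value = (1 + sum(n >= observed for n in null)) / (1 + len(null))
```

For a genuinely coupled pair the observed value sits far above the null distribution (which is nonzero only through estimator bias), so the permutation p-value is small; an ensemble-level test repeats this over many pairs.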
Collapse
|
30
|
Abstract
Neural systems are composed of many local processors that generate an output given their many inputs as specified by a transfer function. This paper studies a transfer function that is fundamentally asymmetric and builds on multi-site intracellular recordings indicating that some neocortical pyramidal cells can function as context-sensitive two-point processors in which some inputs modulate the strength with which they transmit information about other inputs. Learning and processing at the level of the local processor can then be guided by the context of activity in the system as a whole without corrupting the message that the local processor transmits. We use a recent advance in the foundations of information theory to compare the properties of this modulatory transfer function with that of the simple arithmetic operators. This advance enables the information transmitted by processors with two distinct inputs to be decomposed into those components unique to each input, that shared between the two inputs, and that which depends on both though it is in neither, i.e., synergy. We show that contextual modulation is fundamentally asymmetric, contrasts with all four simple arithmetic operators, can take various forms, and can occur together with the anatomical asymmetry that defines pyramidal neurons in mammalian neocortex.
Collapse
|
31
|
Novelli L, Atay FM, Jost J, Lizier JT. Deriving pairwise transfer entropy from network structure and motifs. Proc Math Phys Eng Sci 2020; 476:20190779. [PMID: 32398937 PMCID: PMC7209155 DOI: 10.1098/rspa.2019.0779] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2019] [Accepted: 03/24/2020] [Indexed: 11/12/2022] Open
Abstract
Transfer entropy (TE) is an established method for quantifying directed statistical dependencies in neuroimaging and complex systems datasets. The pairwise (or bivariate) TE from a source to a target node in a network does not depend solely on the local source-target link weight, but on the wider network structure that the link is embedded in. This relationship is studied using a discrete-time linearly coupled Gaussian model, which allows us to derive the TE for each link from the network topology. It is shown analytically that the dependence on the directed link weight is only a first approximation, valid for weak coupling. More generally, the TE increases with the in-degree of the source and decreases with the in-degree of the target, indicating an asymmetry of information transfer between hubs and low-degree nodes. In addition, the TE is directly proportional to weighted motif counts involving common parents or multiple walks from the source to the target, which are more abundant in networks with a high clustering coefficient than in random networks. Our findings also apply to Granger causality, which is equivalent to TE for Gaussian variables. Moreover, similar empirical results on random Boolean networks suggest that the dependence of the TE on the in-degree extends to nonlinear dynamics.
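For Gaussian variables, where TE coincides with Granger causality, the pairwise TE of a link can be estimated from regression residual variances: the log ratio of the target's prediction error with and without the source's past. A sketch on a minimal two-node linearly coupled system (the coupling and autoregressive values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
T, c = 50_000, 0.4  # series length and y -> x coupling (illustrative)

# Linearly coupled Gaussian pair: y drives x at lag 1.
x, y = np.zeros(T), np.zeros(T)
ex, ey = rng.standard_normal(T), rng.standard_normal(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + ey[t]
    x[t] = 0.5 * x[t - 1] + c * y[t - 1] + ex[t]

def resid_var(target, predictors):
    """Residual variance of a least-squares fit with an intercept."""
    A = np.column_stack(predictors + [np.ones(len(target))])
    beta, *_ = np.linalg.lstsq(A, target, rcond=None)
    return (target - A @ beta).var()

xt, xp, yp = x[1:], x[:-1], y[:-1]
# TE(y -> x) in nats = 0.5 * log of the variance reduction from adding y's past.
te = 0.5 * np.log(resid_var(xt, [xp]) / resid_var(xt, [xp, yp]))
```

With the values above the estimate lands near the analytic TE of roughly 0.09 nats; reversing the roles of x and y would give a TE near zero, since x does not drive y.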
Collapse
Affiliation(s)
- Leonardo Novelli
- Centre for Complex Systems, Faculty of Engineering, The University of Sydney, Sydney, Australia
| | - Fatihcan M. Atay
- Department of Mathematics, Bilkent University, 06800 Ankara, Turkey
- Max Planck Institute for Mathematics in the Sciences, Inselstraße 22, 04103 Leipzig, Germany
| | - Jürgen Jost
- Max Planck Institute for Mathematics in the Sciences, Inselstraße 22, 04103 Leipzig, Germany
- Santa Fe Institute for the Sciences of Complexity, Santa Fe, New Mexico 87501, USA
| | - Joseph T. Lizier
- Centre for Complex Systems, Faculty of Engineering, The University of Sydney, Sydney, Australia
- Max Planck Institute for Mathematics in the Sciences, Inselstraße 22, 04103 Leipzig, Germany
| |
Collapse
|
32
|
Finn C, Lizier JT. Generalised Measures of Multivariate Information Content. ENTROPY (BASEL, SWITZERLAND) 2020; 22:E216. [PMID: 33285991 PMCID: PMC7851747 DOI: 10.3390/e22020216] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/11/2019] [Revised: 02/05/2020] [Accepted: 02/12/2020] [Indexed: 12/12/2022]
Abstract
The entropy of a pair of random variables is commonly depicted using a Venn diagram. This representation is potentially misleading, however, since the multivariate mutual information can be negative. This paper presents new measures of multivariate information content that can be accurately depicted using Venn diagrams for any number of random variables. These measures complement the existing measures of multivariate mutual information and are constructed by considering the algebraic structure of information sharing. It is shown that the distinct ways in which a set of marginal observers can share their information with a non-observing third party corresponds to the elements of a free distributive lattice. The redundancy lattice from partial information decomposition is then derived independently by combining the algebraic structures of joint and shared information content.
Collapse
Affiliation(s)
- Conor Finn
- Centre for Complex Systems, The University of Sydney, Sydney NSW 2006, Australia;
- CSIRO Data61, Marsfield NSW 2122, Australia
| | - Joseph T. Lizier
- Centre for Complex Systems, The University of Sydney, Sydney NSW 2006, Australia;
| |
Collapse
|
33
|
Li M, Han Y, Aburn MJ, Breakspear M, Poldrack RA, Shine JM, Lizier JT. Transitions in information processing dynamics at the whole-brain network level are driven by alterations in neural gain. PLoS Comput Biol 2019; 15:e1006957. [PMID: 31613882 PMCID: PMC6793849 DOI: 10.1371/journal.pcbi.1006957] [Citation(s) in RCA: 39] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2019] [Accepted: 09/02/2019] [Indexed: 12/20/2022] Open
Abstract
A key component of the flexibility and complexity of the brain is its ability to dynamically adapt its functional network structure between integrated and segregated brain states depending on the demands of different cognitive tasks. Integrated states are prevalent when performing tasks of high complexity, such as maintaining items in working memory, consistent with models of a global workspace architecture. Recent work has suggested that the balance between integration and segregation is under the control of ascending neuromodulatory systems, such as the noradrenergic system, via changes in neural gain (in terms of the amplification and non-linearity in stimulus-response transfer function of brain regions). In a previous large-scale nonlinear oscillator model of neuronal network dynamics, we showed that manipulating neural gain parameters led to a 'critical' transition in phase synchrony that was associated with a shift from segregated to integrated topology, thus confirming our original prediction. In this study, we advance these results by demonstrating that the gain-mediated phase transition is characterized by a shift in the underlying dynamics of neural information processing. Specifically, the dynamics of the subcritical (segregated) regime are dominated by information storage, whereas the supercritical (integrated) regime is associated with increased information transfer (measured via transfer entropy). Operating near to the critical regime with respect to modulating neural gain parameters would thus appear to provide computational advantages, offering flexibility in the information processing that can be performed with only subtle changes in gain control. Our results thus link studies of whole-brain network topology and the ascending arousal system with information processing dynamics, and suggest that the constraints imposed by the ascending arousal system constrain low-dimensional modes of information processing within the brain.
Collapse
Affiliation(s)
- Mike Li
- Centre for Complex Systems, The University of Sydney, Sydney, Australia
- Brain and Mind Centre, The University of Sydney, Sydney, Australia
- Complex Systems Research Group, Faculty of Engineering, The University of Sydney, Sydney, Australia
| | - Yinuo Han
- Centre for Complex Systems, The University of Sydney, Sydney, Australia
- Brain and Mind Centre, The University of Sydney, Sydney, Australia
| | - Matthew J. Aburn
- QIMR Berghofer Medical Research Institute, Queensland, Australia
| | | | - Russell A. Poldrack
- Department of Psychology, Stanford University, Stanford, California, United States of America
| | - James M. Shine
- Centre for Complex Systems, The University of Sydney, Sydney, Australia
- Brain and Mind Centre, The University of Sydney, Sydney, Australia
| | - Joseph T. Lizier
- Centre for Complex Systems, The University of Sydney, Sydney, Australia
- Complex Systems Research Group, Faculty of Engineering, The University of Sydney, Sydney, Australia
| |
Collapse
|
34
|
Gu Y, Qi Y, Gong P. Rich-club connectivity, diverse population coupling, and dynamical activity patterns emerging from local cortical circuits. PLoS Comput Biol 2019; 15:e1006902. [PMID: 30939135 PMCID: PMC6461296 DOI: 10.1371/journal.pcbi.1006902] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2018] [Revised: 04/12/2019] [Accepted: 02/25/2019] [Indexed: 11/19/2022] Open
Abstract
Experimental studies have begun revealing essential properties of the structural connectivity and the spatiotemporal activity dynamics of cortical circuits. To integrate these properties from anatomy and physiology, and to elucidate the links between them, we develop a novel cortical circuit model that captures a range of realistic features of synaptic connectivity. We show that the model accounts for the emergence of higher-order connectivity structures, including highly connected hub neurons that form an interconnected rich-club. The circuit model exhibits a rich repertoire of dynamical activity states, ranging from asynchronous to localized and global propagating wave states. We find that around the transition between asynchronous and localized propagating wave states, our model quantitatively reproduces a variety of major empirical findings regarding neural spatiotemporal dynamics, which otherwise remain disjointed in existing studies. These dynamics include diverse coupling (correlation) between spiking activity of individual neurons and the population, dynamical wave patterns with variable speeds and precise temporal structures of neural spikes. We further illustrate how these neural dynamics are related to the connectivity properties by analysing structural contributions to variable spiking dynamics and by showing that the rich-club structure is related to the diverse population coupling. These findings establish an integrated account of structural connectivity and activity dynamics of local cortical circuits, and provide new insights into understanding their working mechanisms.
Collapse
Affiliation(s)
- Yifan Gu
- School of Physics, University of Sydney, New South Wales, Australia
- ARC Centre of Excellence for Integrative Brain Function, University of Sydney, New South Wales, Australia
| | - Yang Qi
- School of Physics, University of Sydney, New South Wales, Australia
- ARC Centre of Excellence for Integrative Brain Function, University of Sydney, New South Wales, Australia
| | - Pulin Gong
- School of Physics, University of Sydney, New South Wales, Australia
- ARC Centre of Excellence for Integrative Brain Function, University of Sydney, New South Wales, Australia
- * E-mail:
| |
Collapse
|
35
|
Ray SK, Valentini G, Shah P, Haque A, Reid CR, Weber GF, Garnier S. Information Transfer During Food Choice in the Slime Mold Physarum polycephalum. Front Ecol Evol 2019. [DOI: 10.3389/fevo.2019.00067] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022] Open
|
36
|
Faber SP, Timme NM, Beggs JM, Newman EL. Computation is concentrated in rich clubs of local cortical networks. Netw Neurosci 2019; 3:384-404. [PMID: 30793088 PMCID: PMC6370472 DOI: 10.1162/netn_a_00069] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2018] [Accepted: 08/30/2018] [Indexed: 01/08/2023] Open
Abstract
To understand how neural circuits process information, it is essential to identify the relationship between computation and circuit organization. Rich clubs, highly interconnected sets of neurons, are known to propagate a disproportionate amount of information within cortical circuits. Here, we test the hypothesis that rich clubs also perform a disproportionate amount of computation. To do so, we recorded the spiking activity of on average ∼300 well-isolated individual neurons from organotypic cortical cultures. We then constructed weighted, directed networks reflecting the effective connectivity between the neurons. For each neuron, we quantified the amount of computation it performed based on its inputs. We found that rich-club neurons compute ∼160% more information than neurons outside of the rich club. The amount of computation performed in the rich club was proportional to the amount of information propagation by the same neurons. This suggests that in these circuits, information propagation drives computation. In total, our findings indicate that rich-club organization in effective cortical circuits supports not only information propagation but also neural computation.
Collapse
Affiliation(s)
- Samantha P. Faber
- Department of Psychological and Brain Sciences, Indiana University Bloomington, Bloomington, IN, USA
| | - Nicholas M. Timme
- Department of Psychology, Indiana University-Purdue University Indianapolis, Indianapolis, IN, USA
| | - John M. Beggs
- Department of Physics, Indiana University Bloomington, Bloomington, IN, USA
| | - Ehren L. Newman
- Department of Psychological and Brain Sciences, Indiana University Bloomington, Bloomington, IN, USA
| |
Collapse
|
38
|
Wu S, Zhang Y, Cui Y, Li H, Wang J, Guo L, Xia Y, Yao D, Xu P, Guo D. Heterogeneity of synaptic input connectivity regulates spike-based neuronal avalanches. Neural Netw 2018; 110:91-103. [PMID: 30508808 DOI: 10.1016/j.neunet.2018.10.017] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2018] [Revised: 09/26/2018] [Accepted: 10/30/2018] [Indexed: 10/27/2022]
Abstract
Our mysterious brain is believed to operate near a non-equilibrium point and generate critical self-organized avalanches in neuronal activity. A central topic in neuroscience is to elucidate the underlying circuitry mechanisms of neuronal avalanches in the brain. Recent experimental evidence has revealed significant heterogeneity in both synaptic input and output connectivity, but whether this structural heterogeneity participates in the regulation of neuronal avalanches remains poorly understood. By computational modeling, we predict that different types of structural heterogeneity have distinct effects on avalanche dynamics. In particular, neuronal avalanches can be triggered at an intermediate level of input heterogeneity, but heterogeneous output connectivity cannot evoke avalanche dynamics. In the criticality region, the co-emergence of multi-scale cortical activities is observed, and both the avalanche dynamics and neuronal oscillations are modulated by the input heterogeneity. Remarkably, we show that similar results can be reproduced in networks with various types of in- and out-degree distributions. Overall, these findings not only detail the circuitry mechanisms by which nonrandom synaptic connectivity regulates neuronal avalanches, but also inspire testable hypotheses for future experimental studies.
Collapse
Affiliation(s)
- Shengdun Wu
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu 611731, People's Republic of China
| | - Yangsong Zhang
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu 611731, People's Republic of China; School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu 611731, People's Republic of China
| | - Yan Cui
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu 611731, People's Republic of China
| | - Heng Li
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu 611731, People's Republic of China
| | - Jiakang Wang
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu 611731, People's Republic of China
| | - Lijun Guo
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu 611731, People's Republic of China
| | - Yang Xia
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu 611731, People's Republic of China; School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu 611731, People's Republic of China
| | - Dezhong Yao
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu 611731, People's Republic of China; School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu 611731, People's Republic of China
| | - Peng Xu
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu 611731, People's Republic of China; School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu 611731, People's Republic of China
| | - Daqing Guo
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu 611731, People's Republic of China; School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu 611731, People's Republic of China.
| |
Collapse
|
39
|
Walker BL, Newhall KA. Inferring information flow in spike-train data sets using a trial-shuffle method. PLoS One 2018; 13:e0206977. [PMID: 30403739 PMCID: PMC6221339 DOI: 10.1371/journal.pone.0206977] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2018] [Accepted: 10/23/2018] [Indexed: 11/18/2022] Open
Abstract
Understanding information processing in the brain requires the ability to determine the functional connectivity between the different regions of the brain. We present a method using transfer entropy to extract this flow of information between brain regions from spike-train data commonly obtained in neurological experiments. Transfer entropy is a statistical measure grounded in information theory that attempts to quantify the information flow from one process to another, and has been applied to find connectivity in simulated spike-train data. Due to statistical error in the estimator, inferring functional connectivity requires a method for determining the significance of the transfer entropy values. We discuss the issues with numerical estimation of transfer entropy and the resulting challenges in determining significance before presenting the trial-shuffle method as a viable option. The trial-shuffle method, for spike-train data that is split into multiple trials, determines significant transfer entropy values independently for each individual pair of neurons by comparing them to a constructed baseline distribution using a rigorous statistical test. This is in contrast to either globally comparing all neuron transfer entropy values or comparing pairwise values to a single baseline value. In establishing the viability of this method by comparison to several alternative approaches in the literature, we find evidence that preserving the inter-spike-interval timing is important. We then use the trial-shuffle method to investigate information flow within a model network as we vary model parameters. This includes investigating the global flow of information within a connectivity network divided into two well-connected subnetworks, going beyond local transfer of information between pairs of neurons.
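The transfer-entropy quantity underlying this method can be illustrated with a minimal plug-in estimator on binary spike trains (a sketch with history length 1 and invented data, not the authors' implementation):

```python
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """Transfer entropy from binary series x to y (history length 1), in bits:
    how much x's past improves prediction of y beyond y's own past."""
    n = len(x) - 1
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_next, y_past, x_past)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    past_y = Counter(y[:-1])
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_cond_full = c / pairs_yx[(y0, x0)]           # p(y_next | y_past, x_past)
        p_cond_self = pairs_yy[(y1, y0)] / past_y[y0]  # p(y_next | y_past)
        te += (c / n) * log2(p_cond_full / p_cond_self)
    return te

# y copies x with a one-step delay, so information flows x -> y:
x = [0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0]
y = [0] + x[:-1]
print(transfer_entropy(x, y))  # large: x's past fully determines y's next value
print(transfer_entropy(y, x))  # much smaller in the reverse direction
```

The trial-shuffle significance test described in the abstract would compare each observed value against a baseline distribution built by re-pairing the source and target series across trials; that bookkeeping is omitted here.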
Collapse
Affiliation(s)
- Benjamin L. Walker
- Department of Mathematics, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States of America
| | - Katherine A. Newhall
- Department of Mathematics, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States of America
| |
Collapse
|
40
|
Schmidt M, Bakker R, Shen K, Bezgin G, Diesmann M, van Albada SJ. A multi-scale layer-resolved spiking network model of resting-state dynamics in macaque visual cortical areas. PLoS Comput Biol 2018; 14:e1006359. [PMID: 30335761 PMCID: PMC6193609 DOI: 10.1371/journal.pcbi.1006359] [Citation(s) in RCA: 47] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2017] [Accepted: 07/12/2018] [Indexed: 11/28/2022] Open
Abstract
Cortical activity has distinct features across scales, from the spiking statistics of individual cells to global resting-state networks. We here describe the first full-density multi-area spiking network model of cortex, using macaque visual cortex as a test system. The model represents each area by a microcircuit with area-specific architecture and features layer- and population-resolved connectivity between areas. Simulations reveal a structured asynchronous irregular ground state. In a metastable regime, the network reproduces spiking statistics from electrophysiological recordings and cortico-cortical interaction patterns in fMRI functional connectivity under resting-state conditions. Stable inter-area propagation is supported by cortico-cortical synapses that are moderately strong onto excitatory neurons and stronger onto inhibitory neurons. Causal interactions depend on both cortical structure and the dynamical state of populations. Activity propagates mainly in the feedback direction, similar to experimental results associated with visual imagery and sleep. The model unifies local and large-scale accounts of cortex, and clarifies how the detailed connectivity of cortex shapes its dynamics on multiple scales. Based on our simulations, we hypothesize that in the spontaneous condition the brain operates in a metastable regime where cortico-cortical projections target excitatory and inhibitory populations in a balanced manner that produces substantial inter-area interactions while maintaining global stability. The mammalian cortex fulfills its complex tasks by operating on multiple temporal and spatial scales from single cells to entire areas comprising millions of cells. These multi-scale dynamics are supported by specific network structures at all levels of organization. Since models of cortex hitherto tend to concentrate on a single scale, little is known about how cortical structure shapes the multi-scale dynamics of the network. 
We here present dynamical simulations of a multi-area network model at neuronal and synaptic resolution with population-specific connectivity based on extensive experimental data which accounts for a wide range of dynamical phenomena. Our model elucidates relationships between local and global scales in cortex and provides a platform for future studies of cortical function.
Collapse
Affiliation(s)
- Maximilian Schmidt
- Laboratory for Neural Coding and Brain Computing, RIKEN Center for Brain Science, Wako-Shi, Saitama, Japan
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
| | - Rembrandt Bakker
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Donders Institute for Brain, Cognition and Behavior, Radboud University Nijmegen, Nijmegen, Netherlands
| | - Kelly Shen
- Rotman Research Institute, Baycrest, Toronto, Ontario, Canada
| | - Gleb Bezgin
- McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montreal, Canada
| | - Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Department of Physics, RWTH Aachen University, Aachen, Germany
| | - Sacha Jennifer van Albada
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
| |
Collapse
|
41
|
Chambers B, Levy M, Dechery JB, MacLean JN. Ensemble stacking mitigates biases in inference of synaptic connectivity. Netw Neurosci 2018; 2:60-85. [PMID: 29911678 PMCID: PMC5989998 DOI: 10.1162/netn_a_00032] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2017] [Accepted: 10/11/2017] [Indexed: 01/26/2023] Open
Abstract
A promising alternative to directly measuring the anatomical connections in a neuronal population is inferring the connections from the activity. We employ simulated spiking neuronal networks to compare and contrast commonly used inference methods that identify likely excitatory synaptic connections using statistical regularities in spike timing. We find that simple adjustments to standard algorithms improve inference accuracy: A signing procedure improves the power of unsigned mutual-information-based approaches and a correction that accounts for differences in mean and variance of background timing relationships, such as those expected to be induced by heterogeneous firing rates, increases the sensitivity of frequency-based methods. We also find that different inference methods reveal distinct subsets of the synaptic network and each method exhibits different biases in the accurate detection of reciprocity and local clustering. To correct for errors and biases specific to single inference algorithms, we combine methods into an ensemble. Ensemble predictions, generated as a linear combination of multiple inference algorithms, are more sensitive than the best individual measures alone, and are more faithful to ground-truth statistics of connectivity, mitigating biases specific to single inference methods. These weightings generalize across simulated datasets, emphasizing the potential for the broad utility of ensemble-based approaches.
Collapse
Affiliation(s)
- Brendan Chambers
- Committee on Computational Neuroscience, University of Chicago, Chicago, IL, USA
| | - Maayan Levy
- Committee on Computational Neuroscience, University of Chicago, Chicago, IL, USA
| | - Joseph B Dechery
- Committee on Computational Neuroscience, University of Chicago, Chicago, IL, USA
| | - Jason N MacLean
- Committee on Computational Neuroscience, University of Chicago, Chicago, IL, USA
- Department of Neurobiology, University of Chicago, Chicago, IL, USA
| |
Collapse
|
42
|
Dopaminergic modulation of hemodynamic signal variability and the functional connectome during cognitive performance. Neuroimage 2018; 172:341-356. [DOI: 10.1016/j.neuroimage.2018.01.048] [Citation(s) in RCA: 44] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2017] [Revised: 01/15/2018] [Accepted: 01/18/2018] [Indexed: 11/19/2022] Open
|
43
|
Timme NM, Lapish C. A Tutorial for Information Theory in Neuroscience. eNeuro 2018; 5:ENEURO.0052-18.2018. [PMID: 30211307 PMCID: PMC6131830 DOI: 10.1523/eneuro.0052-18.2018] [Citation(s) in RCA: 112] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2018] [Revised: 04/10/2018] [Accepted: 05/30/2018] [Indexed: 11/21/2022] Open
Abstract
Understanding how neural systems integrate, encode, and compute information is central to understanding brain function. Frequently, data from neuroscience experiments are multivariate, the interactions between the variables are nonlinear, and the landscape of hypothesized or possible interactions between variables is extremely broad. Information theory is well suited to address these types of data, as it possesses multivariate analysis tools, it can be applied to many different types of data, it can capture nonlinear interactions, and it does not require assumptions about the structure of the underlying data (i.e., it is model independent). In this article, we walk through the mathematics of information theory along with common logistical problems associated with data type, data binning, data quantity requirements, bias, and significance testing. Next, we analyze models inspired by canonical neuroscience experiments to improve understanding and demonstrate the strengths of information theory analyses. To facilitate the use of information theory analyses, and an understanding of how these analyses are implemented, we also provide a free MATLAB software package that can be applied to a wide range of data from neuroscience experiments, as well as from other fields of study.
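The most basic quantity covered by such a tutorial, mutual information between discrete variables, can be estimated with a simple plug-in computation (a sketch with invented data; the authors provide a MATLAB toolbox, not this code):

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """I(X;Y) in bits from paired discrete observations (plug-in estimate)."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px = Counter(xs)
    py = Counter(ys)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# A stimulus perfectly decoded from a response carries H(stimulus) bits:
stim = [0, 0, 1, 1, 0, 0, 1, 1]
resp = [5, 5, 9, 9, 5, 5, 9, 9]        # response is a relabeling of the stimulus
print(mutual_information(stim, resp))  # 1.0
# An unrelated response carries none:
print(mutual_information(stim, [1, 2, 1, 2, 1, 2, 1, 2]))  # 0.0
```

As the tutorial emphasizes, plug-in estimates like this one are biased upward for small samples, so binning, bias correction, and significance testing matter in practice.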
Collapse
Affiliation(s)
- Nicholas M Timme
- Department of Psychology, Indiana University - Purdue University Indianapolis, 402 N. Blackford St, Indianapolis, IN 46202
| | - Christopher Lapish
- Department of Psychology, Indiana University - Purdue University Indianapolis, 402 N. Blackford St, Indianapolis, IN 46202
| |
Collapse
|
44
|
Lizier JT, Bertschinger N, Jost J, Wibral M. Information Decomposition of Target Effects from Multi-Source Interactions: Perspectives on Previous, Current and Future Work. ENTROPY 2018; 20:e20040307. [PMID: 33265398 PMCID: PMC7512824 DOI: 10.3390/e20040307] [Citation(s) in RCA: 71] [Impact Index Per Article: 10.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/19/2018] [Revised: 04/19/2018] [Accepted: 04/19/2018] [Indexed: 11/29/2022]
Abstract
The formulation of the Partial Information Decomposition (PID) framework by Williams and Beer in 2010 attracted a significant amount of attention to the problem of defining redundant (or shared), unique and synergistic (or complementary) components of mutual information that a set of source variables provides about a target. This attention resulted in a number of measures proposed to capture these concepts, theoretical investigations into such measures, and applications to empirical data (in particular to datasets from neuroscience). In this Special Issue on “Information Decomposition of Target Effects from Multi-Source Interactions” at Entropy, we have gathered current work on such information decomposition approaches from many of the leading research groups in the field. We begin our editorial by providing the reader with a review of previous information decomposition research, including an overview of the variety of measures proposed, how they have been interpreted and applied to empirical investigations. We then introduce the articles included in the special issue one by one, providing a similar categorisation of these articles into: i. proposals of new measures; ii. theoretical investigations into properties and interpretations of such approaches, and iii. applications of these measures in empirical studies. We finish by providing an outlook on the future of the field.
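The original Williams-Beer decomposition referenced here, with its I_min redundancy measure, can be computed directly for two discrete sources and one target (a minimal sketch of that one measure, not of the many alternatives surveyed in the special issue):

```python
from collections import Counter
from math import log2

def pid_imin(samples):
    """Williams-Beer partial information decomposition (I_min redundancy)
    for two discrete sources and one target. samples: list of (s1, s2, t)."""
    n = len(samples)
    pt = Counter(t for _, _, t in samples)

    def specific_info(source_of):
        # I_spec(t): information the source provides about the specific outcome t
        pst = Counter((source_of(smp), smp[2]) for smp in samples)
        ps = Counter(source_of(smp) for smp in samples)
        spec = {}
        for t, ct in pt.items():
            total = 0.0
            for (s, t2), c in pst.items():
                if t2 == t:
                    # p(s|t) * [log2 p(t|s) - log2 p(t)]
                    total += (c / ct) * (log2(c / ps[s]) - log2(ct / n))
            spec[t] = total
        return spec

    spec1 = specific_info(lambda smp: smp[0])
    spec2 = specific_info(lambda smp: smp[1])
    spec12 = specific_info(lambda smp: (smp[0], smp[1]))

    redundant = sum((ct / n) * min(spec1[t], spec2[t]) for t, ct in pt.items())
    mi1 = sum((ct / n) * spec1[t] for t, ct in pt.items())
    mi2 = sum((ct / n) * spec2[t] for t, ct in pt.items())
    mi12 = sum((ct / n) * spec12[t] for t, ct in pt.items())
    unique1, unique2 = mi1 - redundant, mi2 - redundant
    return {"redundant": redundant, "unique1": unique1, "unique2": unique2,
            "synergistic": mi12 - redundant - unique1 - unique2}

# XOR target: neither source alone is informative, together they determine t.
xor = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(pid_imin(xor))                      # synergy = 1 bit, all other atoms 0
print(pid_imin([(0, 0, 0), (1, 1, 1)]))   # duplicated source: redundancy = 1 bit
```

The XOR example is the canonical case of pure synergy; the duplicated-source example is the canonical case of pure redundancy.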
Collapse
Affiliation(s)
- Joseph T. Lizier
- Complex Systems Research Group and Centre for Complex Systems, Faculty of Engineering & IT, The University of Sydney, NSW 2006, Australia
- Correspondence: Tel.: +61-2-9351-3208
| | - Nils Bertschinger
- Frankfurt Institute of Advanced Studies (FIAS) and Goethe University, 60438 Frankfurt am Main, Germany
| | - Jürgen Jost
- Max Planck Institute for Mathematics in the Sciences, Inselstraße 22, 04103 Leipzig, Germany
- Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM 87501, USA
| | - Michael Wibral
- MEG Unit, Brain Imaging Center, Goethe University, 60528 Frankfurt, Germany
- Max Planck Institute for Dynamics and Self-Organization, 37077 Göttingen, Germany
| |
Collapse
|
45
|
Chicharro D, Pica G, Panzeri S. The Identity of Information: How Deterministic Dependencies Constrain Information Synergy and Redundancy. ENTROPY (BASEL, SWITZERLAND) 2018; 20:e20030169. [PMID: 33265260 PMCID: PMC7512685 DOI: 10.3390/e20030169] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/13/2017] [Revised: 02/26/2018] [Accepted: 02/28/2018] [Indexed: 06/12/2023]
Abstract
Understanding how different information sources together transmit information is crucial in many domains. For example, understanding the neural code requires characterizing how different neurons contribute unique, redundant, or synergistic pieces of information about sensory or behavioral variables. Williams and Beer (2010) proposed a partial information decomposition (PID) that separates the mutual information that a set of sources contains about a set of targets into nonnegative terms interpretable as these pieces. Quantifying redundancy requires assigning an identity to different information pieces, to assess when information is common across sources. Harder et al. (2013) proposed an identity axiom that imposes necessary conditions to quantify qualitatively common information. However, Bertschinger et al. (2012) showed that, in a counterexample with deterministic target-source dependencies, the identity axiom is incompatible with ensuring PID nonnegativity. Here, we study systematically the consequences of information identity criteria that assign identity based on associations between target and source variables resulting from deterministic dependencies. We show how these criteria are related to the identity axiom and to previously proposed redundancy measures, and we characterize how they lead to negative PID terms. This constitutes a further step to more explicitly address the role of information identity in the quantification of redundancy. The implications for studying neural coding are discussed.
Collapse
Affiliation(s)
- Daniel Chicharro
- Department of Neurobiology, Harvard Medical School, Boston, MA 02115, USA
- Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems@UniTn, Istituto Italiano di Tecnologia, Rovereto (TN) 38068, Italy
| | - Giuseppe Pica
- Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems@UniTn, Istituto Italiano di Tecnologia, Rovereto (TN) 38068, Italy
| | - Stefano Panzeri
- Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems@UniTn, Istituto Italiano di Tecnologia, Rovereto (TN) 38068, Italy
| |
Collapse
|
46
|
Efficient communication dynamics on macro-connectome, and the propagation speed. Sci Rep 2018; 8:2510. [PMID: 29410439 PMCID: PMC5802747 DOI: 10.1038/s41598-018-20591-y] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2017] [Accepted: 01/22/2018] [Indexed: 01/21/2023] Open
Abstract
Global communication dynamics in the brain can be captured using fMRI, MEG, or electrocorticography (ECoG), and the global slow dynamics often reflect anatomical constraints. Complementary single-/multi-unit recordings have described local fast temporal dynamics. However, global fast temporal dynamics, and how anatomical constraints shape them, remain incompletely understood. Therefore, we compared the temporal aspects of cross-area propagation in single-unit recordings and ECoG, and investigated their anatomical bases. First, we demonstrated that both evoked and spontaneous ECoG can accurately predict the latencies of single-unit recordings. Next, we estimated the propagation velocity (1.0–1.5 m/s) from brain-wide data and found that it was fairly stable across different levels of consciousness. We also found that shortest paths in the anatomical topology strongly predicted the latencies. Finally, using Communicability, a graph-theoretic measure, we estimated that more than 90% of paths should use shortest paths, with the remainder being non-shortest walks. These results revealed that the macro-connectome is efficiently wired to support detailed communication dynamics in the brain.
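The shortest-path latency prediction described above can be sketched with a plain breadth-first search over an unweighted area graph (the area names, connections, and the 10 ms per-hop delay below are invented for illustration, not taken from the study):

```python
from collections import deque

def shortest_path_lengths(adj, source):
    """BFS hop counts from `source` over an unweighted anatomical graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nbr in adj.get(node, ()):
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return dist

# Toy visual-area graph; latency prediction assumes each hop adds a fixed
# conduction delay (path length / velocity), here 10 ms per hop.
adj = {"V1": ["V2"], "V2": ["V1", "V4", "MT"], "V4": ["V2", "TEO"],
       "MT": ["V2"], "TEO": ["V4"]}
hops = shortest_path_lengths(adj, "V1")
print({area: 10 * h for area, h in hops.items()})  # predicted latencies in ms
```

The study's Communicability analysis generalizes this by weighting all walks, not just shortest paths; that extension is not shown here.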
Collapse
|
47
|
Blackwell JM, Geffen MN. Progress and challenges for understanding the function of cortical microcircuits in auditory processing. Nat Commun 2017; 8:2165. [PMID: 29255268 PMCID: PMC5735136 DOI: 10.1038/s41467-017-01755-2] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2017] [Accepted: 10/12/2017] [Indexed: 12/21/2022] Open
Abstract
An important outstanding question in auditory neuroscience is to identify the mechanisms by which specific motifs within inter-connected neural circuits affect auditory processing and, ultimately, behavior. In the auditory cortex, a combination of large-scale electrophysiological recordings and concurrent optogenetic manipulations are improving our understanding of the role of inhibitory–excitatory interactions. At the same time, computational approaches have grown to incorporate diverse neuronal types and connectivity patterns. However, we are still far from understanding how cortical microcircuits encode and transmit information about complex acoustic scenes. In this review, we focus on recent results identifying the special function of different cortical neurons in the auditory cortex and discuss a computational framework for future work that incorporates ideas from network science and network dynamics toward the coding of complex auditory scenes. Advances in multi-neuron recordings and optogenetic manipulation have resulted in an interrogation of the function of specific cortical cell types in auditory cortex during sound processing. Here, the authors review this literature and discuss the merits of integrating computational approaches from dynamic network science.
Collapse
Affiliation(s)
- Jennifer M Blackwell
- Department of Otorhinolaryngology: HNS, Department of Neuroscience, Neuroscience Graduate Group, Computational Neuroscience Initiative, University of Pennsylvania, Philadelphia, PA, 19104, USA
| | - Maria N Geffen
- Department of Otorhinolaryngology: HNS, Department of Neuroscience, Neuroscience Graduate Group, Computational Neuroscience Initiative, University of Pennsylvania, Philadelphia, PA, 19104, USA.
| |
Collapse
|
48
|
|
49
|
Partial and Entropic Information Decompositions of a Neuronal Modulatory Interaction. ENTROPY 2017. [DOI: 10.3390/e19110560] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
|
50
|
Ito T, Kulkarni KR, Schultz DH, Mill RD, Chen RH, Solomyak LI, Cole MW. Cognitive task information is transferred between brain regions via resting-state network topology. Nat Commun 2017; 8:1027. [PMID: 29044112 PMCID: PMC5715061 DOI: 10.1038/s41467-017-01000-w] [Citation(s) in RCA: 111] [Impact Index Per Article: 13.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2017] [Accepted: 08/10/2017] [Indexed: 11/17/2022] Open
Abstract
Resting-state network connectivity has been associated with a variety of cognitive abilities, yet it remains unclear how these connectivity properties might contribute to the neurocognitive computations underlying these abilities. We developed a new approach, information transfer mapping, to test the hypothesis that resting-state functional network topology describes the computational mappings between brain regions that carry cognitive task information. Here, we report that the transfer of diverse, task-rule information in distributed brain regions can be predicted based on estimated activity flow through resting-state network connections. Further, we find that these task-rule information transfers are coordinated by global hub regions within cognitive control networks. Activity flow over resting-state connections thus provides a large-scale network mechanism for cognitive task information transfer and global information coordination in the human brain, demonstrating the cognitive relevance of resting-state network topology. Resting-state functional connections have been associated with cognitive abilities, but it is unclear how these connections contribute to cognition. Here, Ito et al. present a new approach, information transfer mapping, showing that task-relevant information transfer can be predicted from estimated activity flow through resting-state networks.
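The activity-flow estimate at the core of this approach is a connectivity-weighted sum: the predicted task activity of a held-out region is the sum of the other regions' activities, each weighted by its resting-state connectivity to the target (a minimal sketch; the region names and numbers below are invented):

```python
def activity_flow_predict(activity, fc, target):
    """activity: region -> task activation; fc: (source, target) -> resting-state
    connectivity weight. Returns the activity-flow prediction for `target`."""
    return sum(a * fc[(region, target)]
               for region, a in activity.items() if region != target)

activity = {"DLPFC": 1.0, "PPC": 0.5, "V1": -0.5}
fc = {("DLPFC", "ACC"): 0.6, ("PPC", "ACC"): 0.4, ("V1", "ACC"): 0.2}
print(activity_flow_predict(activity, fc, "ACC"))  # 0.6*1.0 + 0.4*0.5 + 0.2*(-0.5) ≈ 0.7
```

Information transfer mapping then asks whether such predicted activity patterns still decode the task rule, which is what links resting-state topology to task computation.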
Collapse
Affiliation(s)
- Takuya Ito
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ, 07102, USA
- Behavioral and Neural Sciences Graduate Program, Rutgers University, Newark, NJ, 07102, USA
| | - Kaustubh R Kulkarni
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ, 07102, USA
| | - Douglas H Schultz
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ, 07102, USA
| | - Ravi D Mill
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ, 07102, USA
| | - Richard H Chen
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ, 07102, USA
- Behavioral and Neural Sciences Graduate Program, Rutgers University, Newark, NJ, 07102, USA
| | - Levi I Solomyak
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ, 07102, USA
| | - Michael W Cole
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ, 07102, USA
| |
Collapse
|