1
Celotto M, Bím J, Tlaie A, De Feo V, Lemke S, Chicharro D, Nili H, Bieler M, Hanganu-Opatz IL, Donner TH, Brovelli A, Panzeri S. An information-theoretic quantification of the content of communication between brain regions. bioRxiv 2023:2023.06.14.544903. PMID: 37398375; PMCID: PMC10312682; DOI: 10.1101/2023.06.14.544903.
Abstract
Quantifying the amount, content and direction of communication between brain regions is key to understanding brain function. Traditional methods to analyze brain activity based on the Wiener-Granger causality principle quantify the overall information propagated by neural activity between simultaneously recorded brain regions, but do not reveal the information flow about specific features of interest (such as sensory stimuli). Here, we develop a new information theoretic measure termed Feature-specific Information Transfer (FIT), quantifying how much information about a specific feature flows between two regions. FIT merges the Wiener-Granger causality principle with information-content specificity. We first derive FIT and prove analytically its key properties. We then illustrate and test these properties with simulations of neural activity, demonstrating that FIT identifies, within the total information flowing between regions, the information that is transmitted about specific features. We then analyze three neural datasets obtained with different recording methods (magnetoencephalography, electroencephalography, and spiking activity) to demonstrate the ability of FIT to uncover the content and direction of information flow between brain regions beyond what can be discerned with traditional analytical methods. FIT can improve our understanding of how brain regions communicate by uncovering previously hidden feature-specific information flow.
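The Wiener-Granger-style total information flow that FIT refines can be illustrated with a plug-in transfer-entropy estimate on discretized signals. This is a toy sketch, not the FIT estimator itself (FIT additionally decomposes the flow by feature content); the binary signals, one-step history, and sample size are illustrative assumptions:

```python
import numpy as np
from collections import Counter

def entropy(symbols):
    """Plug-in Shannon entropy (bits) of a sequence of hashable symbols."""
    counts = np.array(list(Counter(symbols).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def transfer_entropy(x, y):
    """Plug-in transfer entropy x -> y (bits) with one-step history:
    TE = H(y_t | y_{t-1}) - H(y_t | y_{t-1}, x_{t-1})."""
    yt, yp, xp = y[1:], y[:-1], x[:-1]
    h_y_given_past = entropy(list(zip(yt, yp))) - entropy(list(yp))
    h_y_given_both = entropy(list(zip(yt, yp, xp))) - entropy(list(zip(yp, xp)))
    return h_y_given_past - h_y_given_both

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 10000)  # i.i.d. binary "sender" region
y = np.roll(x, 1)              # "receiver" copies the sender with a one-step lag

print(transfer_entropy(x, y))  # close to 1 bit: strong x -> y flow
print(transfer_entropy(y, x))  # close to 0 bits: no y -> x flow
```

Because transfer entropy is feature-agnostic, the estimate above would be identical whether the transmitted bit encoded a stimulus or irrelevant noise; isolating the stimulus-related part of that 1 bit is what FIT is designed to do.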
Affiliation(s)
- Marco Celotto
- Department of Excellence for Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany
- Neural Computation Laboratory, Istituto Italiano di Tecnologia, Rovereto (TN), Italy
- Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
- Jan Bím
- Datamole, s. r. o., Vitezne namesti 577/2, Dejvice, 160 00 Praha 6, Czech Republic
- Alejandro Tlaie
- Neural Computation Laboratory, Istituto Italiano di Tecnologia, Rovereto (TN), Italy
- Vito De Feo
- Artificial Intelligence Team, Future Health Technology, and Brain-Computer Interfaces laboratories, School of Computer Science and Electronic Engineering, University of Essex, Wivenhoe Park, Colchester CO4 3SQ, UK
- Stefan Lemke
- Department of Cell Biology and Physiology, University of North Carolina, Chapel Hill, United States
- Daniel Chicharro
- Department of Computer Science, City, University of London, London, UK
- Hamed Nili
- Department of Excellence for Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany
- Malte Bieler
- Mobile Technology Lab, School of Economics, Innovation and Technology, University College Kristiania, Oslo, Norway
- Ileana L. Hanganu-Opatz
- Institute of Developmental Neurophysiology, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Tobias H. Donner
- Section Computational Cognitive Neuroscience, Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Andrea Brovelli
- Institut de Neurosciences de la Timone, UMR 7289, Aix Marseille Université, CNRS, Marseille, France
- Stefano Panzeri
- Department of Excellence for Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany
- Neural Computation Laboratory, Istituto Italiano di Tecnologia, Rovereto (TN), Italy
2
Discovering Higher-Order Interactions Through Neural Information Decomposition. Entropy 2021; 23:e23010079. PMID: 33430463; PMCID: PMC7827712; DOI: 10.3390/e23010079. Received: 11/03/2020; Revised: 12/21/2020; Accepted: 12/25/2020.
Abstract
If regularity in data takes the form of higher-order functions among groups of variables, models which are biased towards lower-order functions may easily mistake the data for noise. To distinguish whether this is the case, one must be able to quantify the contribution of different orders of dependence to the total information. Recent work in information theory attempts to do this through measures of multivariate mutual information (MMI) and information decomposition (ID). Despite substantial theoretical progress, practical issues related to tractability and learnability of higher-order functions are still largely unaddressed. In this work, we introduce a new approach to information decomposition—termed Neural Information Decomposition (NID)—which is both theoretically grounded, and can be efficiently estimated in practice using neural networks. We show on synthetic data that NID can learn to distinguish higher-order functions from noise, while many unsupervised probability models cannot. Additionally, we demonstrate the usefulness of this framework as a tool for exploring biological and artificial neural networks.
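The failure mode the abstract describes, where models sensitive only to lower-order dependence mistake higher-order structure for noise, is visible in a minimal XOR example: each input is pairwise independent of the output, yet jointly they determine it exactly. The plug-in mutual-information computation below is a toy illustration of this gap, not the NID estimator:

```python
import numpy as np
from itertools import product
from collections import Counter

def entropy(symbols):
    """Plug-in Shannon entropy (bits) of a sequence of hashable symbols."""
    counts = np.array(list(Counter(symbols).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def mi(xs, ys):
    """Plug-in mutual information I(X;Y) in bits from paired samples."""
    return entropy(list(xs)) + entropy(list(ys)) - entropy(list(zip(xs, ys)))

# Enumerate the full XOR distribution: z = x ^ y with uniform binary inputs.
x, y = zip(*product([0, 1], repeat=2))
x, y = np.array(x), np.array(y)
z = x ^ y

print(mi(x, z))                # 0 bits: no pairwise dependence
print(mi(y, z))                # 0 bits
print(mi(list(zip(x, y)), z))  # 1 bit: the dependence is purely third-order
```

A model scoring only pairwise statistics would report zero dependence here; the full bit of information about z lives in the third-order interaction, which is the kind of structure NID is built to expose.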