1
Fakhar K, Hadaeghi F, Seguin C, Dixit S, Messé A, Zamora-López G, Misic B, Hilgetag CC. A general framework for characterizing optimal communication in brain networks. eLife 2025; 13:RP101780. PMID: 40244650; PMCID: PMC12005722; DOI: 10.7554/elife.101780.
Abstract
Efficient communication in brain networks is foundational for cognitive function and behavior. However, how communication efficiency is defined depends on the assumed model of signaling dynamics, e.g., shortest path signaling, random walker navigation, broadcasting, and diffusive processes. Thus, a general and model-agnostic framework for characterizing optimal neural communication is needed. We address this challenge by assigning communication efficiency through a virtual multi-site lesioning regime combined with game theory, applied to large-scale models of human brain dynamics. Our framework quantifies the exact influence each node exerts over every other, generating optimal influence maps given the underlying model of neural dynamics. These descriptions reveal how communication patterns unfold if regions are set to maximize their influence over one another. Comparing these maps with a variety of brain communication models showed that optimal communication closely resembles a broadcasting regime in which regions leverage multiple parallel channels for information dissemination. Moreover, we found that the brain's most influential regions are its rich-club, exploiting their topological vantage point by broadcasting across numerous pathways that enhance their reach even if the underlying connections are weak. Altogether, our work provides a rigorous and versatile framework for characterizing optimal brain communication, uncovers the most influential brain regions, and identifies the topological features underlying their influence.
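The game-theoretic lesioning idea in this abstract can be illustrated with a minimal sketch: a node's Shapley value is its average marginal contribution, over all lesion coalitions, to some measure of communication success. The graph, the reachability-based value function, and all node names below are hypothetical illustrations, not the paper's actual model or data.

```python
from itertools import combinations
from math import factorial

# Hypothetical toy graph: source S reaches target T via intermediates {A, B, C}.
edges = {("S", "A"), ("A", "T"),              # short path through A
         ("S", "B"), ("B", "C"), ("C", "T")}  # longer path through B and C

players = ["A", "B", "C"]

def reachable(intact):
    """1 if T is reachable from S when only `intact` intermediates survive."""
    allowed = set(intact) | {"S", "T"}
    frontier, seen = ["S"], {"S"}
    while frontier:
        u = frontier.pop()
        for a, b in edges:
            if a == u and b in allowed and b not in seen:
                seen.add(b)
                frontier.append(b)
    return 1 if "T" in seen else 0

def shapley(player):
    """Exact Shapley value: weighted average marginal contribution over coalitions."""
    n, total = len(players), 0.0
    others = [p for p in players if p != player]
    for k in range(n):
        for coal in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (reachable(coal + (player,)) - reachable(coal))
    return total

for p in players:
    print(p, shapley(p))  # A carries most influence (2/3); B and C share the rest
```

Here A alone sustains communication, while B and C only matter jointly, so the Shapley attribution (2/3 vs. 1/6 each) captures exactly the kind of multi-site redundancy that single-lesion analysis would miss.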
Affiliation(s)
- Kayson Fakhar
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom
- Institute of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg University, Hamburg Center of Neuroscience, Hamburg, Germany
- Fatemeh Hadaeghi
- Institute of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg University, Hamburg Center of Neuroscience, Hamburg, Germany
- Caio Seguin
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, United States
- Shrey Dixit
- Institute of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg University, Hamburg Center of Neuroscience, Hamburg, Germany
- Department of Psychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- International Max Planck Research School on Cognitive Neuroimaging, Barcelona, Spain
- Arnaud Messé
- Institute of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg University, Hamburg Center of Neuroscience, Hamburg, Germany
- Gorka Zamora-López
- Center for Brain and Cognition, Pompeu Fabra University, Barcelona, Spain
- Department of Information and Communication Technologies, Pompeu Fabra University, Barcelona, Spain
- Bratislav Misic
- McConnell Brain Imaging Centre, Montréal Neurological Institute, McGill University, Montréal, Canada
- Claus C Hilgetag
- Institute of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg University, Hamburg Center of Neuroscience, Hamburg, Germany
- Department of Health Sciences, Boston University, Boston, United States
2
Mao H, Hasse BA, Schwartz AB. Hybrid Neural Network Models Explain Cortical Neuronal Activity During Volitional Movement. bioRxiv 2025:2025.02.20.636945. PMID: 40027649; PMCID: PMC11870545; DOI: 10.1101/2025.02.20.636945.
Abstract
Massive interconnectivity in large-scale neural networks is the key feature underlying their powerful and complex functionality. We have developed hybrid neural network (HNN) models that allow us to find statistical structure in this connectivity. Describing this structure is critical for understanding biological and artificial neural networks. The HNNs are composed of artificial neurons, a subset of which are trained to reproduce the responses of individual neurons recorded experimentally. The experimentally observed firing rates came from populations of neurons recorded in the motor cortices of monkeys performing a reaching task. After training, these networks (recurrent and spiking) underwent the same state transitions as those observed in the empirical data, a result that helps resolve a long-standing question of prescribed vs ongoing control of volitional movement. Because all aspects of the models are exposed, we were able to analyze the dynamic statistics of the connections between neurons. Our results show that the dynamics of extrinsic input to the network changed this connectivity to cause the state transitions. Two processes at the synaptic level were recognized: one in which many different neurons contributed to a buildup of membrane potential and another in which more specific neurons triggered an action potential. HNNs facilitate modeling of realistic neuron-neuron connectivity and provide foundational descriptions of large-scale network functionality.
Affiliation(s)
- Hongwei Mao
- Department of Neurobiology, University of Pittsburgh School of Medicine
- Brady A. Hasse
- Department of Neurobiology, University of Pittsburgh School of Medicine
- Andrew B. Schwartz
- Department of Neurobiology, University of Pittsburgh School of Medicine
- Systems Neuroscience Center, University of Pittsburgh School of Medicine
- Department of Bioengineering, University of Pittsburgh School of Engineering
3
Laasch N, Braun W, Knoff L, Bielecki J, Hilgetag CC. Comparison of derivative-based and correlation-based methods to estimate effective connectivity in neural networks. Sci Rep 2025; 15:5357. PMID: 39948086; PMCID: PMC11825726; DOI: 10.1038/s41598-025-88596-y.
Abstract
Inferring and understanding the underlying connectivity structure of a system solely from the observed activity of its constituent components is a challenge in many areas of science. In neuroscience, techniques for estimating connectivity are paramount when attempting to understand the network structure of neural systems from their recorded activity patterns. To date, no universally accepted method exists for the inference of effective connectivity, which describes how the activity of a neural node mechanistically affects the activity of other nodes. Here, focussing on purely excitatory networks of small to intermediate size and continuous node dynamics, we provide a systematic comparison of different approaches for estimating effective connectivity. Starting with the Hopf neuron model in conjunction with known ground truth structural connectivity, we reconstruct the system's connectivity matrix using a variety of algorithms. We show that, in sparse non-linear networks with delays, combining a lagged-cross-correlation (LCC) approach with a recently published derivative-based covariance analysis method provides the most reliable estimation of the known ground truth connectivity matrix. We outline how the parameters of the Hopf model, including those controlling the bifurcation, noise, and delay distribution, affect this result. We also show that in linear networks, LCC has comparable performance to a method based on transfer entropy, at a drastically lower computational cost. We highlight that LCC works best for small sparse networks, and show how performance decreases in larger and less sparse networks. Applying the method to linear dynamics without time delays, we find that it does not outperform derivative-based methods. We comment on this finding in light of recent theoretical results for such systems. 
Employing the Hopf model, we then use the estimated structural connectivity matrix as the basis for a forward simulation of the system dynamics, in order to recreate the observed node activity patterns. We show that, under certain conditions, the best method, LCC, results in higher trace-to-trace correlations than derivative-based methods for sparse noise-driven systems. Finally, we apply the LCC method to empirical biological data. Choosing a suitable threshold for binarization, we reconstruct the structural connectivity of a subset of the nervous system of the nematode C. elegans. We show that the computationally simple LCC method performs better than another recently published, computationally more expensive reservoir computing-based method. We apply different methods to this dataset and find that they all lead to similar performances. Our results show that a comparatively simple method can be used to reliably estimate directed effective connectivity in sparse neural systems in the presence of spatio-temporal delays and noise. We provide concrete suggestions for the estimation of effective connectivity in a scenario common in biological research, where only neuronal activity of a small set of neurons, but not connectivity or single-neuron and synapse dynamics, are known.
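A minimal sketch of the lagged-cross-correlation (LCC) idea described above, under assumed toy dynamics: a single delayed linear coupling plus independent noise. The network, delay, and weight here are hypothetical, not the paper's Hopf model.

```python
import numpy as np

rng = np.random.default_rng(0)
T, lag, w = 5000, 3, 0.8

# Hypothetical 3-node system: node 0 drives node 1 with delay `lag`;
# node 2 is independent noise.
x = rng.standard_normal((3, T))
for t in range(lag, T):
    x[1, t] += w * x[0, t - lag]

def lcc(a, b, max_lag=10):
    """Peak absolute Pearson correlation of b against past values of a."""
    best = 0.0
    for L in range(1, max_lag + 1):
        r = np.corrcoef(a[:-L], b[L:])[0, 1]
        best = max(best, abs(r))
    return best

scores = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        if i != j:
            scores[i, j] = lcc(x[i], x[j])

print(np.round(scores, 2))  # entry (0, 1) should dominate; all others near zero
```

Thresholding `scores` then recovers the single true directed edge; the asymmetry of the lagged correlation (past of the driver vs. future of the target) is what makes the estimate directed.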
Affiliation(s)
- Niklas Laasch
- Institute of Computational Neuroscience, Center for Experimental Medicine, University Medical Center Hamburg-Eppendorf, Martinistraße 52, 20246, Hamburg, Germany
- Wilhelm Braun
- Institute of Computational Neuroscience, Center for Experimental Medicine, University Medical Center Hamburg-Eppendorf, Martinistraße 52, 20246, Hamburg, Germany
- Lisa Knoff
- Institute of Computational Neuroscience, Center for Experimental Medicine, University Medical Center Hamburg-Eppendorf, Martinistraße 52, 20246, Hamburg, Germany
- Jan Bielecki
- Faculty of Engineering, Kiel University, Kaiserstrasse 2, 24143, Kiel, Germany
- Claus C Hilgetag
- Institute of Computational Neuroscience, Center for Experimental Medicine, University Medical Center Hamburg-Eppendorf, Martinistraße 52, 20246, Hamburg, Germany
- Department of Health Sciences, Boston University, 635 Commonwealth Avenue, Boston, MA, 02215, USA
4
Béna G, Goodman DFM. Dynamics of specialization in neural modules under resource constraints. Nat Commun 2025; 16:187. PMID: 39746951; PMCID: PMC11695987; DOI: 10.1038/s41467-024-55188-9.
Abstract
The brain is structurally and functionally modular, although recent evidence has raised questions about the extent of both types of modularity. Using a simple, toy artificial neural network setup that allows for precise control, we find that structural modularity does not in general guarantee functional specialization (across multiple measures of specialization). Further, in this setup (1) specialization only emerges when features of the environment are meaningfully separable, (2) specialization preferentially emerges when the network is strongly resource-constrained, and (3) these findings are qualitatively similar across several different variations of network architectures. Finally, we show that functional specialization varies dynamically across time, and these dynamics depend on both the timing and bandwidth of information flow in the network. We conclude that a static notion of specialization is likely too simple a framework for understanding intelligence in situations of real-world complexity, from biology to brain-inspired neuromorphic systems.
5
Hu Q, Tang R, He X, Wang R. General relationship of local topologies, global dynamics, and bifurcation in cellular networks. NPJ Syst Biol Appl 2024; 10:135. PMID: 39557967; PMCID: PMC11573990; DOI: 10.1038/s41540-024-00470-1.
Abstract
Cellular networks realize their functions by integrating intricate information embedded within local structures such as regulatory paths and feedback loops. However, the precise mechanisms of how local topologies determine global network dynamics and induce bifurcations remain unidentified. A critical step in unraveling the integration is to identify the governing principles, which underlie the mechanisms of information flow. Here, we develop the cumulative linearized approximation (CLA) algorithm to address this issue. Based on perturbation analysis and network decomposition, we theoretically demonstrate how perturbations affect the equilibrium variations through the integration of all regulatory paths and how stability of the equilibria is determined by distinct feedback loops. Two illustrative examples, i.e., a three-variable bistable system and a more intricate epithelial-mesenchymal transition (EMT) network, are chosen to validate the feasibility of this approach. These results establish a solid foundation for understanding information flow across cellular networks, highlighting the critical roles of local topologies in determining global network dynamics and the emergence of bifurcations within these networks. This work introduces a novel framework for investigating the general relationship between local topologies and global dynamics of cellular networks under perturbations.
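The perturbation logic sketched in the abstract, that equilibrium shifts under small parameter changes follow from a linearization around the operating point, can be illustrated on a toy system. This is not the paper's CLA algorithm; the network, weights, and dynamics below are hypothetical, and the sketch only shows the generic first-order relation δx* ≈ -J⁻¹ δu for dx/dt = f(x) + u.

```python
import numpy as np

# Hypothetical 3-node rate network: dx/dt = -x + W tanh(x) + u.
W = np.array([[0.0, 0.5, 0.0],
              [0.3, 0.0, 0.4],
              [0.0, 0.2, 0.0]])
u = np.array([0.1, 0.2, 0.1])

def f(x, u):
    return -x + W @ np.tanh(x) + u

def jac(x):
    # J[i, j] = -delta_ij + W[i, j] * sech^2(x_j)
    return -np.eye(3) + W * (1 - np.tanh(x) ** 2)

def equilibrium(u, x0=None):
    """Solve f(x, u) = 0 by Newton's method."""
    x = np.zeros(3) if x0 is None else x0.copy()
    for _ in range(50):
        x = x - np.linalg.solve(jac(x), f(x, u))
    return x

x_star = equilibrium(u)
dp = np.array([0.01, 0.0, 0.0])          # small perturbation of node 0's input

dx_pred = -np.linalg.solve(jac(x_star), dp)   # linearized prediction
dx_true = equilibrium(u + dp, x_star) - x_star  # exact re-solve
print(dx_pred, dx_true)  # agree to first order in the perturbation size
```

The residual between `dx_pred` and `dx_true` is second order in the perturbation, which is why path-based linearized decompositions of the response are accurate for small perturbations near a stable equilibrium.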
Affiliation(s)
- Qing Hu
- Department of Mathematics, Shanghai University, Shanghai, 200444, China
- Ruoyu Tang
- Department of Mathematics, Shanghai University, Shanghai, 200444, China
- Xinyu He
- Department of Mathematics, Shanghai University, Shanghai, 200444, China
- Ruiqi Wang
- Department of Mathematics, Shanghai University, Shanghai, 200444, China
- Newtouch Center for Mathematics of Shanghai University, Shanghai, 200444, China
6
Luppi AI, Rosas FE, Mediano PAM, Menon DK, Stamatakis EA. Information decomposition and the informational architecture of the brain. Trends Cogn Sci 2024; 28:352-368. PMID: 38199949; DOI: 10.1016/j.tics.2023.11.005.
Abstract
To explain how the brain orchestrates information-processing for cognition, we must understand information itself. Importantly, information is not a monolithic entity. Information decomposition techniques provide a way to split information into its constituent elements: unique, redundant, and synergistic information. We review how disentangling synergistic and redundant interactions is redefining our understanding of integrative brain function and its neural organisation. To explain how the brain navigates the trade-offs between redundancy and synergy, we review converging evidence integrating the structural, molecular, and functional underpinnings of synergy and redundancy; their roles in cognition and computation; and how they might arise over evolution and development. Overall, disentangling synergistic and redundant information provides a guiding principle for understanding the informational architecture of the brain and cognition.
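The distinction between unique, redundant, and synergistic information can be made concrete with the classic XOR example: each input alone carries zero information about the output, yet the pair carries one full bit, so all of the information is synergistic. A minimal sketch follows, using toy distributions and the standard mutual-information definition (not any specific decomposition from the review):

```python
import numpy as np
from itertools import product

def mi(pxy):
    """Mutual information (bits) from a joint probability table p(x, y)."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

# XOR: Y = X1 ^ X2, with X1, X2 independent fair bits.
p_joint = np.zeros((4, 2))  # rows: (x1, x2) in order 00, 01, 10, 11; cols: y
p_x1y = np.zeros((2, 2))
p_x2y = np.zeros((2, 2))
for i, (a, b) in enumerate(product([0, 1], repeat=2)):
    p_joint[i, a ^ b] = 0.25
    p_x1y[a, a ^ b] += 0.25
    p_x2y[b, a ^ b] += 0.25

print(mi(p_x1y), mi(p_x2y), mi(p_joint))  # 0.0 0.0 1.0
```

Because I(X1;Y) = I(X2;Y) = 0 while I(X1,X2;Y) = 1 bit, any sensible decomposition must assign that whole bit to synergy, which is the kind of higher-order interaction the review argues is invisible to pairwise analyses.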
Affiliation(s)
- Andrea I Luppi
- Division of Anaesthesia, University of Cambridge, Cambridge, UK; Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK; Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- Fernando E Rosas
- Department of Informatics, University of Sussex, Brighton, UK; Centre for Psychedelic Research, Department of Brain Sciences, Imperial College London, London, UK; Centre for Complexity Science, Imperial College London, London, UK; Centre for Eudaimonia and Human Flourishing, University of Oxford, Oxford, UK
- Pedro A M Mediano
- Department of Computing, Imperial College London, London, UK; Department of Psychology, University of Cambridge, Cambridge, UK
- David K Menon
- Department of Medicine, University of Cambridge, Cambridge, UK; Wolfson Brain Imaging Centre, University of Cambridge, Cambridge, UK
- Emmanuel A Stamatakis
- Division of Anaesthesia, University of Cambridge, Cambridge, UK; Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK
7
Fakhar K, Dixit S, Hadaeghi F, Kording KP, Hilgetag CC. Downstream network transformations dissociate neural activity from causal functional contributions. Sci Rep 2024; 14:2103. PMID: 38267481; PMCID: PMC10808222; DOI: 10.1038/s41598-024-52423-7.
Abstract
Neuroscientists rely on distributed spatio-temporal patterns of neural activity to understand how neural units contribute to cognitive functions and behavior. However, the extent to which neural activity reliably indicates a unit's causal contribution to the behavior is not well understood. To address this issue, we provide a systematic multi-site perturbation framework that captures time-varying causal contributions of elements to a collectively produced outcome. Applying our framework to intuitive toy examples and artificial neural networks revealed that recorded activity patterns of neural elements may not be generally informative of their causal contribution due to activity transformations within a network. Overall, our findings emphasize the limitations of inferring causal mechanisms from neural activities and offer a rigorous lesioning framework for elucidating causal neural contributions.
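The dissociation described here, high activity without causal impact, can be shown in a deliberately contrived two-unit toy (all weights below are hypothetical, not from the paper): one hidden unit is strongly driven but disconnected from the output, while a weakly active unit carries the entire causal load.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 2))

W_in = np.array([[5.0, 0.0],     # h0: large input weight -> high activity
                 [0.0, 0.5]])    # h1: small input weight -> low activity
w_out = np.array([0.0, 4.0])     # only h1 reaches the output

def output(X, lesion=None):
    """Network output, optionally with one hidden unit silenced."""
    H = np.tanh(X @ W_in.T)
    if lesion is not None:
        H = H.copy()
        H[:, lesion] = 0.0
    return H @ w_out

H = np.tanh(X @ W_in.T)
activity = np.abs(H).mean(axis=0)                              # "recorded" activity
y = output(X)
impact = [np.mean((y - output(X, lesion=k)) ** 2) for k in range(2)]

print(activity)  # h0 far more active than h1
print(impact)    # yet lesioning h0 changes nothing; lesioning h1 changes everything
```

Activity ranks the units in exactly the opposite order from their causal contribution, which is the core point of the perturbation framework: downstream transformations (here, the output weights) decide what matters, not firing magnitude.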
Affiliation(s)
- Kayson Fakhar
- Institute of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg University, Hamburg Center of Neuroscience, Hamburg, Germany
- Shrey Dixit
- Institute of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg University, Hamburg Center of Neuroscience, Hamburg, Germany
- Fatemeh Hadaeghi
- Institute of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg University, Hamburg Center of Neuroscience, Hamburg, Germany
- Konrad P Kording
- Departments of Bioengineering and Neuroscience, University of Pennsylvania, Philadelphia, PA, USA
- Learning in Machines & Brains, CIFAR, Toronto, ON, Canada
- Claus C Hilgetag
- Institute of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg University, Hamburg Center of Neuroscience, Hamburg, Germany
- Department of Health Sciences, Boston University, Boston, MA, USA
8
Fakhar K, Dixit S, Hadaeghi F, Kording KP, Hilgetag CC. When Neural Activity Fails to Reveal Causal Contributions. bioRxiv 2023:2023.06.06.543895. PMID: 37333375; PMCID: PMC10274733; DOI: 10.1101/2023.06.06.543895.
Abstract
Neuroscientists rely on distributed spatio-temporal patterns of neural activity to understand how neural units contribute to cognitive functions and behavior. However, the extent to which neural activity reliably indicates a unit's causal contribution to the behavior is not well understood. To address this issue, we provide a systematic multi-site perturbation framework that captures time-varying causal contributions of elements to a collectively produced outcome. Applying our framework to intuitive toy examples and artificial neuronal networks revealed that recorded activity patterns of neural elements may not be generally informative of their causal contribution due to activity transformations within a network. Overall, our findings emphasize the limitations of inferring causal mechanisms from neural activities and offer a rigorous lesioning framework for elucidating causal neural contributions.
Affiliation(s)
- Kayson Fakhar
- Institute of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg University, Hamburg Center of Neuroscience, Hamburg, Germany
- Shrey Dixit
- Institute of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg University, Hamburg Center of Neuroscience, Hamburg, Germany
- Fatemeh Hadaeghi
- Institute of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg University, Hamburg Center of Neuroscience, Hamburg, Germany
- Konrad P. Kording
- Departments of Bioengineering and Neuroscience, University of Pennsylvania, Philadelphia, PA, USA
- Learning in Machines & Brains, CIFAR, Toronto, ON, Canada
- Claus C. Hilgetag
- Institute of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg University, Hamburg Center of Neuroscience, Hamburg, Germany
- Department of Health Sciences, Boston University, Boston, MA, USA