1
Loosen AM, Kato A, Gu X. Revisiting the role of computational neuroimaging in the era of integrative neuroscience. Neuropsychopharmacology 2024; 50:103-113. PMID: 39242921; PMCID: PMC11525590; DOI: 10.1038/s41386-024-01946-8.
Abstract
Computational models have become integral to human neuroimaging research, providing both mechanistic insights and predictive tools for human cognition and behavior. However, concerns persist regarding the ecological validity of lab-based neuroimaging studies and whether their spatiotemporal resolution is sufficient for capturing neural dynamics. This review aims to re-examine the utility of computational neuroimaging, particularly in light of the growing prominence of alternative neuroscientific methods and the increasing emphasis on more naturalistic behaviors and paradigms. Specifically, we explore how computational modeling can enhance the analysis of high-dimensional imaging datasets and, conversely, how neuroimaging, in conjunction with other data modalities, can inform computational models through the lens of neurobiological plausibility. Collectively, this evidence suggests that neuroimaging remains critical for human neuroscience research and, when enhanced by computational models, can play an important role in bridging levels of analysis and understanding. We conclude by proposing key directions for future research, emphasizing the development of standardized paradigms and the integrative use of computational modeling across neuroimaging techniques.
Affiliation(s)
- Alisa M Loosen
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Center for Computational Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Nash Family Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Ayaka Kato
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Center for Computational Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Nash Family Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Xiaosi Gu
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Center for Computational Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Nash Family Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA
2
Li Y, Zhong Z. Decoding the application of deep learning in neuroscience: a bibliometric analysis. Front Comput Neurosci 2024; 18:1402689. PMID: 39429248; PMCID: PMC11486706; DOI: 10.3389/fncom.2024.1402689.
Abstract
The application of deep learning in neuroscience holds unprecedented potential for unraveling the complex dynamics of the brain. Our bibliometric analysis, spanning from 2012 to 2023, delves into the integration of deep learning in neuroscience, shedding light on the evolutionary trends and identifying pivotal research hotspots. Through the examination of 421 articles, this study unveils a significant growth in interdisciplinary research, marked by the burgeoning application of deep learning techniques in understanding neural mechanisms and addressing neurological disorders. Central to our findings is the critical role of classification algorithms, models, and neural networks in advancing neuroscience, highlighting their efficacy in interpreting complex neural data, simulating brain functions, and translating theoretical insights into practical diagnostics and therapeutic interventions. Additionally, our analysis delineates a thematic evolution, showcasing a shift from foundational methodologies toward more specialized and nuanced approaches, particularly in areas like EEG analysis and convolutional neural networks. This evolution reflects the field's maturation and its adaptation to technological advancements. The study further emphasizes the importance of interdisciplinary collaborations and the adoption of cutting-edge technologies to foster innovation in decoding the cerebral code. The current study provides a strategic roadmap for future explorations, urging the scientific community toward areas ripe for breakthrough discoveries and practical applications. This analysis not only charts the past and present landscape of deep learning in neuroscience but also illuminates pathways for future research, underscoring the transformative impact of deep learning on our understanding of the brain.
Affiliation(s)
- Yin Li
- Nanyang Institute of Technology, Nanyang, China
- Zilong Zhong
- Beijing Foreign Studies University, Beijing, China
3
Lin A, Witvliet D, Hernandez-Nunez L, Linderman SW, Samuel ADT, Venkatachalam V. Imaging whole-brain activity to understand behavior. Nat Rev Phys 2022; 4:292-305. PMID: 37409001; PMCID: PMC10320740; DOI: 10.1038/s42254-022-00430-w.
Abstract
The brain evolved to produce behaviors that help an animal inhabit the natural world. During natural behaviors, the brain is engaged in many levels of activity from the detection of sensory inputs to decision-making to motor planning and execution. To date, most brain studies have focused on small numbers of neurons that interact in limited circuits. This allows analyzing individual computations or steps of neural processing. During behavior, however, brain activity must integrate multiple circuits in different brain regions. The activities of different brain regions are not isolated, but may be contingent on one another. Coordinated and concurrent activity within and across brain areas is organized by (1) sensory information from the environment, (2) the animal's internal behavioral state, and (3) recurrent networks of synaptic and non-synaptic connectivity. Whole-brain recording with cellular resolution provides a new opportunity to dissect the neural basis of behavior, but whole-brain activity is also mutually contingent on behavior itself. This is especially true for natural behaviors like navigation, mating, or hunting, which require dynamic interaction between the animal, its environment, and other animals. In such behaviors, the sensory experience of an unrestrained animal is actively shaped by its movements and decisions. Many of the signaling and feedback pathways that an animal uses to guide behavior only occur in freely moving animals. Recent technological advances have enabled whole-brain recording in small behaving animals including nematodes, flies, and zebrafish. These whole-brain experiments capture neural activity with cellular resolution spanning sensory, decision-making, and motor circuits, and thereby demand new theoretical approaches that integrate brain dynamics with behavioral dynamics. Here, we review the experimental and theoretical methods that are being employed to understand animal behavior and whole-brain activity, and the opportunities for physics to contribute to this emerging field of systems neuroscience.
Affiliation(s)
- Albert Lin
- Department of Physics, Harvard University, Cambridge, MA, USA
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Center for the Physics of Biological Function, Princeton University, Princeton, NJ, USA
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Daniel Witvliet
- Department of Physics, Harvard University, Cambridge, MA, USA
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Luis Hernandez-Nunez
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, USA
- Scott W Linderman
- Department of Statistics, Stanford University, Stanford, CA, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Aravinthan D T Samuel
- Department of Physics, Harvard University, Cambridge, MA, USA
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Vivek Venkatachalam
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Department of Physics, Northeastern University, Boston, MA, USA
4
Hennig JA, Oby ER, Losey DM, Batista AP, Yu BM, Chase SM. How learning unfolds in the brain: toward an optimization view. Neuron 2021; 109:3720-3735. PMID: 34648749; PMCID: PMC8639641; DOI: 10.1016/j.neuron.2021.09.005.
Abstract
How do changes in the brain lead to learning? To answer this question, consider an artificial neural network (ANN), where learning proceeds by optimizing a given objective or cost function. This "optimization framework" may provide new insights into how the brain learns, as many idiosyncratic features of neural activity can be recapitulated by an ANN trained to perform the same task. Nevertheless, there are key features of how neural population activity changes throughout learning that cannot be readily explained in terms of optimization and are not typically features of ANNs. Here we detail three of these features: (1) the inflexibility of neural variability throughout learning, (2) the use of multiple learning processes even during simple tasks, and (3) the presence of large task-nonspecific activity changes. We propose that understanding the role of these features in the brain will be key to describing biological learning using an optimization framework.
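The optimization framework the authors interrogate can be made concrete with a toy artificial neural network, where learning is literally gradient descent on a cost function. The task (XOR), the architecture, and every hyperparameter below are illustrative choices for this sketch, not anything taken from the paper.

```python
import numpy as np

# Toy instance of "learning as optimization": a two-layer network trained
# by full-batch gradient descent on squared error. Task (XOR), architecture,
# and hyperparameters are illustrative choices.
rng = np.random.default_rng(0)
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([[0.0], [1.0], [1.0], [0.0]])

W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)          # hidden-layer activity
    return h, h @ W2 + b2             # linear readout

_, pred0 = forward(X)
loss0 = float(np.mean((pred0 - y) ** 2))   # objective before learning

lr = 0.5
for _ in range(2000):
    h, pred = forward(X)
    err = (pred - y) / len(X)              # gradient of 0.5 * MSE w.r.t. pred
    dh = (err @ W2.T) * (1.0 - h ** 2)     # backprop through tanh
    W2 -= lr * (h.T @ err); b2 -= lr * err.sum(axis=0)
    W1 -= lr * (X.T @ dh);  b1 -= lr * dh.sum(axis=0)

_, pred = forward(X)
loss = float(np.mean((pred - y) ** 2))     # objective after learning
```

The point of the abstract is that real neural populations show changes (fixed variability structure, multiple learning processes, task-nonspecific drift) that this kind of pure objective descent does not produce.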
Affiliation(s)
- Jay A Hennig
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA
- Emily R Oby
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Darby M Losey
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA
- Aaron P Batista
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Byron M Yu
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Steven M Chase
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
5
Functional Connectivity Basis and Underlying Cognitive Mechanisms for Gender Differences in Guilt Aversion. eNeuro 2021; 8:ENEURO.0226-21.2021. PMID: 34819311; PMCID: PMC8675089; DOI: 10.1523/eneuro.0226-21.2021.
Abstract
Prosocial behavior is pivotal to our society. Guilt aversion, the tendency to reduce the discrepancy between a partner's expectation and his/her actual outcome, drives human prosocial behavior, as does the well-known inequity aversion. Although women are reported to be more inequity averse than men, gender differences in guilt aversion remain unexplored. Here, we conducted a functional magnetic resonance imaging (fMRI) study (n = 52) and a large-scale online behavioral study (n = 4723) of a trust game designed to investigate guilt and inequity aversion. The fMRI study demonstrated that men exhibited stronger guilt aversion and recruited right dorsolateral prefrontal cortex (DLPFC)-ventromedial PFC (VMPFC) connectivity more for guilt aversion than women did, whereas VMPFC-dorsomedial PFC (DMPFC) connectivity was used commonly in both genders. Furthermore, our regression analysis of the online behavioral data, collected together with Big Five and demographic factors, replicated the gender differences and revealed that Big Five Conscientiousness (rule-based decision-making) correlated with guilt aversion only in men, whereas Agreeableness (empathetic consideration) correlated with guilt aversion in both genders. Thus, this study suggests that gender differences in prosocial behavior are heterogeneous, depending on the underlying motives in the brain, and that the consideration of social norms plays a key role in the stronger guilt aversion observed in men.
6
Williams AH, Linderman SW. Statistical neuroscience in the single trial limit. Curr Opin Neurobiol 2021; 70:193-205. PMID: 34861596; DOI: 10.1016/j.conb.2021.10.008.
Abstract
Individual neurons often produce highly variable responses over nominally identical trials, reflecting a mixture of intrinsic 'noise' and systematic changes in the animal's cognitive and behavioral state. Disentangling these sources of variability is of great scientific interest in its own right, but it is also increasingly inescapable as neuroscientists aspire to study more complex and naturalistic animal behaviors. In these settings, behavioral actions never repeat themselves exactly and may rarely do so even approximately. Thus, new statistical methods that extract reliable features of neural activity using few, if any, repeated trials are needed. Accurate statistical modeling in this severely trial-limited regime is challenging, but still possible if simplifying structure in neural data can be exploited. We review recent works that have identified different forms of simplifying structure - including shared gain modulations across neural subpopulations, temporal smoothness in neural firing rates, and correlations in responses across behavioral conditions - and exploited them to reveal novel insights into the trial-by-trial operation of neural circuits.
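One form of simplifying structure named above, a gain modulation shared across a neural subpopulation, can be estimated from single trials with nothing more exotic than a rank-1 SVD. The simulation below is an illustrative sketch, not a method from the review: it assumes responses follow y[t, i] ≈ g[t] · f[i] + noise and checks that the leading singular vector recovers the per-trial gain.

```python
import numpy as np

# Illustrative sketch: shared multiplicative gain across a population,
# y[t, i] ≈ g[t] * f[i] + noise, recovered by a rank-1 SVD from
# single-trial data. All numbers are simulated.
rng = np.random.default_rng(1)
n_trials, n_neurons = 200, 50
f = rng.uniform(0.5, 2.0, n_neurons)          # per-neuron tuning (fixed)
g = rng.uniform(0.5, 1.5, n_trials)           # per-trial shared gain
Y = np.outer(g, f) + rng.normal(0.0, 0.1, (n_trials, n_neurons))

# The leading left singular vector estimates the trial-to-trial gain up
# to scale and sign; |correlation| with the true gain checks recovery.
U, S, Vt = np.linalg.svd(Y, full_matrices=False)
g_hat = U[:, 0]
corr = float(abs(np.corrcoef(g_hat, g)[0, 1]))
```

Because the gain is shared by all 50 neurons, a single trial already constrains it well, which is exactly why this kind of structure helps in the trial-limited regime.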
Affiliation(s)
- Alex H Williams
- Department of Statistics and Wu Tsai Neurosciences Institute, Stanford University, USA
- Scott W Linderman
- Department of Statistics and Wu Tsai Neurosciences Institute, Stanford University, USA
7
Amidi Y, Nazari B, Sadri S, Yousefi A. Parameter Estimation in Multiple Dynamic Synaptic Coupling Model Using Bayesian Point Process State-Space Modeling Framework. Neural Comput 2021; 33:1269-1299. PMID: 33617745; DOI: 10.1162/neco_a_01375.
Abstract
It is of great interest to characterize the spiking activity of individual neurons in a cell ensemble. Many different mechanisms drive a cell's firing properties, including synaptic coupling and both its own spiking history and that of its neighbors. Though this is a widely studied modeling problem, there is still room to improve on simplifications embedded in previous models. The first simplification is that the synaptic coupling mechanisms in previous models do not replicate the complex dynamics of the synaptic response. The second is that the number of synaptic connections in these models is an order of magnitude smaller than in an actual neuron. In this research, we address both limitations by incorporating a more accurate model of the synapse and proposing a system identification solution that scales to a network incorporating hundreds of synaptic connections. Although a neuron has hundreds of synaptic connections, only a subset of them contributes significantly to its spiking activity. We therefore assume the synaptic connections are sparse and propose a Bayesian point-process state-space model that incorporates this sparsity through a regularization technique. We develop an extended expectation-maximization algorithm to estimate the free parameters of the proposed model and demonstrate its application to the problem of estimating the parameters of many dynamic synaptic connections. We then present a simulation example covering dynamic synapses across a range of parameter values and show that the model parameters can be estimated with our method. Finally, we apply the proposed algorithm to intracellular data containing 96 presynaptic connections and assess the estimation accuracy of our method using a combination of goodness-of-fit measures.
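The estimation problem described above can be caricatured in a few lines. The sketch below is a much-simplified stand-in, not the authors' method: a discrete-time Bernoulli point-process GLM with an L1 penalty, fit by proximal gradient descent (ISTA) rather than their Bayesian state-space model with an extended EM algorithm. It only illustrates how a sparsity penalty singles out the few presynaptic inputs that actually drive spiking; all numbers are simulated.

```python
import numpy as np

# Simplified stand-in for sparse synaptic-coupling estimation: a Bernoulli
# point-process GLM with an L1 penalty, fit by ISTA (proximal gradient).
rng = np.random.default_rng(2)
n_pre, n_bins = 50, 5000
X = rng.binomial(1, 0.1, (n_bins, n_pre)).astype(float)  # presynaptic spikes

w_true = np.zeros(n_pre)
w_true[:5] = [2.0, -1.5, 1.0, 1.5, -2.0]     # only 5 effective connections
b_true = -3.0                                # baseline log-odds of spiking
p_true = 1.0 / (1.0 + np.exp(-(X @ w_true + b_true)))
y = rng.binomial(1, p_true).astype(float)    # postsynaptic spike train

def fit_sparse_glm(X, y, lam=10.0, lr=0.5, steps=2000):
    """L1-penalized logistic regression via ISTA."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y)) / n        # gradient step on the mean NLL
        b -= lr * float(np.mean(p - y))
        # Proximal step: soft-thresholding enforces sparsity in w.
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam / n, 0.0)
    return w, b

w_hat, b_hat = fit_sparse_glm(X, y)
top5 = set(np.argsort(-np.abs(w_hat))[:5].tolist())  # strongest couplings
```

With 50 candidate inputs and only 5 true connections, the penalized fit concentrates weight on the true support, which is the intuition behind the paper's sparsity assumption, even though the full model additionally tracks synaptic dynamics over time.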
Affiliation(s)
- Yalda Amidi
- Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan 84156-83111, Iran, and Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA
- Behzad Nazari
- Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan 84156-83111, Iran
- Saeid Sadri
- Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan 84156-83111, Iran
- Ali Yousefi
- Department of Computer Science, Worcester Polytechnic Institute, Worcester, MA 01609, USA
8
Genkin M, Engel TA. Moving beyond generalization to accurate interpretation of flexible models. Nat Mach Intell 2020; 2:674-683. DOI: 10.1038/s42256-020-00242-6.
9
Agrawal M, Peterson JC, Griffiths TL. Scaling up psychology via Scientific Regret Minimization. Proc Natl Acad Sci U S A 2020; 117:8825-8835. PMID: 32241896; PMCID: PMC7183163; DOI: 10.1073/pnas.1915841117.
Abstract
Do large datasets provide value to psychologists? Without a systematic methodology for working with such datasets, there is a valid concern that analyses will produce noise artifacts rather than true effects. In this paper, we offer a way for researchers to systematically build models and identify novel phenomena in large datasets. One traditional approach is to analyze the residuals of models (the biggest errors they make in predicting the data) to discover what might be missing from those models. However, once a dataset is sufficiently large, machine learning algorithms approximate the true underlying function better than the data do, suggesting instead that the predictions of these data-driven models should be used to guide model building. We call this approach "Scientific Regret Minimization" (SRM), as it focuses on minimizing errors for cases that we know should have been predictable. We apply this exploratory method to a subset of the Moral Machine dataset, a public collection of roughly 40 million moral decisions. Using SRM, we find that incorporating a set of deontological principles that capture dimensions along which groups of agents can vary (e.g., sex and age) improves a computational model of human moral judgment. Furthermore, we identify and independently validate three interesting moral phenomena: criminal dehumanization, age of responsibility, and asymmetric notions of responsibility.
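The core move of SRM, scoring an interpretable model against a flexible model's predictions instead of the raw data, can be sketched on a toy regression problem. Everything below (the task, the k-nearest-neighbour stand-in for the flexible model, the linear stand-in for the interpretable one) is an illustrative assumption, not the paper's Moral Machine analysis.

```python
import numpy as np

# Toy sketch of Scientific Regret Minimization: with enough data, a
# flexible model's predictions approximate the true function better than
# individual noisy observations do, so the interpretable model is scored
# against those predictions rather than against the raw data.
rng = np.random.default_rng(3)
n = 2000
X = rng.uniform(-2.0, 2.0, (n, 1))
y_true = np.sin(2.0 * X[:, 0])              # unknown ground-truth function
y = y_true + rng.normal(0.0, 0.5, n)        # noisy "judgments"

def knn_predict(X_train, y_train, X_query, k=50):
    """Flexible data-driven model: k-nearest-neighbour regression."""
    preds = np.empty(len(X_query))
    for i, xq in enumerate(X_query):
        idx = np.argsort(np.abs(X_train[:, 0] - xq[0]))[:k]
        preds[i] = y_train[idx].mean()
    return preds

y_flex = knn_predict(X, y, X)               # stands in for the ML model

# Interpretable candidate: ordinary least squares on a linear feature.
A = np.column_stack([np.ones(n), X[:, 0]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_lin = A @ coef

# "Regret" targets the flexible model's predictions, filtering out
# observation noise; the largest regrets flag predictable cases the
# simple model misses, guiding where to extend it.
regret = (y_lin - y_flex) ** 2
raw_residual = (y_lin - y) ** 2
```

Sorting the data by `regret` rather than `raw_residual` surfaces systematic structure (here, the sinusoid's curvature) instead of the noisiest observations, which is the paper's rationale for model building on large datasets.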
Affiliation(s)
- Mayank Agrawal
- Department of Psychology, Princeton University, Princeton, NJ 08544
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544
- Joshua C Peterson
- Department of Computer Science, Princeton University, Princeton, NJ 08544
- Thomas L Griffiths
- Department of Psychology, Princeton University, Princeton, NJ 08544
- Department of Computer Science, Princeton University, Princeton, NJ 08544
10
Whiteway MR, Butts DA. The quest for interpretable models of neural population activity. Curr Opin Neurobiol 2019; 58:86-93. PMID: 31426024; DOI: 10.1016/j.conb.2019.07.004.
Abstract
Many aspects of brain function arise from the coordinated activity of large populations of neurons. Recent developments in neural recording technologies are providing unprecedented access to the activity of such populations during increasingly complex experimental contexts; however, extracting scientific insights from such recordings requires the concurrent development of analytical tools that relate this population activity to system-level function. This is a primary motivation for latent variable models, which seek to provide a low-dimensional description of population activity that can be related to experimentally controlled variables, as well as uncontrolled variables such as internal states (e.g. attention and arousal) and elements of behavior. While deriving an understanding of function from traditional latent variable methods relies on low-dimensional visualizations, new approaches are targeting more interpretable descriptions of the components underlying system-level function.
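The simplest latent variable model of the kind discussed above is PCA, which summarizes many neurons with a few shared latent time courses. The sketch below is illustrative (three smooth latents driving a simulated population, not any dataset from the review) and shows the low-dimensional description such models provide.

```python
import numpy as np

# Minimal latent variable model for population activity: PCA via SVD.
# Simulated data only: 3 smooth latent signals drive 80 noisy neurons.
rng = np.random.default_rng(4)
T, n_neurons, n_latents = 1000, 80, 3

t = np.arange(T)
Z = np.column_stack([np.sin(0.02 * t), np.cos(0.013 * t), np.sin(0.007 * t)])
W = rng.normal(0.0, 1.0, (n_latents, n_neurons))   # per-neuron loadings
Y = Z @ W + rng.normal(0.0, 0.5, (T, n_neurons))   # activity = latents + noise

# PCA on the mean-centred data: a few components should account for most
# of the shared variance if the low-dimensional picture holds.
Yc = Y - Y.mean(axis=0)
U, S, Vt = np.linalg.svd(Yc, full_matrices=False)
var_explained = S**2 / np.sum(S**2)
Z_hat = U[:, :n_latents] * S[:n_latents]           # recovered latent traces
```

The review's point is that stopping at such low-dimensional visualizations leaves interpretation to the eye; newer approaches attach explicit meaning (gain, attention, behavioral covariates) to the recovered components.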
Affiliation(s)
- Matthew R Whiteway
- Zuckerman Mind Brain Behavior Institute, Jerome L Greene Science Center, Columbia University, 3227 Broadway, 5th Floor, Quad D, New York, NY 10027, USA
- Daniel A Butts
- Department of Biology and Program in Neuroscience and Cognitive Science, University of Maryland, 1210 Biology-Psychology Bldg. #144, College Park, MD 20742, USA
11
Paninski L, Cunningham JP. Neural data science: accelerating the experiment-analysis-theory cycle in large-scale neuroscience. Curr Opin Neurobiol 2019; 50:232-241. PMID: 29738986; DOI: 10.1016/j.conb.2018.04.007.
Abstract
Modern large-scale multineuronal recording methodologies, including multielectrode arrays, calcium imaging, and optogenetic techniques, produce single-neuron-resolution data of a magnitude and precision that were the realm of science fiction twenty years ago. The major bottlenecks in systems and circuit neuroscience no longer lie simply in collecting data from large neural populations, but in understanding these data: developing novel scientific questions, with corresponding analysis techniques and experimental designs to fully harness these new capabilities and meaningfully interrogate these questions. Advances in methods for signal processing, network analysis, dimensionality reduction, and optimal control, developed in lockstep with advances in experimental neurotechnology, promise major breakthroughs in multiple fundamental neuroscience problems. These trends are clear in a broad array of subfields of modern neuroscience; this review focuses on recent advances in methods for analyzing neural time-series data with single-neuron precision.
Affiliation(s)
- L Paninski
- Department of Statistics and Department of Neuroscience, Grossman Center for the Statistics of Mind, Zuckerman Mind Brain Behavior Institute, Center for Theoretical Neuroscience, Columbia University, United States
- J P Cunningham
- Department of Statistics, Grossman Center for the Statistics of Mind, Zuckerman Mind Brain Behavior Institute, Center for Theoretical Neuroscience, Columbia University, United States