1
Ishizu K, Nishimoto S, Ueoka Y, Funamizu A. Localized and global representation of prior value, sensory evidence, and choice in male mouse cerebral cortex. Nat Commun 2024;15:4071. PMID: 38778078; PMCID: PMC11111702; DOI: 10.1038/s41467-024-48338-6.
Abstract
Adaptive behavior requires integrating prior knowledge of action outcomes and sensory evidence for making decisions while maintaining prior knowledge for future actions. As outcome- and sensory-based decisions are often tested separately, it is unclear how these processes are integrated in the brain. In a tone-frequency discrimination task with two sound durations and asymmetric reward blocks, we found that neurons in the medial prefrontal cortex of male mice represented the additive combination of prior reward expectations and choices. Sensory inputs and choices were selectively decoded from the auditory cortex (irrespective of reward priors) and the secondary motor cortex, respectively, suggesting that localized computations of task variables are required within single trials. In contrast, all the recorded regions represented prior values that needed to be maintained across trials. We propose localized and global computations of task variables on different time scales in the cerebral cortex.
Affiliation(s)
- Kotaro Ishizu
- Institute for Quantitative Biosciences, University of Tokyo, Laboratory of Neural Computation, 1-1-1 Yayoi, Bunkyo-ku, Tokyo, 113-0032, Japan
- Shosuke Nishimoto
- Institute for Quantitative Biosciences, University of Tokyo, Laboratory of Neural Computation, 1-1-1 Yayoi, Bunkyo-ku, Tokyo, 113-0032, Japan
- Department of Life Sciences, Graduate School of Arts and Sciences, University of Tokyo, 3-8-2, Komaba, Meguro-ku, Tokyo, 153-8902, Japan
- Yutaro Ueoka
- Institute for Quantitative Biosciences, University of Tokyo, Laboratory of Neural Computation, 1-1-1 Yayoi, Bunkyo-ku, Tokyo, 113-0032, Japan
- Akihiro Funamizu
- Institute for Quantitative Biosciences, University of Tokyo, Laboratory of Neural Computation, 1-1-1 Yayoi, Bunkyo-ku, Tokyo, 113-0032, Japan.
- Department of Life Sciences, Graduate School of Arts and Sciences, University of Tokyo, 3-8-2, Komaba, Meguro-ku, Tokyo, 153-8902, Japan.
2
Bajić D. Information Theory, Living Systems, and Communication Engineering. Entropy (Basel) 2024;26:430. PMID: 38785679; PMCID: PMC11120474; DOI: 10.3390/e26050430.
Abstract
Mainstream research on information theory within the field of living systems involves the application of analytical tools to understand a broad range of life processes. This paper is dedicated to an opposite problem: it explores the information theory and communication engineering methods that have counterparts in the data transmission process by way of DNA structures and neural fibers. Considering the requirements of modern multimedia, transmission methods chosen by nature may be different, suboptimal, or even far from optimal. However, nature is known for rational resource usage, so its methods have a significant advantage: they are proven to be sustainable. Perhaps understanding the engineering aspects of methods of nature can inspire a design of alternative green, stable, and low-cost transmission.
Affiliation(s)
- Dragana Bajić
- Department of Communications and Signal Processing, Faculty of Technical Sciences, University of Novi Sad, Trg Dositeja Obradovica 6, 21000 Novi Sad, Serbia
3
Zeldenrust F, Calcini N, Yan X, Bijlsma A, Celikel T. The tuning of tuning: How adaptation influences single cell information transfer. PLoS Comput Biol 2024;20:e1012043. PMID: 38739640; PMCID: PMC11115315; DOI: 10.1371/journal.pcbi.1012043.
Abstract
Sensory neurons reconstruct the world from action potentials (spikes) impinging on them. To effectively transfer information about the stimulus to the next processing level, a neuron needs to be able to adapt its working range to the properties of the stimulus. Here, we focus on the intrinsic neural properties that influence information transfer in cortical neurons and how tightly their properties need to be tuned to the stimulus statistics for them to be effective. We start by measuring the intrinsic information encoding properties of putative excitatory and inhibitory neurons in L2/3 of the mouse barrel cortex. Excitatory neurons show high thresholds and strong adaptation, making them fire sparsely and resulting in a strong compression of information, whereas inhibitory neurons that favour fast spiking transfer more information. Next, we turn to computational modelling and ask how two properties influence information transfer: 1) spike-frequency adaptation and 2) the shape of the IV-curve. We find that subthreshold (but not threshold) adaptation, the 'h-current', and a properly tuned leak conductance can increase the information transfer of a neuron, whereas threshold adaptation can increase its working range. Finally, we verify the effect of the IV-curve slope in our experimental recordings and show that excitatory neurons form a more heterogeneous population than inhibitory neurons. These relationships between intrinsic neural features and neural coding, which had not been quantified before, will aid computational, theoretical, and systems neuroscientists in understanding how neuronal populations can alter their coding properties, such as through the impact of neuromodulators. Why the variability of intrinsic properties is larger among excitatory neurons than among inhibitory ones is an exciting question that calls for future research.
Affiliation(s)
- Fleur Zeldenrust
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen - the Netherlands
- Niccolò Calcini
- Maastricht Centre for Systems Biology (MaCSBio), University of Maastricht, Maastricht, The Netherlands
- Xuan Yan
- Institute of Neuroscience, Chinese Academy of Sciences, Beijing, China
- Ate Bijlsma
- Department of Population Health Sciences / Department of Biology, Universiteit Utrecht, the Netherlands
- Tansu Celikel
- School of Psychology, Georgia Institute of Technology, Atlanta - GA, United States of America
4
Koren V, Malerba SB, Schwalger T, Panzeri S. Structure, dynamics, coding and optimal biophysical parameters of efficient excitatory-inhibitory spiking networks. bioRxiv 2024:2024.04.24.590955. PMID: 38712237; PMCID: PMC11071478; DOI: 10.1101/2024.04.24.590955.
Abstract
The principle of efficient coding posits that sensory cortical networks are designed to encode maximal sensory information with minimal metabolic cost. Despite the major influence of efficient coding in neuroscience, it has remained unclear whether fundamental empirical properties of neural network activity can be explained solely on the basis of this normative principle. Here, we rigorously derive the structural, coding, biophysical, and dynamical properties of excitatory-inhibitory recurrent networks of spiking neurons that emerge directly from imposing that the network minimizes an instantaneous loss function and a time-averaged performance measure enacting efficient coding. The optimal network has biologically plausible biophysical features, including realistic integrate-and-fire spiking dynamics, spike-triggered adaptation, and a non-stimulus-specific excitatory external input regulating metabolic cost. The efficient network has excitatory-inhibitory recurrent connectivity between neurons with similar stimulus tuning, implementing feature-specific competition similar to that recently found in visual cortex. Networks with unstructured connectivity cannot reach comparable levels of coding efficiency. The optimal biophysical parameters include a 4:1 ratio of excitatory to inhibitory neurons and a 3:1 ratio of mean inhibitory-to-inhibitory vs. excitatory-to-inhibitory connectivity, closely matching those of cortical sensory networks. The efficient network shows biologically plausible spiking dynamics with a tight instantaneous E-I balance, making it capable of achieving efficient coding of external stimuli varying over multiple time scales. Together, these results explain how efficient coding may be implemented in cortical networks and suggest that key properties of biological neural networks may be accounted for by efficient coding.
5
Chen Q, Ingram NT, Baudin J, Angueyra JM, Sinha R, Rieke F. Predictably manipulating photoreceptor light responses to reveal their role in downstream visual responses. bioRxiv 2024:2023.10.20.563304. PMID: 37961603; PMCID: PMC10634684; DOI: 10.1101/2023.10.20.563304.
Abstract
Computation in neural circuits relies on judicious use of nonlinear circuit components. In many cases, multiple nonlinear components work collectively to control circuit outputs. Separating the contributions of these different components is difficult, and this hampers our understanding of the mechanistic basis of many important computations. Here, we introduce a tool that permits the design of light stimuli that predictably alter rod and cone phototransduction currents - including stimuli that compensate for nonlinear properties such as light adaptation. This tool, based on well-established models for the rod and cone phototransduction cascade, permits the separation of nonlinearities in phototransduction from those in downstream circuits. This will allow, for example, direct tests of how adaptation in rod and cone phototransduction affects downstream visual signals and perception.
Affiliation(s)
- Qiang Chen
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195
- Norianne T. Ingram
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195
- Jacob Baudin
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195
- Fred Rieke
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195
6
Rodenkirch C, Wang Q. Optimization of Temporal Coding of Tactile Information in Rat Thalamus by Locus Coeruleus Activation. Biology (Basel) 2024;13:79. PMID: 38392298; PMCID: PMC10886390; DOI: 10.3390/biology13020079.
Abstract
The brainstem noradrenergic nucleus, the locus coeruleus (LC), exerts heavy influences on sensory processing, perception, and cognition through its diffuse projections throughout the brain. Previous studies have demonstrated that LC activation modulates the response and feature selectivity of thalamic relay neurons. However, the extent to which LC modulates the temporal coding of sensory information in the thalamus remains mostly unknown. Here, we found that LC stimulation significantly altered the temporal structure of the responses of the thalamic relay neurons to repeated whisker stimulation. A substantial portion of events (i.e., time points where the stimulus reliably evoked spikes as evidenced by dramatic elevations in the firing rate of the spike density function) were removed during LC stimulation, but many new events emerged. Interestingly, spikes within the emerged events have a higher feature selectivity, and therefore transmit more information about a tactile stimulus, than spikes within the removed events. This suggests that LC stimulation optimized the temporal coding of tactile information to improve information transmission. We further reconstructed the original whisker stimulus from a population of thalamic relay neurons' responses and corresponding feature selectivity. As expected, we found that reconstruction from thalamic responses was more accurate using spike trains of thalamic neurons recorded during LC stimulation than without LC stimulation, functionally confirming LC optimization of the thalamic temporal code. Together, our results demonstrated that activation of the LC-NE system optimizes temporal coding of sensory stimulus in the thalamus, presumably allowing for more accurate decoding of the stimulus in the downstream brain structures.
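The "events" described here, time points where a spike density function shows a reliable elevation, can be illustrated with a short sketch. This is a generic illustration only: the kernel width, threshold, and synthetic data below are arbitrary choices for the sketch, not the authors' parameters.

```python
import numpy as np

def spike_density(counts, dt, sigma):
    """Trial-averaged spike counts per bin -> smoothed firing rate (Hz)."""
    t = np.arange(-4 * sigma, 4 * sigma + dt, dt)
    kernel = np.exp(-0.5 * (t / sigma) ** 2)
    kernel /= kernel.sum() * dt          # kernel integrates to 1 (units 1/s)
    return np.convolve(counts, kernel, mode="same")

counts = np.full(1000, 0.005)            # baseline: ~5 Hz in 1 ms bins
counts[400:420] = 0.2                    # reliable stimulus-evoked burst (~200 Hz)

sdf = spike_density(counts, dt=0.001, sigma=0.005)
events = sdf > 50.0                      # bins where the SDF is clearly elevated
```

Comparing which bins cross threshold before vs. during LC stimulation (and the feature selectivity of spikes inside each set of bins) is the kind of analysis the abstract describes.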
Affiliation(s)
- Charles Rodenkirch
- Department of Biomedical Engineering, Columbia University, ET 351, 500 W. 120th Street, New York, NY 10027, USA
- Qi Wang
- Department of Biomedical Engineering, Columbia University, ET 351, 500 W. 120th Street, New York, NY 10027, USA
7
Greenidge CD, Scholl B, Yates JL, Pillow JW. Efficient Decoding of Large-Scale Neural Population Responses With Gaussian-Process Multiclass Regression. Neural Comput 2024;36:175-226. PMID: 38101329; DOI: 10.1162/neco_a_01630.
Abstract
Neural decoding methods provide a powerful tool for quantifying the information content of neural population codes and the limits imposed by correlations in neural activity. However, standard decoding methods are prone to overfitting and scale poorly to high-dimensional settings. Here, we introduce a novel decoding method to overcome these limitations. Our approach, the gaussian process multiclass decoder (GPMD), is well suited to decoding a continuous low-dimensional variable from high-dimensional population activity and provides a platform for assessing the importance of correlations in neural population codes. The GPMD is a multinomial logistic regression model with a gaussian process prior over the decoding weights. The prior includes hyperparameters that govern the smoothness of each neuron's decoding weights, allowing automatic pruning of uninformative neurons during inference. We provide a variational inference method for fitting the GPMD to data, which scales to hundreds or thousands of neurons and performs well even in data sets with more neurons than trials. We apply the GPMD to recordings from primary visual cortex in three species: monkey, ferret, and mouse. Our decoder achieves state-of-the-art accuracy on all three data sets and substantially outperforms independent Bayesian decoding, showing that knowledge of the correlation structure is essential for optimal decoding in all three species.
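The core idea, multinomial logistic regression with a smoothness prior on each neuron's decoding weights, can be sketched on synthetic data. This is not the GPMD implementation: the gaussian process prior is replaced by a crude second-difference (MAP) penalty, inference is plain gradient descent rather than variational, and all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, T = 40, 8, 1200            # neurons, stimulus classes, trials
prefs = rng.uniform(0, K, N)     # each neuron's preferred class

def mean_rates(stim):
    d = np.abs(stim[:, None] - prefs[None, :])
    d = np.minimum(d, K - d)                      # circular tuning distance
    return 5.0 * np.exp(-0.5 * d ** 2) + 0.5

stims = rng.integers(0, K, T)
R = rng.poisson(mean_rates(stims.astype(float)))  # T x N spike counts

# Multinomial logistic decoder; the penalty encourages each neuron's weight
# profile to vary smoothly across classes (stand-in for the GP prior).
W, b = np.zeros((N, K)), np.zeros(K)
D = np.diff(np.eye(K), n=2, axis=0)               # (K-2) x K second-difference op
Y = np.eye(K)[stims]                              # one-hot labels
lam, lr = 0.1, 2e-3

for _ in range(500):
    Z = R @ W + b
    Z -= Z.max(axis=1, keepdims=True)
    P = np.exp(Z)
    P /= P.sum(axis=1, keepdims=True)             # softmax class probabilities
    W -= lr * (R.T @ (P - Y) / T + lam * W @ D.T @ D)
    b -= lr * (P - Y).mean(axis=0)

acc = float((np.argmax(R @ W + b, axis=1) == stims).mean())
print(f"decoding accuracy: {acc:.2f} (chance: {1/K:.3f})")
```

In the GPMD proper, per-neuron smoothness hyperparameters are learned, which is what allows uninformative neurons to be pruned automatically.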
Affiliation(s)
- Benjamin Scholl
- University of Pennsylvania, Perelman School of Medicine, Philadelphia, PA 19104, U.S.A.
- Jacob L Yates
- University of California, Berkeley, School of Optometry, Berkeley, CA 94720, U.S.A.
8
Frost BL, Mintchev SM. A high-efficiency model indicating the role of inhibition in the resilience of neuronal networks to damage resulting from traumatic injury. J Comput Neurosci 2023;51:463-474. PMID: 37632630; DOI: 10.1007/s10827-023-00860-0.
Abstract
Recent investigations of traumatic brain injuries have shown that these injuries can result in conformational changes at the level of individual neurons in the cerebral cortex. Focal axonal swelling is one consequence of such injuries and leads to a variable width along the cell axon. Simulations of the electrical properties of axons impacted in this way show that the damage may have a nonlinear deleterious effect on spike-encoded signal transmission. The computational cost of these simulations complicates the investigation of the effects of such damage at the network level. We have developed an efficient algorithm that faithfully reproduces the spike train filtering properties seen in physical simulations. We use this algorithm to explore the impact of focal axonal swelling on small networks of integrate-and-fire neurons. We also explore the effects of architectural modifications to networks impacted in this manner. In all tested networks, our results indicate that the addition of presynaptic inhibitory neurons either increases or leaves unchanged the fidelity, in terms of bandwidth, of the network's processing properties with respect to this damage.
Affiliation(s)
- Brian L Frost
- Electrical Engineering, Columbia University, 500 W 120th St, New York, NY, 10027, USA.
9
Funamizu A, Marbach F, Zador AM. Stable sound decoding despite modulated sound representation in the auditory cortex. Curr Biol 2023;33:4470-4483.e7. PMID: 37802051; PMCID: PMC10665086; DOI: 10.1016/j.cub.2023.09.031.
Abstract
The activity of neurons in the auditory cortex is driven by both sounds and non-sensory context. To investigate the neuronal correlates of non-sensory context, we trained head-fixed mice to perform a two-alternative-choice auditory task in which either reward or stimulus expectation (prior) was manipulated in blocks. Using two-photon calcium imaging to record populations of single neurons in the auditory cortex, we found that both stimulus and reward expectation modulated the activity of these neurons. A linear decoder trained on this population activity could decode stimuli as well or better than predicted by the animal's performance. Interestingly, the optimal decoder was stable even in the face of variable sensory representations. Neither the context nor the mouse's choice could be reliably decoded from the recorded neural activity. Our findings suggest that, in spite of modulation of auditory cortical activity by task priors, the auditory cortex does not represent sufficient information about these priors to exploit them optimally. Thus, the combination of rapidly changing sensory information with more slowly varying task information required for decisions in this task might be represented in brain regions other than the auditory cortex.
Affiliation(s)
- Akihiro Funamizu
- Cold Spring Harbor Laboratory, 1 Bungtown Rd, Cold Spring Harbor, NY 11724, USA.
- Fred Marbach
- Cold Spring Harbor Laboratory, 1 Bungtown Rd, Cold Spring Harbor, NY 11724, USA
- Anthony M Zador
- Cold Spring Harbor Laboratory, 1 Bungtown Rd, Cold Spring Harbor, NY 11724, USA
10
Funamizu A, Marbach F, Zador AM. Stable sound decoding despite modulated sound representation in the auditory cortex. bioRxiv 2023:2023.01.31.526457. PMID: 37745428; PMCID: PMC10515783; DOI: 10.1101/2023.01.31.526457.
Abstract
The activity of neurons in the auditory cortex is driven by both sounds and non-sensory context. To investigate the neuronal correlates of non-sensory context, we trained head-fixed mice to perform a two-alternative choice auditory task in which either reward or stimulus expectation (prior) was manipulated in blocks. Using two-photon calcium imaging to record populations of single neurons in auditory cortex, we found that both stimulus and reward expectation modulated the activity of these neurons. A linear decoder trained on this population activity could decode stimuli as well or better than predicted by the animal's performance. Interestingly, the optimal decoder was stable even in the face of variable sensory representations. Neither the context nor the mouse's choice could be reliably decoded from the recorded neural activity. Our findings suggest that in spite of modulation of auditory cortical activity by task priors, auditory cortex does not represent sufficient information about these priors to exploit them optimally and that decisions in this task require that rapidly changing sensory information be combined with more slowly varying task information extracted and represented in brain regions other than auditory cortex.
Affiliation(s)
- Akihiro Funamizu
- Cold Spring Harbor Laboratory, 1 Bungtown Rd, Cold Spring Harbor, NY 11724, USA
- Present address: Institute for Quantitative Biosciences, the University of Tokyo, 1-1-1 Yayoi, Bunkyo-ku, Tokyo, 1130032, Japan
- Present address: Department of Life Sciences, Graduate School of Arts and Sciences, the University of Tokyo, 3-8-1 Komaba, Meguro-ku, Tokyo, 1538902, Japan
- Fred Marbach
- Cold Spring Harbor Laboratory, 1 Bungtown Rd, Cold Spring Harbor, NY 11724, USA
- Present address: The Francis Crick Institute, 1 Midland Rd, NW1 4AT London, UK
- Anthony M Zador
- Cold Spring Harbor Laboratory, 1 Bungtown Rd, Cold Spring Harbor, NY 11724, USA
11
Mischler G, Raghavan V, Keshishian M, Mesgarani N. naplib-python: Neural acoustic data processing and analysis tools in python. Softw Impacts 2023;17:100541. PMID: 37771949; PMCID: PMC10538526; DOI: 10.1016/j.simpa.2023.100541.
Abstract
Recently, the computational neuroscience community has pushed for more transparent and reproducible methods across the field. In the interest of unifying the domain of auditory neuroscience, naplib-python provides an intuitive and general data structure for handling all neural recordings and stimuli, as well as extensive preprocessing, feature extraction, and analysis tools which operate on that data structure. The package removes many of the complications associated with this domain, such as varying trial durations and multi-modal stimuli, and provides a general-purpose analysis framework that interfaces easily with existing toolboxes used in the field.
Affiliation(s)
- Gavin Mischler
- Mortimer B. Zuckerman Mind Brain Behavior, Columbia University, NY, United States
- Department of Electrical Engineering, Columbia University, NY, United States
- Vinay Raghavan
- Mortimer B. Zuckerman Mind Brain Behavior, Columbia University, NY, United States
- Department of Electrical Engineering, Columbia University, NY, United States
- Menoua Keshishian
- Mortimer B. Zuckerman Mind Brain Behavior, Columbia University, NY, United States
- Department of Electrical Engineering, Columbia University, NY, United States
- Nima Mesgarani
- Mortimer B. Zuckerman Mind Brain Behavior, Columbia University, NY, United States. Corresponding author.
12
DePasquale B, Sussillo D, Abbott LF, Churchland MM. The centrality of population-level factors to network computation is demonstrated by a versatile approach for training spiking networks. Neuron 2023;111:631-649.e10. PMID: 36630961; PMCID: PMC10118067; DOI: 10.1016/j.neuron.2022.12.007.
Abstract
Neural activity is often described in terms of population-level factors extracted from the responses of many neurons. Factors provide a lower-dimensional description with the aim of shedding light on network computations. Yet, mechanistically, computations are performed not by continuously valued factors but by interactions among neurons that spike discretely and variably. Models provide a means of bridging these levels of description. We developed a general method for training model networks of spiking neurons by leveraging factors extracted from either data or firing-rate-based networks. In addition to providing a useful model-building framework, this formalism illustrates how reliable and continuously valued factors can arise from seemingly stochastic spiking. Our framework establishes procedures for embedding this property in network models with different levels of realism. The relationship between spikes and factors in such networks provides a foundation for interpreting (and subtly redefining) commonly used quantities such as firing rates.
Affiliation(s)
- Brian DePasquale
- Princeton Neuroscience Institute, Princeton University, Princeton NJ, USA; Department of Neuroscience, Columbia University, New York, NY, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY, USA.
- David Sussillo
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- L F Abbott
- Department of Neuroscience, Columbia University, New York, NY, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Department of Physiology and Cellular Biophysics, Columbia University, New York, NY, USA; Kavli Institute for Brain Science, Columbia University, New York, NY, USA
- Mark M Churchland
- Department of Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Kavli Institute for Brain Science, Columbia University, New York, NY, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
13
Leclère T, Johannesen PT, Wijetillake A, Segovia-Martínez M, Lopez-Poveda EA. A computational modelling framework for assessing information transmission with cochlear implants. Hear Res 2023;432:108744. PMID: 37004271; DOI: 10.1016/j.heares.2023.108744.
Abstract
Computational models are useful tools to investigate scientific questions that would be complicated to address using an experimental approach. In the context of cochlear-implants (CIs), being able to simulate the neural activity evoked by these devices could help in understanding their limitations to provide natural hearing. Here, we present a computational modelling framework to quantify the transmission of information from sound to spikes in the auditory nerve of a CI user. The framework includes a model to simulate the electrical current waveform sensed by each auditory nerve fiber (electrode-neuron interface), followed by a model to simulate the timing at which a nerve fiber spikes in response to a current waveform (auditory nerve fiber model). Information theory is then applied to determine the amount of information transmitted from a suitable reference signal (e.g., the acoustic stimulus) to a simulated population of auditory nerve fibers. As a use case example, the framework is applied to simulate published data on modulation detection by CI users obtained using direct stimulation via a single electrode. Current spread as well as the number of fibers were varied independently to illustrate the framework capabilities. Simulations reasonably matched experimental data and suggested that the encoded modulation information is proportional to the total neural response. They also suggested that amplitude modulation is well encoded in the auditory nerve for modulation rates up to 1000 Hz and that the variability in modulation sensitivity across CI users is partly because different CI users use different references for detecting modulation.
Affiliation(s)
- Thibaud Leclère
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca 37007, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca 37007, Spain
- Peter T Johannesen
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca 37007, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca 37007, Spain
- Enrique A Lopez-Poveda
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca 37007, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca 37007, Spain; Departamento de Cirugía, Facultad de Medicina, Universidad de Salamanca, Salamanca 37007, Spain.
14
Koren V, Bondanelli G, Panzeri S. Computational methods to study information processing in neural circuits. Comput Struct Biotechnol J 2023;21:910-922. PMID: 36698970; PMCID: PMC9851868; DOI: 10.1016/j.csbj.2023.01.009.
Abstract
The brain is an information processing machine and thus naturally lends itself to be studied using computational tools based on the principles of information theory. For this reason, computational methods based on or inspired by information theory have been a cornerstone of practical and conceptual progress in neuroscience. In this Review, we address how concepts and computational tools related to information theory are spurring the development of principled theories of information processing in neural circuits and the development of influential mathematical methods for the analyses of neural population recordings. We review how these computational approaches reveal mechanisms of essential functions performed by neural circuits. These functions include efficiently encoding sensory information and facilitating the transmission of information to downstream brain areas to inform and guide behavior. Finally, we discuss how further progress and insights can be achieved, in particular by studying how competing requirements of neural encoding and readout may be optimally traded off to optimize neural information processing.
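A minimal example of the kind of information-theoretic quantity such methods compute is the plug-in ("direct") estimate of the mutual information between a stimulus and a neural response. The sketch below uses synthetic data and illustrative parameters; note that the plug-in estimator carries an upward limited-sampling bias, which the methods reviewed here correct for in practice.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ensemble: binary stimulus, Poisson spike counts whose mean depends on it.
T = 20000
s = rng.integers(0, 2, T)                      # equiprobable stimulus, 0 or 1
r = rng.poisson(np.where(s == 1, 6.0, 2.0))    # spike count in a response window

def entropy_bits(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Plug-in estimate of I(S;R) = H(R) - H(R|S).
nbins = r.max() + 1
h_r = entropy_bits(np.bincount(r, minlength=nbins) / T)
h_r_given_s = 0.0
for k in (0, 1):
    rk = r[s == k]
    h_r_given_s += (rk.size / T) * entropy_bits(np.bincount(rk, minlength=nbins) / rk.size)

info = h_r - h_r_given_s
print(f"I(S;R) ≈ {info:.2f} bits (a binary stimulus caps this at 1 bit)")
```

With a binary, equiprobable stimulus the estimate is bounded by H(S) = 1 bit; the gap below that bound quantifies how much the overlapping count distributions limit encoding.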
Affiliation(s)
- Veronika Koren
- Department of Excellence for Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), Falkenried 94, Hamburg 20251, Germany
- Stefano Panzeri
- Department of Excellence for Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), Falkenried 94, Hamburg 20251, Germany
- Istituto Italiano di Tecnologia, Via Melen 83, Genova 16152, Italy
- Corresponding author at: Department of Excellence for Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), Falkenried 94, Hamburg 20251, Germany.
15
Škorjanc A, Kreft M, Benda J. Stimulator compensation and generation of Gaussian noise stimuli with defined amplitude spectra for studying input–output relations of sensory systems. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2022; 209:361-372. [PMID: 36527489 DOI: 10.1007/s00359-022-01597-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 06/27/2022] [Revised: 11/14/2022] [Accepted: 11/17/2022] [Indexed: 12/23/2022]
Abstract
Gaussian noise is an important stimulus for the study of biological systems, especially sensory and neural systems. Since these systems are inherently nonlinear, the properties of the noise strongly influence the outcome of the analysis. Therefore, it is crucial to use a well-defined and controlled noise stimulus. In this paper, we first use the example of an insect filiform sensillum, a simple mechanoreceptor with a single sensory cell, to show that changes in the amplitude and spectral properties of the noise stimulus indeed affect the linear transfer function of the sensillum. We then explain step-by-step how to use the inverse fast Fourier transform to generate a Gaussian noise that has an arbitrary user-defined amplitude spectrum, including a band-limited white noise with a perfectly sharp cutoff edge. Finally, we demonstrate how such a perfect band-limited Gaussian white noise stimulus can also be generated with a non-perfect stimulator using a simple procedure that compensates for the filtering properties of the stimulator. With this approach, one can generate well-defined Gaussian noise stimuli that can be adapted to any application. For example, one can generate visual, sound, or vibrational stimuli for experimental research in visual physiology, auditory physiology, and biotremology, as well as inputs for testing various models in theoretical research.
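The inverse-FFT recipe the abstract describes can be sketched as follows (a minimal illustration under stated assumptions, not the authors' code: the flat-to-cutoff amplitude spectrum, uniform random phases, and unit-variance normalization are choices made here):

```python
import numpy as np

def band_limited_gaussian_noise(n_samples, dt, f_cutoff, rng=None):
    """Gaussian noise with a user-defined amplitude spectrum via the
    inverse FFT: here flat up to f_cutoff with a perfectly sharp edge."""
    rng = np.random.default_rng() if rng is None else rng
    freqs = np.fft.rfftfreq(n_samples, dt)
    # Desired amplitude spectrum: band-limited white with a sharp cutoff.
    amplitude = (freqs <= f_cutoff).astype(float)
    # Uniformly random phases make the time-domain samples approximately
    # Gaussian (sum of many independent sinusoids).
    phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
    spectrum = amplitude * np.exp(1j * phases)
    spectrum[0] = 0.0  # enforce zero mean
    noise = np.fft.irfft(spectrum, n_samples)
    return noise / noise.std()  # normalize to unit variance

stim = band_limited_gaussian_noise(2**14, dt=1e-4, f_cutoff=300.0)
```

In this framing, the stimulator compensation the paper describes amounts to dividing the desired amplitude spectrum by the stimulator's measured transfer function before the inverse transform.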
Affiliation(s)
- Aleš Škorjanc
- Department of Biology, Biotechnical Faculty, University of Ljubljana, Večna pot 111, 1000, Ljubljana, Slovenia.
- Marko Kreft
- Department of Biology, Biotechnical Faculty, University of Ljubljana, Večna pot 111, 1000, Ljubljana, Slovenia
- Institute of Pathophysiology, Faculty of Medicine, University of Ljubljana, Zaloška 4, 1000, Ljubljana, Slovenia
- Laboratory of Cell Engineering, Celica Biomedical, Tehnološki park 24, 1000, Ljubljana, Slovenia
- Jan Benda
- Institute for Neurobiology, Eberhard Karls Universität, 72076, Tübingen, Germany
16
Modeling temporal information encoding by the population of fibers in the healthy and synaptopathic auditory nerve. Hear Res 2022; 426:108621. [PMID: 36182814 DOI: 10.1016/j.heares.2022.108621] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Received: 02/16/2022] [Revised: 09/16/2022] [Accepted: 09/20/2022] [Indexed: 11/23/2022]
Abstract
We report a theoretical study aimed at investigating the impact of cochlear synapse loss (synaptopathy) on the encoding of the envelope (ENV) and temporal fine structure (TFS) of sounds by the population of auditory nerve fibers. A computational model was used to simulate auditory-nerve spike trains evoked by sinusoidally amplitude-modulated (AM) tones at 10 Hz with various carrier frequencies and levels. The model included 16 cochlear channels with characteristic frequencies (CFs) from 250 Hz to 8 kHz. Each channel was innervated by 3, 4 and 10 fibers with low (LSR), medium (MSR), and high spontaneous rates (HSR), respectively. For each channel, spike trains were collapsed into three separate 'population' post-stimulus time histograms (PSTHs), one per fiber type. Information theory was applied to reconstruct the stimulus waveform, ENV, and TFS from one or more PSTHs in a mathematically optimal way. The quality of the reconstruction was regarded as an estimate of the information present in the used PSTHs. Various synaptopathy scenarios were simulated by removing fibers of specific types and/or cochlear regions before stimulus reconstruction. We found that the TFS was predominantly encoded by HSR fibers at all stimulus carrier frequencies and levels. The encoding of the ENV was more complex. At lower levels, the ENV was predominantly encoded by HSR fibers with CFs near the stimulus carrier frequency. At higher levels, the ENV was equally well or better encoded by HSR fibers with CFs different from the AM carrier frequency as by LSR fibers with CFs at the carrier frequency. Altogether, findings suggest that a healthy population of HSR fibers (i.e., including fibers with CFs around and remote from the AM carrier frequency) might be sufficient to encode the ENV and TFS over a wide range of stimulus levels. Findings are discussed regarding their relevance for diagnosing synaptopathy using non-invasive ENV- and TFS-based measures.
17
Chou KF, Boyd AD, Best V, Colburn HS, Sen K. A biologically oriented algorithm for spatial sound segregation. Front Neurosci 2022; 16:1004071. [PMID: 36312015 PMCID: PMC9614053 DOI: 10.3389/fnins.2022.1004071] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 07/26/2022] [Accepted: 09/28/2022] [Indexed: 11/13/2022]
Abstract
Listening in an acoustically cluttered scene remains a difficult task for both machines and hearing-impaired listeners. Normal-hearing listeners accomplish this task with relative ease by segregating the scene into its constituent sound sources, then selecting and attending to a target source. An assistive listening device that mimics the biological mechanisms underlying this behavior may provide an effective solution for those with difficulty listening in acoustically cluttered environments (e.g., a cocktail party). Here, we present a binaural sound segregation algorithm based on a hierarchical network model of the auditory system. In the algorithm, binaural sound inputs first drive populations of neurons tuned to specific spatial locations and frequencies. The spiking responses of neurons in the output layer are then reconstructed into audible waveforms via a novel reconstruction method. We evaluate the performance of the algorithm with a speech-on-speech intelligibility task in normal-hearing listeners. This two-microphone-input algorithm is shown to provide listeners with perceptual benefit similar to that of a 16-microphone acoustic beamformer. These results demonstrate the promise of this biologically inspired algorithm for enhancing selective listening in challenging multi-talker scenes.
Affiliation(s)
- Kenny F. Chou
- Department of Biomedical Engineering, Boston University, Boston, MA, United States
- Alexander D. Boyd
- Department of Biomedical Engineering, Boston University, Boston, MA, United States
- Virginia Best
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA, United States
- H. Steven Colburn
- Department of Biomedical Engineering, Boston University, Boston, MA, United States
- Kamal Sen
- Department of Biomedical Engineering, Boston University, Boston, MA, United States
- *Correspondence: Kamal Sen,
18
Franke K, Willeke KF, Ponder K, Galdamez M, Zhou N, Muhammad T, Patel S, Froudarakis E, Reimer J, Sinz FH, Tolias AS. State-dependent pupil dilation rapidly shifts visual feature selectivity. Nature 2022; 610:128-134. [PMID: 36171291 PMCID: PMC10635574 DOI: 10.1038/s41586-022-05270-3] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Received: 12/05/2021] [Accepted: 08/23/2022] [Indexed: 11/09/2022]
Abstract
To increase computational flexibility, the processing of sensory inputs changes with behavioural context. In the visual system, active behavioural states characterized by motor activity and pupil dilation enhance sensory responses, but typically leave the preferred stimuli of neurons unchanged. Here we find that behavioural state also modulates stimulus selectivity in the mouse visual cortex in the context of coloured natural scenes. Using population imaging in behaving mice, pharmacology and deep neural network modelling, we identified a rapid shift in colour selectivity towards ultraviolet stimuli during an active behavioural state. This was exclusively caused by state-dependent pupil dilation, which resulted in a dynamic switch from rod to cone photoreceptors, thereby extending their role beyond night and day vision. The change in tuning facilitated the decoding of ethological stimuli, such as aerial predators against the twilight sky. For decades, studies in neuroscience and cognitive science have used pupil dilation as an indirect measure of brain state. Our data suggest that, in addition, state-dependent pupil dilation itself tunes visual representations to behavioural demands by differentially recruiting rods and cones on fast timescales.
Affiliation(s)
- Katrin Franke
- Institute for Ophthalmic Research, Tübingen University, Tübingen, Germany.
- Center for Integrative Neuroscience, Tübingen University, Tübingen, Germany.
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA.
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA.
- Konstantin F Willeke
- Institute for Bioinformatics and Medical Informatics, Tübingen University, Tübingen, Germany
- Department of Computer Science, Göttingen University, Göttingen, Germany
- Kayla Ponder
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Mario Galdamez
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Na Zhou
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Taliah Muhammad
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Saumil Patel
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Emmanouil Froudarakis
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Institute of Molecular Biology and Biotechnology, Foundation for Research and Technology Hellas, Heraklion, Greece
- Jacob Reimer
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Fabian H Sinz
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Institute for Bioinformatics and Medical Informatics, Tübingen University, Tübingen, Germany
- Department of Computer Science, Göttingen University, Göttingen, Germany
- Andreas S Tolias
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA
19
Levi A, Spivak L, Sloin HE, Someck S, Stark E. Error correction and improved precision of spike timing in converging cortical networks. Cell Rep 2022; 40:111383. [PMID: 36130516 PMCID: PMC9513803 DOI: 10.1016/j.celrep.2022.111383] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Received: 03/08/2022] [Revised: 05/26/2022] [Accepted: 08/28/2022] [Indexed: 11/20/2022]
Abstract
The brain propagates neuronal signals accurately and rapidly. Nevertheless, whether and how a pool of cortical neurons transmits an undistorted message to a target remains unclear. We apply optogenetic white noise signals to small assemblies of cortical pyramidal cells (PYRs) in freely moving mice. The directly activated PYRs exhibit a spike timing precision of several milliseconds. Instead of losing precision, interneurons driven via synaptic activation exhibit higher precision with respect to the white noise signal. Compared with directly activated PYRs, postsynaptic interneuron spike trains allow better signal reconstruction, demonstrating error correction. Data-driven modeling shows that nonlinear amplification of coincident spikes can generate error correction and improved precision. Over multiple applications of the same signal, postsynaptic interneuron spiking is most reliable at timescales ten times shorter than those of the presynaptic PYR, exhibiting temporal coding. Similar results are observed in hippocampal region CA1. Coincidence detection of convergent inputs enables messages to be precisely propagated between cortical PYRs and interneurons.
- PYR-to-interneuron spike transmission exhibits error correction and improved precision
- Interneuron precision is higher when a larger pool of presynaptic PYRs is recruited
- Error correction and improved precision are consistent with coincidence detection
- Interneurons activated by synaptic transmission act as temporal coders
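The coincidence-detection mechanism the modeling points to can be illustrated with a toy sketch (the function and threshold are illustrative, not the paper's data-driven model): a postsynaptic unit fires only in time bins where enough presynaptic PYRs spike together, which nonlinearly amplifies coincident spikes and suppresses stray ones.

```python
import numpy as np

def coincidence_detector(presyn_spikes, threshold):
    """Binary postsynaptic spike train: fire in any time bin where at
    least `threshold` presynaptic neurons spike together (nonlinear
    amplification of coincident spikes)."""
    presyn_spikes = np.asarray(presyn_spikes)
    return (presyn_spikes.sum(axis=0) >= threshold).astype(int)

# Three presynaptic trains over four time bins; only bins where two or
# more inputs coincide drive the postsynaptic unit:
presyn = np.array([[1, 0, 1, 0],
                   [1, 1, 0, 0],
                   [0, 0, 1, 0]])
post = coincidence_detector(presyn, threshold=2)  # -> [1, 0, 1, 0]
```

Isolated spikes (the second column) are filtered out, which is one intuition for how convergence can correct timing errors in the transmitted message.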
Affiliation(s)
- Amir Levi
- Sagol School of Neuroscience and Department of Physiology and Pharmacology, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv 6997801, Israel
- Lidor Spivak
- Sagol School of Neuroscience and Department of Physiology and Pharmacology, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv 6997801, Israel
- Hadas E Sloin
- Sagol School of Neuroscience and Department of Physiology and Pharmacology, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv 6997801, Israel
- Shirly Someck
- Sagol School of Neuroscience and Department of Physiology and Pharmacology, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv 6997801, Israel
- Eran Stark
- Sagol School of Neuroscience and Department of Physiology and Pharmacology, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv 6997801, Israel.
20
Hernández DG, Sober SJ, Nemenman I. Unsupervised Bayesian Ising Approximation for decoding neural activity and other biological dictionaries. eLife 2022; 11:68192. [PMID: 35315769 PMCID: PMC8989415 DOI: 10.7554/elife.68192] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Received: 03/09/2021] [Accepted: 03/19/2022] [Indexed: 11/13/2022]
Abstract
The problem of deciphering how low-level patterns (action potentials in the brain, amino acids in a protein, etc.) drive high-level biological features (sensorimotor behavior, enzymatic function) represents the central challenge of quantitative biology. The lack of general methods for doing so from the size of datasets that can be collected experimentally severely limits our understanding of the biological world. For example, in neuroscience, some sensory and motor codes have been shown to consist of precisely timed multi-spike patterns. However, the combinatorial complexity of such pattern codes has precluded the development of methods for their comprehensive analysis. Thus, just as it is hard to predict a protein's function based on its sequence, we still do not understand how to accurately predict an organism's behavior based on neural activity. Here we introduce the unsupervised Bayesian Ising Approximation (uBIA) for solving this class of problems. We demonstrate its utility in an application to neural data, detecting precisely timed spike patterns that code for specific motor behaviors in a songbird vocal system. In data recorded during singing from neurons in a vocal control region, our method detects such codewords with an arbitrary number of spikes, does so from small data sets, and accounts for dependencies in occurrences of codewords. Detecting such comprehensive motor control dictionaries can improve our understanding of skilled motor control and the neural bases of sensorimotor learning in animals. To further illustrate the utility of uBIA, we used it to identify the distinct sets of activity patterns that encode vocal motor exploration versus typical song production. Crucially, our method can be used not only for analysis of neural systems, but also for understanding the structure of correlations in other biological and nonbiological datasets.
Affiliation(s)
- Damián G Hernández
- Department of Medical Physics, Centro Atómico Bariloche and Instituto Balseiro, Bariloche, Argentina
- Samuel J Sober
- Department of Biology, Emory University, Atlanta, United States
- Ilya Nemenman
- Department of Physics, Emory University, Atlanta, United States
21
Marino J. Predictive Coding, Variational Autoencoders, and Biological Connections. Neural Comput 2021; 34:1-44. [PMID: 34758480 DOI: 10.1162/neco_a_01458] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Received: 03/14/2021] [Accepted: 08/14/2021] [Indexed: 11/04/2022]
Abstract
We present a review of predictive coding, from theoretical neuroscience, and variational autoencoders, from machine learning, identifying the common origin and mathematical framework underlying both areas. As each area is prominent within its respective field, more firmly connecting these areas could prove useful in the dialogue between neuroscience and machine learning. After reviewing each area, we discuss two possible correspondences implied by this perspective: cortical pyramidal dendrites as analogous to (nonlinear) deep networks and lateral inhibition as analogous to normalizing flows. These connections may provide new directions for further investigations in each field.
Affiliation(s)
- Joseph Marino
- Computation and Neural Systems, California Institute of Technology, Pasadena, CA 91125, U.S.A.
22
Predictive encoding of motion begins in the primate retina. Nat Neurosci 2021; 24:1280-1291. [PMID: 34341586 PMCID: PMC8728393 DOI: 10.1038/s41593-021-00899-1] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Received: 09/10/2020] [Accepted: 06/25/2021] [Indexed: 02/06/2023]
Abstract
Predictive motion encoding is an important aspect of visually guided behavior that allows animals to estimate the trajectory of moving objects. Motion prediction is understood primarily in the context of translational motion, but the environment contains other types of behaviorally salient motion correlation such as those produced by approaching or receding objects. However, the neural mechanisms that detect and predictively encode these correlations remain unclear. We report here that four of the parallel output pathways in the primate retina encode predictive motion information, and this encoding occurs for several classes of spatiotemporal correlation that are found in natural vision. Such predictive coding can be explained by known nonlinear circuit mechanisms that produce a nearly optimal encoding, with transmitted information approaching the theoretical limit imposed by the stimulus itself. Thus, these neural circuit mechanisms efficiently separate predictive information from nonpredictive information during the encoding process.
23
Williams E, Payeur A, Gidon A, Naud R. Neural burst codes disguised as rate codes. Sci Rep 2021; 11:15910. [PMID: 34354118 PMCID: PMC8342467 DOI: 10.1038/s41598-021-95037-z] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 04/26/2021] [Accepted: 07/13/2021] [Indexed: 02/07/2023]
Abstract
The burst coding hypothesis posits that the occurrence of sudden high-frequency patterns of action potentials constitutes a salient syllable of the neural code. Many neurons, however, do not produce clearly demarcated bursts, an observation invoked to rule out the pervasiveness of this coding scheme across brain areas and cell types. Here we ask how detrimental ambiguous spike patterns, those that are neither clearly bursts nor isolated spikes, are to neuronal information transfer. We addressed this question using information theory and computational simulations. By quantifying how information transmission depends on firing statistics, we found that the information transmitted is not strongly influenced by the presence of clearly demarcated modes in the interspike interval distribution, a feature often used to identify the presence of burst coding. Instead, we found that neurons having unimodal interval distributions were still able to ascribe different meanings to bursts and isolated spikes. In this regime, information transmission depends on dynamical properties of the synapses as well as the length and relative frequency of bursts. Furthermore, we found that common metrics used to quantify burstiness were unable to predict the degree to which bursts could be used to carry information. Our results provide guiding principles for the implementation of coding strategies based on spike-timing patterns, and show that even unimodal firing statistics can be consistent with a bivariate neural code.
Affiliation(s)
- Ezekiel Williams
- Department of Mathematics and Statistics, University of Ottawa, 150 Louis Pasteur, Ottawa, K1N 6N5, Canada
- Alexandre Payeur
- University of Ottawa Brain and Mind Institute, Centre for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, 451 Smyth Rd., Ottawa, K1H 8M5, Canada
- Albert Gidon
- Institute for Biology, Humboldt-Universität zu Berlin, Berlin, Germany
- Richard Naud
- University of Ottawa Brain and Mind Institute, Centre for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, 451 Smyth Rd., Ottawa, K1H 8M5, Canada
- Department of Physics, University of Ottawa, 150 Louis Pasteur, Ottawa, K1N 6N5, Canada
24
Zbili M, Rama S. A Quick and Easy Way to Estimate Entropy and Mutual Information for Neuroscience. Front Neuroinform 2021; 15:596443. [PMID: 34211385 PMCID: PMC8239197 DOI: 10.3389/fninf.2021.596443] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Received: 09/02/2020] [Accepted: 05/12/2021] [Indexed: 11/24/2022]
Abstract
Calculations of entropy of a signal or mutual information between two variables are valuable analytical tools in the field of neuroscience. They can be applied to all types of data, capture non-linear interactions and are model independent. Yet the limited size and number of recordings one can collect in a series of experiments make their calculation highly prone to sampling bias. Mathematical methods to overcome this so-called "sampling disaster" exist, but require significant expertise, great time and computational costs. As such, there is a need for a simple, unbiased and computationally efficient tool for estimating the level of entropy and mutual information. In this article, we propose that application of entropy-encoding compression algorithms widely used in text and image compression fulfill these requirements. By simply saving the signal in PNG picture format and measuring the size of the file on the hard drive, we can estimate entropy changes through different conditions. Furthermore, with some simple modifications of the PNG file, we can also estimate the evolution of mutual information between a stimulus and the observed responses through different conditions. We first demonstrate the applicability of this method using white-noise-like signals. Then, while this method can be used in all kinds of experimental conditions, we provide examples of its application in patch-clamp recordings, detection of place cells and histological data. Although this method does not give an absolute value of entropy or mutual information, it is mathematically correct, and its simplicity and broad use make it a powerful tool for their estimation through experiments.
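The core idea, using losslessly compressed size as an entropy proxy, can be sketched with zlib, whose DEFLATE coder underlies PNG compression (a minimal illustration of the principle, not the authors' PNG pipeline; signal names and sizes are arbitrary):

```python
import zlib
import numpy as np

def compressed_size(arr):
    """Size in bytes of the losslessly compressed signal; a larger size
    indicates higher entropy of the underlying byte stream."""
    return len(zlib.compress(np.asarray(arr, dtype=np.uint8).tobytes(), 9))

rng = np.random.default_rng(0)
n = 100_000
constant = np.zeros(n)            # ~0 bits/symbol
binary = rng.integers(0, 2, n)    # ~1 bit/symbol
uniform = rng.integers(0, 256, n) # ~8 bits/symbol

# Compressed sizes track the entropy ordering of the three signals:
sizes = [compressed_size(x) for x in (constant, binary, uniform)]
```

As the abstract notes, this yields relative rather than absolute entropy estimates: the useful comparison is between compressed sizes of the same signal recorded under different conditions.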
Affiliation(s)
- Mickael Zbili
- Lyon Neuroscience Research Center (CRNL), Inserm U1028, CNRS UMR 5292, Université Claude Bernard Lyon1, Bron, France
- Sylvain Rama
- Laboratory of Synaptic Imaging, Department of Clinical and Experimental Epilepsy, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
25
Levy WB, Calvert VG. Communication consumes 35 times more energy than computation in the human cortex, but both costs are needed to predict synapse number. Proc Natl Acad Sci U S A 2021; 118:e2008173118. [PMID: 33906943 PMCID: PMC8106317 DOI: 10.1073/pnas.2008173118] [Citation(s) in RCA: 38] [Impact Index Per Article: 12.7] [Indexed: 12/11/2022]
Abstract
Darwinian evolution tends to produce energy-efficient outcomes. On the other hand, energy limits computation, be it neural and probabilistic or digital and logical. Taking a particular energy-efficient viewpoint, we define neural computation and make use of an energy-constrained computational function. This function can be optimized over a variable that is proportional to the number of synapses per neuron. This function also implies a specific distinction between adenosine triphosphate (ATP)-consuming processes, especially computation per se vs. the communication processes of action potentials and transmitter release. Thus, to apply this mathematical function requires an energy audit with a particular partitioning of energy consumption that differs from earlier work. The audit points out that, rather than the oft-quoted 20 W of glucose available to the human brain, the fraction partitioned to cortical computation is only 0.1 W of ATP [L. Sokoloff, Handb. Physiol. Sect. I Neurophysiol. 3, 1843-1864 (1960)] and [J. Sawada, D. S. Modha, "Synapse: Scalable energy-efficient neurosynaptic computing" in Application of Concurrency to System Design (ACSD) (2013), pp. 14-15]. On the other hand, long-distance communication costs are 35-fold greater, 3.5 W. Other findings include 1) a [Formula: see text]-fold discrepancy between biological and lowest possible values of a neuron's computational efficiency and 2) two predictions of N, the number of synaptic transmissions needed to fire a neuron (2,500 vs. 2,000).
Affiliation(s)
- William B Levy
- Department of Neurosurgery, University of Virginia, Charlottesville, VA 22908;
- Department of Psychology, University of Virginia, Charlottesville, VA 22904
- Victoria G Calvert
- College of Arts and Sciences, University of Virginia, Charlottesville, VA 22903
26
WOLIF: An efficiently tuned classifier that learns to classify non-linear temporal patterns without hidden layers. Appl Intell 2021. [DOI: 10.1007/s10489-020-01934-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Indexed: 10/23/2022]
27
Laturnus S, Hoffmann A, Chakrabarti S, Schwarz C. Functional analysis of information rates conveyed by rat whisker-related trigeminal nuclei neurons. J Neurophysiol 2021; 125:1517-1531. [PMID: 33689491 DOI: 10.1152/jn.00350.2020] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Indexed: 11/22/2022]
Abstract
The rat whisker system connects the tactile environment with the somatosensory thalamocortical system using only two synaptic stages. The encoding properties of the first stage, the primary afferents with somas in the trigeminal ganglion (TG), have been well studied, whereas much less is known about the second stage, the brainstem trigeminal nuclei (TN). The TN are a computational hub giving rise to parallel ascending tactile pathways and receiving feedback from many brain sites. We asked whether the encoding properties of TG neurons are kept by two trigeminal nuclei, the principalis (Pr5) and the spinalis interpolaris (Sp5i), which respectively give rise to two "lemniscal" and two "nonlemniscal" pathways. Single units were recorded in anesthetized rats while a single whisker was deflected on a band-limited white-noise trajectory. Using information-theoretic methods and spike-triggered mixture models (STM), we found that both nuclei encode the stimulus locally in time, i.e., stimulus features more than 10 ms in the past do not significantly influence spike generation. They further encode stimulus kinematics in multiple, distinct response fields, indicating encoding characteristics beyond previously described directional responses. Compared with the TG, Pr5 and Sp5i gave rise to lower spike and information rates, but the information rate per spike was on par with the TG. Importantly, both brainstem nuclei were found to largely keep the encoding properties of primary afferents, i.e., local encoding and kinematic response fields. The preservation of encoding properties in channels assumed to serve different functions seems surprising. We discuss the possibility that it might reflect specific constraints of frictional whisker contact with object surfaces. NEW & NOTEWORTHY We studied two trigeminal nuclei containing the second neuron on the tactile pathway of whisker-related tactile information in rats. We found that the subnuclei, traditionally assumed to give rise to functional tactile channels, nevertheless transfer primary afferent information with quite similar properties in terms of integration time and kinematic profile. We discuss whether such commonality may be due to the requirement to adapt to the physical constraints of frictional whisker contact.
Affiliation(s)
- Sophie Laturnus
- Systems Neuroscience, Werner Reichardt Center for Integrative Neuroscience, Eberhard Karls University, Tübingen, Germany
- Graduate Training Center for Neuroscience, Eberhard Karls University, Tübingen, Germany
- Adrian Hoffmann
- Systems Neuroscience, Werner Reichardt Center for Integrative Neuroscience, Eberhard Karls University, Tübingen, Germany
- Graduate Training Center for Neuroscience, Eberhard Karls University, Tübingen, Germany
- Shubhodeep Chakrabarti
- Systems Neuroscience, Werner Reichardt Center for Integrative Neuroscience, Eberhard Karls University, Tübingen, Germany
- Hertie Institute for Clinical Brain Research, Eberhard Karls University, Tübingen, Germany
- Cornelius Schwarz
- Systems Neuroscience, Werner Reichardt Center for Integrative Neuroscience, Eberhard Karls University, Tübingen, Germany
- Hertie Institute for Clinical Brain Research, Eberhard Karls University, Tübingen, Germany
| |
Collapse
|
28
|
Pregowska A. Signal Fluctuations and the Information Transmission Rates in Binary Communication Channels. ENTROPY 2021; 23:e23010092. [PMID: 33435243 PMCID: PMC7826906 DOI: 10.3390/e23010092] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/01/2020] [Revised: 12/26/2020] [Accepted: 01/08/2021] [Indexed: 11/25/2022]
Abstract
In the nervous system, information is conveyed by sequences of action potentials, called spike trains. As MacKay and McCulloch suggested, spike trains can be represented as bit sequences coming from Information Sources (IS). Previously, we studied the relations between the Information Transmission Rate (ITR) of spike trains and their correlations and frequencies. Here, I concentrate on the problem of how spike fluctuations affect the ITR. The IS are typically modeled as stationary stochastic processes, which I consider here as two-state Markov processes. As a measure of spike-train fluctuation, I take the standard deviation σ, which measures the average fluctuation of spikes around the average spike frequency. I found that the character of the relation between ITR and signal fluctuations strongly depends on the parameter s, a sum of transition probabilities from the no-spike state to the spike state. The Information Transmission Rate was estimated by expressions depending on the values of the signal fluctuations and the parameter s. It turned out that for s<1 the quotient ITR/σ has a maximum and can tend to zero depending on the transition probabilities, while for 1<s the quotient ITR/σ is bounded away from 0. Additionally, it was shown that the quotient of ITR by the variance behaves in a completely different way. Similar behavior was observed when the classical Shannon entropy terms in the Markov entropy formula are replaced by their polynomial approximations. My results suggest that in a noisier environment (1<s), to obtain appropriate reliability and efficiency of transmission, an IS with a higher tendency to transition from the no-spike to the spike state should be applied. Such a selection of appropriate parameters plays an important role in designing learning mechanisms to obtain networks with higher performance.
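The two-state Markov source described in this abstract can be sketched numerically. The toy example below uses my own parametrization, not the paper's exact expressions: it computes the stationary spike probability, the entropy rate in bits per time step (taken here as a stand-in for the ITR), and the standard deviation σ of the binary spike indicator.

```python
import numpy as np

def two_state_markov_stats(p01, p10):
    """Stationary distribution, entropy rate (bits/step), and the standard
    deviation of the binary spike indicator for a two-state Markov chain
    (state 0 = no spike, state 1 = spike).
    p01 = P(spike | no spike), p10 = P(no spike | spike)."""
    pi1 = p01 / (p01 + p10)          # stationary P(spike)
    pi0 = 1.0 - pi1

    def h(p):                         # binary entropy in bits
        return -sum(x * np.log2(x) for x in (p, 1.0 - p) if x > 0)

    entropy_rate = pi0 * h(p01) + pi1 * h(p10)
    sigma = np.sqrt(pi1 * pi0)        # std of the Bernoulli spike indicator
    return pi1, entropy_rate, sigma

pi1, itr, sigma = two_state_markov_stats(0.2, 0.4)
```

Scanning `p01` and `p10` while tracking `itr / sigma` reproduces the kind of quotient analysis the abstract describes, under the assumption that the ITR is approximated by the chain's entropy rate.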
Collapse
Affiliation(s)
- Agnieszka Pregowska
- Institute of Fundamental Technological Research, Polish Academy of Sciences, Pawinskiego 5B, 02-106 Warsaw, Poland
| |
Collapse
|
29
|
Brackbill N, Rhoades C, Kling A, Shah NP, Sher A, Litke AM, Chichilnisky EJ. Reconstruction of natural images from responses of primate retinal ganglion cells. eLife 2020; 9:e58516. [PMID: 33146609 PMCID: PMC7752138 DOI: 10.7554/elife.58516] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2020] [Accepted: 11/02/2020] [Indexed: 11/23/2022] Open
Abstract
The visual message conveyed by a retinal ganglion cell (RGC) is often summarized by its spatial receptive field, but in principle also depends on the responses of other RGCs and natural image statistics. This possibility was explored by linear reconstruction of natural images from responses of the four numerically-dominant macaque RGC types. Reconstructions were highly consistent across retinas. The optimal reconstruction filter for each RGC - its visual message - reflected natural image statistics, and resembled the receptive field only when nearby, same-type cells were included. ON and OFF cells conveyed largely independent, complementary representations, and parasol and midget cells conveyed distinct features. Correlated activity and nonlinearities had statistically significant but minor effects on reconstruction. Simulated reconstructions, using linear-nonlinear cascade models of RGC light responses that incorporated measured spatial properties and nonlinearities, produced similar results. Spatiotemporal reconstructions exhibited similar spatial properties, suggesting that the results are relevant for natural vision.
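As an illustration of the linear reconstruction approach, the sketch below fits reconstruction filters by regularized least squares. All data are synthetic, and the cell count, noise level, and ridge parameter are arbitrary assumptions, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_pixels, n_cells = 500, 64, 20

# synthetic stimuli and linear-plus-noise "RGC" responses
stimuli = rng.standard_normal((n_trials, n_pixels))
receptive_fields = rng.standard_normal((n_cells, n_pixels)) / np.sqrt(n_pixels)
responses = stimuli @ receptive_fields.T + 0.1 * rng.standard_normal((n_trials, n_cells))

# optimal linear reconstruction filters: solve responses @ W ≈ stimuli;
# a small ridge term stabilizes the normal equations
lam = 1e-3
W = np.linalg.solve(responses.T @ responses + lam * np.eye(n_cells),
                    responses.T @ stimuli)     # (n_cells, n_pixels): one filter per cell

reconstruction = responses @ W
r = np.corrcoef(reconstruction.ravel(), stimuli.ravel())[0, 1]
```

Each row of `W` plays the role of a cell's "visual message"; with fewer cells than pixels, the reconstruction only recovers the stimulus component lying in the population's response subspace, which caps the correlation `r`.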
Collapse
Affiliation(s)
- Nora Brackbill
- Department of Physics, Stanford UniversityStanfordUnited States
| | - Colleen Rhoades
- Department of Bioengineering, Stanford UniversityStanfordUnited States
| | - Alexandra Kling
- Department of Neurosurgery, Stanford School of MedicineStanfordUnited States
- Department of Ophthalmology, Stanford UniversityStanfordUnited States
- Hansen Experimental Physics Laboratory, Stanford UniversityStanfordUnited States
| | - Nishal P Shah
- Department of Electrical Engineering, Stanford UniversityStanfordUnited States
| | - Alexander Sher
- Santa Cruz Institute for Particle Physics, University of California, Santa CruzSanta CruzUnited States
| | - Alan M Litke
- Santa Cruz Institute for Particle Physics, University of California, Santa CruzSanta CruzUnited States
| | - EJ Chichilnisky
- Department of Neurosurgery, Stanford School of MedicineStanfordUnited States
- Department of Ophthalmology, Stanford UniversityStanfordUnited States
- Hansen Experimental Physics Laboratory, Stanford UniversityStanfordUnited States
| |
Collapse
|
30
|
Modeling a population of retinal ganglion cells with restricted Boltzmann machines. Sci Rep 2020; 10:16549. [PMID: 33024225 PMCID: PMC7538558 DOI: 10.1038/s41598-020-73691-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2019] [Accepted: 09/17/2020] [Indexed: 11/29/2022] Open
Abstract
The retina is a complex circuit of the central nervous system whose role is to encode visual stimuli prior to the higher-order processing performed in the visual cortex. Given the importance of this role, modeling the retina to better interpret its spiking output is a well-studied problem. In particular, it has been shown that latent variable models can be used to model the joint distribution of Retinal Ganglion Cells (RGCs). In this work, we validate the applicability of Restricted Boltzmann Machines to model the spiking activity responses of a large population of RGCs recorded with high-resolution electrode arrays. In particular, we show that latent variables can encode modes in the RGC activity distribution that are closely related to the visual stimuli. In contrast to previous work, we further validate our findings by comparing results associated with recordings from retinas under normal and altered encoding conditions obtained by pharmacological manipulation. In these conditions, we observe that the model reflects well-known physiological behaviors of the retina. Finally, we show that we can also discover temporal patterns associated with distinct dynamics of the stimuli.
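A minimal Bernoulli-Bernoulli RBM trained with one-step contrastive divergence (CD-1) can illustrate how latent variables capture modes of population activity. Everything below, including the two toy "activity modes", is synthetic; this is a sketch of the model class, not the study's pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BernoulliRBM:
    """Minimal Bernoulli-Bernoulli RBM trained with CD-1."""
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible bias
        self.c = np.zeros(n_hidden)    # hidden bias
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.c)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b)

    def cd1_step(self, v0):
        ph0 = self.hidden_probs(v0)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        v1 = (rng.random(v0.shape) < self.visible_probs(h0)).astype(float)
        ph1 = self.hidden_probs(v1)
        n = len(v0)
        self.W += self.lr * (v0.T @ ph0 - v1.T @ ph1) / n
        self.b += self.lr * (v0 - v1).mean(axis=0)
        self.c += self.lr * (ph0 - ph1).mean(axis=0)

# two toy "activity modes" for a 6-cell population
modes = np.array([[1, 1, 1, 0, 0, 0],
                  [0, 0, 0, 1, 1, 1]], dtype=float)
data = modes[rng.integers(0, 2, size=200)]

rbm = BernoulliRBM(n_visible=6, n_hidden=2)
for _ in range(500):
    rbm.cd1_step(data)

# mean-field reconstruction error; chance level is 0.5
recon = rbm.visible_probs(rbm.hidden_probs(modes))
err = np.abs(recon - modes).mean()
```

After training, the hidden units specialize on the two modes, so the mean-field reconstruction of each mode lands well below the 0.5 chance-level error.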
Collapse
|
31
|
Ivans RC, Dahl SG, Cantley KD. A Model for R(t) Elements and R(t)-Based Spike-Timing-Dependent Plasticity With Basic Circuit Examples. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2020; 31:4206-4216. [PMID: 31869804 DOI: 10.1109/tnnls.2019.2952768] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Spike-timing-dependent plasticity (STDP) is a fundamental synaptic learning rule observed in biology that leads to numerous behavioral and cognitive outcomes. Emulating STDP in electronic spiking neural networks with high-density memristive synapses is, therefore, of significant interest. While one popular method involves pulse-shaping the spiking neuron output voltages, an alternative approach is outlined in this article. The proposed STDP implementation uses time-varying dynamic resistance (R(t)) elements to achieve local synaptic learning from spike-pair STDP, spike-triplet STDP, and firing rates. The R(t) elements are connected to each neuron circuit, thereby maintaining synaptic density and leveraging voltage division as a means of altering synaptic weight (memristor voltage). Example R(t) elements with their corresponding behaviors are demonstrated through simulation. A three-input-two-output network using single-memristor synaptic connections and R(t) elements is also simulated. Network-level effects, such as nonspecific synaptic plasticity, are discussed. Finally, spatiotemporal pattern recognition (STPR) using R(t) elements is demonstrated in simulation.
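For reference, the spike-pair STDP rule that such circuits are designed to emulate is commonly written as an exponential kernel of the pre-post spike-time difference; the amplitudes and time constants below are typical textbook values, not those of the article.

```python
import numpy as np

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Classical pair-based STDP kernel, with dt_ms = t_post - t_pre.
    Pre-before-post (dt > 0) potentiates; post-before-pre depresses."""
    dt_ms = np.asarray(dt_ms, dtype=float)
    return np.where(dt_ms >= 0,
                    a_plus * np.exp(-dt_ms / tau_plus),
                    -a_minus * np.exp(dt_ms / tau_minus))

dw = stdp_dw([-10.0, 10.0])   # one depressing and one potentiating pairing
```

A slightly larger depression amplitude (`a_minus > a_plus`), as used here, is a common stability choice in rate-balanced networks.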
Collapse
|
32
|
Tauste Campo A. Inferring neural information flow from spiking data. Comput Struct Biotechnol J 2020; 18:2699-2708. [PMID: 33101608 PMCID: PMC7548302 DOI: 10.1016/j.csbj.2020.09.007] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2020] [Revised: 09/05/2020] [Accepted: 09/07/2020] [Indexed: 01/02/2023] Open
Abstract
The brain can be regarded as an information processing system in which neurons store and propagate information about external stimuli and internal processes. Therefore, estimating interactions between neural activity at the cellular scale has significant implications in understanding how neuronal circuits encode and communicate information across brain areas to generate behavior. While the number of simultaneously recorded neurons is growing exponentially, current methods relying only on pairwise statistical dependencies still suffer from a number of conceptual and technical challenges that preclude experimental breakthroughs describing neural information flows. In this review, we examine the evolution of the field over the years, starting from descriptive statistics to model-based and model-free approaches. Then, we discuss in detail the Granger Causality framework, which includes many popular state-of-the-art methods and we highlight some of its limitations from a conceptual and practical estimation perspective. Finally, we discuss directions for future research, including the development of theoretical information flow models and the use of dimensionality reduction techniques to extract relevant interactions from large-scale recording datasets.
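The Granger causality logic reviewed here condenses to a few lines: a source "Granger-causes" a target if adding the source's past to an autoregressive model of the target reduces the residual variance. A minimal order-1 sketch on simulated signals (the coupling values are synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)
T = 2000

# simulate two signals where x drives y with a one-sample delay
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.standard_normal()

def residual_var(target, predictors):
    """Least-squares residual variance of target regressed on predictors."""
    beta, *_ = np.linalg.lstsq(predictors, target, rcond=None)
    return (target - predictors @ beta).var()

past_x, past_y, future_y = x[:-1, None], y[:-1, None], y[1:]
var_restricted = residual_var(future_y, past_y)
var_full = residual_var(future_y, np.hstack([past_y, past_x]))
gc_x_to_y = np.log(var_restricted / var_full)

# reverse direction for comparison (should be near zero)
future_x = x[1:]
var_r2 = residual_var(future_x, past_x)
var_f2 = residual_var(future_x, np.hstack([past_x, past_y]))
gc_y_to_x = np.log(var_r2 / var_f2)
```

The asymmetry `gc_x_to_y >> gc_y_to_x` recovers the simulated direction of coupling; the conceptual caveats the review raises (hidden common inputs, subsampling, nonstationarity) are exactly the ways this simple estimator can mislead on real recordings.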
Collapse
Affiliation(s)
- Adrià Tauste Campo
- Centre for Brain and Cognition, Universitat Pompeu Fabra, Ramon Trias Fargas 25, 08018 Barcelona, Spain
| |
Collapse
|
33
|
Dynamic Time-Locking Mechanism in the Cortical Representation of Spoken Words. eNeuro 2020; 7:ENEURO.0475-19.2020. [PMID: 32513662 PMCID: PMC7470935 DOI: 10.1523/eneuro.0475-19.2020] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2019] [Revised: 05/15/2020] [Accepted: 06/01/2020] [Indexed: 11/21/2022] Open
Abstract
Human speech has a unique capacity to carry and communicate rich meanings. However, it is not known how the highly dynamic and variable perceptual signal is mapped to existing linguistic and semantic representations. In this novel approach, we used the natural acoustic variability of sounds and mapped them to magnetoencephalography (MEG) data using physiologically-inspired machine-learning models. We aimed at determining how well the models, differing in their representation of temporal information, serve to decode and reconstruct spoken words from MEG recordings in 16 healthy volunteers. We discovered that dynamic time-locking of the cortical activation to the unfolding speech input is crucial for the encoding of the acoustic-phonetic features of speech. In contrast, time-locking was not highlighted in cortical processing of non-speech environmental sounds that conveyed the same meanings as the spoken words, including human-made sounds with temporal modulation content similar to speech. The amplitude envelope of the spoken words was particularly well reconstructed based on cortical evoked responses. Our results indicate that speech is encoded cortically with especially high temporal fidelity. This speech tracking by evoked responses may partly reflect the same underlying neural mechanism as the frequently reported entrainment of the cortical oscillations to the amplitude envelope of speech. Furthermore, the phoneme content was reflected in cortical evoked responses simultaneously with the spectrotemporal features, pointing to an instantaneous transformation of the unfolding acoustic features into linguistic representations during speech processing.
Collapse
|
34
|
Multicoding in neural information transfer suggested by mathematical analysis of the frequency-dependent synaptic plasticity in vivo. Sci Rep 2020; 10:13974. [PMID: 32811844 PMCID: PMC7435278 DOI: 10.1038/s41598-020-70876-4] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2020] [Accepted: 08/04/2020] [Indexed: 11/29/2022] Open
Abstract
Two elements of neural information processing have primarily been proposed: the firing rate and the spike timing of neurons. In the case of synaptic plasticity, although spike-timing-dependent plasticity (STDP), which depends on presynaptic and postsynaptic spike times, had been considered the most common rule, recent studies have shown that conditions in the brain in vivo are unfavorable for the precise spike timing on which STDP depends. Thus, the importance of the firing frequency in synaptic plasticity in vivo has been recognized again. However, little is understood about how frequency-dependent synaptic plasticity (FDP) is regulated in vivo. Here, we focused on the presynaptic input pattern, the intracellular calcium decay time constants, and the background synaptic activity, which vary depending on neuron types and the anatomical and physiological environment in the brain. By analyzing a calcium-based model, we found that the synaptic weight differs depending on these factors characteristic of in vivo conditions, even if neurons receive the same input rate. This finding suggests the involvement of multifaceted factors other than input frequency in FDP, and even in neural coding, in vivo.
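The role of the calcium decay time constant can be illustrated with a toy calcium-based plasticity rule in the spirit of threshold models. All thresholds, gains, and time constants below are invented for illustration and are not the article's model: with the same presynaptic rate, a slower calcium decay keeps the trace above the potentiation threshold far longer and yields a different final weight.

```python
import numpy as np

rng = np.random.default_rng(8)

def final_weight(rate_hz, tau_ca, t_total=50.0, dt=0.001,
                 ca_jump=0.4, theta_d=0.5, theta_p=1.0,
                 gamma_p=0.05, gamma_d=0.02, w0=0.5):
    """Toy calcium-based plasticity: each presynaptic spike bumps a calcium
    trace that decays with time constant tau_ca; the weight rises while Ca
    exceeds theta_p and falls while Ca sits in [theta_d, theta_p)."""
    n_steps = int(t_total / dt)
    spikes = rng.random(n_steps) < rate_hz * dt   # Poisson input train
    ca, w = 0.0, w0
    for k in range(n_steps):
        ca += -dt * ca / tau_ca + (ca_jump if spikes[k] else 0.0)
        if ca >= theta_p:
            w += gamma_p * dt * (1.0 - w)          # potentiation
        elif ca >= theta_d:
            w -= gamma_d * dt * w                  # depression
    return w

w_fast = final_weight(20.0, tau_ca=0.02)   # fast calcium decay
w_slow = final_weight(20.0, tau_ca=0.20)   # slow decay, same 20 Hz input
```

With fast decay the calcium trace rarely clears either threshold, so the weight barely moves; with slow decay the trace accumulates above the potentiation threshold and the weight grows, despite the identical input rate.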
Collapse
|
35
|
Cao R. New Labels for Old Ideas: Predictive Processing and the Interpretation of Neural Signals. ACTA ACUST UNITED AC 2020. [DOI: 10.1007/s13164-020-00481-x] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
|
36
|
Hussain I, Thounaojam DM. SpiFoG: an efficient supervised learning algorithm for the network of spiking neurons. Sci Rep 2020; 10:13122. [PMID: 32753645 PMCID: PMC7403331 DOI: 10.1038/s41598-020-70136-5] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2020] [Accepted: 07/23/2020] [Indexed: 11/29/2022] Open
Abstract
There has been considerable research on supervised learning in spiking neural networks (SNNs) over the past couple of decades to improve computational efficiency. However, evolutionary-algorithm-based supervised learning for SNNs has not been investigated thoroughly and is still at an early stage. This paper introduces an efficient algorithm (SpiFoG) to train multilayer feed-forward SNNs in a supervised manner using an elitist floating-point genetic algorithm with hybrid crossover. Evidence from neuroscience suggests that the brain uses spike times with random synaptic delays for information processing. Therefore, leaky integrate-and-fire spiking neurons with random synaptic delays are used in this research. SpiFoG allows both excitatory and inhibitory neurons by permitting a mixture of positive and negative synaptic weights. In addition, the random synaptic delays are trained together with the synaptic weights in an efficient manner. Moreover, the computational efficiency of SpiFoG was increased by reducing the total simulation time and increasing the time step, since a larger time step within the same total simulation time requires fewer iterations. SpiFoG was benchmarked on the Iris and WBC datasets drawn from the UCI machine learning repository and found to perform better than state-of-the-art techniques.
Collapse
Affiliation(s)
- Irshed Hussain
- Computer Vision Laboratory, Department of Computer Science and Engineering, National Institute of Technology Silchar, Silchar, Assam, 788010, India.
| | - Dalton Meitei Thounaojam
- Computer Vision Laboratory, Department of Computer Science and Engineering, National Institute of Technology Silchar, Silchar, Assam, 788010, India
| |
Collapse
|
37
|
Putney J, Conn R, Sponberg S. Precise timing is ubiquitous, consistent, and coordinated across a comprehensive, spike-resolved flight motor program. Proc Natl Acad Sci U S A 2019; 116:26951-26960. [PMID: 31843904 PMCID: PMC6936677 DOI: 10.1073/pnas.1907513116] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Sequences of action potentials, or spikes, carry information in the number of spikes and their timing. Spike timing codes are critical in many sensory systems, but there is now growing evidence that millisecond-scale changes in timing also carry information in motor brain regions, descending decision-making circuits, and individual motor units. Across all of the many signals that control a behavior, how ubiquitous, consistent, and coordinated are spike timing codes? Assessing these open questions ideally involves recording across the whole motor program with spike-level resolution. To do this, we took advantage of the relatively few motor units controlling the wings of a hawk moth, Manduca sexta. We simultaneously recorded nearly every action potential from all major wing muscles and the resulting forces in tethered flight. We found that timing encodes more information about turning behavior than spike count in every motor unit, even though there is sufficient variation in count alone. Flight muscles vary broadly in function as well as in the number and timing of spikes. Nonetheless, each muscle with multiple spikes consistently blends spike timing and count information in a 3:1 ratio. Coding strategies are consistent. Finally, we assess the coordination of muscles using pairwise redundancy measured through interaction information. Surprisingly, not only are all muscle pairs coordinated, but all coordination is accomplished almost exclusively through spike timing, not spike count. Spike timing codes are ubiquitous, consistent, and essential for coordination.
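The comparison of timing and count codes can be sketched with a plug-in mutual information estimate on simulated trials, where first-spike latency, but not spike count, depends on a binary "turn" variable. All numbers are synthetic; this is the estimation idea, not the study's analysis.

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials = 4000

# binary behavior (left/right turn); spike count is uninformative,
# first-spike latency shifts by a few milliseconds with the turn
behavior = rng.integers(0, 2, n_trials)
count = rng.poisson(3.0, n_trials)                       # same rate for both turns
latency_ms = 10.0 + 3.0 * behavior + rng.normal(0.0, 1.0, n_trials)

def mutual_info_bits(x_discrete, y_discrete):
    """Plug-in mutual information between two discrete sequences."""
    xs, ys = np.unique(x_discrete), np.unique(y_discrete)
    joint = np.zeros((len(xs), len(ys)))
    for i, xv in enumerate(xs):
        for j, yv in enumerate(ys):
            joint[i, j] = np.mean((x_discrete == xv) & (y_discrete == yv))
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

timing_bins = np.digitize(latency_ms, np.linspace(5, 20, 8))
mi_count = mutual_info_bits(behavior, np.clip(count, 0, 8))
mi_timing = mutual_info_bits(behavior, timing_bins)
```

As in the study's framing, discretized timing carries far more information about the behavior than count does; note that plug-in estimates like this carry a small positive finite-sample bias, which serious analyses correct for.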
Collapse
Affiliation(s)
- Joy Putney
- School of Biological Sciences, Georgia Institute of Technology, Atlanta, GA 30332
- Graduate Program in Quantitative Biosciences, Georgia Institute of Technology, Atlanta, GA 30332
| | - Rachel Conn
- School of Physics, Georgia Institute of Technology, Atlanta, GA 30332
- Neuroscience Program, Emory University, Atlanta, GA 30322
| | - Simon Sponberg
- School of Biological Sciences, Georgia Institute of Technology, Atlanta, GA 30332
- Graduate Program in Quantitative Biosciences, Georgia Institute of Technology, Atlanta, GA 30332
- School of Physics, Georgia Institute of Technology, Atlanta, GA 30332
| |
Collapse
|
38
|
Nakajima M, Schmitt LI. Understanding the circuit basis of cognitive functions using mouse models. Neurosci Res 2019; 152:44-58. [PMID: 31857115 DOI: 10.1016/j.neures.2019.12.009] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2019] [Revised: 12/01/2019] [Accepted: 12/09/2019] [Indexed: 01/13/2023]
Abstract
Understanding how cognitive functions arise from computations occurring in the brain requires the ability to measure and perturb neural activity while the relevant circuits are engaged for specific cognitive processes. Rapid technical advances have led to the development of new approaches to transiently activate and suppress neuronal activity as well as to record simultaneously from hundreds to thousands of neurons across multiple brain regions during behavior. To realize the full potential of these approaches for understanding cognition, however, it is critical that behavioral conditions and stimuli are effectively designed to engage the relevant brain networks. Here, we highlight recent innovations that enable this combined approach. In particular, we focus on how to design behavioral experiments that leverage the ever-growing arsenal of technologies for controlling and measuring neural activity in order to understand cognitive functions.
Collapse
Affiliation(s)
- Miho Nakajima
- McGovern Institute for Brain Research and the Department of Brain and Cognitive Science, Massachusetts Institute of Technology, Cambridge, MA, United States
| | - L Ian Schmitt
- McGovern Institute for Brain Research and the Department of Brain and Cognitive Science, Massachusetts Institute of Technology, Cambridge, MA, United States; Center for Brain Science, RIKEN, Wako, Saitama, Japan.
| |
Collapse
|
39
|
Góis ZHTD, Tort ABL. Characterizing Speed Cells in the Rat Hippocampus. Cell Rep 2019; 25:1872-1884.e4. [PMID: 30428354 DOI: 10.1016/j.celrep.2018.10.054] [Citation(s) in RCA: 36] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2018] [Revised: 08/15/2018] [Accepted: 10/12/2018] [Indexed: 12/20/2022] Open
Abstract
Spatial navigation relies on visual landmarks as well as on self-motion information. In familiar environments, both place and grid cells maintain their firing fields in darkness, suggesting that they continuously receive information about locomotion speed required for path integration. Consistently, "speed cells" have been previously identified in the hippocampal formation and characterized in detail in the medial entorhinal cortex. Here we investigated speed-correlated firing in the hippocampus. We show that CA1 has speed cells that are stable across contexts, position in space, and time. Moreover, their speed-correlated firing occurs within theta cycles, independently of theta frequency. Interestingly, a physiological classification of cell types reveals that all CA1 speed cells are inhibitory. In fact, while speed modulates pyramidal cell activity, only the firing rate of interneurons can accurately predict locomotion speed on a sub-second timescale. These findings shed light on network models of navigation.
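The claim that strongly speed-modulated cells support sub-second speed decoding can be illustrated with a toy linear decoder on simulated Poisson firing; the gain values below are arbitrary, standing in for an "interneuron-like" and a "pyramidal-like" cell.

```python
import numpy as np

rng = np.random.default_rng(4)
n_bins = 3000
speed = np.abs(rng.normal(15.0, 8.0, n_bins))     # cm/s, one value per time bin

# speed-modulated "interneuron": rate rises steeply with speed
inter_rate = rng.poisson(2.0 + 0.8 * speed)
# weakly modulated "pyramidal cell"
pyr_rate = rng.poisson(1.0 + 0.05 * speed)

def decode_r(rate, speed):
    """Correlation between speed and its linear prediction from the rate."""
    A = np.column_stack([rate, np.ones_like(rate, dtype=float)])
    beta, *_ = np.linalg.lstsq(A, speed, rcond=None)
    return np.corrcoef(A @ beta, speed)[0, 1]

r_inter = decode_r(inter_rate, speed)
r_pyr = decode_r(pyr_rate, speed)
```

The steeply modulated cell predicts speed far better per time bin, mirroring the paper's observation that interneuron firing rates, not pyramidal rates, track locomotion speed on short timescales.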
Collapse
Affiliation(s)
- Zé Henrique T D Góis
- Brain Institute, Federal University of Rio Grande do Norte, Natal, RN 59056-450, Brazil
| | - Adriano B L Tort
- Brain Institute, Federal University of Rio Grande do Norte, Natal, RN 59056-450, Brazil.
| |
Collapse
|
40
|
Fu Z, Wu X, Chen J. Congruent audiovisual speech enhances auditory attention decoding with EEG. J Neural Eng 2019; 16:066033. [PMID: 31505476 DOI: 10.1088/1741-2552/ab4340] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
OBJECTIVE The auditory attention decoding (AAD) approach can be used to determine the identity of the attended speaker during an auditory selective attention task by analyzing measurements of electroencephalography (EEG) data. The AAD approach has the potential to guide the design of speech enhancement algorithms in hearing aids, i.e. to identify the speech stream of the listener's interest so that hearing aid algorithms can amplify the target speech and attenuate other distracting sounds. This would consequently result in improved speech understanding and communication and reduced cognitive load. The present work aimed to investigate whether additional visual input (i.e. lipreading) would enhance AAD performance for normal-hearing listeners. APPROACH In a two-talker scenario, where auditory stimuli of audiobooks narrated by two speakers were presented, multi-channel EEG signals were recorded while participants selectively attended to one speaker and ignored the other. The speakers' mouth movements were recorded during narration to provide visual stimuli. Stimulus conditions included audio-only, visual input congruent with either (i.e. attended or unattended) speaker, and visual input incongruent with either speaker. The AAD approach was performed separately for each condition to evaluate the effect of additional visual input on AAD. MAIN RESULTS Relative to the audio-only condition, AAD performance improved with visual input only when it was congruent with the attended speech stream, and the improvement was about 14 percentage points in decoding accuracy. Cortical envelope tracking activity in both auditory and visual cortex was stronger for the congruent audiovisual speech condition than for the other conditions. In addition, AAD was more robust in the congruent audiovisual condition, achieving higher accuracy than the audio-only condition with fewer channels and shorter trial durations. 
SIGNIFICANCE The present work complements previous studies and further demonstrates the feasibility of the AAD-guided design of hearing aids for daily face-to-face conversations. It also provides guidance for designing a low-density EEG setup for the AAD approach.
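The envelope-tracking backbone of AAD can be sketched as a backward (stimulus reconstruction) model: a linear decoder is trained to reconstruct the attended envelope from multichannel EEG, and attention is decoded by comparing correlations with the competing envelopes. The simulation below is fully synthetic; the mixing weights, noise level, and the stronger cortical weighting of the attended stream are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)
T, n_ch = 6000, 16

def smooth(x, k=50):
    """Crude low-pass filter to mimic a slow speech envelope."""
    return np.convolve(x, np.ones(k) / k, mode="same")

env_att = 5.0 * smooth(rng.standard_normal(T))    # attended envelope
env_ign = 5.0 * smooth(rng.standard_normal(T))    # ignored envelope

# simulated EEG: cortex tracks the attended envelope more strongly
mix = rng.standard_normal(n_ch)
eeg = (env_att[:, None] * mix + 0.3 * env_ign[:, None] * mix
       + 0.5 * rng.standard_normal((T, n_ch)))

# train the backward decoder on the first half, evaluate on the second
half = T // 2
decoder, *_ = np.linalg.lstsq(eeg[:half], env_att[:half], rcond=None)
recon = eeg[half:] @ decoder

r_att = np.corrcoef(recon, env_att[half:])[0, 1]
r_ign = np.corrcoef(recon, env_ign[half:])[0, 1]
attended_is_decoded = r_att > r_ign
```

Because the simulated cortex weights the attended stream more heavily, the reconstructed envelope correlates better with it, which is the decision rule AAD applies per trial.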
Collapse
Affiliation(s)
- Zhen Fu
- Department of Machine Intelligence, Speech and Hearing Research Center, and Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing 100871, People's Republic of China
| | | | | |
Collapse
|
41
|
Baudot P. Elements of qualitative cognition: An information topology perspective. Phys Life Rev 2019; 31:263-275. [PMID: 31679788 DOI: 10.1016/j.plrev.2019.10.003] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2019] [Accepted: 10/16/2019] [Indexed: 11/29/2022]
Abstract
Elementary quantitative and qualitative aspects of consciousness are investigated conjointly from the biological, neuroscientific, physical, and mathematical points of view, by means of a theory, written with Bennequin, that derives and extends information theory within algebraic topology. Information structures, which account for the statistical dependencies within n-body interacting systems, are interpreted à la Leibniz as a monadic-panpsychic framework in which consciousness is informational and physical and arises from collective interactions. The electrodynamic intrinsic nature of consciousness, sustained by an analogical code, is illustrated by standard neuroscience and psychophysical results. It accounts for the diversity of learning mechanisms, including adaptive and homeostatic processes on multiple scales, and details their expression within information theory. The axiomatization and logic of cognition are rooted in measure theory expressed within a topos-intrinsic probabilistic constructive logic. Information topology provides a synthesis of the main models of consciousness (Neural Assemblies, Integrated Information, Global Neuronal Workspace, Free Energy Principle) within a formal Gestalt theory, an expression of information structures and patterns in correspondence with Galois cohomology and discrete symmetries. The methods provide a new formalization of deep neural networks with homologically imposed architectures, applied to challenges in AI and machine learning.
Collapse
Affiliation(s)
- Pierre Baudot
- Median Technologies, Valbonne, France; Inserm UNIS UMR1072, Université Aix-Marseille AMU, Marseille, France.
| |
Collapse
|
42
|
Gjorgjieva J, Meister M, Sompolinsky H. Functional diversity among sensory neurons from efficient coding principles. PLoS Comput Biol 2019; 15:e1007476. [PMID: 31725714 PMCID: PMC6890262 DOI: 10.1371/journal.pcbi.1007476] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2018] [Revised: 12/03/2019] [Accepted: 10/10/2019] [Indexed: 01/10/2023] Open
Abstract
In many sensory systems the neural signal is coded by the coordinated response of heterogeneous populations of neurons. What computational benefit does this diversity confer on information processing? We derive an efficient coding framework assuming that neurons have evolved to communicate signals optimally given natural stimulus statistics and metabolic constraints. Incorporating nonlinearities and realistic noise, we study optimal population coding of the same sensory variable using two measures: maximizing the mutual information between stimuli and responses, and minimizing the error incurred by the optimal linear decoder of responses. Our theory is applied to a commonly observed splitting of sensory neurons into ON and OFF types that signal stimulus increases or decreases, and to populations of neurons of the same type, ON, with monotonically increasing responses. Depending on the optimality measure, we make different predictions about how to optimally split a population into ON and OFF, and how to allocate the firing thresholds of individual neurons given realistic stimulus distributions and noise; these predictions accord with certain biases observed experimentally.
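The information measure in this framework can be probed with a deliberately simplified, noiseless toy: for binary threshold units driven by a Gaussian stimulus, the mutual information equals the entropy of the joint response, so staggering thresholds beats duplicating a unit. In this noiseless, symmetric toy an ON/OFF pair and a staggered ON/ON pair convey the same information; the paper's distinctions between the arrangements arise once noise and metabolic costs are included. The threshold below is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
s = rng.standard_normal(200_000)          # Gaussian stimulus samples

def response_entropy_bits(*binary_channels):
    """Entropy of the joint binary population response; for noiseless
    threshold units this equals the information conveyed about s."""
    codes = np.zeros(len(binary_channels[0]), dtype=int)
    for ch in binary_channels:
        codes = 2 * codes + ch.astype(int)
    p = np.bincount(codes) / len(codes)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

theta = 0.6745                             # upper quartile of a standard normal
on_on   = response_entropy_bits(s > theta, s > theta)      # duplicated ON unit
on_pair = response_entropy_bits(s > theta, s > -theta)     # two staggered ON units
on_off  = response_entropy_bits(s > theta, s < -theta)     # mirrored ON/OFF pair
```

The duplicated pair conveys only the single-unit entropy (about 0.81 bits at this threshold), while both two-threshold arrangements reach 1.5 bits, illustrating why threshold allocation, not just cell count, matters in this framework.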
Collapse
Affiliation(s)
| | - Markus Meister
- Division of Biology, California Institute of Technology, Pasadena, California, United States of America
| | - Haim Sompolinsky
- Center for Brain Science, Harvard University, Cambridge, Massachusetts, United States of America
- The Edmond and Lily Safra Center for Brain Sciences, Hebrew University, Jerusalem, Israel
| |
Collapse
|
43
|
Feedforward Thalamocortical Connectivity Preserves Stimulus Timing Information in Sensory Pathways. J Neurosci 2019; 39:7674-7688. [PMID: 31270157 DOI: 10.1523/jneurosci.3165-17.2019] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2017] [Revised: 03/26/2019] [Accepted: 05/10/2019] [Indexed: 11/21/2022] Open
Abstract
Reliable timing of cortical spikes in response to visual events is crucial in representing visual inputs to the brain. Spikes in the primary visual cortex (V1) need to occur at the same time within a repeated visual stimulus. Two classical mechanisms are employed by the cortex to enhance reliable timing. First, cortical neurons respond reliably to a restricted set of stimuli through their preference for certain patterns of membrane potential due to their intrinsic properties. Second, intracortical networking of excitatory and inhibitory neurons induces lateral inhibition that, through the timing and strength of IPSCs and EPSCs, produces sparse and reliably timed cortical neuron spike trains to be transmitted downstream. Here, we describe a third mechanism that, through preferential thalamocortical synaptic connectivity, enhances the trial-to-trial timing precision of cortical spikes in the presence of spike train variability within each trial that is introduced between LGN neurons in the retino-thalamic pathway. Applying experimentally recorded LGN spike trains from the anesthetized cat to a detailed model of a spiny stellate V1 neuron, we found that output spike timing precision improved with increasing numbers of convergent LGN inputs. The improvement was consistent with the predicted proportionality of [Formula: see text] for n LGN source neurons. We also found connectivity configurations that maximize reliability and that generate V1 cell output spike trains quantitatively similar to the experimental recordings. 
Our findings suggest a general principle, namely that averaging across converging inputs with intra-trial variability increases stimulus-response precision, which is widely applicable to synaptically connected spiking neurons.

SIGNIFICANCE STATEMENT The early visual pathway of the cat is favorable for studying the effects of trial-to-trial variability of synaptic inputs and of intra-trial variability of thalamocortical connectivity on information transmission into the visual cortex. We have used a detailed model to show that there are preferred combinations of the number of thalamic afferents and the number of synapses per afferent that maximize the output reliability and spike-timing precision of cortical neurons. This provides additional insight into how synchrony in thalamic spike trains can reduce trial-to-trial variability and produce highly reliable reporting of sensory events to the cortex. The same principles may apply to other converging pathways in which temporally jittered spike trains can reliably drive the downstream neuron and improve temporal precision.
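The scaling of timing precision with the number of convergent inputs can be illustrated with a toy simulation. The placeholder formula above is not recoverable from this listing; the sketch below assumes the standard averaging law, in which the jitter of the pooled drive shrinks as 1/sqrt(n) for n independent afferents. All numbers here are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0        # assumed per-afferent spike-time jitter (ms)
n_trials = 100_000

for n in [1, 4, 16, 64]:
    # each trial draws n independently jittered afferent spike times;
    # the effective drive to the cortical cell is taken as their average
    pooled = rng.normal(0.0, sigma, size=(n_trials, n)).mean(axis=1)
    print(f"n={n:3d}  pooled jitter = {pooled.std():.3f}  (1/sqrt(n) = {1 / np.sqrt(n):.3f})")
```

With independent jitter, quadrupling the number of afferents halves the pooled timing jitter, which is the qualitative trend the modeling study reports.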
|
44
|
Abstract
This work makes two contributions. First, we present a neural network model of associative memory that stores and retrieves sparse patterns of complex variables. This network can store analog information as fixed-point attractors in the complex domain; it is governed by an energy function and has increased memory capacity compared to early models. Second, we translate complex attractor networks into spiking networks, where the timing of a spike indicates the phase of a complex number. We show that complex fixed points correspond to stable periodic spike patterns. We demonstrate that such networks can be constructed with resonate-and-fire or integrate-and-fire neurons with biologically plausible mechanisms and used for robust computations, such as image retrieval.

Information coding by precise timing of spikes can be faster and more energy efficient than traditional rate coding. However, spike-timing codes are often brittle, which has limited their use in theoretical neuroscience and computing applications. Here, we propose a type of attractor neural network in complex state space and show how it can be leveraged to construct spiking neural networks with robust computational properties through a phase-to-timing mapping. Building on Hebbian neural associative memories, like Hopfield networks, we first propose threshold phasor associative memory (TPAM) networks. Complex phasor patterns, whose components can assume continuous-valued phase angles and binary magnitudes, can be stored and retrieved as stable fixed points of the network dynamics. TPAM achieves high memory capacity when storing sparse phasor patterns, and we derive the energy function that governs its fixed-point attractor dynamics. Second, we construct two spiking neural networks to approximate the complex algebraic computations in TPAM: a reductionist model with resonate-and-fire neurons and a biologically plausible network of integrate-and-fire neurons with synaptic delays and recurrently connected inhibitory interneurons. The fixed points of TPAM correspond to stable periodic states of precisely timed spiking activity that are robust to perturbation. The link established between rhythmic firing patterns and complex attractor dynamics has implications for the interpretation of spike patterns seen in neuroscience and can serve as a framework for computation in emerging neuromorphic devices.
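A minimal sketch of the threshold-phasor associative memory idea can be written in a few lines: Hebbian outer-product storage of sparse complex phasor patterns, and retrieval by an update that keeps the phase of each unit's input current and thresholds its magnitude. The network sizes and the threshold rule below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
N, P, K = 200, 5, 40   # neurons, stored patterns, active units per pattern (assumed)

def random_phasor(n, k):
    # sparse phasor pattern: k units with unit magnitude and random phase, rest zero
    z = np.zeros(n, dtype=complex)
    idx = rng.choice(n, k, replace=False)
    z[idx] = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, k))
    return z

patterns = [random_phasor(N, K) for _ in range(P)]

# Hebbian (outer-product) storage in the complex domain, with zero self-coupling
W = sum(np.outer(z, np.conj(z)) for z in patterns)
np.fill_diagonal(W, 0)

def update(z, theta=K / 2):
    # threshold-phasor update: a unit adopts the phase of its input current
    # and stays active only if that current's magnitude exceeds the threshold
    u = W @ z
    return np.where(np.abs(u) > theta, np.exp(1j * np.angle(u)), 0)

# retrieve pattern 0 from a cue with a quarter of its active units deleted
cue = patterns[0].copy()
cue[rng.choice(np.flatnonzero(cue), K // 4, replace=False)] = 0
z = cue
for _ in range(20):
    z = update(z)

overlap = abs(np.vdot(patterns[0], z)) / K
print(f"overlap with stored pattern after retrieval: {overlap:.2f}")
```

At this low memory load the corrupted cue converges back to the stored phasor pattern, the complex-domain analog of pattern completion in a Hopfield network.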
|
45
|
A Physiologically Inspired Model for Solving the Cocktail Party Problem. J Assoc Res Otolaryngol 2019; 20:579-593. [PMID: 31392449 PMCID: PMC6889086 DOI: 10.1007/s10162-019-00732-4] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2018] [Accepted: 07/18/2019] [Indexed: 11/05/2022] Open
Abstract
At a cocktail party, we can broadly monitor the entire acoustic scene to detect important cues (e.g., our names being called, or the fire alarm going off), or selectively listen to a target sound source (e.g., a conversation partner). It has recently been observed that individual neurons in the avian field L (an analog of the mammalian auditory cortex) can display broad spatial tuning to single targets and selective tuning to a target embedded in spatially distributed sound mixtures. Here, we describe a model inspired by these experimental observations and apply it to process mixtures of human speech sentences. The processing is realized in the neural spiking domain: binaural acoustic inputs are converted into cortical spike trains by a multi-stage model composed of a cochlear filter-bank, a midbrain spatial-localization network, and a cortical network. The output spike trains of the cortical network are then converted back into an acoustic waveform using a stimulus reconstruction technique. The intelligibility of the reconstructed output is quantified with an objective measure of speech intelligibility. We apply the algorithm to single- and multi-talker speech to demonstrate that this physiologically inspired algorithm achieves intelligible reconstruction of an “attended” target sentence embedded in two other non-attended masker sentences. The algorithm is also robust to masker level and displays performance trends comparable to those of human listeners. The ideas from this work may help improve the performance of hearing assistive devices (e.g., hearing aids and cochlear implants), speech-recognition technology, and computational algorithms for processing natural scenes cluttered with spatially distributed acoustic objects.
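The final stage of pipelines like this, reconstructing an acoustic quantity from cortical spike trains, is commonly done with a linear decoder. The toy version below uses a synthetic stimulus, hypothetical Poisson-spiking units, and ridge regression on lagged spike counts; none of it is the paper's actual model, only a sketch of the general stimulus-reconstruction idea:

```python
import numpy as np

rng = np.random.default_rng(4)

# toy stimulus: a slowly varying envelope, the kind of quantity
# stimulus-reconstruction techniques typically target
T = 4000
stim = np.convolve(rng.normal(size=T), np.hanning(50), mode="same") / 4.0

# hypothetical "cortical" units: Poisson spike counts whose rate follows
# the stimulus with a unit-specific gain (illustrative encoding model)
n_units = 20
gains = rng.uniform(0.5, 1.5, n_units)
rates = np.clip(np.outer(stim, gains) + 2.0, 0.0, None)
spikes = rng.poisson(rates).astype(float)

# linear reconstruction: ridge regression from time-lagged spike counts
n_lags = 5
X = np.hstack([np.roll(spikes, lag, axis=0) for lag in range(n_lags)])
w = np.linalg.solve(X.T @ X + 10.0 * np.eye(X.shape[1]), X.T @ stim)
recon = X @ w
r = np.corrcoef(stim, recon)[0, 1]
print(f"stimulus/reconstruction correlation: {r:.2f}")
```

Pooling many noisy units over several lags recovers the stimulus well; real systems replace the envelope with speech features and the toy encoder with recorded cortical responses.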
|
46
|
Pregowska A, Casti A, Kaplan E, Wajnryb E, Szczepanski J. Information processing in the LGN: a comparison of neural codes and cell types. BIOLOGICAL CYBERNETICS 2019; 113:453-464. [PMID: 31243531 PMCID: PMC6658673 DOI: 10.1007/s00422-019-00801-0] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/31/2019] [Accepted: 06/17/2019] [Indexed: 06/09/2023]
Abstract
To understand how anatomy and physiology allow an organism to perform its function, it is important to know how information transmitted by spikes in the brain is received and encoded. A natural question is whether the spike rate alone encodes the information about a stimulus (rate code), or whether additional information is contained in the temporal pattern of the spikes (temporal code). Here we address this question using data from the cat Lateral Geniculate Nucleus (LGN), the visual portion of the thalamus through which visual information from the retina is communicated to the visual cortex. We analyzed the responses of LGN neurons to spatially homogeneous spots of various sizes with temporally random luminance modulation. We compared the Firing Rate with the Shannon Information Transmission Rate, which quantifies the information contained in the temporal relationships between spikes. We found that the behavior of these two rates can differ quantitatively, suggesting that the energy used for spiking does not translate directly into the information to be transmitted. We also compared Firing Rates with Information Rates for X-ON and X-OFF cells. We found that, for X-ON cells, the Firing Rate and the Information Rate often behave in completely different ways, whereas for X-OFF cells the two rates are much more highly correlated. Our results suggest that X-ON cells employ a more efficient "temporal code", while X-OFF cells use a straightforward "rate code", which is more reliable and is correlated with energy consumption.
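The rate-code versus temporal-code distinction can be made concrete with a plug-in word-entropy estimate: two spike trains with identical firing rates can carry very different amounts of temporal structure. In this toy comparison the bin size, word length, and the trains themselves are arbitrary choices, and the plug-in estimator is far cruder than the information-rate methods used in the paper:

```python
import numpy as np
from collections import Counter

def firing_rate(spikes, dt):
    # mean spike count per bin, converted to spikes per second
    return spikes.mean() / dt

def word_entropy_rate(spikes, word_len, dt):
    # plug-in Shannon entropy of overlapping binary "words", in bits per second
    words = Counter(tuple(spikes[i:i + word_len])
                    for i in range(len(spikes) - word_len + 1))
    p = np.array(list(words.values()), dtype=float)
    p /= p.sum()
    bits_per_word = -(p * np.log2(p)).sum()
    return bits_per_word / (word_len * dt)

rng = np.random.default_rng(2)
dt = 0.003                                        # 3 ms bins (assumed)
poisson = (rng.random(20000) < 0.1).astype(int)   # irregular train
regular = np.zeros(20000, dtype=int)
regular[::10] = 1                                 # clock-like train, same mean rate

for name, train in [("poisson", poisson), ("regular", regular)]:
    print(name, firing_rate(train, dt), word_entropy_rate(train, 8, dt))
```

Both trains fire at roughly 33 spikes/s, but the irregular train supports a higher word-entropy rate: the firing rate alone does not determine how much temporal information a train can carry.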
Affiliation(s)
- Agnieszka Pregowska
- Institute of Fundamental Technological Research, Polish Academy of Sciences, Pawinskiego 5B, 02-106 Warsaw, Poland
| | - Alex Casti
- Department of Mathematics, Gildart-Haase School of Computer Sciences and Engineering, Fairleigh Dickinson University, Teaneck, NY 07666 USA
| | - Ehud Kaplan
- Icahn School of Medicine at Mount Sinai, New York, NY 10029 USA
- National Institute of Mental Health (NUDZ), Topolova 748, 250 67 Klecany, Czech Republic
- Department of Philosophy of Science, Charles University, Prague, Czech Republic
| | - Eligiusz Wajnryb
- Institute of Fundamental Technological Research, Polish Academy of Sciences, Pawinskiego 5B, 02-106 Warsaw, Poland
| | - Janusz Szczepanski
- Institute of Fundamental Technological Research, Polish Academy of Sciences, Pawinskiego 5B, 02-106 Warsaw, Poland
| |
|
47
|
Gleiss H, Encke J, Lingner A, Jennings TR, Brosel S, Kunz L, Grothe B, Pecka M. Cooperative population coding facilitates efficient sound-source separability by adaptation to input statistics. PLoS Biol 2019; 17:e3000150. [PMID: 31356637 PMCID: PMC6687189 DOI: 10.1371/journal.pbio.3000150] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2019] [Revised: 08/08/2019] [Accepted: 07/11/2019] [Indexed: 01/31/2023] Open
Abstract
Our sensory environment changes constantly. Accordingly, neural systems continually adapt to the concurrent stimulus statistics to remain sensitive over a wide range of conditions. Such dynamic range adaptation (DRA) is assumed to increase both the effectiveness of the neuronal code and perceptual sensitivity. However, direct demonstrations of DRA-based efficient neuronal processing that also produces perceptual benefits have been lacking. Here, we investigated the impact of DRA on spatial coding in the rodent brain and on the perception of human listeners. Complex spatial stimulation with dynamically changing source locations elicited prominent DRA already at the initial stage of spatial processing, the Lateral Superior Olive (LSO) of gerbils. Surprisingly, at the level of individual neurons, DRA diminished spatial tuning because of large response variability across trials. However, when considering single-trial population averages of multiple neurons, DRA enhanced coding efficiency specifically for the currently most probable source locations. Intrinsic LSO population imaging of energy consumption, combined with pharmacology, revealed that a slow-acting LSO gain-control mechanism distributes activity across a group of neurons during DRA, thereby enhancing population coding efficiency. Strikingly, such “efficient cooperative coding” also improved neuronal source separability specifically for the locations that were most likely to occur. These location-specific enhancements in neuronal coding were paralleled by a selective improvement in spatial resolution in human listeners. We conclude that, contrary to canonical models of sensory encoding, the primary objective of early spatial processing is to optimize the efficiency of neural populations for enhanced source separability in the concurrent environment.

The efficient coding hypothesis suggests that sensory processing adapts to the stimulus statistics to maximize information while minimizing energetic costs. This study finds that an auditory spatial processing circuit distributes activity across neurons to enhance processing efficiency, focally improving spatial resolution both in neurons and in human listeners.
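The population-versus-single-neuron contrast can be illustrated with a discriminability (d-prime) toy calculation: when trial-to-trial noise is independent across neurons, the single-trial population average separates two source locations far better than any one neuron. The numbers below are purely illustrative, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(3)

def dprime(a, b):
    # discriminability of two response distributions
    return abs(a.mean() - b.mean()) / np.sqrt(0.5 * (a.var() + b.var()))

# hypothetical LSO-like units: two source locations evoke slightly different
# mean rates, swamped by large trial-to-trial variability in any one neuron
n_trials, n_neurons = 5000, 30
loc_a = rng.normal(10.0, 4.0, size=(n_trials, n_neurons))
loc_b = rng.normal(12.0, 4.0, size=(n_trials, n_neurons))

single = dprime(loc_a[:, 0], loc_b[:, 0])                # one neuron on its own
pooled = dprime(loc_a.mean(axis=1), loc_b.mean(axis=1))  # single-trial population average

# with independent noise, pooling n neurons scales d' by roughly sqrt(n)
print(f"single-neuron d' = {single:.2f}, population d' = {pooled:.2f}")
```

Single neurons here are nearly useless (d' around 0.5), while the 30-neuron single-trial average separates the two locations cleanly, which is the sense in which cooperative population coding can tolerate large per-neuron variability.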
Affiliation(s)
- Helge Gleiss
- Division of Neurobiology, Department of Biology II, Ludwig-Maximilians-Universitaet Muenchen, Martinsried, Germany
| | - Jörg Encke
- Chair of Bio-Inspired Information Processing, Department of Electrical and Computer Engineering, Technical University of Munich, Garching, Germany
| | - Andrea Lingner
- Division of Neurobiology, Department of Biology II, Ludwig-Maximilians-Universitaet Muenchen, Martinsried, Germany
| | - Todd R. Jennings
- Division of Neurobiology, Department of Biology II, Ludwig-Maximilians-Universitaet Muenchen, Martinsried, Germany
| | - Sonja Brosel
- Division of Neurobiology, Department of Biology II, Ludwig-Maximilians-Universitaet Muenchen, Martinsried, Germany
| | - Lars Kunz
- Division of Neurobiology, Department of Biology II, Ludwig-Maximilians-Universitaet Muenchen, Martinsried, Germany
| | - Benedikt Grothe
- Division of Neurobiology, Department of Biology II, Ludwig-Maximilians-Universitaet Muenchen, Martinsried, Germany
| | - Michael Pecka
- Division of Neurobiology, Department of Biology II, Ludwig-Maximilians-Universitaet Muenchen, Martinsried, Germany
| |
|
48
|
Pruszynski JA, Zylberberg J. The language of the brain: real-world neural population codes. Curr Opin Neurobiol 2019; 58:30-36. [PMID: 31326721 DOI: 10.1016/j.conb.2019.06.005] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2018] [Accepted: 06/22/2019] [Indexed: 11/29/2022]
Affiliation(s)
- J Andrew Pruszynski
- Department of Physiology and Pharmacology, Western University, London, ON, Canada; Department of Psychology, Western University, London, ON, Canada; Robarts Research Institute, London, ON, Canada
| | - Joel Zylberberg
- Center for Vision Research, York University, Toronto, ON, Canada; Department of Physics and Astronomy, York University, Toronto, ON, Canada; Canadian Institute for Advanced Research, Toronto, ON, Canada.
| |
|
49
|
Venugopal S, Seki S, Terman DH, Pantazis A, Olcese R, Wiedau-Pazos M, Chandler SH. Resurgent Na+ Current Offers Noise Modulation in Bursting Neurons. PLoS Comput Biol 2019; 15:e1007154. [PMID: 31226124 PMCID: PMC6608983 DOI: 10.1371/journal.pcbi.1007154] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2018] [Revised: 07/03/2019] [Accepted: 06/04/2019] [Indexed: 01/20/2023] Open
Abstract
Neurons utilize bursts of action potentials as an efficient and reliable way to encode information. The intrinsic membrane properties of neurons involved in burst generation likely also help preserve the temporal features of bursts. Here we examined the contribution of the persistent and resurgent components of voltage-gated Na+ currents to modulating burst discharge in sensory neurons. Using mathematical modeling, theory, and dynamic-clamp electrophysiology, we show that, distinct from the persistent Na+ component, which is important for membrane resonance and burst generation, the resurgent Na+ component can help stabilize burst timing features, including burst durations and intervals. Moreover, this physiological role of the resurgent Na+ component conferred noise tolerance and preserved the regularity of burst patterns. Model analysis further predicted a negative feedback loop between the persistent and resurgent gating variables that mediates this gain in burst stability. These results highlight a novel role for the voltage-gated resurgent Na+ component in moderating the entropy of burst-encoded neural information.
Affiliation(s)
- Sharmila Venugopal
- Department of Integrative Biology and Physiology, University of California Los Angeles, Los Angeles, CA, United States of America
| | - Soju Seki
- Department of Integrative Biology and Physiology, University of California Los Angeles, Los Angeles, CA, United States of America
| | - David H Terman
- Department of Mathematics, The Ohio State University, Columbus, OH, United States of America
| | - Antonios Pantazis
- Department of Anesthesiology & Perioperative Medicine, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, United States of America
- Division of Neurobiology, Department of Clinical and Experimental Medicine (IKE), and Wallenberg Center for Molecular Medicine, Linköping University, 581 83 Linköping, Sweden
| | - Riccardo Olcese
- Department of Anesthesiology & Perioperative Medicine, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, United States of America
| | - Martina Wiedau-Pazos
- Department of Neurology, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, United States of America
| | - Scott H Chandler
- Department of Integrative Biology and Physiology, University of California Los Angeles, Los Angeles, CA, United States of America
| |
|
50
|
Adaptation of the human auditory cortex to changing background noise. Nat Commun 2019; 10:2509. [PMID: 31175304 PMCID: PMC6555798 DOI: 10.1038/s41467-019-10611-4] [Citation(s) in RCA: 37] [Impact Index Per Article: 7.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2018] [Accepted: 05/21/2019] [Indexed: 11/09/2022] Open
Abstract
Speech communication in real-world environments requires adaptation to changing acoustic conditions. How the human auditory cortex adapts as a new noise source appears in, or disappears from, the acoustic scene remains unclear. Here, we directly measured neural activity in the auditory cortex of six human subjects as they listened to speech with abruptly changing background noises. We report rapid and selective suppression of the acoustic features of noise in the neural responses. This suppression results in an enhanced representation and perception of speech acoustic features. The degree of adaptation to different background noises varies across neural sites and is predictable from the tuning properties and speech specificity of the sites. Moreover, adaptation to background noise is unaffected by the attentional focus of the listener. The convergence of these neural and perceptual effects reveals the intrinsic dynamic mechanisms that enable a listener to filter out irrelevant sound sources in a changing acoustic scene.
|