1
Leong ATL, Wong EC, Wang X, Wu EX. Hippocampus Modulates Vocalizations Responses at Early Auditory Centers. Neuroimage 2023; 270:119943. PMID: 36828157. DOI: 10.1016/j.neuroimage.2023.119943.
Abstract
Despite its prominence in learning and memory, the hippocampal influence on early auditory processing centers remains unknown. Here, we examined how hippocampal activity modulates sound-evoked responses in the auditory midbrain and thalamus using optogenetics and functional MRI (fMRI) in rodents. Stimulating ventral hippocampus (vHP) excitatory neurons at 5 Hz evoked robust hippocampal activity that propagated to the primary auditory cortex. We then paired 5 Hz vHP stimulation with either natural vocalizations or artificial/noise acoustic stimuli. vHP stimulation enhanced auditory responses to vocalizations (whether of negative or positive valence) in the inferior colliculus, medial geniculate body, and auditory cortex, but not to their temporally reversed counterparts (artificial sounds) or to broadband noise. Meanwhile, pharmacological vHP inactivation diminished response selectivity to vocalizations. These results directly reveal large-scale hippocampal participation in natural sound processing at early centers of the ascending auditory pathway and expand our present understanding of the hippocampus in global auditory networks.
Affiliation(s)
- Alex T L Leong
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
- Eddie C Wong
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
- Xunda Wang
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
- Ed X Wu
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; School of Biomedical Sciences, LKS Faculty of Medicine, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
2
Hamilton LS, Oganian Y, Hall J, Chang EF. Parallel and distributed encoding of speech across human auditory cortex. Cell 2021; 184:4626-4639.e13. PMID: 34411517. PMCID: PMC8456481. DOI: 10.1016/j.cell.2021.07.019.
Abstract
Speech perception is thought to rely on a cortical feedforward serial transformation of acoustic into linguistic representations. Using intracranial recordings across the entire human auditory cortex, electrocortical stimulation, and surgical ablation, we show that cortical processing across areas is not consistent with a serial hierarchical organization. Instead, response latency and receptive field analyses demonstrate parallel and distinct information processing in the primary and nonprimary auditory cortices. This functional dissociation was also observed during stimulation: stimulating the primary auditory cortex evokes auditory hallucinations but does not distort or interfere with speech perception, whereas opposite effects were observed during stimulation of nonprimary cortex in the superior temporal gyrus. Ablation of the primary auditory cortex does not affect speech perception. These results establish a distributed functional organization of parallel information processing throughout the human auditory cortex and demonstrate an essential, independent role for nonprimary auditory cortex in speech processing.
Affiliation(s)
- Liberty S Hamilton
- Department of Neurological Surgery, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA
- Yulia Oganian
- Department of Neurological Surgery, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA
- Jeffery Hall
- Department of Neurology and Neurosurgery, McGill University Montreal Neurological Institute, Montreal, QC H3A 2B4, Canada
- Edward F Chang
- Department of Neurological Surgery, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA
3
Logerot P, Smith PF, Wild M, Kubke MF. Auditory processing in the zebra finch midbrain: single unit responses and effect of rearing experience. PeerJ 2020; 8:e9363. PMID: 32775046. PMCID: PMC7384439. DOI: 10.7717/peerj.9363.
Abstract
In birds the auditory system plays a key role in providing the sensory input used to discriminate between conspecific and heterospecific vocal signals. In species known to learn their vocalizations, such as songbirds, this ability is generally considered to arise and be manifest in the forebrain, although there is no a priori reason why brainstem components of the auditory system could not also play an important part. To test this assumption, we used groups of normally reared and cross-fostered zebra finches; the cross-fostered birds had previously been shown in behavioural experiments to reduce their preference for conspecific songs following cross-fostering with Bengalese finches, a related species with a distinctly different song. The question we asked, therefore, is whether this experiential change also alters the bias in favour of conspecific song displayed by auditory midbrain units of normally reared zebra finches. By recording the responses of single units in the auditory midbrain nucleus MLd to a variety of zebra finch and Bengalese finch songs in both normally reared and cross-fostered zebra finches, we provide a positive answer to this question: the difference in response to conspecific and heterospecific songs seen in normally reared zebra finches is reduced following cross-fostering. In birds, the virtual absence of mammalian-like cortical projections upon auditory brainstem nuclei argues against the interpretation that the changes in MLd units observed in the present experiments result from top-down influences on sensory processing. Instead, it appears that MLd units can be influenced significantly by sensory inputs arising directly from a change in auditory experience during development.
Affiliation(s)
- Priscilla Logerot
- Anatomy and Medical Imaging, University of Auckland, Auckland, New Zealand
- Paul F. Smith
- Department of Pharmacology and Toxicology, School of Biomedical Sciences, Brain Health Research Centre, Brain Research New Zealand, and Eisdell Moore Centre, University of Otago, Dunedin, New Zealand
- Martin Wild
- Anatomy and Medical Imaging and Eisdell Moore Centre, University of Auckland, Auckland, New Zealand
- M. Fabiana Kubke
- Anatomy and Medical Imaging, Centre for Brain Research and Eisdell Moore Centre, University of Auckland, Auckland, New Zealand
4
Abstract
Our ability to make sense of the auditory world results from neural processing that begins in the ear, goes through multiple subcortical areas, and continues in the cortex. The specific contribution of the auditory cortex to this chain of processing is far from understood. Although many of the properties of neurons in the auditory cortex resemble those of subcortical neurons, they show somewhat more complex selectivity for sound features, which is likely to be important for the analysis of natural sounds, such as speech, in real-life listening conditions. Furthermore, recent work has shown that auditory cortical processing is highly context-dependent, integrates auditory inputs with other sensory and motor signals, depends on experience, and is shaped by cognitive demands, such as attention. Thus, in addition to being the locus for more complex sound selectivity, the auditory cortex is increasingly understood to be an integral part of the network of brain regions responsible for prediction, auditory perceptual decision-making, and learning. In this review, we focus on three key areas that are contributing to this understanding: the sound features that are preferentially represented by cortical neurons, the spatial organization of those preferences, and the cognitive roles of the auditory cortex.
Affiliation(s)
- Andrew J King
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, OX1 3PT, UK
- Sundeep Teki
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, OX1 3PT, UK
- Ben D B Willmore
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, OX1 3PT, UK
5
Westö J, May PJC. Describing complex cells in primary visual cortex: a comparison of context and multifilter LN models. J Neurophysiol 2018; 120:703-719. PMID: 29718805. PMCID: PMC6139451. DOI: 10.1152/jn.00916.2017.
Abstract
Receptive field (RF) models are an important tool for deciphering neural responses to sensory stimuli. The two currently popular RF models are multifilter linear-nonlinear (LN) models and context models. Models are, however, never correct, and they rely on assumptions to keep them simple enough to be interpretable. As a consequence, different models describe different stimulus-response mappings, which may or may not be good approximations of real neural behavior. In the current study, we take up two tasks: 1) we introduce new ways to estimate context models with realistic nonlinearities, that is, with logistic and exponential functions, and 2) we evaluate context models and multifilter LN models in terms of how well they describe recorded data from complex cells in cat primary visual cortex. Our results, based on single-spike information and correlation coefficients, indicate that context models outperform corresponding multifilter LN models of equal complexity (measured in terms of number of parameters), with the best increase in performance being achieved by the novel context models. Consequently, our results suggest that the multifilter LN-model framework is suboptimal for describing the behavior of complex cells: the context-model framework is clearly superior while still providing interpretable quantizations of neural behavior. NEW & NOTEWORTHY We used data from complex cells in primary visual cortex to estimate a wide variety of receptive field models from two frameworks that have previously not been compared with each other. The models included traditionally used multifilter linear-nonlinear models and novel variants of context models. Using mutual information and correlation coefficients as performance measures, we showed that context models are superior for describing complex cells and that the novel context models performed the best.
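To make the multifilter LN framework compared above concrete, here is a minimal sketch (not the authors' code): a complex cell is commonly approximated by squaring the outputs of a quadrature pair of linear filters and passing their weighted sum through a static logistic nonlinearity. The filter shapes, weights, and bias below are illustrative assumptions.

```python
import numpy as np

def ln_response(stim, filters, weights, bias):
    """Multifilter LN model: project the stimulus onto each linear
    filter, square the outputs (energy-model style), take a weighted
    sum, and apply a logistic output nonlinearity.

    stim    : (T, D) stimulus vectors, one row per time step
    filters : (K, D) linear filters
    weights : (K,) weight on each squared filter output
    bias    : scalar offset
    Returns (T,) predicted spiking probability in [0, 1].
    """
    proj = stim @ filters.T                # (T, K) filter outputs
    drive = (proj ** 2) @ weights + bias   # squared, weighted, summed
    return 1.0 / (1.0 + np.exp(-drive))    # logistic nonlinearity

# Toy demo: a quadrature (cosine/sine) filter pair makes the response
# invariant to the spatial phase of a sinusoidal input, the classic
# complex-cell signature.
D = 16
x = np.arange(D)
filters = np.stack([np.cos(2 * np.pi * x / D), np.sin(2 * np.pi * x / D)])
stim = np.stack([np.cos(2 * np.pi * x / D + phi)
                 for phi in np.linspace(0, np.pi, 5)])
r = ln_response(stim, filters, weights=np.array([0.5, 0.5]), bias=-2.0)
# r is (nearly) identical for every phase phi
```

A context model, by contrast, would let each stimulus element's gain depend multiplicatively on neighboring stimulus elements rather than on a fixed set of filters.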
Affiliation(s)
- Johan Westö
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
- Patrick J C May
- Department of Psychology, Lancaster University, Lancaster, United Kingdom
6
Lowet E, Gips B, Roberts MJ, De Weerd P, Jensen O, van der Eerden J. Microsaccade-rhythmic modulation of neural synchronization and coding within and across cortical areas V1 and V2. PLoS Biol 2018; 16:e2004132. PMID: 29851960. PMCID: PMC5997357. DOI: 10.1371/journal.pbio.2004132.
Abstract
Primates sample their visual environment actively through saccades and microsaccades (MSs). Saccadic eye movements not only modulate neural spike rates but might also affect temporal correlations (synchrony) among neurons. Neural synchrony plays a role in neural coding and modulates information transfer between cortical areas. The question arises of how eye movements shape neural synchrony within and across cortical areas and how this affects visual processing. Through local field recordings in macaque early visual cortex while monitoring eye position, and through neural network simulations, we find 2 distinct synchrony regimes in early visual cortex that are embedded in a 3- to 4-Hz MS-related rhythm during visual fixation. In the period shortly after an MS ("transient period"), synchrony was high within and between cortical areas. In the subsequent period ("sustained period"), overall synchrony dropped and became selective to stimulus properties. Only mutually connected neurons with similar stimulus responses exhibited sustained narrow-band gamma synchrony (25–80 Hz), both within and across cortical areas. Recordings in macaque V1 and V2 matched the model predictions. Furthermore, our modeling provides predictions on how (micro)saccade-modulated gamma synchrony in V1 shapes V2 receptive fields (RFs). We suggest that the rhythmic alternation between synchronization regimes represents a basic repeating sampling strategy of the visual system.
Author summary: During visual exploration, we continuously move our eyes in a quick, coordinated manner several times a second to scan our environment. These movements are called saccades. Even while we fixate on a visual object, we unconsciously execute small saccades that are termed microsaccades (MSs). Despite MSs being relatively small, they are suggested to be critical to maintain and support accurate perception during visual fixation. Here, we studied in macaques the influence of MSs on the synchronization of neural rhythms, which are important to regulate information flow in the brain, in areas of the cerebral cortex that are important for early processing of visual information, and we complemented the analysis with computational modeling. We found that synchronization properties shortly after an MS were distinct from synchronization in the later phase. Specifically, we found an early, spectrally broadband synchronization within and between visual cortices that was broadly tuned over cortical space and stimulus properties. This was followed by narrow-band synchronization in the gamma range (25–80 Hz) that was spatially and stimulus specific. This suggests that the manner in which information is transmitted and integrated between early visual cortices depends on the timing relative to MSs. We illustrate this in a computational model showing that the receptive fields (RFs) of neurons in the secondary visual cortex are expected to differ depending on MS timing. Our results highlight the significance of MS timing for understanding cortical dynamics and suggest that the regulation of synchronization might be one mechanism by which MSs support visual perception.
Affiliation(s)
- Eric Lowet
- Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands
- Bart Gips
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, the Netherlands
- Mark J. Roberts
- Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands
- Peter De Weerd
- Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands; Maastricht Centre for Systems Biology (MaCSBio), Maastricht University, Maastricht, the Netherlands
- Ole Jensen
- Centre for Human Brain Health, School of Psychology, University of Birmingham, Birmingham, United Kingdom
- Jan van der Eerden
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, the Netherlands
7
David SV. Incorporating behavioral and sensory context into spectro-temporal models of auditory encoding. Hear Res 2018; 360:107-123. PMID: 29331232. PMCID: PMC6292525. DOI: 10.1016/j.heares.2017.12.021.
Abstract
For several decades, auditory neuroscientists have used spectro-temporal encoding models to understand how neurons in the auditory system represent sound. Derived from early applications of systems identification tools to the auditory periphery, the spectro-temporal receptive field (STRF) and more sophisticated variants have emerged as an efficient means of characterizing representation throughout the auditory system. Most of these encoding models describe neurons as static sensory filters. However, auditory neural coding is not static. Sensory context, reflecting the acoustic environment, and behavioral context, reflecting the internal state of the listener, can both influence sound-evoked activity, particularly in central auditory areas. This review explores recent efforts to integrate context into spectro-temporal encoding models. It begins with a brief tutorial on the basics of estimating and interpreting STRFs. Then it describes three recent studies that have characterized contextual effects on STRFs, emerging over a range of timescales, from many minutes to tens of milliseconds. An important theme of this work is not simply that context influences auditory coding, but also that contextual effects span a large continuum of internal states. The added complexity of these context-dependent models introduces new experimental and theoretical challenges that must be addressed in order to be used effectively. Several new methodological advances promise to address these limitations and allow the development of more comprehensive context-dependent models in the future.
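The STRF-estimation basics that this review covers in its tutorial can be sketched as ridge-regularized reverse correlation on simulated data. Everything below is an illustrative stand-in (the lag count, regularization strength, and "ground-truth" filter are assumptions, not anything from the review).

```python
import numpy as np

def ridge_strf(stim, rate, lags, lam=1.0):
    """Estimate an STRF by ridge-regularized reverse correlation:
    w = (X'X + lam*I)^(-1) X'y, where each row of X is the flattened
    stimulus history (lags x F bins) preceding one response bin.

    stim : (T, F) spectrogram; rate : (T,) response; lags : history bins.
    Returns a (lags, F) STRF; the last row is the shortest latency.
    """
    T, F = stim.shape
    X = np.stack([stim[t - lags:t].ravel() for t in range(lags, T)])
    y = rate[lags:]
    w = np.linalg.solve(X.T @ X + lam * np.eye(lags * F), X.T @ y)
    return w.reshape(lags, F)

# Simulated static linear neuron with a known spectro-temporal filter.
rng = np.random.default_rng(0)
true_strf = np.zeros((4, 6))
true_strf[1, 2] = 1.0   # excitatory pixel
true_strf[2, 4] = -0.5  # inhibitory pixel
stim = rng.standard_normal((4000, 6))
rate = np.zeros(4000)
for t in range(4, 4000):
    rate[t] = np.sum(true_strf * stim[t - 4:t])
rate += 0.1 * rng.standard_normal(4000)  # measurement noise
est = ridge_strf(stim, rate, lags=4, lam=10.0)
# est recovers both nonzero pixels of true_strf
```

The context-dependent models reviewed here go beyond this by letting `w` itself vary with behavioral state or recent stimulus history, rather than staying a fixed filter.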
Affiliation(s)
- Stephen V David
- Oregon Hearing Research Center, Oregon Health & Science University, 3181 SW Sam Jackson Park Rd, MC L335A, Portland, OR 97239, United States
8
Cluster-based analysis improves predictive validity of spike-triggered receptive field estimates. PLoS One 2017; 12:e0183914. PMID: 28877194. PMCID: PMC5587334. DOI: 10.1371/journal.pone.0183914.
Abstract
Spectrotemporal receptive field (STRF) characterization is a central goal of auditory physiology. STRFs are often approximated by the spike-triggered average (STA), which reflects the average stimulus preceding a spike. In many cases, the raw STA is subjected to a threshold defined by gain values expected by chance. However, such correction methods have not been universally adopted, and the consequences of specific gain-thresholding approaches have not been investigated systematically. Here, we evaluate two classes of statistical correction techniques, using the resulting STRF estimates to predict responses to a novel validation stimulus. The first, more traditional technique eliminated STRF pixels (time-frequency bins) with gain values expected by chance. This correction method yielded significant increases in prediction accuracy, including when the threshold setting was optimized for each unit. The second technique was a two-step thresholding procedure wherein clusters of contiguous pixels surviving an initial gain threshold were then subjected to a cluster mass threshold based on summed pixel values. This approach significantly improved upon even the best gain-thresholding techniques. Additional analyses suggested that allowing threshold settings to vary independently for excitatory and inhibitory subfields of the STRF resulted in only marginal additional gains, at best. In summary, augmenting reverse correlation techniques with principled statistical correction choices increased prediction accuracy by over 80% for multi-unit STRFs and by over 40% for single-unit STRFs, furthering the interpretational relevance of the recovered spectrotemporal filters for auditory systems analysis.
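A minimal sketch may make the raw STA and the first (gain-thresholding) correction class concrete. The simulated cell, window length, shuffle count, and threshold multiplier below are illustrative assumptions; the paper's cluster-mass second step (grouping contiguous suprathreshold pixels and thresholding their summed values) is described but not implemented here.

```python
import numpy as np

def spike_triggered_average(stim, spikes, lags):
    """Raw STA: spike-weighted average of the `lags` stimulus frames
    preceding each time bin that contains at least one spike.

    stim : (T, F) stimulus (time x frequency); spikes : (T,) counts.
    Returns a (lags, F) STA; row lags-1 is the bin just before the spike.
    """
    sta = np.zeros((lags, stim.shape[1]))
    n = 0
    for t in np.flatnonzero(spikes[lags:]) + lags:
        sta += spikes[t] * stim[t - lags:t]
        n += spikes[t]
    return sta / max(n, 1)

def chance_threshold(stim, spikes, lags, n_shuffles=100, alpha=2.0):
    """Null distribution of STA pixels from circularly shifted spike
    trains; returns the null mean and an alpha-SD gain threshold."""
    rng = np.random.default_rng(0)
    null = np.stack([
        spike_triggered_average(
            stim, np.roll(spikes, int(rng.integers(lags, len(spikes)))), lags)
        for _ in range(n_shuffles)
    ])
    return null.mean(axis=0), alpha * null.std(axis=0)

# Simulated cell driven by frequency bin 3 at a 2-bin latency.
rng = np.random.default_rng(1)
stim = rng.standard_normal((5000, 8))
drive = np.clip(stim[:, 3], 0.0, None)
spikes = rng.poisson(0.1 + 0.5 * np.roll(drive, 2))
sta = spike_triggered_average(stim, spikes, lags=5)
mu, thr = chance_threshold(stim, spikes, lags=5)
thresholded = np.where(np.abs(sta - mu) > thr, sta, 0.0)
# thresholded keeps the driving pixel and zeroes chance-level pixels
```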
9
Yildiz IB, Mesgarani N, Deneve S. Predictive Ensemble Decoding of Acoustical Features Explains Context-Dependent Receptive Fields. J Neurosci 2016; 36:12338-12350. PMID: 27927954. PMCID: PMC5148225. DOI: 10.1523/jneurosci.4648-15.2016.
Abstract
A primary goal of auditory neuroscience is to identify the sound features extracted and represented by auditory neurons. Linear encoding models, which describe neural responses as a function of the stimulus, have primarily been used for this purpose. Here, we provide theoretical arguments and experimental evidence in support of an alternative approach, based on decoding the stimulus from the neural response. We used a Bayesian normative approach to predict the responses of neurons detecting relevant auditory features despite ambiguities and noise. We compared the model predictions to recordings from the primary auditory cortex of ferrets and found that: (1) the decoding filters of auditory neurons resemble the filters learned from the statistics of speech sounds; (2) the decoding model captures the dynamics of responses better than a linear encoding model of similar complexity; and (3) the decoding model accounts for the accuracy with which the stimulus is represented in neural activity, whereas the linear encoding model performs very poorly. Most importantly, our model predicts that neuronal responses are fundamentally shaped by "explaining away," a divisive competition between alternative interpretations of the auditory scene.
SIGNIFICANCE STATEMENT: Neural responses in the auditory cortex are dynamic, nonlinear, and hard to predict. Traditionally, encoding models have been used to describe neural responses as a function of the stimulus. However, in addition to external stimulation, neural activity is strongly modulated by the responses of other neurons in the network. We hypothesized that auditory neurons aim to collectively decode their stimulus. In particular, a stimulus feature that is decoded (or explained away) by one neuron is not explained by another. We demonstrated that this novel Bayesian decoding model is better at capturing the dynamic responses of cortical neurons in ferrets: whereas the linear encoding model poorly reflects the selectivity of neurons, the decoding model can account for the strong nonlinearities observed in neural data.
Affiliation(s)
- Izzet B Yildiz
- Group for Neural Theory, Laboratoire de Neurosciences Cognitives, Département d'Etudes Cognitives, Ecole Normale Supérieure, 75005 Paris, France
- Nima Mesgarani
- Department of Electrical Engineering, Columbia University, New York, New York 10027
- Sophie Deneve
- Group for Neural Theory, Laboratoire de Neurosciences Cognitives, Département d'Etudes Cognitives, Ecole Normale Supérieure, 75005 Paris, France
10
Westö J, May PJC. Capturing contextual effects in spectro-temporal receptive fields. Hear Res 2016; 339:195-210. PMID: 27473504. DOI: 10.1016/j.heares.2016.07.012.
Abstract
Spectro-temporal receptive fields (STRFs) are thought to provide descriptive images of the computations performed by neurons along the auditory pathway. However, their validity can be questioned because they rely on a set of assumptions that are probably not fulfilled by real neurons exhibiting contextual effects, that is, nonlinear interactions in the time or frequency dimension that cannot be described with a linear filter. We used a novel approach to investigate how a variety of contextual effects, due to facilitating nonlinear interactions and synaptic depression, affect different STRF models, and if these effects can be captured with a context field (CF). Contextual effects were incorporated in simulated networks of spiking neurons, allowing one to define the true STRFs of the neurons. This, in turn, made it possible to evaluate the performance of each STRF model by comparing the estimations with the true STRFs. We found that currently used STRF models are particularly poor at estimating inhibitory regions. Specifically, contextual effects make estimated STRFs dependent on stimulus density in a contrasting fashion: inhibitory regions are underestimated at lower densities while artificial inhibitory regions emerge at higher densities. The CF was found to provide a solution to this dilemma, but only when it is used together with a generalized linear model. Our results therefore highlight the limitations of the traditional STRF approach and provide useful recipes for how different STRF models and stimuli can be used to arrive at reliable quantifications of neural computations in the presence of contextual effects. The results therefore push the purpose of STRF analysis from simply finding an optimal stimulus toward describing context-dependent computations of neurons along the auditory pathway.
Affiliation(s)
- Johan Westö
- Department of Neuroscience and Biomedical Engineering, Aalto University, FI-00076 Espoo, Finland
- Patrick J C May
- Special Laboratory Non-Invasive Brain Imaging, Leibniz Institute for Neurobiology, D-39118 Magdeburg, Germany
11
Friederich U, Billings SA, Hardie RC, Juusola M, Coca D. Fly Photoreceptors Encode Phase Congruency. PLoS One 2016; 11:e0157993. PMID: 27336733. PMCID: PMC4919002. DOI: 10.1371/journal.pone.0157993.
Abstract
More than five decades ago it was postulated that sensory neurons detect and selectively enhance behaviourally relevant features of natural signals. Although we now know that sensory neurons are tuned to encode natural stimuli efficiently, until now it was not clear which statistical features of the stimuli they encode, or how. Here we reverse-engineer the neural code of Drosophila photoreceptors and show for the first time that photoreceptors exploit nonlinear dynamics to selectively enhance and encode phase-related features of temporal stimuli, such as local phase congruency, which are invariant to changes in illumination and contrast. We demonstrate that, to mitigate the inherent sensitivity of the local phase congruency measure to noise, the nonlinear coding mechanisms of the fly photoreceptors are tuned to suppress random-phase signals, which explains why photoreceptor responses to naturalistic stimuli differ significantly from their responses to white-noise stimuli.
Affiliation(s)
- Uwe Friederich
- Department of Automatic Control & Systems Engineering, the University of Sheffield, Mappin Street, Sheffield, S1 3JD, United Kingdom
- Stephen A. Billings
- Department of Automatic Control & Systems Engineering, the University of Sheffield, Mappin Street, Sheffield, S1 3JD, United Kingdom
- Roger C. Hardie
- Department of Physiology, Development and Neuroscience, University of Cambridge, Downing Street, Cambridge, CB2 3DY, United Kingdom
- Mikko Juusola
- Department of Biomedical Science, the University of Sheffield, Western Bank, Sheffield, S10 2TN, United Kingdom
- Daniel Coca
- Department of Automatic Control & Systems Engineering, the University of Sheffield, Mappin Street, Sheffield, S1 3JD, United Kingdom
12
Abstract
This study investigated auditory stimulus selectivity in the midbrain superior colliculus (SC) of the echolocating bat, an animal that relies on hearing to guide its orienting behaviors. Multichannel, single-unit recordings were taken across laminae of the midbrain SC of the awake, passively listening big brown bat, Eptesicus fuscus. Species-specific frequency-modulated (FM) echolocation sound sequences with dynamic spectrotemporal features served as acoustic stimuli along with artificial sound sequences matched in bandwidth, amplitude, and duration but differing in spectrotemporal structure. Neurons in dorsal sensory regions of the bat SC responded selectively to elements within the FM sound sequences, whereas neurons in ventral sensorimotor regions showed broad response profiles to natural and artificial stimuli. Moreover, a generalized linear model (GLM) constructed on responses in the dorsal SC to artificial linear FM stimuli failed to predict responses to natural sounds, and vice versa, but the GLM produced accurate response predictions in ventral SC neurons. This result suggests that auditory selectivity in the dorsal extent of the bat SC arises through nonlinear mechanisms, which extract species-specific sensory information. Importantly, auditory selectivity appeared only in responses to stimuli containing the natural statistics of the acoustic signals used by the bat for spatial orientation, namely sonar vocalizations, offering support for the hypothesis that sensory selectivity enables rapid species-specific orienting behaviors. The results of this study are the first, to our knowledge, to show auditory spectrotemporal selectivity to natural stimuli in SC neurons and serve to inform a more general understanding of mechanisms guiding sensory selectivity for natural, goal-directed orienting behaviors.
13
Froemke RC, Schreiner CE. Synaptic plasticity as a cortical coding scheme. Curr Opin Neurobiol 2015; 35:185-99. PMID: 26497430. DOI: 10.1016/j.conb.2015.10.003.
Abstract
Processing of auditory information requires constant adjustment due to alterations of the environment and changing conditions in the nervous system with age, health, and experience. Consequently, patterns of activity in cortical networks have complex dynamics over a wide range of timescales, from milliseconds to days and longer. In the primary auditory cortex (AI), multiple forms of adaptation and plasticity shape synaptic input and action potential output. However, the variance of neuronal responses has made it difficult to characterize AI receptive fields and to determine the function of AI in processing auditory information such as vocalizations. Here we describe recent studies on the temporal modulation of cortical responses and consider the relation of synaptic plasticity to neural coding.
Affiliation(s)
- Robert C Froemke: Skirball Institute for Biomolecular Medicine, Neuroscience Institute, Departments of Otolaryngology, Neuroscience and Physiology, New York University School of Medicine, New York, NY, USA; Center for Neural Science, New York University, New York, NY, USA.
- Christoph E Schreiner: Coleman Memorial Laboratory and W.M. Keck Foundation Center for Integrative Neuroscience, Neuroscience Graduate Group, Department of Otolaryngology, University of California, San Francisco, CA, USA
14
Clemens J, Rau F, Hennig RM, Hildebrandt KJ. Context-dependent coding and gain control in the auditory system of crickets. Eur J Neurosci 2015; 42:2390-406. [PMID: 26179973] [DOI: 10.1111/ejn.13019]
Abstract
Sensory systems process stimuli that greatly vary in intensity and complexity. To maintain efficient information transmission, neural systems need to adjust their properties to these different sensory contexts, yielding adaptive or stimulus-dependent codes. Here, we demonstrated adaptive spectrotemporal tuning in a small neural network, i.e. the peripheral auditory system of the cricket. We found that tuning of cricket auditory neurons was sharper for complex multi-band than for simple single-band stimuli. Information theoretical considerations revealed that this sharpening improved information transmission by separating the neural representations of individual stimulus components. A network model inspired by the structure of the cricket auditory system suggested two putative mechanisms underlying this adaptive tuning: a saturating peripheral nonlinearity could change the spectral tuning, whereas broad feed-forward inhibition was able to reproduce the observed adaptive sharpening of temporal tuning. Our study revealed a surprisingly dynamic code usually found in more complex nervous systems and suggested that stimulus-dependent codes could be implemented using common neural computations.
Affiliation(s)
- Jan Clemens: Behavioral Physiology Group, Department of Biology, Humboldt-Universität zu Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany; Princeton Neuroscience Institute, Princeton University, Washington Road, Princeton, NJ 08540, USA
- Florian Rau: Behavioral Physiology Group, Department of Biology, Humboldt-Universität zu Berlin, Berlin, Germany
- R Matthias Hennig: Behavioral Physiology Group, Department of Biology, Humboldt-Universität zu Berlin, Berlin, Germany
- K Jannis Hildebrandt: Cluster of Excellence 'Hearing4all', Department for Neuroscience, University of Oldenburg, Oldenburg, Germany; Research Center Neurosensory Science, University of Oldenburg, Oldenburg, Germany
15
Fast and robust estimation of spectro-temporal receptive fields using stochastic approximations. J Neurosci Methods 2015; 246:119-33. [PMID: 25744059] [DOI: 10.1016/j.jneumeth.2015.02.009]
Abstract
BACKGROUND: The receptive field (RF) represents the signal preferences of sensory neurons and is the primary analysis method for understanding sensory coding. While it is essential to estimate a neuron's RF, finding numerical solutions to increasingly complex RF models can become computationally intensive, in particular for high-dimensional stimuli or when many neurons are involved.
NEW METHOD: Here we propose an optimization scheme based on stochastic approximations that facilitate this task. The basic idea is to derive solutions on a random subset rather than computing the full solution on the available data set. To test this, we applied different optimization schemes based on stochastic gradient descent (SGD) to both the generalized linear model (GLM) and a recently developed classification-based RF estimation approach.
RESULTS AND COMPARISON WITH EXISTING METHOD: Using simulated and recorded responses, we demonstrate that RF parameter optimization based on state-of-the-art SGD algorithms produces robust estimates of the spectro-temporal receptive field (STRF). Results on recordings from the auditory midbrain demonstrate that stochastic approximations preserve both predictive power and tuning properties of STRFs. A correlation of 0.93 with the STRF derived from the full solution may be obtained in less than 10% of the full solution's estimation time. We also present an on-line algorithm that allows simultaneous monitoring of STRF properties of more than 30 neurons on a single computer.
CONCLUSIONS: The proposed approach may not only prove helpful for large-scale recordings but also provides a more comprehensive characterization of neural tuning in experiments than standard tuning curves.
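The core idea of the method, approximating the full STRF solution by stochastic gradient descent on random data subsets, can be sketched as follows. This is an illustrative toy (simulated Gaussian stimuli, a plain mean-squared-error objective, hypothetical hyperparameters), not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def strf_sgd(stim, rate, n_iter=500, batch=200, lr=0.01):
    """Estimate a linear STRF by SGD on random mini-batches,
    a stochastic approximation of the full least-squares fit.
    stim: (T, D) flattened spectrogram patches; rate: (T,) responses."""
    T, D = stim.shape
    w = np.zeros(D)
    for _ in range(n_iter):
        idx = rng.integers(0, T, size=batch)   # random data subset
        X, y = stim[idx], rate[idx]
        grad = X.T @ (X @ w - y) / batch       # mini-batch MSE gradient
        w -= lr * grad
    return w

# Simulated linear neuron with a smooth toy STRF.
T, D = 5000, 40
true_strf = np.sin(np.linspace(0, 3 * np.pi, D))
stim = rng.normal(size=(T, D))
rate = stim @ true_strf + 0.5 * rng.normal(size=T)

w_sgd = strf_sgd(stim, rate)
w_full = np.linalg.lstsq(stim, rate, rcond=None)[0]  # full solution
corr = np.corrcoef(w_sgd, w_full)[0, 1]
```

On this toy problem the subset-based estimate correlates highly with the full least-squares STRF while touching only a fraction of the data per step, which is the trade-off the paper quantifies on real recordings.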
16
Coding principles of the canonical cortical microcircuit in the avian brain. Proc Natl Acad Sci U S A 2015; 112:3517-22. [PMID: 25691736] [DOI: 10.1073/pnas.1408545112]
Abstract
Mammalian neocortex is characterized by a layered architecture and a common or "canonical" microcircuit governing information flow among layers. This microcircuit is thought to underlie the computations required for complex behavior. Despite the absence of a six-layered cortex, birds are capable of complex cognition and behavior. In addition, the avian auditory pallium is composed of adjacent information-processing regions with genetically identified neuron types and projections among regions comparable with those found in the neocortex. Here, we show that the avian auditory pallium exhibits the same information-processing principles that define the canonical cortical microcircuit, long thought to have evolved only in mammals. These results suggest that the canonical cortical microcircuit evolved in a common ancestor of mammals and birds and provide a physiological explanation for the evolution of neural processes that give rise to complex behavior in the absence of cortical lamination.
17
Habitat-related differences in auditory processing of complex tones and vocal signal properties in four songbirds. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2015; 201:395-410. [DOI: 10.1007/s00359-015-0986-7]
18
Online stimulus optimization rapidly reveals multidimensional selectivity in auditory cortical neurons. J Neurosci 2014; 34:8963-75. [PMID: 24990917] [DOI: 10.1523/jneurosci.0260-14.2014]
Abstract
Neurons in sensory brain regions shape our perception of the surrounding environment through two parallel operations: decomposition and integration. For example, auditory neurons decompose sounds by separately encoding their frequency, temporal modulation, intensity, and spatial location. Neurons also integrate across these various features to support a unified perceptual gestalt of an auditory object. At higher levels of a sensory pathway, neurons may select for a restricted region of feature space defined by the intersection of multiple, independent stimulus dimensions. To further characterize how auditory cortical neurons decompose and integrate multiple facets of an isolated sound, we developed an automated procedure that manipulated five fundamental acoustic properties in real time based on single-unit feedback in awake mice. Within several minutes, the online approach converged on regions of the multidimensional stimulus manifold that reliably drove neurons at significantly higher rates than predefined stimuli. Optimized stimuli were cross-validated against pure tone receptive fields and spectrotemporal receptive field estimates in the inferior colliculus and primary auditory cortex. We observed, from midbrain to cortex, increases in both level invariance and frequency selectivity, which may underlie equivalent sparseness of responses in the two areas. We found that onset and steady-state spike rates increased proportionately as the stimulus was tailored to the multidimensional receptive field. By separately evaluating the amount of leverage each sound feature exerted on the overall firing rate, these findings reveal interdependencies between stimulus features as well as hierarchical shifts in selectivity and invariance that may go unnoticed with traditional approaches.
19
Razak KA, Fuzessery ZM. Development of echolocation calls and neural selectivity for echolocation calls in the pallid bat. Dev Neurobiol 2014; 75:1125-39. [PMID: 25142131] [DOI: 10.1002/dneu.22226]
Abstract
Studies of birdsongs and neural selectivity for songs have provided important insights into principles of concurrent behavioral and auditory system development. Relatively little is known about mammalian auditory system development in terms of vocalizations or other behaviorally relevant sounds. This review suggests echolocating bats are suitable mammalian model systems to understand development of auditory behaviors. The simplicity of echolocation calls with known behavioral relevance and strong neural selectivity provides a platform to address how natural experience shapes cortical receptive field (RF) mechanisms. We summarize recent studies in the pallid bat that followed development of echolocation calls and cortical processing of such calls. We also discuss similar studies in the mustached bat for comparison. These studies suggest: (1) there are different developmental sensitive periods for different acoustic features of the same vocalization. The underlying basis is the capacity for some components of the RF to be modified independent of others. Some RF computations and maps involved in call processing are present even before the cochlea is mature and well before use of echolocation in flight. Others develop over a much longer time course. (2) Normal experience is required not just for refinement, but also for maintenance, of response properties that develop in an experience-independent manner. (3) Experience utilizes millisecond-range changes in timing of inhibitory and excitatory RF components as substrates to shape vocalization selectivity. We suggest that bat species and call diversity provide a unique opportunity to address developmental constraints in the evolution of neural mechanisms of vocalization processing.
Affiliation(s)
- Khaleel A Razak: Department of Psychology and Graduate Neuroscience Program, University of California, Riverside, California
- Zoltan M Fuzessery: Department of Zoology and Physiology, University of Wyoming, Laramie, Wyoming
20

21
Conserved mechanisms of vocalization coding in mammalian and songbird auditory midbrain. Hear Res 2013; 305:45-56. [PMID: 23726970] [DOI: 10.1016/j.heares.2013.05.005]
Abstract
The ubiquity of social vocalizations among animals provides the opportunity to identify conserved mechanisms of auditory processing that subserve communication. Identifying auditory coding properties that are shared across vocal communicators will provide insight into how human auditory processing leads to speech perception. Here, we compare auditory response properties and neural coding of social vocalizations in auditory midbrain neurons of mammalian and avian vocal communicators. The auditory midbrain is a nexus of auditory processing because it receives and integrates information from multiple parallel pathways and provides the ascending auditory input to the thalamus. The auditory midbrain is also the first region in the ascending auditory system where neurons show complex tuning properties that are correlated with the acoustics of social vocalizations. Single unit studies in mice, bats and zebra finches reveal shared principles of auditory coding including tonotopy, excitatory and inhibitory interactions that shape responses to vocal signals, nonlinear response properties that are important for auditory coding of social vocalizations and modulation tuning. Additionally, single neuron responses in the mouse and songbird midbrain are reliable, selective for specific syllables, and rely on spike timing for neural discrimination of distinct vocalizations. We propose that future research on auditory coding of vocalizations in mouse and songbird midbrain neurons adopt similar experimental and analytical approaches so that conserved principles of vocalization coding may be distinguished from those that are specialized for each species. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives".
22
Abstract
Auditory neurons are often described in terms of their spectrotemporal receptive fields (STRFs). These map the relationship between features of the sound spectrogram and firing rates of neurons. Recently, we showed that neurons in the primary fields of the ferret auditory cortex are also subject to gain control: when sounds undergo smaller fluctuations in their level over time, the neurons become more sensitive to small-level changes (Rabinowitz et al., 2011). Just as STRFs measure the spectrotemporal features of a sound that lead to changes in the firing rates of neurons, in this study, we sought to estimate the spectrotemporal regions in which sound statistics lead to changes in the gain of neurons. We designed a set of stimuli with complex contrast profiles to characterize these regions. This allowed us to estimate the STRFs of cortical neurons alongside a set of spectrotemporal contrast kernels. We find that these two sets of integration windows match up: the extent to which a stimulus feature causes the firing rate of a neuron to change is strongly correlated with the extent to which the contrast of that feature modulates the gain of the neuron. Adding contrast kernels to STRF models also yields considerable improvements in the ability to capture and predict how auditory cortical neurons respond to statistically complex sounds.
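The divisive contrast-gain idea summarized above (neurons become more sensitive when stimulus level fluctuations shrink, with the relevant contrast weighted by a spectrotemporal kernel) can be caricatured in a few lines. The functional form and parameters below are illustrative assumptions, not the model fitted in the study:

```python
import numpy as np

def local_gain(stim, contrast_kernel, g0=1.0, k=1.0):
    """Divisive gain that shrinks as local stimulus contrast grows.
    The contrast kernel weights how each spectrotemporal channel
    contributes to the gain signal (crude |.|-based contrast proxy)."""
    contrast = np.abs(stim) @ contrast_kernel
    return g0 / (1.0 + k * contrast)

def gain_controlled_response(stim, strf, contrast_kernel):
    """LN-style response: STRF drive scaled by contrast-dependent gain."""
    return local_gain(stim, contrast_kernel) * (stim @ strf)

rng = np.random.default_rng(2)
D = 20
strf = rng.normal(size=D)
ckern = np.ones(D) / D                       # uniform contrast kernel

stim_low = 0.2 * rng.normal(size=(500, D))   # small level fluctuations
stim_high = 2.0 * rng.normal(size=(500, D))  # large level fluctuations

gain_low = local_gain(stim_low, ckern).mean()
gain_high = local_gain(stim_high, ckern).mean()
```

Under this toy model the average gain is higher for the low-contrast stimulus, so the same small change in drive produces a larger change in output, which is the qualitative effect the contrast kernels in the study are designed to capture.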
23
Hurley LM, Sullivan MR. From behavioral context to receptors: serotonergic modulatory pathways in the IC. Front Neural Circuits 2012; 6:58. [PMID: 22973195] [PMCID: PMC3434355] [DOI: 10.3389/fncir.2012.00058]
Abstract
In addition to ascending, descending, and lateral auditory projections, inputs extrinsic to the auditory system also influence neural processing in the inferior colliculus (IC). These types of inputs often have an important role in signaling salient factors such as behavioral context or internal state. One route for such extrinsic information is through centralized neuromodulatory networks like the serotonergic system. Serotonergic inputs to the IC originate from centralized raphe nuclei, release serotonin in the IC, and activate serotonin receptors expressed by auditory neurons. Different types of serotonin receptors act as parallel pathways regulating specific features of circuitry within the IC. This results from variation in subcellular localizations and effector pathways of different receptors, which consequently influence auditory responses in distinct ways. Serotonin receptors may regulate GABAergic inhibition, influence response gain, alter spike timing, or have effects that are dependent on the level of activity. Serotonin receptor types additionally interact in nonadditive ways to produce distinct combinatorial effects. This array of effects of serotonin is likely to depend on behavioral context, since the levels of serotonin in the IC transiently increase during behavioral events including stressful situations and social interaction. These studies support a broad model of serotonin receptors as a link between behavioral context and reconfiguration of circuitry in the IC, and the resulting possibility that plasticity at the level of specific receptor types could alter the relationship between context and circuit function.
Affiliation(s)
- Laura M Hurley: Department of Biology, Center for the Integrative Study of Animal Behavior, Indiana University Bloomington, IN, USA
24
Woolley SMN. Early experience shapes vocal neural coding and perception in songbirds. Dev Psychobiol 2012; 54:612-31. [PMID: 22711657] [PMCID: PMC3404257] [DOI: 10.1002/dev.21014]
Abstract
Songbirds, like humans, are highly accomplished vocal learners. The many parallels between speech and birdsong and conserved features of mammalian and avian auditory systems have led to the emergence of the songbird as a model system for studying the perceptual mechanisms of vocal communication. Laboratory research on songbirds allows the careful control of early life experience and high-resolution analysis of brain function during vocal learning, production, and perception. Here, I review what songbird studies have revealed about the role of early experience in the development of vocal behavior, auditory perception, and the processing of learned vocalizations by auditory neurons. The findings of these studies suggest general principles for how exposure to vocalizations during development and into adulthood influences the perception of learned vocal signals.
Affiliation(s)
- Sarah M N Woolley: Department of Psychology, Columbia University, 406 Schermerhorn Hall, 1190 Amsterdam Ave., New York, NY 10027, USA.
25
Precise feature based time scales and frequency decorrelation lead to a sparse auditory code. J Neurosci 2012; 32:8454-68. [PMID: 22723685] [DOI: 10.1523/jneurosci.6506-11.2012]
Abstract
Sparse redundancy reducing codes have been proposed as efficient strategies for representing sensory stimuli. A prevailing hypothesis suggests that sensory representations shift from dense redundant codes in the periphery to selective sparse codes in cortex. We propose an alternative framework where sparseness and redundancy depend on sensory integration time scales and demonstrate that the central nucleus of the inferior colliculus (ICC) of cats encodes sound features by precise sparse spike trains. Direct comparisons with auditory cortical neurons demonstrate that ICC responses were sparse and uncorrelated as long as the spike train time scales were matched to the sensory integration time scales relevant to ICC neurons. Intriguingly, correlated spiking in the ICC was substantially lower than predicted by linear or nonlinear models and strictly observed for neurons with best frequencies within a "critical band," the hallmark of perceptual frequency resolution in mammals. This is consistent with a sparse asynchronous code throughout much of the ICC and a complementary correlation code within a critical band that may allow grouping of perceptually relevant cues.
26
Williams AJ, Fuzessery ZM. Multiple mechanisms shape FM sweep rate selectivity: complementary or redundant? Front Neural Circuits 2012; 6:54. [PMID: 22912604] [PMCID: PMC3421451] [DOI: 10.3389/fncir.2012.00054]
Abstract
Auditory neurons in the inferior colliculus (IC) of the pallid bat have highly rate selective responses to downward frequency modulated (FM) sweeps attributable to the spectrotemporal pattern of their echolocation call (a brief FM pulse). Several mechanisms are known to shape FM rate selectivity within the pallid bat IC. Here we explore how two mechanisms, stimulus duration and high-frequency inhibition (HFI), can interact to shape FM rate selectivity within the same neuron. Results from extracellular recordings indicated that a derived duration-rate function (based on tonal response) was highly predictive of the shape of the FM rate response. Longpass duration selectivity for tones was predictive of slowpass rate selectivity for FM sweeps, both of which required long stimulus durations and remained intact following iontophoretic blockade of inhibitory input. Bandpass duration selectivity for tones, sensitive to only a narrow range of tone durations, was predictive of bandpass rate selectivity for FM sweeps. Conversion of the tone duration response from bandpass to longpass after blocking inhibition was coincident with a change in FM rate selectivity from bandpass to slowpass indicating an active inhibitory component to the formation of bandpass selectivity. Independent of the effect of duration tuning on FM rate selectivity, the presence of HFI acted as a fastpass FM rate filter by suppressing slow FM sweep rates. In cases where both mechanisms were present, both had to be eliminated, by removing inhibition, before bandpass FM rate selectivity was affected. It is unknown why the auditory system utilizes multiple mechanisms capable of shaping identical forms of FM rate selectivity though it may represent distinct but convergent modes of neural signaling directed at shaping response selectivity for important biologically relevant sounds.
Affiliation(s)
- Anthony J Williams: Department of Zoology and Physiology, University of Wyoming, Laramie, WY, USA