1
Regev TI, Markusfeld G, Deouell LY, Nelken I. Context Sensitivity across Multiple Time Scales with a Flexible Frequency Bandwidth. Cereb Cortex 2021; 32:158-175. [PMID: 34289019] [DOI: 10.1093/cercor/bhab200]
Abstract
Everyday auditory streams are complex, including spectro-temporal content that varies at multiple timescales. Using EEG, we investigated the sensitivity of human auditory cortex to the content of past stimulation in unattended sequences of equiprobable tones. In 3 experiments including 82 participants overall, we found that neural responses measured at different latencies after stimulus onset were sensitive to frequency intervals computed over distinct timescales. Importantly, early responses were sensitive to a longer history of stimulation than later responses. To account for these results, we tested a model consisting of neural populations with frequency-specific but broad tuning that undergo adaptation with exponential recovery. We found that the coexistence of neural populations with distinct recovery rates can explain our results. Furthermore, the adaptation bandwidth of these populations depended on spectral context: it was wider when the stimulation sequence had a wider frequency range. Our results provide electrophysiological evidence as well as a possible mechanistic explanation for dynamic and multiscale context-dependent auditory processing in the human cortex.
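The core of the model described here, frequency-tuned populations that adapt and recover exponentially, can be sketched in a few lines. This is a toy illustration under assumed parameters (log-spaced channels, Gaussian tuning in log-frequency, a fixed per-tone adaptation fraction), not the authors' implementation:

```python
import numpy as np

def simulate_adapting_population(tone_freqs, soa, tau, bandwidth, n_channels=50,
                                 fmin=500.0, fmax=8000.0):
    """Toy model: frequency channels with broad Gaussian tuning (bandwidth in
    octaves) adapt after each tone and recover exponentially with time
    constant tau (seconds) during the stimulus-onset asynchrony (soa).
    Returns the summed population response to each tone in the sequence."""
    # Channel centre frequencies, log-spaced (an assumption of this sketch)
    cf = np.geomspace(fmin, fmax, n_channels)
    state = np.ones(n_channels)       # 1 = fully recovered, 0 = fully adapted
    responses = []
    for f in tone_freqs:
        # Gaussian tuning in log-frequency around the tone frequency
        tuning = np.exp(-0.5 * (np.log2(cf / f) / bandwidth) ** 2)
        responses.append(np.sum(tuning * state))
        # Each channel adapts in proportion to how strongly it was driven ...
        state *= 1.0 - 0.5 * tuning
        # ... and recovers exponentially until the next tone
        state = 1.0 - (1.0 - state) * np.exp(-soa / tau)
    return np.array(responses)
```

With a repeated tone the response declines toward a steady state set by tau and the SOA; a tone outside the adapted band escapes the suppression, which is the history sensitivity the abstract describes.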
Affiliation(s)
- Tamar I Regev
- Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem 9190401, Israel; MIT Department of Brain and Cognitive Sciences, Cambridge, MA 02139, USA
- Geffen Markusfeld
- Department of Psychology, The Hebrew University of Jerusalem, Jerusalem 9190501, Israel
- Leon Y Deouell
- Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem 9190401, Israel; Department of Psychology, The Hebrew University of Jerusalem, Jerusalem 9190501, Israel
- Israel Nelken
- Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem 9190401, Israel; Department of Neurobiology, The Silberman Institute of Life Sciences, The Hebrew University of Jerusalem, Jerusalem 9190401, Israel
2
Mapping the human auditory cortex using spectrotemporal receptive fields generated with magnetoencephalography. Neuroimage 2021; 238:118222. [PMID: 34058330] [DOI: 10.1016/j.neuroimage.2021.118222]
Abstract
We present a novel method to map the functional organization of the human auditory cortex noninvasively using magnetoencephalography (MEG). More specifically, this method estimates via reverse correlation the spectrotemporal receptive fields (STRFs) in response to a temporally dense pure-tone stimulus, from which important spectrotemporal characteristics of neuronal processing can be extracted and mapped back onto the cortical surface. We show that several neuronal populations can be identified by examining the spectrotemporal characteristics of their STRFs, and demonstrate how these can be used to generate tonotopic gradient maps. In doing so, we show that the spatial resolution of MEG is sufficient to reliably extract important information about the spatial organization of the auditory cortex, while its excellent temporal resolution enables the analysis of complex temporal dynamics of auditory processing, such as best temporal modulation rate and response latency. Furthermore, because spectrotemporally dense auditory stimuli can be used with MEG, the time required to acquire the data needed to generate tonotopic maps is significantly shorter for MEG than for neuroimaging tools that acquire BOLD-like signals.
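Reverse-correlation STRF estimation of the kind described amounts to averaging the stimulus history that precedes each response event. A generic sketch over a binned spectrogram and spike counts (the MEG version would correlate against a continuous response instead; this simplification is ours):

```python
import numpy as np

def strf_reverse_correlation(stim_spec, spikes, n_lags):
    """Estimate an STRF as the spike-triggered average of the stimulus
    spectrogram (a generic reverse-correlation sketch, not the paper's
    pipeline).
    stim_spec: (n_freq, n_time) stimulus spectrogram
    spikes:    (n_time,) spike counts per time bin
    n_lags:    number of time bins of stimulus history to include"""
    n_freq, n_time = stim_spec.shape
    sta = np.zeros((n_freq, n_lags))
    total = 0.0
    for t in range(n_lags, n_time):
        if spikes[t] > 0:
            # accumulate the stimulus window preceding each spike,
            # weighted by the spike count in that bin
            sta += spikes[t] * stim_spec[:, t - n_lags:t]
            total += spikes[t]
    return sta / total if total > 0 else sta
```

For a simulated unit that fires whenever one frequency band was active a fixed delay earlier, the estimate peaks at that frequency and lag.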
3
Homma NY, Atencio CA, Schreiner CE. Plasticity of Multidimensional Receptive Fields in Core Rat Auditory Cortex Directed by Sound Statistics. Neuroscience 2021; 467:150-170. [PMID: 33951506] [DOI: 10.1016/j.neuroscience.2021.04.028]
Abstract
Sensory cortical neurons can nonlinearly integrate a wide range of inputs. The outcome of this nonlinear process can be approximated by more than one receptive field component, or filter, to characterize the ensuing stimulus preference. The functional properties of multidimensional filters are, however, not well understood. Here we estimated two spectrotemporal receptive fields (STRFs) per neuron using maximally informative dimension analysis. We compared their temporal and spectral modulation properties and determined the stimulus information captured by the two STRFs in core rat auditory cortical fields, the primary auditory cortex (A1) and the ventral auditory field (VAF). The first STRF is the dominant filter and acts as a sound feature detector in both fields. The second STRF is less feature specific, prefers lower modulations, and carries less spike information than the first STRF. The information jointly captured by the two STRFs was larger than the sum of the information captured by the individual STRFs, reflecting nonlinear interactions between the two filters. This information gain was larger in A1. We next determined how the acoustic environment affects the structure and relationship of these two STRFs. Rats were exposed to moderate levels of spectrotemporally modulated noise during development. Noise exposure strongly altered the spectrotemporal preference of the first STRF in both cortical fields. The interaction between the two STRFs was reduced by noise exposure in A1 but not in VAF. The results reveal new functional distinctions between A1 and VAF, indicating that (i) A1 has stronger interactions between the two STRFs than VAF, (ii) noise exposure diminishes the representation of modulation parameters contained in the noise more strongly for the first STRF in both fields, and (iii) plasticity induced by noise exposure can affect the strength of filter interactions in A1. Taken together, ascertaining two STRFs per neuron enhances the understanding of cortical information processing and plasticity effects in core auditory cortex.
Affiliation(s)
- Natsumi Y Homma
- Coleman Memorial Laboratory, Department of Otolaryngology - Head and Neck Surgery, University of California San Francisco, San Francisco, USA; Center for Integrative Neuroscience, University of California San Francisco, San Francisco, USA
- Craig A Atencio
- Coleman Memorial Laboratory, Department of Otolaryngology - Head and Neck Surgery, University of California San Francisco, San Francisco, USA
- Christoph E Schreiner
- Coleman Memorial Laboratory, Department of Otolaryngology - Head and Neck Surgery, University of California San Francisco, San Francisco, USA; Center for Integrative Neuroscience, University of California San Francisco, San Francisco, USA
4
Atencio CA, Sharpee TO. Multidimensional receptive field processing by cat primary auditory cortical neurons. Neuroscience 2017; 359:130-141. [PMID: 28694174] [DOI: 10.1016/j.neuroscience.2017.07.003]
Abstract
The receptive fields of many auditory cortical neurons are multidimensional and are best represented by more than one stimulus feature. The number of these dimensions, their characteristics, and how they differ with stimulus context have been relatively unexplored. Standard methods that are often used to characterize multidimensional stimulus selectivity, such as spike-triggered covariance (STC) or maximally informative dimensions (MIDs), are either limited to Gaussian stimuli or are only able to recover a small number of stimulus features due to data limitations. An information theoretic extension of STC, the maximum noise entropy (MNE) model, can be used with non-Gaussian stimulus distributions to find an arbitrary number of stimulus dimensions. When we applied the MNE model to auditory cortical neurons, we often found more than two stimulus features that influenced neuronal firing. Excitatory and suppressive features coded different acoustic contexts: excitatory features encoded higher temporal and spectral modulations, while suppressive features had lower modulation frequency preferences. We found that the excitatory and suppressive features themselves were sensitive to stimulus context when we employed two stimuli that differed only in their short-term correlation structure: while the linear features were similar, the secondary features were strongly affected by stimulus statistics. These results show that multidimensional receptive field processing is influenced by feature type and stimulus context.
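For Gaussian stimuli, the STC baseline that the MNE model generalizes can be sketched directly: compare the covariance of the spike-triggered stimulus ensemble with the raw stimulus covariance, and read candidate excitatory and suppressive features off the eigenvalue spectrum. A minimal sketch (our own illustration, not the authors' code):

```python
import numpy as np

def spike_triggered_covariance(stim, spikes):
    """Minimal spike-triggered covariance (STC) sketch for Gaussian stimuli.
    stim:   (n_samples, n_dim) rows of stimulus vectors
    spikes: (n_samples,) spike counts
    Returns eigenvalues/eigenvectors of the spike-triggered covariance minus
    the raw stimulus covariance; dimensions whose eigenvalues sit far from
    zero are candidate excitatory (positive) or suppressive (negative)
    features."""
    w = spikes / spikes.sum()
    sta = w @ stim                                   # spike-triggered average
    centred = stim - sta
    stc = (centred * w[:, None]).T @ centred         # covariance around the STA
    c_prior = np.cov(stim, rowvar=False, bias=True)  # raw stimulus covariance
    evals, evecs = np.linalg.eigh(stc - c_prior)
    return evals, evecs
```

For a simulated neuron driven quadratically by one stimulus direction, the top eigenvector recovers that direction.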
Affiliation(s)
- Craig A Atencio
- Coleman Memorial Laboratory, UCSF Center for Integrative Neuroscience, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology-HNS, University of California, San Francisco, USA
- Tatyana O Sharpee
- Computational Neurobiology Laboratory, The Salk Institute for Biological Studies, La Jolla, CA, USA; Center for Theoretical Biological Physics and Department of Physics, University of California, San Diego, La Jolla, CA, USA
5
Natan RG, Carruthers IM, Mwilambwe-Tshilobo L, Geffen MN. Gain Control in the Auditory Cortex Evoked by Changing Temporal Correlation of Sounds. Cereb Cortex 2017; 27:2385-2402. [PMID: 27095823] [DOI: 10.1093/cercor/bhw083]
Abstract
Natural sounds exhibit statistical variation in their spectrotemporal structure. This variation is central to identification of unique environmental sounds and to vocal communication. Using limited resources, the auditory system must create a faithful representation of sounds across the full range of variation in temporal statistics. Imaging studies in humans demonstrated that the auditory cortex is sensitive to temporal correlations. However, the mechanisms by which the auditory cortex represents the spectrotemporal structure of sounds and how neuronal activity adjusts to vastly different statistics remain poorly understood. In this study, we recorded responses of neurons in the primary auditory cortex of awake rats to sounds with systematically varied temporal correlation, to determine whether and how this feature alters sound encoding. Neuronal responses adapted to changing stimulus temporal correlation. This adaptation was mediated by a change in the firing rate gain of neuronal responses rather than their spectrotemporal properties. This gain adaptation allowed neurons to maintain similar firing rates across stimuli with different statistics, preserving their ability to efficiently encode temporal modulation. This dynamic gain control mechanism may underlie comprehension of vocalizations and other natural sounds under different contexts, subject to distortions in temporal correlation structure via stretching or compression.
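The firing-rate gain adaptation described can be illustrated with a toy divisive scheme in which a slowly updated estimate of the mean drive sets a multiplicative gain; the update rule, its time constant, and the target rate are all assumptions of this sketch, not the paper's fitted model:

```python
import numpy as np

def gain_adapted_rates(drive, target_rate, tau=50.0):
    """Toy divisive gain-control sketch: a running estimate of the mean
    stimulus drive sets a multiplicative gain so that the mean firing rate
    stays near target_rate even when the drive statistics change abruptly."""
    d_bar = float(drive[0])
    rates = []
    for d in drive:
        d_bar += (d - d_bar) / tau      # slow running mean of the drive
        g = target_rate / d_bar         # divisive gain
        rates.append(g * d)
    return np.array(rates)
```

When the drive statistics jump, the rate transiently overshoots and then settles back toward the target, which is the qualitative behaviour (similar firing rates across stimulus statistics) that the abstract reports.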
Affiliation(s)
- Ryan G Natan
- Department of Otorhinolaryngology and Head and Neck Surgery; Graduate Group in Neuroscience
- Isaac M Carruthers
- Department of Otorhinolaryngology and Head and Neck Surgery; Graduate Group in Physics
- Maria N Geffen
- Department of Otorhinolaryngology and Head and Neck Surgery; Graduate Group in Neuroscience; Graduate Group in Physics; Department of Neuroscience, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
6
Distinguishing Neural Adaptation and Predictive Coding Hypotheses in Auditory Change Detection. Brain Topogr 2016; 30:136-148. [PMID: 27752799] [DOI: 10.1007/s10548-016-0529-8]
Abstract
The auditory mismatch negativity (MMN) component of event-related potentials (ERPs) has served as a neural index of auditory change detection. MMN is elicited by presentation of infrequent (deviant) sounds randomly interspersed among frequent (standard) sounds. Deviants elicit a larger negative deflection in the ERP waveform compared to the standard. There is considerable debate as to whether the neural mechanism of this change detection response is due to release from neural adaptation (neural adaptation hypothesis) or from a prediction error signal (predictive coding hypothesis). Previous studies have not been able to distinguish between these explanations because paradigms typically confound the two. The current study disambiguated effects of stimulus-specific adaptation from expectation violation using a unique stimulus design that compared expectation violation responses that did and did not involve stimulus change. The expectation violation response without the stimulus change differed in timing, scalp distribution, and attentional modulation from the more typical MMN response. There is insufficient evidence from the current study to suggest that the negative deflection elicited by the expectation violation alone includes the MMN. Thus, we offer a novel hypothesis that the expectation violation response reflects a fundamentally different neural substrate than that attributed to the canonical MMN.
7
Westö J, May PJC. Capturing contextual effects in spectro-temporal receptive fields. Hear Res 2016; 339:195-210. [PMID: 27473504] [DOI: 10.1016/j.heares.2016.07.012]
Abstract
Spectro-temporal receptive fields (STRFs) are thought to provide descriptive images of the computations performed by neurons along the auditory pathway. However, their validity can be questioned because they rely on a set of assumptions that are probably not fulfilled by real neurons exhibiting contextual effects, that is, nonlinear interactions in the time or frequency dimension that cannot be described with a linear filter. We used a novel approach to investigate how a variety of contextual effects, due to facilitating nonlinear interactions and synaptic depression, affect different STRF models, and if these effects can be captured with a context field (CF). Contextual effects were incorporated in simulated networks of spiking neurons, allowing one to define the true STRFs of the neurons. This, in turn, made it possible to evaluate the performance of each STRF model by comparing the estimations with the true STRFs. We found that currently used STRF models are particularly poor at estimating inhibitory regions. Specifically, contextual effects make estimated STRFs dependent on stimulus density in a contrasting fashion: inhibitory regions are underestimated at lower densities while artificial inhibitory regions emerge at higher densities. The CF was found to provide a solution to this dilemma, but only when it is used together with a generalized linear model. Our results therefore highlight the limitations of the traditional STRF approach and provide useful recipes for how different STRF models and stimuli can be used to arrive at reliable quantifications of neural computations in the presence of contextual effects. The results therefore push the purpose of STRF analysis from simply finding an optimal stimulus toward describing context-dependent computations of neurons along the auditory pathway.
Affiliation(s)
- Johan Westö
- Department of Neuroscience and Biomedical Engineering, Aalto University, FI-00076 Espoo, Finland
- Patrick J C May
- Special Laboratory Non-Invasive Brain Imaging, Leibniz Institute for Neurobiology, D-39118 Magdeburg, Germany
8
Williamson RS, Ahrens MB, Linden JF, Sahani M. Input-Specific Gain Modulation by Local Sensory Context Shapes Cortical and Thalamic Responses to Complex Sounds. Neuron 2016; 91:467-81. [PMID: 27346532] [PMCID: PMC4961224] [DOI: 10.1016/j.neuron.2016.05.041]
Abstract
Sensory neurons are customarily characterized by one or more linearly weighted receptive fields describing sensitivity in sensory space and time. We show that in auditory cortical and thalamic neurons, the weight of each receptive field element depends on the pattern of sound falling within a local neighborhood surrounding it in time and frequency. Accounting for this change in effective receptive field with spectrotemporal context improves predictions of both cortical and thalamic responses to stationary complex sounds. Although context dependence varies among neurons and across brain areas, there are strong shared qualitative characteristics. In a spectrotemporally rich soundscape, sound elements modulate neuronal responsiveness more effectively when they coincide with sounds at other frequencies, and less effectively when they are preceded by sounds at similar frequencies. This local-context-driven lability in the representation of complex sounds—a modulation of “input-specific gain” rather than “output gain”—may be a widespread motif in sensory processing.
Highlights
- Gain of neuronal responses to sound components varies with immediate acoustic context
- “Contextual gain fields” can be estimated from neuronal responses to complex sounds
- Coincident sound at different frequencies boosts gain in cortex and thalamus
- Preceding sound at similar frequency reduces gain for longer in cortex than thalamus
Affiliation(s)
- Ross S Williamson
- Gatsby Computational Neuroscience Unit, University College London, London W1T 4JG, UK; Centre for Mathematics and Physics in the Life Sciences and Experimental Biology, University College London, London WC1E 6BT, UK
- Misha B Ahrens
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138, USA; Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, UK
- Jennifer F Linden
- Ear Institute, University College London, London WC1X 8EE, UK; Department of Neuroscience, Physiology and Pharmacology, University College London, London WC1E 6BT, UK
- Maneesh Sahani
- Gatsby Computational Neuroscience Unit, University College London, London W1T 4JG, UK
9
Abstract
Robust representations of sounds with a complex spectrotemporal structure are thought to emerge in hierarchically organized auditory cortex, but the computational advantage of this hierarchy remains unknown. Here, we used computational models to study how such hierarchical structures affect temporal binding in neural networks. We equipped individual units in different types of feedforward networks with local memory mechanisms storing recent inputs and observed how this affected the ability of the networks to process stimuli context dependently. Our findings illustrate that these local memories stack up in hierarchical structures and hence allow network units to exhibit selectivity to spectral sequences longer than the time spans of the local memories. We also illustrate that short-term synaptic plasticity is a potential local memory mechanism within the auditory cortex, and we show that it can bring robustness to context dependence against variation in the temporal rate of stimuli, while introducing nonlinearities to response profiles that are not well captured by standard linear spectrotemporal receptive field models. The results therefore indicate that short-term synaptic plasticity might provide hierarchically structured auditory cortex with computational capabilities important for robust representations of spectrotemporal patterns.
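Short-term synaptic plasticity of the kind invoked here as a local memory mechanism is commonly modeled with depressing dynamics in the style of Tsodyks and Markram; a minimal sketch (the parameter values are illustrative, not fitted):

```python
import numpy as np

def synaptic_depression(spike_times, U=0.5, tau_rec=0.8):
    """Short-term synaptic depression in the Tsodyks-Markram style: each
    presynaptic spike uses a fraction U of the available resources x, which
    then recover exponentially with time constant tau_rec (seconds).
    Returns the effective efficacy U*x at each spike, so the response to a
    spike depends on the recent input history."""
    x = 1.0
    last_t = None
    efficacies = []
    for t in spike_times:
        if last_t is not None:
            # resources recover toward 1 during the inter-spike interval
            x = 1.0 - (1.0 - x) * np.exp(-(t - last_t) / tau_rec)
        efficacies.append(U * x)
        x *= 1.0 - U   # a fraction U of resources is consumed by this spike
        last_t = t
    return np.array(efficacies)
```

A rapid spike train depresses the efficacy step by step, while a long pause lets it recover, which is exactly the input-history dependence that gives each unit a local memory.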
Affiliation(s)
- Johan Westö
- Department of Neuroscience and Biomedical Engineering, Aalto University, FI-00076 Espoo, Finland
- Patrick J. C. May
- Special Laboratory Non-Invasive Brain Imaging, Leibniz Institute for Neurobiology, D-39118 Magdeburg, Germany
- Hannu Tiitinen
- Department of Neuroscience and Biomedical Engineering, Aalto University, FI-00076 Espoo, Finland
10
Blackwell JM, Taillefumier TO, Natan RG, Carruthers IM, Magnasco MO, Geffen MN. Stable encoding of sounds over a broad range of statistical parameters in the auditory cortex. Eur J Neurosci 2016; 43:751-64. [PMID: 26663571] [PMCID: PMC5021175] [DOI: 10.1111/ejn.13144]
Abstract
Natural auditory scenes possess highly structured statistical regularities, which are dictated by the physics of sound production in nature, such as scale-invariance. We recently identified that natural water sounds exhibit a particular type of scale invariance, in which the temporal modulation within spectral bands scales with the centre frequency of the band. Here, we tested how neurons in the mammalian primary auditory cortex encode sounds that exhibit this property, but differ in their statistical parameters. The stimuli varied in spectro-temporal density and cyclo-temporal statistics over several orders of magnitude, corresponding to a range of water-like percepts, from pattering of rain to a slow stream. We recorded neuronal activity in the primary auditory cortex of awake rats presented with these stimuli. The responses of the majority of individual neurons were selective for a subset of stimuli with specific statistics. However, as a neuronal population, the responses were remarkably stable over large changes in stimulus statistics, exhibiting a similar range in firing rate, response strength, variability and information rate, and only minor variation in receptive field parameters. This pattern of neuronal responses suggests a potentially general principle for cortical encoding of complex acoustic scenes: while individual cortical neurons exhibit selectivity for specific statistical features, a neuronal population preserves a constant response structure across a broad range of statistical parameters.
Affiliation(s)
- Jennifer M Blackwell
- Department of Otorhinolaryngology and Head and Neck Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Thibaud O Taillefumier
- Center for Physics and Biology, Rockefeller University, New York, NY, USA; Lewis-Sigler Institute for Integrative Genomics, Princeton University, Princeton, NJ, USA
- Ryan G Natan
- Department of Otorhinolaryngology and Head and Neck Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Isaac M Carruthers
- Department of Otorhinolaryngology and Head and Neck Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Marcelo O Magnasco
- Center for Physics and Biology, Rockefeller University, New York, NY, USA
- Maria N Geffen
- Department of Otorhinolaryngology and Head and Neck Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA; Center for Physics and Biology, Rockefeller University, New York, NY, USA
11
Carruthers IM, Laplagne DA, Jaegle A, Briguglio JJ, Mwilambwe-Tshilobo L, Natan RG, Geffen MN. Emergence of invariant representation of vocalizations in the auditory cortex. J Neurophysiol 2015; 114:2726-40. [PMID: 26311178] [DOI: 10.1152/jn.00095.2015]
Abstract
An essential task of the auditory system is to discriminate between different communication signals, such as vocalizations. In everyday acoustic environments, the auditory system needs to be capable of performing the discrimination under different acoustic distortions of vocalizations. To achieve this, the auditory system is thought to build a representation of vocalizations that is invariant to their basic acoustic transformations. The mechanism by which neuronal populations create such an invariant representation within the auditory cortex is only beginning to be understood. We recorded the responses of populations of neurons in the primary and nonprimary auditory cortex of rats to original and acoustically distorted vocalizations. We found that populations of neurons in the nonprimary auditory cortex exhibited greater invariance in encoding vocalizations over acoustic transformations than neuronal populations in the primary auditory cortex. These findings are consistent with the hypothesis that invariant representations are created gradually through hierarchical transformation within the auditory pathway.
Affiliation(s)
- Isaac M Carruthers
- Department of Otorhinolaryngology and Head and Neck Surgery, University of Pennsylvania, Philadelphia, Pennsylvania; Graduate Group in Physics, University of Pennsylvania, Philadelphia, Pennsylvania
- Diego A Laplagne
- Brain Institute, Federal University of Rio Grande do Norte, Natal, Brazil
- Andrew Jaegle
- Department of Otorhinolaryngology and Head and Neck Surgery, University of Pennsylvania, Philadelphia, Pennsylvania; Graduate Group in Neuroscience, University of Pennsylvania, Philadelphia, Pennsylvania
- John J Briguglio
- Department of Otorhinolaryngology and Head and Neck Surgery, University of Pennsylvania, Philadelphia, Pennsylvania; Graduate Group in Physics, University of Pennsylvania, Philadelphia, Pennsylvania
- Laetitia Mwilambwe-Tshilobo
- Department of Otorhinolaryngology and Head and Neck Surgery, University of Pennsylvania, Philadelphia, Pennsylvania
- Ryan G Natan
- Department of Otorhinolaryngology and Head and Neck Surgery, University of Pennsylvania, Philadelphia, Pennsylvania; Brain Institute, Federal University of Rio Grande do Norte, Natal, Brazil
- Maria N Geffen
- Department of Otorhinolaryngology and Head and Neck Surgery, University of Pennsylvania, Philadelphia, Pennsylvania; Graduate Group in Physics, University of Pennsylvania, Philadelphia, Pennsylvania; Graduate Group in Neuroscience, University of Pennsylvania, Philadelphia, Pennsylvania
12
Montejo N, Noreña AJ. Dynamic representation of spectral edges in guinea pig primary auditory cortex. J Neurophysiol 2015; 113:2998-3012. [PMID: 25744885] [PMCID: PMC4416612] [DOI: 10.1152/jn.00785.2014]
Abstract
The central representation of a given acoustic motif is thought to be strongly context dependent, i.e., to rely on the spectrotemporal past and present of the acoustic mixture in which it is embedded. The present study investigated the cortical representation of spectral edges (i.e., where stimulus energy changes abruptly over frequency) and its dependence on stimulus duration and depth of the spectral contrast in the guinea pig. We devised a stimulus ensemble composed of random tone pips with or without an attenuated frequency band (AFB) of variable depth. Additionally, the multitone ensemble with AFB was interleaved with periods of silence or with multitone ensembles without AFB. We show that the representation of the frequencies near but outside the AFB is greatly enhanced, whereas the representation of frequencies near and inside the AFB is strongly suppressed. These cortical changes depend on the depth of the AFB: although they are maximal for the largest depth of the AFB, they are also statistically significant for depths as small as 10 dB. Finally, the cortical changes are quick, occurring within a few seconds of stimulus ensemble presentation with AFB, and are very labile, disappearing within a few seconds after the presentation without AFB. Overall, this study demonstrates that the representation of spectral edges is dynamically enhanced in the auditory centers. These central changes may have important functional implications, particularly in noisy environments where they could contribute to preserving the central representation of spectral edges.
- Noelia Montejo
- Laboratoire de Neurosciences Intégratives et Adaptatives, Aix Marseille Université, CNRS UMR 7260, Marseille, France
- Arnaud J Noreña
- Laboratoire de Neurosciences Intégratives et Adaptatives, Aix Marseille Université, CNRS UMR 7260, Marseille, France
13
Spectrotemporal response properties of core auditory cortex neurons in awake monkey. PLoS One 2015; 10:e0116118. [PMID: 25680187] [PMCID: PMC4332665] [DOI: 10.1371/journal.pone.0116118]
Abstract
So far, most studies of core auditory cortex (AC) have characterized the spectral and temporal tuning properties of cells in non-awake, anesthetized preparations. As experiments in awake animals are scarce, we here used dynamic spectral-temporal broadband ripples to study the properties of the spectrotemporal receptive fields (STRFs) of AC cells in awake monkeys. We show that AC neurons were typically most sensitive to low ripple densities (spectral) and low velocities (temporal), and that most cells were not selective for a particular spectrotemporal sweep direction. A substantial proportion of neurons preferred amplitude-modulated sounds (at zero ripple density) to dynamic ripples (at non-zero densities). The vast majority (>93%) of modulation transfer functions were separable with respect to spectral and temporal modulations, indicating that time and spectrum are independently processed in AC neurons. We also analyzed the linear predictability of AC responses to natural vocalizations on the basis of the STRF. We discuss our findings in the light of results obtained from the monkey midbrain inferior colliculus by comparing the spectrotemporal tuning properties and linear predictability of these two important auditory stages.
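A dynamic ripple of the kind used here is a broadband sound whose spectrotemporal envelope is sinusoidal in log-frequency (ripple density, cycles/octave) and drifts over time (ripple velocity, Hz). A sketch of that envelope, with the sign conventions and base frequency as assumptions of this illustration:

```python
import numpy as np

def ripple_envelope(freqs, times, velocity, density, mod_depth=0.9,
                    f0=250.0, phase=0.0):
    """Spectrotemporal envelope of a single moving ripple on a
    (frequency, time) grid.
    velocity:  temporal drift rate in Hz
    density:   spectral ripple density in cycles/octave
    mod_depth: modulation depth (<= 1 keeps the envelope non-negative)"""
    x = np.log2(np.asarray(freqs) / f0)    # octaves above the base frequency
    t = np.asarray(times)
    env = 1.0 + mod_depth * np.sin(
        2.0 * np.pi * (velocity * t[None, :] + density * x[:, None]) + phase)
    return env
```

Setting density to zero reduces the envelope to pure amplitude modulation, the zero-ripple-density case that a substantial proportion of the recorded neurons preferred.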
14
A new and fast characterization of multiple encoding properties of auditory neurons. Brain Topogr 2014; 28:379-400. [PMID: 24869676] [DOI: 10.1007/s10548-014-0375-5]
Abstract
The functional properties of auditory cortex neurons are most often investigated separately: frequency tuning through spectrotemporal receptive fields (STRFs), and selectivity for velocity and direction through frequency-sweep sounds. In fact, auditory neurons are sensitive to a multidimensional space of acoustic parameters in which spectral, temporal, and spatial dimensions interact. We designed a multi-parameter stimulus, the random double sweep (RDS), composed of two uncorrelated random sweeps, which gives easy, fast, and simultaneous access to frequency tuning as well as to frequency-modulation sweep direction and velocity selectivity, frequency interactions, and the temporal properties of neurons. Reverse-correlation techniques applied to recordings from the primary auditory cortex of guinea pigs and rats in response to RDS stimulation revealed a variety of temporal dynamics of acoustic patterns evoking enhanced or suppressed firing rates. Group results in these two species revealed less frequent suppression areas in frequency-tuning STRFs, an absence of downward-sweep selectivity, and lower phase-locking abilities in the auditory cortex of rats compared with guinea pigs.
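Reverse correlation as used here can be illustrated generically: with a white (uncorrelated) stimulus, the spike-triggered average of the spectrogram recovers the neuron's STRF. A toy sketch with a synthetic neuron and threshold spiking (not the RDS stimulus or the authors' code; all values are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n_freq, n_lag, n_t = 6, 5, 20000

# White "spectrogram" stimulus and a synthetic STRF with one excitatory bin.
stim = rng.standard_normal((n_freq, n_t))
strf_true = np.zeros((n_freq, n_lag))
strf_true[2, 1] = 1.0                      # frequency band 2, lag 1

# Linear drive through the STRF, then a threshold nonlinearity for spiking.
drive = np.zeros(n_t)
for f in range(n_freq):
    for lag in range(n_lag):
        if strf_true[f, lag] != 0.0:
            drive[lag:] += strf_true[f, lag] * stim[f, : n_t - lag]
spike_times = np.nonzero(drive > 1.0)[0]
spike_times = spike_times[spike_times >= n_lag]

# Spike-triggered average: mean stimulus patch preceding each spike.
sta = np.zeros((n_freq, n_lag))
for t in spike_times:
    sta += stim[:, t - n_lag + 1 : t + 1][:, ::-1]   # column l = lag l
sta /= len(spike_times)

print(np.unravel_index(np.argmax(sta), sta.shape))   # should recover (2, 1)
```

With correlated (non-white) stimuli, the raw spike-triggered average is biased and must be decorrelated by the stimulus autocovariance, which is part of what structured stimuli such as the RDS are designed to keep tractable.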
15
Rabinowitz NC, Willmore BDB, King AJ, Schnupp JWH. Constructing noise-invariant representations of sound in the auditory pathway. PLoS Biol 2013; 11:e1001710. [PMID: 24265596 PMCID: PMC3825667 DOI: 10.1371/journal.pbio.1001710] [Citation(s) in RCA: 89] [Impact Index Per Article: 8.1] [Received: 03/12/2013] [Accepted: 10/04/2013] [Indexed: 11/18/2022]
Abstract
Along the auditory pathway from auditory nerve to midbrain to cortex, individual neurons adapt progressively to sound statistics, enabling the discernment of foreground sounds, such as speech, over background noise. Identifying behaviorally relevant sounds in the presence of background noise is one of the most important and poorly understood challenges faced by the auditory system. An elegant solution to this problem would be for the auditory system to represent sounds in a noise-invariant fashion. Since a major effect of background noise is to alter the statistics of the sounds reaching the ear, noise-invariant representations could be promoted by neurons adapting to stimulus statistics. Here we investigated the extent of neuronal adaptation to the mean and contrast of auditory stimulation as one ascends the auditory pathway. We measured these forms of adaptation by presenting complex synthetic and natural sounds, recording neuronal responses in the inferior colliculus and primary fields of the auditory cortex of anaesthetized ferrets, and comparing these responses with a sophisticated model of the auditory nerve. We find that the strength of both forms of adaptation increases as one ascends the auditory pathway. To investigate whether this adaptation to stimulus statistics contributes to the construction of noise-invariant sound representations, we also presented complex, natural sounds embedded in stationary noise, and used a decoding approach to assess the noise tolerance of the neuronal population code. We find that the code for complex sounds in the periphery is affected more by the addition of noise than the cortical code. We also find that noise tolerance is correlated with adaptation to stimulus statistics, so that populations that show the strongest adaptation to stimulus statistics are also the most noise-tolerant. This suggests that the increase in adaptation to sound statistics from auditory nerve to midbrain to cortex is an important stage in the construction of noise-invariant sound representations in the higher auditory brain.
We rarely hear sounds (such as someone talking) in isolation, but rather against a background of noise. When mixtures of sounds and background noise reach the ears, peripheral auditory neurons represent the whole sound mixture. Previous evidence suggests, however, that the higher auditory brain represents just the sounds of interest, and is less affected by the presence of background noise. The neural mechanisms underlying this transformation are poorly understood. Here, we investigate these mechanisms by studying the representation of sound by populations of neurons at three stages along the auditory pathway; we simulate the auditory nerve and record from neurons in the midbrain and primary auditory cortex of anesthetized ferrets. We find that the transformation from noise-sensitive representations of sound to noise-tolerant processing takes place gradually along the pathway from auditory nerve to midbrain to cortex. Our results suggest that this results from neurons adapting to the statistics of heard sounds.
Affiliation(s)
- Neil C. Rabinowitz
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Center for Neural Science, New York University, New York, New York, United States of America
- Ben D. B. Willmore
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Andrew J. King
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Jan W. H. Schnupp
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
16
Kamal B, Holman C, de Villers-Sidani E. Shaping the aging brain: role of auditory input patterns in the emergence of auditory cortical impairments. Front Syst Neurosci 2013; 7:52. [PMID: 24062649 PMCID: PMC3775538 DOI: 10.3389/fnsys.2013.00052] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.8] [Received: 08/06/2013] [Accepted: 08/27/2013] [Indexed: 12/19/2022]
Abstract
Age-related impairments in the primary auditory cortex (A1) include poor tuning selectivity, neural desynchronization, and degraded responses to low-probability sounds. These changes have been largely attributed to reduced inhibition in the aged brain, and are thought to contribute to substantial hearing impairment in both humans and animals. Since many of these changes can be partially reversed with auditory training, it has been speculated that they might not be purely degenerative, but might rather represent negative plastic adjustments to noisy or distorted auditory signals reaching the brain. To test this hypothesis, we examined the impact of exposing young adult rats to 8 weeks of low-grade broadband noise on several aspects of A1 function and structure. We then characterized the same A1 elements in aging rats for comparison. We found that the impact of noise exposure on A1 tuning selectivity, temporal processing of auditory signals, and responses to oddball tones was almost indistinguishable from the effect of natural aging. Moreover, noise exposure resulted in a reduction in the population of parvalbumin inhibitory interneurons and in cortical myelin, as previously documented in the aged group. Most of these changes reversed after returning the rats to a quiet environment. These results support the hypothesis that age-related changes in A1 have a strong activity-dependent component and indicate that the presence or absence of clear auditory input patterns might be a key factor in sustaining adult A1 function.
Affiliation(s)
- Brishna Kamal
- Department of Neurology and Neurosurgery, Montreal Neurological Institute Montreal, QC, Canada
17
Catz N, Noreña AJ. Enhanced representation of spectral contrasts in the primary auditory cortex. Front Syst Neurosci 2013; 7:21. [PMID: 23801943 PMCID: PMC3686080 DOI: 10.3389/fnsys.2013.00021] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.2] [Received: 03/17/2013] [Accepted: 05/23/2013] [Indexed: 11/15/2022]
Abstract
The role of early auditory processing may be to extract elementary features from an acoustic mixture in order to organize the auditory scene. To accomplish this task, the central auditory system may rely on the fact that sensory objects are often composed of spectral edges, i.e., regions where the stimulus energy changes abruptly over frequency. The processing of acoustic stimuli may therefore benefit from a mechanism enhancing the internal representation of spectral edges. While the visual system is thought to rely heavily on such a mechanism (enhancing spatial edges), it is still unclear whether a related process plays a significant role in audition. We investigated the cortical representation of spectral edges, using acoustic stimuli composed of multi-tone pips whose time-averaged spectral envelope contained suppressed or enhanced regions. Importantly, the stimuli were designed such that neural response properties could be assessed as a function of stimulus frequency during stimulus presentation. Our results suggest that the representation of acoustic spectral edges is enhanced in the auditory cortex, and that this enhancement is sensitive to the characteristics of the spectral contrast profile, such as its depth, sharpness, and width. Spectral edges are maximally enhanced for sharp contrasts and large depths. Cortical activity was also suppressed at frequencies within the suppressed region. Notably, the suppression of firing was larger at frequencies near the lower edge of the suppressed region than at the upper edge. Overall, the present study gives critical insights into the processing of spectral contrasts in the auditory system.
Affiliation(s)
- Nicolas Catz
- Laboratory of Adaptive and Integrative Neurobiology, Fédération de recherche 3C, UMR CNRS 7260, Université Aix-Marseille Marseille, France
18
Pre-attentive, context-specific representation of fear memory in the auditory cortex of rat. PLoS One 2013; 8:e63655. [PMID: 23671691 PMCID: PMC3646040 DOI: 10.1371/journal.pone.0063655] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.1] [Received: 12/21/2012] [Accepted: 04/04/2013] [Indexed: 11/29/2022]
Abstract
Neural representation in the auditory cortex is rapidly modulated by both top-down attention and bottom-up stimulus properties, in order to improve perception in a given context. Learning-induced, pre-attentive map plasticity has also been studied in the anesthetized cortex; however, little attention has been paid to rapid, context-dependent modulation. We hypothesize that context-specific learning leads to pre-attentively modulated, multiplex representation in the auditory cortex. Here, we investigate map plasticity in the auditory cortices of anesthetized rats conditioned in a context-dependent manner, such that a conditioned stimulus (CS) of a 20-kHz tone and an unconditioned stimulus (US) of a mild electrical shock were associated only under a noisy auditory context, but not in silence. After the conditioning, although no distinct plasticity was found in the tonotopic map, tone-evoked responses were more noise-resistant than before conditioning. Yet the conditioned group showed a reduced spread of activation to each tone in noise, but not in silence, associated with a sharpening of frequency tuning. The encoding accuracy index of neurons showed that conditioning deteriorated the accuracy of tone-frequency representations in the noisy condition at off-CS regions, but not at CS regions, suggesting that arbitrary tones around the frequency of the CS were more likely perceived as the CS in the specific context in which the CS was associated with the US. Together, these results demonstrate that learning-induced plasticity in the auditory cortex occurs in a context-dependent manner.
19
Abstract
Auditory neurons are often described in terms of their spectrotemporal receptive fields (STRFs). These map the relationship between features of the sound spectrogram and the firing rates of neurons. Recently, we showed that neurons in the primary fields of the ferret auditory cortex are also subject to gain control: when sounds undergo smaller fluctuations in their level over time, the neurons become more sensitive to small changes in level (Rabinowitz et al., 2011). Just as STRFs measure the spectrotemporal features of a sound that lead to changes in the firing rates of neurons, in this study we sought to estimate the spectrotemporal regions in which sound statistics lead to changes in the gain of neurons. We designed a set of stimuli with complex contrast profiles to characterize these regions. This allowed us to estimate the STRFs of cortical neurons alongside a set of spectrotemporal contrast kernels. We find that these two sets of integration windows match up: the extent to which a stimulus feature causes the firing rate of a neuron to change is strongly correlated with the extent to which the contrast of that feature modulates the gain of the neuron. Adding contrast kernels to STRF models also yields considerable improvements in the ability to capture and predict how auditory cortical neurons respond to statistically complex sounds.
20
High-density multielectrode array with independently maneuverable electrodes and silicone oil fluid isolation system for chronic recording from macaque monkey. J Neurosci Methods 2012; 211:114-24. [PMID: 22939944 DOI: 10.1016/j.jneumeth.2012.08.019] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Received: 01/12/2012] [Revised: 08/21/2012] [Accepted: 08/21/2012] [Indexed: 11/21/2022]
Abstract
Chronic multielectrode recording has become a widely used technique over the past twenty years, and there are multiple standardized methods. For recording with a high-density array in macaque monkeys, the most common method is to use a subdural array with fixed electrodes. In this study, we took an electrode array with independently maneuverable electrodes arranged at high density, originally designed for use in small animals, and redesigned it for use in macaque monkeys while maintaining its virtues of maneuverability and high density. We successfully recorded single- and multiunit activity from up to 49 channels in the V1 and inferior temporal (IT) cortex of macaque monkeys. The main change in the surgical procedure was to remove a 5 mm diameter area of dura mater. The main changes in the design were (1) a confined layer of heavy silicone oil at the interface with the animal, isolating the electrical circuit from the cerebrospinal fluid, and (2) a fluid-draining system that can shunt any potential postsurgical subcranial exudate to the extracranial space.
21
Abstract
Sensory receptive fields (RFs) vary as a function of stimulus properties and measurement methods. Previous stimuli or surrounding stimuli facilitate, suppress, or change the selectivity of sensory neurons' responses. Here, we propose that these spatiotemporal contextual dependencies are signatures of efficient perceptual inference and can be explained by a single neural mechanism, input targeted divisive inhibition. To respond both selectively and reliably, sensory neurons should behave as active predictors rather than passive filters. In particular, they should remove input they can predict ("explain away") from the synaptic inputs to all other neurons. This implies that RFs are constantly and dynamically reshaped by the spatial and temporal context, while the true selectivity of sensory neurons resides in their "predictive field." This approach motivates a reinvestigation of sensory representations and particularly the role and specificity of surround suppression and adaptation in sensory areas.
22
Abstract
There is increasing evidence that the brain relies on a set of canonical neural computations, repeating them across brain regions and modalities to apply similar operations to different problems. A promising candidate for such a computation is normalization, in which the responses of neurons are divided by a common factor that typically includes the summed activity of a pool of neurons. Normalization was developed to explain responses in the primary visual cortex and is now thought to operate throughout the visual system, and in many other sensory modalities and brain regions. Normalization may underlie operations such as the representation of odours, the modulatory effects of visual attention, the encoding of value and the integration of multisensory information. Its presence in such a diversity of neural systems in multiple species, from invertebrates to mammals, suggests that it serves as a canonical neural computation.
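The canonical normalization computation described above is commonly written as R_i = γ d_i^n / (σ^n + Σ_j d_j^n), where d_i is a neuron's driving input and the sum runs over a normalization pool. A minimal sketch (parameter values are illustrative, not tied to any particular study):

```python
import numpy as np

def normalize(drive, gamma=1.0, sigma=1.0, n=2.0):
    """Divide each neuron's exponentiated drive by the pooled population activity."""
    drive = np.asarray(drive, dtype=float)
    pool = np.sum(drive ** n)
    return gamma * drive ** n / (sigma ** n + pool)

weak = normalize([1.0, 0.5, 0.2])      # weak stimulation of a 3-neuron pool
strong = normalize([10.0, 5.0, 2.0])   # 10x stronger drive, same pattern

print(weak)
print(strong)
```

The two hallmark properties fall out directly: the ratio between any two neurons' responses is preserved across drive levels, while the absolute responses saturate (their sum stays below γ), which is why normalization is attractive as a reusable, canonical building block.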
23
Extra-classical tuning predicts stimulus-dependent receptive fields in auditory neurons. J Neurosci 2011; 31:11867-78. [PMID: 21849547 DOI: 10.1523/jneurosci.5790-10.2011] [Citation(s) in RCA: 32] [Impact Index Per Article: 2.5] [Indexed: 11/21/2022]
Abstract
The receptive fields of many sensory neurons are sensitive to statistical differences among classes of complex stimuli. For example, excitatory spectral bandwidths of midbrain auditory neurons and the spatial extent of cortical visual neurons differ during the processing of natural stimuli compared to the processing of artificial stimuli. Experimentally characterizing neuronal nonlinearities that contribute to stimulus-dependent receptive fields is important for understanding how neurons respond to different stimulus classes in multiple sensory modalities. Here we show that in the zebra finch, many auditory midbrain neurons have extra-classical receptive fields, consisting of sideband excitation and sideband inhibition. We also show that the presence, degree, and asymmetry of stimulus-dependent receptive fields during the processing of complex sounds are predicted by the presence, valence, and asymmetry of extra-classical tuning. Neurons for which excitatory bandwidth expands during the processing of song have extra-classical excitation. Neurons for which frequency tuning is static and for which excitatory bandwidth contracts during the processing of song have extra-classical inhibition. Simulation experiments further demonstrate that stimulus-dependent receptive fields can arise from extra-classical tuning with a static spike threshold nonlinearity. These findings demonstrate that a common neuronal nonlinearity can account for the stimulus dependence of receptive fields estimated from the responses of auditory neurons to stimuli with natural and non-natural statistics.
24
Rabinowitz NC, Willmore BDB, Schnupp JWH, King AJ. Contrast gain control in auditory cortex. Neuron 2011; 70:1178-91. [PMID: 21689603 PMCID: PMC3133688 DOI: 10.1016/j.neuron.2011.04.030] [Citation(s) in RCA: 163] [Impact Index Per Article: 12.5] [Accepted: 04/21/2011] [Indexed: 11/06/2022]
Abstract
The auditory system must represent sounds with a wide range of statistical properties. One important property is the spectrotemporal contrast in the acoustic environment: the variation in sound pressure in each frequency band, relative to the mean pressure. We show that neurons in ferret auditory cortex rescale their gain to partially compensate for the spectrotemporal contrast of recent stimulation. When contrast is low, neurons increase their gain, becoming more sensitive to small changes in the stimulus, although the effectiveness of contrast gain control is reduced at low mean levels. Gain is primarily determined by contrast near each neuron's preferred frequency, but there is also a contribution from contrast in more distant frequency bands. Neural responses are modulated by contrast over timescales of ∼100 ms. By using contrast gain control to expand or compress the representation of its inputs, the auditory system may be seeking an efficient coding of natural sounds.
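The gain rescaling described above can be caricatured in a few lines: if gain is set inversely proportional to the recent contrast of the sound level (standard deviation relative to mean), the output fluctuation range becomes contrast-invariant. A deliberately simplified sketch, not the authors' fitted model (their gain compensation was partial, frequency-weighted, and operated over ~100 ms):

```python
import numpy as np

rng = np.random.default_rng(0)

def contrast_gain(sound_level):
    """Rescale deviations from the recent mean by the recent contrast."""
    mu = sound_level.mean()
    contrast = sound_level.std() / mu   # proxy for spectrotemporal contrast
    gain = 1.0 / contrast               # low contrast -> high gain
    return gain * (sound_level - mu) / mu

low = 60 + 1.0 * rng.standard_normal(5000)    # low-contrast level trace
high = 60 + 8.0 * rng.standard_normal(5000)   # high-contrast level trace

print(contrast_gain(low).std())    # ~1.0 in both conditions:
print(contrast_gain(high).std())   # output range is matched across contrasts
```

With full compensation as above, the output standard deviation is identical for the two traces; the paper's finding is the partial version of this computation in ferret cortical neurons.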
Affiliation(s)
- Neil C Rabinowitz
- Department of Physiology, Anatomy, and Genetics, Sherrington Building, Parks Road, University of Oxford, Oxford OX13PT, UK.
25
Pienkowski M, Eggermont JJ. Sound frequency representation in primary auditory cortex is level tolerant for moderately loud, complex sounds. J Neurophysiol 2011; 106:1016-27. [DOI: 10.1152/jn.00291.2011] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.1] [Indexed: 11/22/2022]
Abstract
The distribution of neuronal characteristic frequencies over the area of primary auditory cortex (AI) roughly reflects the tonotopic organization of the cochlea. However, because the area of AI activated by any given sound frequency increases erratically with sound level, it has generally been proposed that frequency is represented in AI not with a rate-place code but with some more complex, distributed code. Here, on the basis of both spike and local field potential (LFP) recordings in the anesthetized cat, we show that the tonotopic representation in AI is much more level tolerant when mapped with spectrotemporally dense tone pip ensembles rather than with individually presented tone pips. That is, we show that the tuning properties of individual unit and LFP responses are less variable with sound level under dense compared with sparse stimulation, and that the spatial frequency resolution achieved by the AI neural population at moderate stimulus levels (65 dB SPL) is better with densely than with sparsely presented sounds. This implies that nonlinear processing in the central auditory system can compensate (in part) for the level-dependent coding of sound frequency in the cochlea, and suggests that there may be a functional role for the cortical tonotopic map in the representation of complex sounds.
Affiliation(s)
- Martin Pienkowski
- Department of Physiology and Pharmacology, and Department of Psychology, University of Calgary, Calgary, Alberta, Canada
- Jos J. Eggermont
- Department of Physiology and Pharmacology, Department of Psychology, and Hotchkiss Brain Institute, University of Calgary, Calgary, Alberta, Canada
26
Bartlett EL, Sadagopan S, Wang X. Fine frequency tuning in monkey auditory cortex and thalamus. J Neurophysiol 2011; 106:849-59. [PMID: 21613589 DOI: 10.1152/jn.00559.2010] [Citation(s) in RCA: 36] [Impact Index Per Article: 2.8] [Indexed: 11/22/2022]
Abstract
The frequency resolution of neurons throughout the ascending auditory pathway is important for understanding how sounds are processed. In many animal studies, the frequency tuning widths are about 1/5th octave wide in auditory nerve fibers and much wider in auditory cortex neurons. Psychophysical studies show that humans are capable of discriminating far finer frequency differences. A recent study suggested that this is perhaps attributable to fine frequency tuning of neurons in human auditory cortex (Bitterman Y, Mukamel R, Malach R, Fried I, Nelken I. Nature 451: 197-201, 2008). We investigated whether such fine frequency tuning was restricted to human auditory cortex by examining the frequency tuning width in the awake common marmoset monkey. We show that 27% of neurons in the primary auditory cortex exhibit frequency tuning that is finer than the typical frequency tuning of the auditory nerve and substantially finer than previously reported cortical data obtained from anesthetized animals. Fine frequency tuning is also present in 76% of neurons of the auditory thalamus in awake marmosets. Frequency tuning was narrower during the sustained response compared to the onset response in auditory cortex neurons but not in thalamic neurons, suggesting that thalamocortical or intracortical dynamics shape time-dependent frequency tuning in cortex. These findings challenge the notion that the fine frequency tuning of auditory cortex is unique to human auditory cortex and that it is a de novo cortical property, suggesting that the broader tuning observed in previous animal studies may arise from the use of anesthesia during physiological recordings or from species differences.
Affiliation(s)
- Edward L Bartlett
- Department of Biomedical Engineering, Johns Hopkins University, 720 Rutland Ave., Traylor 410, Baltimore, MD 21205, USA
27
Eggermont JJ, Munguia R, Pienkowski M, Shaw G. Comparison of LFP-based and spike-based spectro-temporal receptive fields and cross-correlation in cat primary auditory cortex. PLoS One 2011; 6:e20046. [PMID: 21625385 PMCID: PMC3100317 DOI: 10.1371/journal.pone.0020046] [Citation(s) in RCA: 34] [Impact Index Per Article: 2.6] [Received: 02/17/2011] [Accepted: 04/11/2011] [Indexed: 11/20/2022]
Abstract
Multi-electrode array recordings of spike and local field potential (LFP) activity were made from the primary auditory cortex of 12 normal-hearing, ketamine-anesthetized cats. We evaluated 259 spectro-temporal receptive fields (STRFs) and 492 frequency-tuning curves (FTCs) based on LFPs and spikes simultaneously recorded on the same electrode. We compared their characteristic frequency (CF) gradients and their cross-correlation distances. The CF gradient for spike-based FTCs was about twice that for 2–40 Hz-filtered LFP-based FTCs, indicating greatly reduced frequency selectivity for LFPs. We also present comparisons for LFPs band-pass filtered between 4–8 Hz, 8–16 Hz and 16–40 Hz, with spike-based STRFs, on the basis of their marginal frequency distributions. We find on average a significantly larger correlation between the spike-based marginal frequency distributions and those based on the 16–40 Hz filtered LFP, compared with those based on the 4–8 Hz, 8–16 Hz and 2–40 Hz filtered LFP. This suggests greater frequency specificity for the 16–40 Hz LFPs compared with those of lower frequency content. For spontaneous LFP and spike activity we evaluated 1373 pair correlations for pairs with >200 spikes in 900 s per electrode. Peak correlation-coefficient space constants were similar for the 2–40 Hz filtered LFP (5.5 mm) and the 16–40 Hz LFP (7.4 mm), whereas for spike-pair correlations the constant was about half that, at 3.2 mm. Comparing spike-pair with 2–40 Hz (and 16–40 Hz) LFP-pair correlations showed that about 16% (9%) of the variance in the spike-pair correlations could be explained from LFP-pair correlations recorded on the same electrodes within the same electrode array. This larger correlation distance, combined with the reduced CF gradient and much broader frequency selectivity, suggests that LFPs are not a substitute for spike activity in primary auditory cortex.
Affiliation(s)
- Jos J Eggermont
- Department of Physiology and Pharmacology, University of Calgary, Calgary, Alberta, Canada.
28
Geffen MN, Gervain J, Werker JF, Magnasco MO. Auditory perception of self-similarity in water sounds. Front Integr Neurosci 2011; 5:15. [PMID: 21617734 PMCID: PMC3095814 DOI: 10.3389/fnint.2011.00015] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.2] [Received: 03/16/2011] [Accepted: 04/22/2011] [Indexed: 11/22/2022]
Abstract
Many natural signals, including environmental sounds, exhibit scale-invariant statistics: their structure is repeated at multiple scales. Such scale-invariance has been identified separately across spectral and temporal correlations of natural sounds (Clarke and Voss, 1975; Attias and Schreiner, 1997; Escabi et al., 2003; Singh and Theunissen, 2003). Yet the role of scale-invariance across the overall spectro-temporal structure of a sound has not been explored directly in auditory perception. Here, we show that the acoustic waveform from a recording of running water is a self-similar fractal, exhibiting scale-invariance not only within spectral channels, but also across the full spectral bandwidth. The auditory perception of the water sound did not change with its scale. We tested the role of scale-invariance in perception by using an artificial sound, which could be rendered scale-invariant. We generated a random chirp stimulus: an auditory signal controlled by two parameters, Q, controlling the relative, and r, controlling the absolute, temporal structure of the sound. Imposing scale-invariant statistics on the artificial sound was required for its perception as natural and water-like. Further, Q had to be restricted to a specific range for the sound to be perceived as natural. To detect self-similarity in the water sound, and identify Q, the auditory system needs to process the temporal dynamics of the waveform across spectral bands in terms of the number of cycles, rather than absolute timing. We propose a two-stage neural model implementing this computation. This computation may be carried out by circuits of neurons in the auditory cortex. The set of auditory stimuli developed in this study is particularly suitable for measurements of the response properties of neurons in the auditory pathway, allowing for quantification of the effects of varying the spectro-temporal statistical structure of the stimulus.
Affiliation(s)
- Maria N Geffen
- Department of Otorhinolaryngology - Head and Neck Surgery, University of Pennsylvania School of Medicine Philadelphia, PA, USA
29
Barbour DL. Intensity-invariant coding in the auditory system. Neurosci Biobehav Rev 2011; 35:2064-72. [PMID: 21540053 DOI: 10.1016/j.neubiorev.2011.04.009] [Citation(s) in RCA: 24] [Impact Index Per Article: 1.8] [Received: 09/21/2010] [Revised: 04/09/2011] [Accepted: 04/11/2011] [Indexed: 11/27/2022]
Abstract
The auditory system faithfully represents sufficient details from sound sources such that downstream cognitive processes are capable of acting upon this information effectively even in the face of signal uncertainty, degradation or interference. This robust sound source representation leads to an invariance in perception vital for animals to interact effectively with their environment. Due to unique nonlinearities in the cochlea, sound representations early in the auditory system exhibit a large amount of variability as a function of stimulus intensity. In other words, changes in stimulus intensity, such as for sound sources at differing distances, create a unique challenge for the auditory system to encode sounds invariantly across the intensity dimension. This challenge and some strategies available to sensory systems to eliminate intensity as an encoding variable are discussed, with a special emphasis upon sound encoding.
Affiliation(s)
- Dennis L Barbour
- Laboratory of Sensory Neuroscience and Neuroengineering, Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA.
30
Pienkowski M, Eggermont JJ. Cortical tonotopic map plasticity and behavior. Neurosci Biobehav Rev 2011; 35:2117-28. [PMID: 21315757 DOI: 10.1016/j.neubiorev.2011.02.002] [Citation(s) in RCA: 61] [Impact Index Per Article: 4.7] [Received: 07/30/2010] [Revised: 02/02/2011] [Accepted: 02/04/2011] [Indexed: 11/16/2022]
Abstract
Central topographic representations of sensory epithelia have a genetic basis, but are refined by patterns of afferent input and by behavioral demands. Here we review such experience-driven map development and plasticity, focusing on the auditory system, and giving particular consideration to its adaptive value and to the putative mechanisms involved. Recent data have challenged the widely held notion that only the developing auditory brain can be influenced by changes to the prevailing acoustic environment, unless those changes convey information of behavioral relevance. Specifically, it has been shown that persistent exposure of adult animals to random, bandlimited, moderately loud sounds can lead to a reorganization of auditory cortex not unlike that following restricted hearing loss. The mature auditory brain is thus more plastic than previously supposed, with potentially troubling consequences for those working or living in noisy environments, even at exposure levels considerably below those presently considered just-acceptable.
Affiliation(s)
- Martin Pienkowski
- Hotchkiss Brain Institute, Departments of Physiology and Pharmacology, University of Calgary, Calgary, Alberta, Canada
31
Escola S, Fontanini A, Katz D, Paninski L. Hidden Markov models for the stimulus-response relationships of multistate neural systems. Neural Comput 2011; 23:1071-132. [PMID: 21299424 DOI: 10.1162/neco_a_00118]
Abstract
Given recent experimental results suggesting that neural circuits may evolve through multiple firing states, we develop a framework for estimating state-dependent neural response properties from spike train data. We modify the traditional hidden Markov model (HMM) framework to incorporate stimulus-driven, non-Poisson point-process observations. For maximal flexibility, we allow external, time-varying stimuli and the neurons' own spike histories to drive both the spiking behavior in each state and the transitioning behavior between states. We employ an appropriately modified expectation-maximization algorithm to estimate the model parameters. The expectation step is solved by the standard forward-backward algorithm for HMMs. The maximization step reduces to a set of separable concave optimization problems if the model is restricted slightly. We first test our algorithm on simulated data and are able to fully recover the parameters used to generate the data and accurately recapitulate the sequence of hidden states. We then apply our algorithm to a recently published data set in which the observed neuronal ensembles displayed multistate behavior and show that inclusion of spike history information significantly improves the fit of the model. Additionally, we show that a simple reformulation of the state space of the underlying Markov chain allows us to implement a hybrid half-multistate, half-histogram model that may be more appropriate for capturing the complexity of certain data sets than either a simple HMM or a simple peristimulus time histogram model alone.
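The expectation step mentioned above rests on the standard forward-backward recursion. As a minimal illustration, here is the rescaled forward pass for a plain discrete-emission HMM; note this is a generic textbook sketch, not the stimulus-driven, non-Poisson point-process variant the paper actually develops, and all parameter values in the usage example are illustrative:

```python
import numpy as np

def forward_log_likelihood(pi, A, B, obs):
    """Log-likelihood of an observation sequence under a discrete HMM.

    pi : (K,) initial state distribution
    A  : (K, K) transition matrix, A[i, j] = P(next state j | state i)
    B  : (K, M) emission matrix, B[i, m] = P(symbol m | state i)
    obs: sequence of observed symbol indices
    """
    alpha = pi * B[:, obs[0]]           # joint P(state, first observation)
    s = alpha.sum()
    log_lik = np.log(s)
    alpha /= s                          # rescale to avoid numerical underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate, then weight by emission
        s = alpha.sum()
        log_lik += np.log(s)
        alpha /= s
    return log_lik
```

The backward pass is symmetric, and together they supply the posteriors needed for the M-step of expectation-maximization.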
Affiliation(s)
- Sean Escola
- Center for Theoretical Neuroscience and Department of Psychiatry, Columbia University, New York, NY 10032, USA.
32
Peng Y, Sun X, Zhang J. Contextual modulation of frequency tuning of neurons in the rat auditory cortex. Neuroscience 2010; 169:1403-13. [DOI: 10.1016/j.neuroscience.2010.05.047]
33
Arc expression and neuroplasticity in primary auditory cortex during initial learning are inversely related to neural activity. Proc Natl Acad Sci U S A 2010; 107:14828-32. [PMID: 20675582 DOI: 10.1073/pnas.1008604107]
Abstract
Models of learning-dependent sensory cortex plasticity require local activity and reinforcement. An alternative proposes that neural activity involved in anticipation of a sensory stimulus, or the preparatory set, can direct plasticity so that changes could occur in regions of sensory cortex lacking activity. To test the necessity of target-induced activity for initial sensory learning, we trained rats to detect a low-frequency sound. After learning, Arc expression and physiologically measured neuroplasticity were strong in a high-frequency auditory cortex region with very weak target-induced activity in control animals. After 14 sessions, Arc and neuroplasticity were aligned with target-induced activity. The temporal and topographic correspondence between Arc and neuroplasticity suggests Arc may be intrinsic to the neuroplasticity underlying perceptual learning. Furthermore, not all neuroplasticity could be explained by activity-dependent models but can be explained if the neural activity involved in the preparatory set directs plasticity.
34
Amin N, Gill P, Theunissen FE. Role of the zebra finch auditory thalamus in generating complex representations for natural sounds. J Neurophysiol 2010; 104:784-98. [PMID: 20554842 DOI: 10.1152/jn.00128.2010]
Abstract
We estimated the spectrotemporal receptive fields of neurons in the songbird auditory thalamus, nucleus ovoidalis, and compared the neural representation of complex sounds in the auditory thalamus to those found in the upstream auditory midbrain nucleus, mesencephalicus lateralis dorsalis (MLd), and the downstream auditory pallial region, field L. Our data refute the idea that the primary sensory thalamus acts as a simple, relay nucleus: we find that the auditory thalamic receptive fields obtained in response to song are more complex than the ones found in the midbrain. Moreover, we find that linear tuning diversity and complexity in ovoidalis (Ov) are closer to those found in field L than in MLd. We also find prevalent tuning to intermediate spectral and temporal modulations, a feature that is unique to Ov. Thus even a feed-forward model of the sensory processing chain, where neural responses in the sensory thalamus reveal intermediate response properties between those in the sensory periphery and those in the primary sensory cortex, is inadequate in describing the tuning found in Ov. Based on these results, we believe that the auditory thalamic circuitry plays an important role in generating novel complex representations for specific features found in natural sounds.
Affiliation(s)
- Noopur Amin
- Helen Wills Neuroscience Institute, University of California, Berkeley, California 94720-1650, USA
35
Contribution of inhibition to stimulus selectivity in primary auditory cortex of awake primates. J Neurosci 2010; 30:7314-25. [PMID: 20505098 DOI: 10.1523/jneurosci.5072-09.2010]
Abstract
Recent studies have demonstrated the high selectivity of neurons in primary auditory cortex (A1) and a highly sparse representation of sounds by the population of A1 neurons in awake animals. However, the underlying receptive field structures that confer high selectivity on A1 neurons are poorly understood. The sharp tuning of A1 neurons' excitatory receptive fields (RFs) provides a partial explanation of the above properties. However, it remains unclear how inhibitory components of RFs contribute to the selectivity of A1 neurons observed in awake animals. To examine the role of the inhibition in sharpening stimulus selectivity, we have quantitatively analyzed stimulus-induced suppressive effects over populations of single neurons in frequency, amplitude, and time in A1 of awake marmosets. In addition to the well documented short-latency side-band suppression elicited by masking tones around the best frequency (BF) of a neuron, we uncovered long-latency suppressions caused by single-tone stimulation. Such long-latency suppressions also included monotonically increasing suppression with sound level both on-BF and off-BF, and persistent suppression lasting up to 100 ms after stimulus offset in a substantial proportion of A1 neurons. The extent of the suppression depended on the shape of a neuron's frequency-response area ("O" or "V" shaped). These findings suggest that the excitatory RF of A1 neurons is cocooned by wide-ranging inhibition that contributes to the high selectivity in A1 neurons' responses to complex stimuli. Population sparseness of the tone-responsive A1 neuron population may also be a consequence of this pervasive inhibition.
36
Pienkowski M, Eggermont JJ. Intermittent exposure with moderate-level sound impairs central auditory function of mature animals without concomitant hearing loss. Hear Res 2010; 261:30-5. [DOI: 10.1016/j.heares.2009.12.025]
37
Context dependence of spectro-temporal receptive fields with implications for neural coding. Hear Res 2010; 271:123-32. [PMID: 20123121 DOI: 10.1016/j.heares.2010.01.014]
Abstract
The spectro-temporal receptive field (STRF) is frequently used to characterize the linear frequency-time filter properties of the auditory system up to the neuron recorded from. STRFs are extremely stimulus dependent, reflecting the strong non-linearities in the auditory system. Changes in the STRF with stimulus type (tonal, noise-like, vocalizations), sound level and spectro-temporal sound density are reviewed here. Effects on STRF shape of task and attention are also briefly reviewed. Models to account for these changes, potential improvements to STRF analysis, and implications for neural coding are discussed.
38
Pienkowski M, Eggermont JJ. Nonlinear cross-frequency interactions in primary auditory cortex spectrotemporal receptive fields: a Wiener-Volterra analysis. J Comput Neurosci 2010; 28:285-303. [PMID: 20072806 DOI: 10.1007/s10827-009-0209-8]
Abstract
The effects of nonlinear interactions between different sound frequencies on the responses of neurons in primary auditory cortex (AI) have only been investigated using two-tone paradigms. Here we stimulated with relatively dense, Poisson-distributed trains of tone pips (with frequency ranges spanning five octaves, 16 frequencies/octave, and mean rates of 20 or 120 pips/s), and examined within-frequency (or auto-frequency) and cross-frequency interactions in three types of AI unit responses by computing second-order "Poisson-Wiener" auto- and cross-kernels. Units were classified on the basis of their spectrotemporal receptive fields (STRFs) as "double-peaked", "single-peaked" or "peak-valley". Second-order interactions were investigated between the two bands of excitatory frequencies on double-peaked STRFs, between an excitatory band and various non-excitatory bands on single-peaked STRFs, and between an excitatory band and an inhibitory sideband on peak-valley STRFs. We found that auto-frequency interactions (i.e., those within a single excitatory band) were always characterized by a strong depression of (first-order) excitation that decayed with the interstimulus lag up to approximately 200 ms. That depression was weaker in cross-frequency compared to auto-frequency interactions for approximately 25% of double-peaked STRFs, evidence of "combination sensitivity" for the two bands. Non-excitatory and inhibitory frequencies (on single-peaked and peak-valley STRFs, respectively) typically weakly depressed the excitatory response at short interstimulus lags (<50 ms), but weakly facilitated it at longer lags (approximately 50-200 ms). Both the depression and especially the facilitation were stronger for interactions with inhibitory frequencies rather than just non-excitatory ones. Finally, facilitation in single-peaked and peak-valley units decreased with increasing stimulus density. Our results indicate that the strong combination sensitivity and cross-frequency facilitation suggested by previous two-tone-paradigm studies are much less pronounced when using more temporally-dense stimuli.
Affiliation(s)
- Martin Pienkowski
- Department of Physiology and Pharmacology, University of Calgary, Calgary, AB, Canada
39
Abstract
Spectrotemporal receptive fields of nonlinear neurons in primary auditory cortex are stimulus dependent or context dependent. Here we show that a variant of stimulus-specific adaptation also contributes to this context dependence. Responses to sound stimulus frequencies close to the neuron's best frequency adapt with an average time constant of approximately 7 s. In contrast, responses away from the best frequency do not adapt, but in fact slightly increase over our 30-s observation window. Such stimulus-specific adaptation could function in enhancing stimulus discrimination and in maximizing neural information transmission by reducing redundancy. It also needs to be taken into account when comparing spectrotemporal receptive fields measured under adapted and nonadapted conditions.
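The adaptation described here, with near-best-frequency responses decaying over roughly 7 s, amounts to an exponential relaxation toward a steady state. A minimal sketch of that time course; the amplitudes `r0` and `r_inf` below are illustrative placeholders, not values reported by the study:

```python
import numpy as np

def adapted_response(t, r0=1.0, r_inf=0.6, tau=7.0):
    """Response amplitude at time t (s): exponential decay from the
    initial amplitude r0 toward the steady state r_inf with time
    constant tau (approximately 7 s, as in the abstract)."""
    return r_inf + (r0 - r_inf) * np.exp(-np.asarray(t, dtype=float) / tau)
```

Fitting `tau` to repeated-stimulus response amplitudes is how such a time constant is typically estimated.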
40
Robinson BL, McAlpine D. Gain control mechanisms in the auditory pathway. Curr Opin Neurobiol 2009; 19:402-7. [DOI: 10.1016/j.conb.2009.07.006]
41
Rapid synaptic depression explains nonlinear modulation of spectro-temporal tuning in primary auditory cortex by natural stimuli. J Neurosci 2009; 29:3374-86. [PMID: 19295144 DOI: 10.1523/jneurosci.5249-08.2009]
Abstract
In this study, we explored ways to account more accurately for responses of neurons in primary auditory cortex (A1) to natural sounds. The auditory cortex has evolved to extract behaviorally relevant information from complex natural sounds, but most of our understanding of its function is derived from experiments using simple synthetic stimuli. Previous neurophysiological studies have found that existing models, such as the linear spectro-temporal receptive field (STRF), fail to capture the entire functional relationship between natural stimuli and neural responses. To study this problem, we compared STRFs for A1 neurons estimated using a natural stimulus, continuous speech, with STRFs estimated using synthetic ripple noise. For about one-third of the neurons, we found significant differences between STRFs, usually in the temporal dynamics of inhibition and/or overall gain. This shift in tuning resulted primarily from differences in the coarse temporal structure of the speech and noise stimuli. Using simulations, we found that the stimulus dependence of spectro-temporal tuning can be explained by a model in which synaptic inputs to A1 neurons are susceptible to rapid nonlinear depression. This dynamic reshaping of spectro-temporal tuning suggests that synaptic depression may enable efficient encoding of natural auditory stimuli.
42
Increasing spectrotemporal sound density reveals an octave-based organization in cat primary auditory cortex. J Neurosci 2008; 28:8885-96. [PMID: 18768682 DOI: 10.1523/jneurosci.2693-08.2008]
Abstract
Auditory neurons are likely adapted to process complex stimuli, such as vocalizations, which contain spectrotemporal modulations. However, basic properties of auditory neurons are often derived from tone pips presented in isolation, which lack spectrotemporal modulations. In this context, it is unclear how to deduce the functional role of auditory neurons from their tone pip-derived tuning properties. In this study, spectrotemporal receptive fields (STRFs) were obtained from responses to multi-tone stimulus ensembles differing in their average spectrotemporal density (i.e., number of tone pips per second). STRFs for different stimulus densities were derived from multiple single-unit activity (MUA) and local field potentials (LFPs), simultaneously recorded in primary auditory cortex of cats. Consistent with earlier studies, we found that the spectral bandwidth was narrower for MUA compared with LFPs. Both neural firing rate and LFP amplitude were reduced when the density of the stimulus ensemble increased. Surprisingly, we found that increasing the spectrotemporal sound density revealed with increasing clarity an over-representation of response peaks at frequencies of approximately 3, 5, 10, and 20 kHz, in both MUA- and LFP-derived STRFs. Although the decrease in spectral bandwidth and neural activity with increasing stimulus density can likely be accounted for by forward suppression, the mechanisms underlying the over-representation of the octave-spaced response peaks are unclear. Plausibly, the over-representation may be a functional correlate of the periodic pattern of corticocortical connections observed along the tonotopic axis of cat auditory cortex.
43
Gourévitch B, Noreña A, Shaw G, Eggermont JJ. Spectrotemporal receptive fields in anesthetized cat primary auditory cortex are context dependent. Cereb Cortex 2009; 19:1448-61. [PMID: 18854580 DOI: 10.1093/cercor/bhn184]
Abstract
In order to investigate how the auditory scene is analyzed and perceived, auditory spectrotemporal receptive fields (STRFs) are generally used as a convenient way to describe how frequency and temporal sound information is encoded. However, using broadband sounds to estimate STRFs imperfectly reflects the way neurons process complex stimuli like conspecific vocalizations insofar as natural sounds often show limited bandwidth. Using recordings in the primary auditory cortex of anesthetized cats, we show that presentation of narrowband stimuli not including the best frequency of neurons provokes the appearance of residual peaks and increased firing rate at some specific spectral edges of stimuli compared with classical STRFs obtained from broadband stimuli. This result is the same for STRFs obtained from both spikes and local field potentials. Potential mechanisms likely involve release from inhibition. We thus emphasize some aspects of context dependency of STRFs, that is, how the balance of inhibitory and excitatory inputs is able to shape the neural response from the spectral content of stimuli.
Affiliation(s)
- Boris Gourévitch
- Department of Physiology and Biophysics, University of Calgary, Calgary, Alberta, Canada
44
Gourévitch B, Eggermont JJ. Spectro-temporal sound density-dependent long-term adaptation in cat primary auditory cortex. Eur J Neurosci 2008; 27:3310-21. [PMID: 18598269 DOI: 10.1111/j.1460-9568.2008.06265.x]
Abstract
Sensory systems use adaptive strategies to code for the changing environment on different time scales. Short-term adaptation (up to 100 ms) reflects mostly synaptic suppression mechanisms after response to a stimulus. Long-term adaptation (up to a few seconds) is reflected in the habituation of neuronal responses to constant stimuli. Very long-term adaptation (several weeks) can lead to plastic changes in the cortex, most often facilitated during early development, by stimulus relevance or by behavioral states such as attention. In this study, we show that long-term adaptation with a time course of tens of minutes is detectable in anesthetized adult cat auditory cortex after a few minutes of listening to random-frequency tone pips. After the initial post-onset suppression, a slow recovery of the neuronal response strength to tones at or near their best frequency was observed for low-rate random sounds (four pips per octave per second) during stimulation. The firing rate at the end of stimulation (15 min) reached levels close to that observed during the initial onset response. The effect, visible for both spikes and, to a smaller extent, local field potentials, decreased with increasing spectro-temporal density of the sound. The spectro-temporal density of sound may therefore be of particular relevance in cortical processing. Our findings suggest that low stimulus rates may produce a specific acoustic environment that shapes the primary auditory cortex through very different processing than for spectro-temporally more dense and complex sounds.
Affiliation(s)
- Boris Gourévitch
- Department of Physiology and Biophysics, Department of Psychology, University of Calgary, Calgary, Alberta, Canada
45
Lesica NA, Grothe B. Dynamic spectrotemporal feature selectivity in the auditory midbrain. J Neurosci 2008; 28:5412-21. [PMID: 18495875 PMCID: PMC6670618 DOI: 10.1523/jneurosci.0073-08.2008]
Abstract
The transformation of auditory information from the cochlea to the cortex is a highly nonlinear process. Studies using tone stimuli have revealed that changes in even the most basic parameters of the auditory stimulus can alter neural response properties; for example, a change in stimulus intensity can cause a shift in a neuron's preferred frequency. However, it is not yet clear how such nonlinearities contribute to the processing of spectrotemporal features in complex sounds. Here, we use spectrotemporal receptive fields (STRFs) to characterize the effects of stimulus intensity on feature selectivity in the mammalian inferior colliculus (IC). At low intensities, we find that STRFs are relatively simple, typically consisting of a single excitatory region, indicating that the neural response is simply a reflection of the stimulus amplitude at the preferred frequency. In contrast, we find that STRFs at high intensities typically consist of a combination of an excitatory region and one or more inhibitory regions, often in a spectrotemporally inseparable arrangement, indicating selectivity for complex auditory features. We show that a linear-nonlinear model with the appropriate STRF can predict neural responses to stimuli with a fixed intensity, and we demonstrate that a simple extension of the model with an intensity-dependent STRF can predict responses to stimuli with varying intensity. These results illustrate the complexity of auditory feature selectivity in the IC, but also provide encouraging evidence that the prediction of nonlinear responses to complex stimuli is a tractable problem.
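The linear-nonlinear model referred to above predicts the firing rate by applying a static nonlinearity to the output of a linear spectrotemporal filter. A minimal sketch, using half-wave rectification as a stand-in for whatever output nonlinearity would actually be fitted to data:

```python
import numpy as np

def ln_predict(strf, spec, nonlin=lambda u: np.maximum(u, 0.0)):
    """Linear-nonlinear (LN) rate prediction.

    strf : (n_lags, F) linear filter over time lags and frequencies
    spec : (T, F) stimulus spectrogram
    Convolves the spectrogram with the STRF over time lags, then applies
    a static nonlinearity (rectification here, as a placeholder).
    """
    n_lags, F = strf.shape
    T = spec.shape[0]
    drive = np.zeros(T)
    for lag in range(n_lags):
        drive[lag:] += spec[:T - lag] @ strf[lag]  # contribution of each lag
    return nonlin(drive)
```

An intensity-dependent variant, as in the abstract, would simply select or interpolate `strf` as a function of the stimulus level before calling this prediction.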
Affiliation(s)
- Nicholas A Lesica
- Department of Biology II, Ludwig-Maximilians-University Munich, 82152 Martinsried, Germany.
46
Nonlinearities and contextual influences in auditory cortical responses modeled with multilinear spectrotemporal methods. J Neurosci 2008; 28:1929-42. [PMID: 18287509 DOI: 10.1523/jneurosci.3377-07.2008]
Abstract
The relationship between a sound and its neural representation in the auditory cortex remains elusive. Simple measures such as the frequency response area or frequency tuning curve provide little insight into the function of the auditory cortex in complex sound environments. Spectrotemporal receptive field (STRF) models, despite their descriptive potential, perform poorly when used to predict auditory cortical responses, showing that nonlinear features of cortical response functions, which are not captured by STRFs, are functionally important. We introduce a new approach to the description of auditory cortical responses, using multilinear modeling methods. These descriptions simultaneously account for several nonlinearities in the stimulus-response functions of auditory cortical neurons, including adaptation, spectral interactions, and nonlinear sensitivity to sound level. The models reveal multiple inseparabilities in cortical processing of time lag, frequency, and sound level, and suggest functional mechanisms by which auditory cortical neurons are sensitive to stimulus context. By explicitly modeling these contextual influences, the models are able to predict auditory cortical responses more accurately than are STRF models. In addition, they can explain some forms of stimulus dependence in STRFs that were previously poorly understood.
47
The consequences of response nonlinearities for interpretation of spectrotemporal receptive fields. J Neurosci 2008; 28:446-55. [PMID: 18184787 DOI: 10.1523/jneurosci.1775-07.2007]
Abstract
Neurons in the central auditory system are often described by the spectrotemporal receptive field (STRF), conventionally defined as the best linear fit between the spectrogram of a sound and the spike rate it evokes. An STRF is often assumed to provide an estimate of the receptive field of a neuron, i.e., the spectral and temporal range of stimuli that affect the response. However, when the true stimulus-response function is nonlinear, the STRF will be stimulus dependent, and changes in the stimulus properties can alter estimates of the sign and spectrotemporal extent of receptive field components. We demonstrate analytically and in simulations that, even when uncorrelated stimuli are used, interactions between simple neuronal nonlinearities and higher-order structure in the stimulus can produce STRFs that show contributions from time-frequency combinations to which the neuron is actually insensitive. Only when spectrotemporally independent stimuli are used does the STRF reliably indicate features of the underlying receptive field, and even then it provides only a conservative estimate. One consequence of these observations, illustrated using natural stimuli, is that a stimulus-induced change in an STRF could arise from a consistent but nonlinear neuronal response to stimulus ensembles with differing higher-order dependencies. Thus, although the responses of higher auditory neurons may well involve adaptation to the statistics of different stimulus ensembles, stimulus dependence of STRFs alone, or indeed of any overly constrained stimulus-response mapping, cannot demonstrate the nature or magnitude of such effects.
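The "best linear fit between the spectrogram of a sound and the spike rate it evokes" is, in practice, a regularized least-squares problem over time-lagged spectrogram slices. A sketch under that conventional reading; the ridge penalty and array shapes are illustrative choices, not taken from the paper:

```python
import numpy as np

def estimate_strf(spec, rate, n_lags, ridge=1e-8):
    """Ridge-regression STRF estimate.

    spec : (T, F) stimulus spectrogram
    rate : (T,) observed spike rate
    Returns the (n_lags, F) linear filter minimizing the squared error
    between filtered spectrogram and rate, with a small ridge penalty.
    """
    T, F = spec.shape
    X = np.zeros((T, n_lags * F))
    for lag in range(n_lags):                      # stack lagged copies
        X[lag:, lag * F:(lag + 1) * F] = spec[:T - lag]
    w = np.linalg.solve(X.T @ X + ridge * np.eye(n_lags * F), X.T @ rate)
    return w.reshape(n_lags, F)
```

The abstract's point is that when the true stimulus-response function is nonlinear, this estimate inherits structure from the stimulus ensemble itself, so the recovered filter need not reflect the neuron's actual receptive field.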
48
Multi-frequency auditory stimulation disrupts spindling activity in anesthetized animals. Neuroscience 2008; 151:888-900. [DOI: 10.1016/j.neuroscience.2007.11.028]
49
Atencio CA, Blake DT, Strata F, Cheung SW, Merzenich MM, Schreiner CE. Frequency-modulation encoding in the primary auditory cortex of the awake owl monkey. J Neurophysiol 2007; 98:2182-95. [PMID: 17699695 DOI: 10.1152/jn.00394.2007]
Abstract
Many communication sounds, such as New World monkey twitter calls, contain frequency-modulated (FM) sweeps. To determine how this prominent vocalization element is represented in the auditory cortex we examined neural responses to logarithmic FM sweep stimuli in the primary auditory cortex (AI) of two awake owl monkeys. Using an implanted array of microelectrodes we quantitatively characterized neuronal responses to FM sweeps and to random tone-pip stimuli. Tone-pip responses were used to construct spectrotemporal receptive fields (STRFs). Classification of FM sweep responses revealed few neurons with high direction and speed selectivity. Most neurons responded to sweeps in both directions and over a broad range of sweep speeds. Characteristic frequency estimates from FM responses were highly correlated with estimates from STRFs, although spectral receptive field bandwidth was consistently underestimated by FM stimuli. Predictions of FM direction selectivity and best speed from STRFs were significantly correlated with observed FM responses, although some systematic discrepancies existed. Last, the population distributions of FM responses in the awake owl monkey were similar to, although of longer temporal duration than, those in the anesthetized squirrel monkeys.
Affiliation(s)
- Craig A Atencio
- Bioengineering Graduate Group, University of California San Francisco, San Francisco, CA 94143-0732, USA.
50
Shechter B, Depireux DA. Stability of spectro-temporal tuning over several seconds in primary auditory cortex of the awake ferret. Neuroscience 2007; 148:806-14. [PMID: 17693032 PMCID: PMC2039872 DOI: 10.1016/j.neuroscience.2007.06.027]
Abstract
The steady-state spectro-temporal tuning of auditory cortical cells has been studied using a variety of broadband stimuli that characterize neurons by their steady-state responses to long duration stimuli, lasting from about a second to several minutes. Central sensory stations are thought to adapt in their response to stimuli presented over extended periods of time. For instance, we have previously shown that auditory cortical neurons display a second order of adaptation, whereby the rate of their adaptation to the repeated presentation of fixed alternating stimuli decreases with each presentation. The auditory grating (or ripple) method of characterizing central auditory neurons, and its extensions, have proven very effective. But these stimuli are typically used with spectro-temporal content held fixed over time-scales of seconds, introducing the possibility of rapid adaptation while the receptive field is being measured, whereas the neural response used to compute a spectro-temporal receptive field (STRF) assumes stationarity in the neural input/output function. We demonstrate dynamic changes in some parameters during the measurement of the STRF over a period of seconds, even in the absence of a relevant behavioral task. Specifically, we find in the primary auditory cortex of the awake ferret, small but systematic changes in duration and breadth of tuning of STRFs when comparing the early (0.25-1.75 s) and late (4.5-6 s) segments of the responses to these stimuli.
Affiliation(s)
- B Shechter
- Department of Anatomy and Neurobiology, School of Medicine, University of Maryland, Baltimore, MD 21201, USA