1
Reilly J, Goodwin JD, Lu S, Kozlov AS. Bidirectional generative adversarial representation learning for natural stimulus synthesis. J Neurophysiol 2024; 132:1156-1169. PMID: 39196986; PMCID: PMC11495180; DOI: 10.1152/jn.00421.2023.
Abstract
Thousands of species use vocal signals to communicate with one another. Vocalizations carry rich information, yet characterizing and analyzing these complex, high-dimensional signals is difficult and prone to human bias. Moreover, animal vocalizations are ethologically relevant stimuli whose representation by auditory neurons is an important subject of research in sensory neuroscience. A method that can efficiently generate naturalistic vocalization waveforms would offer an unlimited supply of stimuli with which to probe neuronal computations. Although unsupervised learning methods allow for the projection of vocalizations into low-dimensional latent spaces learned from the waveforms themselves, and generative modeling allows for the synthesis of novel vocalizations for use in downstream tasks, we are not aware of any model that combines these tasks to synthesize naturalistic vocalizations in the waveform domain for stimulus playback. In this paper, we demonstrate BiWaveGAN: a bidirectional generative adversarial network (GAN) capable of learning a latent representation of ultrasonic vocalizations (USVs) from mice. We show that BiWaveGAN can be used to generate, and interpolate between, realistic vocalization waveforms. We then use these synthesized stimuli along with natural USVs to probe the sensory input space of mouse auditory cortical neurons. We show that stimuli generated from our method evoke neuronal responses as effectively as real vocalizations, and produce receptive fields with the same predictive power. BiWaveGAN is not restricted to mouse USVs but can be used to synthesize naturalistic vocalizations of any animal species and interpolate between vocalizations of the same or different species, which could be useful for probing categorical boundaries in representations of ethologically relevant auditory signals.

NEW & NOTEWORTHY: A new type of artificial neural network is presented that can be used to generate animal vocalization waveforms and interpolate between them to create new vocalizations. We find that our synthetic naturalistic stimuli drive auditory cortical neurons in the mouse equally well and produce receptive field features with the same predictive power as those obtained with natural mouse vocalizations, confirming the quality of the stimuli produced by the neural network.
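The latent-space interpolation described in this abstract can be sketched compactly: spherically interpolate between two latent codes and pass each intermediate code through the generator. The sketch below uses a random stand-in generator, since BiWaveGAN's actual interface is not given here; the function names and dimensions are assumptions.

```python
# Sketch: interpolating between two vocalizations in a GAN latent space.
# The generator is a hypothetical stand-in, not BiWaveGAN's real API.
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between latent vectors z0 and z1."""
    z0n, z1n = z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0n, z1n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1 - t) * z0 + t * z1
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

def generator(z, n_samples=16384):
    """Stand-in for a trained generator G: latent code -> waveform."""
    rng = np.random.default_rng(abs(int(z.sum() * 1e6)) % (2**32))
    return rng.standard_normal(n_samples) * 0.1  # placeholder waveform

latent_dim = 100
z_a = np.random.randn(latent_dim)  # latent code of vocalization A (e.g., from the encoder)
z_b = np.random.randn(latent_dim)  # latent code of vocalization B
waveforms = [generator(slerp(z_a, z_b, t)) for t in np.linspace(0, 1, 9)]
```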
Affiliation(s)
- Johnny Reilly
- Department of Bioengineering, Imperial College London, London, United Kingdom
- John D Goodwin
- Department of Bioengineering, Imperial College London, London, United Kingdom
- Sihao Lu
- Department of Bioengineering, Imperial College London, London, United Kingdom
- Andriy S Kozlov
- Department of Bioengineering, Imperial College London, London, United Kingdom
2
Homma NY, See JZ, Atencio CA, Hu C, Downer JD, Beitel RE, Cheung SW, Najafabadi MS, Olsen T, Bigelow J, Hasenstaub AR, Malone BJ, Schreiner CE. Receptive-field nonlinearities in primary auditory cortex: a comparative perspective. Cereb Cortex 2024; 34:bhae364. PMID: 39270676; PMCID: PMC11398879; DOI: 10.1093/cercor/bhae364.
Abstract
Cortical processing of auditory information can be affected by interspecies differences as well as brain states. Here we compare multifeature spectro-temporal receptive fields (STRFs) and associated input/output functions or nonlinearities (NLs) of neurons in primary auditory cortex (AC) of four mammalian species. Single-unit recordings were performed in awake animals (female squirrel monkeys, female and male mice) and anesthetized animals (female squirrel monkeys, rats, and cats). Neuronal responses were modeled as consisting of two STRFs and their associated NLs. The NLs for the STRF with the highest information content show a broad distribution between linear and quadratic forms. In awake animals, we find a higher percentage of quadratic-like NLs, as opposed to more linear NLs in anesthetized animals. Moderate sex differences in the shape of NLs were observed between male and female unanesthetized mice. This indicates that the core AC possesses a rich variety of potential computations, particularly in awake animals, suggesting that multiple computational algorithms are at play to enable the auditory system's robust recognition of auditory events.
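The model family compared here is the standard multifilter linear-nonlinear (LN) form: each STRF projects the recent stimulus, and an output NL maps the projection to a firing rate. The sketch below illustrates that form with simulated filters and example linear-like and quadratic NLs; it is not the authors' code, and the filter shapes are placeholders.

```python
# Sketch of a two-filter LN model: two STRFs, each followed by its own
# output nonlinearity (NL), summed into a firing rate. All values simulated.
import numpy as np

n_freq, n_lag = 32, 20                          # spectro-temporal filter dimensions
rng = np.random.default_rng(0)
strf1 = rng.standard_normal((n_freq, n_lag))    # dominant feature detector
strf2 = rng.standard_normal((n_freq, n_lag))    # second, weaker filter

def project(stimulus, strf, t):
    """Inner product of the STRF with the stimulus patch ending at time t."""
    patch = stimulus[:, t - n_lag:t]
    return np.sum(strf * patch)

linear_nl = lambda x: np.maximum(0.0, x)        # rectified, roughly linear NL
quadratic_nl = lambda x: x ** 2                 # symmetric, quadratic NL

stimulus = rng.standard_normal((n_freq, 1000))  # spectrogram-like dynamic stimulus
t = 500
rate = linear_nl(project(stimulus, strf1, t)) + quadratic_nl(project(stimulus, strf2, t))
```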
Affiliation(s)
- Natsumi Y Homma
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology—Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
- Department of Physiology, Development and Neuroscience, University of Cambridge, Downing Street, Cambridge, UK
- Jermyn Z See
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology—Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
- Craig A Atencio
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology—Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
- Congcong Hu
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology—Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
- Joshua D Downer
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology—Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
- Center for Neuroscience, University of California Davis, Newton Ct, Davis, CA, USA
- Ralph E Beitel
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology—Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
- Steven W Cheung
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology—Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
- Mina Sadeghi Najafabadi
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology—Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
- Timothy Olsen
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology—Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
- James Bigelow
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology—Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
- Andrea R Hasenstaub
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology—Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
- Brian J Malone
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology—Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
- Center for Neuroscience, University of California Davis, Newton Ct, Davis, CA, USA
- Christoph E Schreiner
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology—Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
3
Mai A, Riès S, Ben-Haim S, Shih JJ, Gentner TQ. Acoustic and language-specific sources for phonemic abstraction from speech. Nat Commun 2024; 15:677. PMID: 38263364; PMCID: PMC10805762; DOI: 10.1038/s41467-024-44844-9.
Abstract
Spoken language comprehension requires abstraction of linguistic information from speech, but the interaction between auditory and linguistic processing of speech remains poorly understood. Here, we investigate the nature of this abstraction using neural responses recorded intracranially while participants listened to conversational English speech. Capitalizing on multiple, language-specific patterns where phonological and acoustic information diverge, we demonstrate the causal efficacy of the phoneme as a unit of analysis and dissociate the unique contributions of phonemic and spectrographic information to neural responses. Quantitative higher-order response models also reveal that unique contributions of phonological information are carried in the covariance structure of the stimulus-response relationship. This suggests that linguistic abstraction is shaped by neurobiological mechanisms that involve integration across multiple spectro-temporal features and prior phonological information. These results link speech acoustics to phonology and morphosyntax, substantiating predictions about abstractness in linguistic theory and providing evidence for the acoustic features that support that abstraction.
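Dissociating phonemic from spectrographic contributions is commonly done with nested encoding models: fit the response with acoustic features alone, then with phoneme labels added, and compare held-out fit. The sketch below illustrates that general approach on simulated data; the feature construction and regression details are assumptions, not the study's pipeline.

```python
# Sketch of a nested encoding-model comparison: spectrogram features alone
# versus spectrogram plus one-hot phoneme features, scored on held-out data.
import numpy as np

rng = np.random.default_rng(1)
n_t = 2000
spec = rng.standard_normal((n_t, 16))          # spectrogram features per frame
phon = rng.integers(0, 40, n_t)                # phoneme label per frame
phon_onehot = np.eye(40)[phon]                 # one-hot phoneme features
y = spec @ rng.standard_normal(16) + phon_onehot @ rng.standard_normal(40)

def cv_r2(X, y, lam=1.0, split=1500):
    """Ridge fit on the first `split` frames, R^2 on the rest."""
    Xtr, Xte, ytr, yte = X[:split], X[split:], y[:split], y[split:]
    w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(X.shape[1]), Xtr.T @ ytr)
    resid = yte - Xte @ w
    return 1 - resid.var() / yte.var()

r2_acoustic = cv_r2(spec, y)
r2_joint = cv_r2(np.hstack([spec, phon_onehot]), y)
unique_phonemic = r2_joint - r2_acoustic       # fit explained only by phoneme labels
```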
Affiliation(s)
- Anna Mai
- University of California, San Diego, Linguistics, 9500 Gilman Dr., La Jolla, CA, 92093, USA.
- Stephanie Riès
- San Diego State University, School of Speech, Language, and Hearing Sciences, 5500 Campanile Drive, San Diego, CA, 92182, USA
- San Diego State University, Center for Clinical and Cognitive Sciences, 5500 Campanile Drive, San Diego, CA, 92182, USA
- Sharona Ben-Haim
- University of California, San Diego, Neurological Surgery, 9500 Gilman Dr., La Jolla, CA, 92093, USA
- Jerry J Shih
- University of California, San Diego, Neurosciences, 9500 Gilman Dr., La Jolla, CA, 92093, USA
- Timothy Q Gentner
- University of California, San Diego, Psychology, 9500 Gilman Dr., La Jolla, CA, 92093, USA
- University of California, San Diego, Neurobiology, 9500 Gilman Dr., La Jolla, CA, 92093, USA
- University of California, San Diego, Kavli Institute for Brain and Mind, 9500 Gilman Dr., La Jolla, CA, 92093, USA
4
López Espejo M, David SV. A sparse code for natural sound context in auditory cortex. Curr Res Neurobiol 2023; 6:100118. PMID: 38152461; PMCID: PMC10749876; DOI: 10.1016/j.crneur.2023.100118.
Abstract
Accurate sound perception can require integrating information over hundreds of milliseconds or even seconds. Spectro-temporal models of sound coding by single neurons in auditory cortex indicate that the majority of sound-evoked activity can be attributed to stimuli within the preceding few tens of milliseconds. It remains uncertain how the auditory system integrates information about sensory context on a longer timescale. Here we characterized long-lasting contextual effects in auditory cortex (AC) using a diverse set of natural sound stimuli. We measured context effects as the difference in a neuron's response to a single probe sound following two different context sounds. Many AC neurons showed context effects lasting longer than the temporal window of a traditional spectro-temporal receptive field. The duration and magnitude of context effects varied substantially across neurons and stimuli. This diversity of context effects formed a sparse code across the neural population that encoded a wider range of contexts than any constituent neuron. Encoding model analysis indicates that context effects can be explained by activity in the local neural population, suggesting that recurrent local circuits support a long-lasting representation of sensory context in auditory cortex.
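The context-effect measure described here, the difference in a neuron's probe response after two different contexts, is straightforward to illustrate. The sketch below computes it from simulated trials; the bin size and the noise floor used to summarize effect duration are assumptions, not the authors' exact criteria.

```python
# Sketch: context effect as the difference between trial-averaged responses
# to the same probe sound following two different context sounds. Simulated data.
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_bins = 40, 100                       # probe response in (assumed) 10-ms bins
probe_after_ctx_a = rng.poisson(3.0, (n_trials, n_bins))
probe_after_ctx_b = rng.poisson(2.5, (n_trials, n_bins))

psth_a = probe_after_ctx_a.mean(axis=0)
psth_b = probe_after_ctx_b.mean(axis=0)
context_effect = psth_a - psth_b                 # signed effect per time bin

# One possible summary: index of the last bin where the effect exceeds a
# rough SEM-based noise floor, i.e., how long the context effect persists.
magnitude = np.abs(context_effect)
threshold = 2 * np.sqrt(psth_a / n_trials + psth_b / n_trials)
last_sig_bin = int(np.max(np.nonzero(magnitude > threshold)[0], initial=0))
```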
Affiliation(s)
- Mateo López Espejo
- Neuroscience Graduate Program, Oregon Health & Science University, Portland, OR, USA
- Stephen V. David
- Otolaryngology, Oregon Health & Science University, Portland, OR, USA
5
Lu S, Ang GW, Steadman M, Kozlov AS. Composite receptive fields in the mouse auditory cortex. J Physiol 2023; 601:4091-4104. PMID: 37578817; PMCID: PMC10952747; DOI: 10.1113/JP285003.
Abstract
A central question in sensory neuroscience is how neurons represent complex natural stimuli. This process involves multiple steps of feature extraction to obtain a condensed, categorical representation useful for classification and behaviour. It has previously been shown that central auditory neurons in the starling have composite receptive fields composed of multiple features. Whether this property is an idiosyncratic characteristic of songbirds, a group of highly specialized vocal learners, or a generic property of sensory processing is unknown. To address this question, we have recorded responses from auditory cortical neurons in mice, and characterized their receptive fields using mouse ultrasonic vocalizations (USVs) as a natural and ethologically relevant stimulus and pitch-shifted starling songs as a natural but ethologically irrelevant control stimulus. We have found that these neurons display composite receptive fields with multiple excitatory and inhibitory subunits. Moreover, this was the case with either the conspecific or the heterospecific vocalizations. We then trained the sparse filtering algorithm on both classes of natural stimuli to obtain statistically optimal features, and compared the natural and artificial features using UMAP, a dimensionality-reduction algorithm previously used to analyse mouse USVs and birdsongs. We have found that the receptive-field features obtained with both types of the natural stimuli clustered together, as did the sparse-filtering features. However, the natural and artificial receptive-field features clustered mostly separately. Based on these results, our general conclusion is that composite receptive fields are not a unique characteristic of specialized vocal learners but are likely a generic property of central auditory systems.

KEY POINTS:
- Auditory cortical neurons in the mouse have composite receptive fields with several excitatory and inhibitory features.
- Receptive-field features capture temporal and spectral modulations of natural stimuli.
- Ethological relevance of the stimulus affects the estimation of receptive-field dimensionality.
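The feature-comparison step described in this abstract can be sketched as follows: embed the receptive-field and sparse-filtering feature sets with UMAP and inspect whether they co-cluster. The matrices below are random placeholders standing in for the estimated features, and the UMAP settings are illustrative; the umap-learn package is assumed to be installed.

```python
# Sketch: UMAP embedding of receptive-field (RF) features estimated from two
# natural stimulus classes, plus sparse-filtering features, for cluster comparison.
import numpy as np
import umap  # from the umap-learn package

rng = np.random.default_rng(3)
rf_features_usv = rng.standard_normal((80, 640))   # RF features from mouse USVs
rf_features_song = rng.standard_normal((80, 640))  # RF features from starling songs
sf_features = rng.standard_normal((80, 640))       # sparse-filtering features

X = np.vstack([rf_features_usv, rf_features_song, sf_features])
labels = np.repeat([0, 1, 2], 80)                  # feature-set label for plotting

embedding = umap.UMAP(n_neighbors=15, min_dist=0.1, random_state=0).fit_transform(X)
# Inspect whether label groups 0 and 1 (natural-stimulus RF features) overlap
# while group 2 (statistically optimal features) occupies a distinct region.
```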
Affiliation(s)
- Sihao Lu
- Department of Bioengineering, Imperial College London, London, United Kingdom
- Grace W.Y. Ang
- Department of Bioengineering, Imperial College London, London, United Kingdom
- Mark Steadman
- Department of Bioengineering, Imperial College London, London, United Kingdom
- Andriy S Kozlov
- Department of Bioengineering, Imperial College London, London, United Kingdom
6
Homma NY, Atencio CA, Schreiner CE. Plasticity of multidimensional receptive fields in core rat auditory cortex directed by sound statistics. Neuroscience 2021; 467:150-170. PMID: 33951506; DOI: 10.1016/j.neuroscience.2021.04.028.
Abstract
Sensory cortical neurons can nonlinearly integrate a wide range of inputs. The outcome of this nonlinear process can be approximated by more than one receptive field component or filter to characterize the ensuing stimulus preference. The functional properties of multidimensional filters are, however, not well understood. Here we estimated two spectrotemporal receptive fields (STRFs) per neuron using maximally informative dimension analysis. We compared their temporal and spectral modulation properties and determined the stimulus information captured by the two STRFs in core rat auditory cortical fields, primary auditory cortex (A1) and ventral auditory field (VAF). The first STRF is the dominant filter and acts as a sound feature detector in both fields. The second STRF is less feature specific, prefers lower modulations, and carries less spike information than the first STRF. The information jointly captured by the two STRFs was larger than the sum of the information captured by the individual STRFs, reflecting nonlinear interactions between the two filters. This information gain was larger in A1. We next determined how the acoustic environment affects the structure and relationship of these two STRFs. Rats were exposed to moderate levels of spectrotemporally modulated noise during development. Noise exposure strongly altered the spectrotemporal preference of the first STRF in both cortical fields. The interaction between the two STRFs was reduced by noise exposure in A1 but not in VAF. The results reveal new functional distinctions between A1 and VAF, indicating that (i) A1 has stronger interactions between the two STRFs than VAF, (ii) noise exposure diminishes the representation of modulation parameters contained in the noise more strongly for the first STRF in both fields, and (iii) plasticity induced by noise exposure can affect the strength of filter interactions in A1. Taken together, ascertaining two STRFs per neuron enhances the understanding of cortical information processing and plasticity effects in core auditory cortex.
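The filter-interaction result, more information captured jointly than by the two filters separately, can be illustrated with a simple histogram-based information estimate on simulated filter projections. The binning scheme and the simulated multiplicative interaction below are assumptions, not the paper's estimator.

```python
# Sketch: information carried by projections onto two filters, individually
# and jointly. Synergy > 0 indicates cooperative (nonlinearly interacting) filters.
import numpy as np

def mutual_info(x_bins, spikes):
    """I(X; spike) from binned projection values and a binary spike train."""
    info, p_spike = 0.0, spikes.mean()
    for b in np.unique(x_bins):
        mask = x_bins == b
        p_b = mask.mean()
        p_sb = spikes[mask].mean()
        for p, q in ((p_sb, p_spike), (1 - p_sb, 1 - p_spike)):
            if p > 0:
                info += p_b * p * np.log2(p / q)
    return info

rng = np.random.default_rng(4)
proj1 = rng.standard_normal(20000)
proj2 = rng.standard_normal(20000)
# Simulated spiking with a multiplicative interaction between the projections.
spikes = (rng.random(20000) < 1 / (1 + np.exp(-(proj1 + proj1 * proj2)))).astype(int)

edges1 = np.quantile(proj1, np.linspace(0, 1, 9)[1:-1])
edges2 = np.quantile(proj2, np.linspace(0, 1, 9)[1:-1])
b1, b2 = np.digitize(proj1, edges1), np.digitize(proj2, edges2)
i1, i2 = mutual_info(b1, spikes), mutual_info(b2, spikes)
i_joint = mutual_info(b1 * 8 + b2, spikes)     # joint binning of both projections
synergy = i_joint - (i1 + i2)
```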
Affiliation(s)
- Natsumi Y Homma
- Coleman Memorial Laboratory, Department of Otolaryngology - Head and Neck Surgery, University of California San Francisco, San Francisco, USA; Center for Integrative Neuroscience, University of California San Francisco, San Francisco, USA.
- Craig A Atencio
- Coleman Memorial Laboratory, Department of Otolaryngology - Head and Neck Surgery, University of California San Francisco, San Francisco, USA
- Christoph E Schreiner
- Coleman Memorial Laboratory, Department of Otolaryngology - Head and Neck Surgery, University of California San Francisco, San Francisco, USA; Center for Integrative Neuroscience, University of California San Francisco, San Francisco, USA
7
Shih JY, Yuan K, Atencio CA, Schreiner CE. Distinct manifestations of cooperative, multidimensional stimulus representations in different auditory forebrain stations. Cereb Cortex 2020; 30:3130-3147. PMID: 32047882; DOI: 10.1093/cercor/bhz299.
Abstract
Classic spectrotemporal receptive fields (STRFs) for auditory neurons are usually expressed as a single linear filter representing a single encoded stimulus feature. Multifilter STRF models represent the stimulus-response relationship of primary auditory cortex (A1) neurons more accurately because they can capture multiple stimulus features. To determine whether multifilter processing is unique to A1, we compared the utility of single-filter versus multifilter STRF models in the ventral medial geniculate body (MGBv), anterior auditory field (AAF), and A1 of ketamine-anesthetized cats. We estimated STRFs using both spike-triggered average (STA) and maximally informative dimension (MID) methods. Comparison of the basic filter properties of the first (MID1) and second (MID2) maximally informative dimensions in the 3 stations revealed broader spectral integration of MID2s in MGBv and A1 as opposed to AAF. MID2 peak latency was substantially longer than for STAs and MID1s in all 3 stations. The 2-filter MID model captured more information and yielded better predictions in many neurons from all 3 areas, but disproportionately more so in AAF and A1 compared with MGBv. Significantly, information-enhancing cooperation between the 2 MIDs was largely restricted to A1 neurons. This demonstrates significant differences in how these 3 forebrain stations process auditory information, as expressed in effective and synergistic multifilter processing.
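The STA referenced here is the simplest of the estimators compared: the spike-weighted average of the stimulus patches preceding each spike. A minimal sketch on simulated data, with dimensions chosen for illustration:

```python
# Sketch: spike-triggered average (STA) estimate of a single-filter STRF.
import numpy as np

rng = np.random.default_rng(5)
n_freq, n_lag, n_t = 24, 20, 50000
stim = rng.standard_normal((n_freq, n_t))        # dynamic-ripple-like stimulus
true_filter = rng.standard_normal((n_freq, n_lag))

# Generate spikes from a rectified projection onto the true filter.
proj = np.array([np.sum(true_filter * stim[:, t - n_lag:t]) for t in range(n_lag, n_t)])
rate = np.maximum(0, proj)
spikes = rng.poisson(0.05 * rate)

# STA: average the stimulus patch preceding each spike, weighted by spike count.
sta = np.zeros((n_freq, n_lag))
for i, count in enumerate(spikes):
    if count:
        sta += count * stim[:, i:i + n_lag]      # patch ending at time i + n_lag
sta /= spikes.sum()
```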
Affiliation(s)
- Jonathan Y Shih
- Department of Otolaryngology-Head and Neck Surgery, Coleman Memorial Laboratory, UCSF Center for Integrative Neuroscience, University of California, San Francisco, CA 94158-0444, USA
- Kexin Yuan
- Department of Otolaryngology-Head and Neck Surgery, Coleman Memorial Laboratory, UCSF Center for Integrative Neuroscience, University of California, San Francisco, CA 94158-0444, USA; Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, China
- Craig A Atencio
- Department of Otolaryngology-Head and Neck Surgery, Coleman Memorial Laboratory, UCSF Center for Integrative Neuroscience, University of California, San Francisco, CA 94158-0444, USA
- Christoph E Schreiner
- Department of Otolaryngology-Head and Neck Surgery, Coleman Memorial Laboratory, UCSF Center for Integrative Neuroscience, University of California, San Francisco, CA 94158-0444, USA
8
Sharpee TO, Berkowitz JA. Linking neural responses to behavior with information-preserving population vectors. Curr Opin Behav Sci 2019; 29:37-44. PMID: 36590862; PMCID: PMC9802663; DOI: 10.1016/j.cobeha.2019.03.004.
Abstract
All systems for processing signals, both artificial and within animals, must obey fundamental statistical laws for how information can be processed. We discuss here recent results using information theory that provide a blueprint for building circuits where signals can be read out without information loss. Many properties that are necessary to build information-preserving circuits are actually observed in real neurons, at least approximately. One such property is the use of a logistic nonlinearity for relating inputs to neural response probability. Such nonlinearities are common in neural and intracellular networks. With this nonlinearity type, there is a linear combination of neural responses that is guaranteed to preserve the Shannon information contained in the response of a neural population, no matter how many neurons it contains. This read-out measure is related to a classic quantity known as the population vector, which has been quite successful in relating neural responses to animal behavior in a wide variety of cases. Nevertheless, the population vector did not withstand the scrutiny of detailed information-theoretic analyses, which showed that it discards substantial amounts of information contained in the responses of a neural population. We discuss recent theoretical results showing how to modify the population vector expression to make it 'information-preserving', and what is necessary in terms of neural circuit organization to allow for lossless information transfer. Implementing these strategies within artificial systems is likely to increase their efficiency, especially for brain-machine interfaces.
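The two ingredients discussed, a logistic response nonlinearity and a population-vector read-out weighted by each neuron's preferred stimulus direction, can be written down in a few lines. The sketch below only illustrates their form on simulated neurons; the precise conditions under which this read-out is information-preserving are given in the cited work, and the weighting here is the classic, unmodified form.

```python
# Sketch: logistic spiking nonlinearity P(spike) = 1 / (1 + exp(-(w.s + b)))
# and a population-vector read-out, i.e., responses weighted by preferred directions.
import numpy as np

rng = np.random.default_rng(6)
n_neurons, dim = 50, 2
W = rng.standard_normal((n_neurons, dim))        # preferred stimulus directions
b = rng.standard_normal(n_neurons)               # response thresholds

def population_response(s):
    """Binary spikes drawn from logistic response probabilities."""
    p = 1 / (1 + np.exp(-(W @ s + b)))           # logistic nonlinearity
    return (rng.random(n_neurons) < p).astype(float)

def population_vector(r):
    """Classic read-out: sum of responses weighted by preferred directions."""
    return W.T @ r

s = np.array([0.8, -0.3])                        # an example stimulus
readout = population_vector(population_response(s))
```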
9
King AJ, Teki S, Willmore BDB. Recent advances in understanding the auditory cortex. F1000Res 2018; 7.
Abstract
Our ability to make sense of the auditory world results from neural processing that begins in the ear, goes through multiple subcortical areas, and continues in the cortex. The specific contribution of the auditory cortex to this chain of processing is far from understood. Although many of the properties of neurons in the auditory cortex resemble those of subcortical neurons, they show somewhat more complex selectivity for sound features, which is likely to be important for the analysis of natural sounds, such as speech, in real-life listening conditions. Furthermore, recent work has shown that auditory cortical processing is highly context-dependent, integrates auditory inputs with other sensory and motor signals, depends on experience, and is shaped by cognitive demands, such as attention. Thus, in addition to being the locus for more complex sound selectivity, the auditory cortex is increasingly understood to be an integral part of the network of brain regions responsible for prediction, auditory perceptual decision-making, and learning. In this review, we focus on three key areas that are contributing to this understanding: the sound features that are preferentially represented by cortical neurons, the spatial organization of those preferences, and the cognitive roles of the auditory cortex.
Affiliation(s)
- Andrew J King
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, OX1 3PT, UK
- Sundeep Teki
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, OX1 3PT, UK
- Ben D B Willmore
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, OX1 3PT, UK
10
David SV. Incorporating behavioral and sensory context into spectro-temporal models of auditory encoding. Hear Res 2018; 360:107-123. PMID: 29331232; PMCID: PMC6292525; DOI: 10.1016/j.heares.2017.12.021.
Abstract
For several decades, auditory neuroscientists have used spectro-temporal encoding models to understand how neurons in the auditory system represent sound. Derived from early applications of systems identification tools to the auditory periphery, the spectro-temporal receptive field (STRF) and more sophisticated variants have emerged as an efficient means of characterizing representation throughout the auditory system. Most of these encoding models describe neurons as static sensory filters. However, auditory neural coding is not static. Sensory context, reflecting the acoustic environment, and behavioral context, reflecting the internal state of the listener, can both influence sound-evoked activity, particularly in central auditory areas. This review explores recent efforts to integrate context into spectro-temporal encoding models. It begins with a brief tutorial on the basics of estimating and interpreting STRFs. Then it describes three recent studies that have characterized contextual effects on STRFs, emerging over a range of timescales, from many minutes to tens of milliseconds. An important theme of this work is not simply that context influences auditory coding, but also that contextual effects span a large continuum of internal states. The added complexity of these context-dependent models introduces new experimental and theoretical challenges that must be addressed before the models can be used effectively. Several new methodological advances promise to address these limitations and allow the development of more comprehensive context-dependent models in the future.
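The STRF basics covered in the review's tutorial reduce, in the simplest case, to regularized linear regression on a lagged stimulus design matrix. A minimal sketch on simulated data follows; in practice the ridge penalty would be set by cross-validation, and the data here are placeholders.

```python
# Sketch: basic STRF estimation by ridge regression on lagged spectrogram frames.
import numpy as np

rng = np.random.default_rng(7)
n_freq, n_lag, n_t = 18, 15, 20000
spec = rng.standard_normal((n_t, n_freq))        # stimulus spectrogram (time x freq)

# Design matrix: each row stacks the current and n_lag - 1 preceding frames.
X = np.hstack([np.roll(spec, lag, axis=0) for lag in range(n_lag)])[n_lag:]
true_strf = rng.standard_normal(n_freq * n_lag)
y = X @ true_strf + rng.standard_normal(len(X))  # simulated neural response

lam = 10.0                                       # ridge penalty (cross-validate in practice)
strf = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
strf_2d = strf.reshape(n_lag, n_freq)            # lag x frequency filter for inspection
```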
Affiliation(s)
- Stephen V David
- Oregon Hearing Research Center, Oregon Health & Science University, 3181 SW Sam Jackson Park Rd, MC L335A, Portland, OR 97239, United States.