1
Atilgan H, Walker KM, King AJ, Schnupp JW, Bizley JK. Auditory Training Alters the Cortical Representation of Complex Sounds. J Neurosci 2025; 45:e0989242025. PMID: 40180572; PMCID: PMC12044038; DOI: 10.1523/jneurosci.0989-24.2025.
Abstract
Auditory learning is supported by long-term changes in the neural processing of sound. We examined these task-dependent changes in the auditory cortex by mapping neural sensitivity to timbre, pitch, and location cues in trained (n = 5) and untrained control female ferrets (n = 5). Trained animals either identified vowels in a two-alternative forced-choice task (n = 3) or discriminated when a repeating vowel changed in identity or pitch (n = 2). Neural responses were recorded under anesthesia in two primary auditory cortical fields and two tonotopically organized nonprimary fields. In trained animals, the overall sensitivity to sound timbre was reduced across three cortical fields compared with control animals, but maintained in a nonprimary field (the posterior pseudosylvian field). While training did not increase sensitivity to timbre across the auditory cortex, it did change the way in which neurons integrated spectral information, with neural responses in trained animals increasing their sensitivity to first and second formant frequencies, whereas in control animals cortical sensitivity to spectral timbre depended mostly on the second formant. Animals trained on timbre identification were required to generalize across pitch when discriminating timbre, and their neurons became less modulated by fundamental frequency relative to control animals. Finally, both trained groups showed increased spatial sensitivity and an enhanced response to sound source locations close to the midline, where the loudspeaker was located in the training chamber. These results demonstrate that training elicited widespread alterations in the cortical representation of complex sounds.
Affiliation(s)
- Huriye Atilgan
- The Ear Institute, University College London, London WC1X 8EE, United Kingdom
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford OX1 3PT, United Kingdom
- Kerry M Walker
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford OX1 3PT, United Kingdom
- Andrew J King
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford OX1 3PT, United Kingdom
- Jan W Schnupp
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford OX1 3PT, United Kingdom
- Gerald Choa Neuroscience Institute, The Chinese University of Hong Kong, Hong Kong, Sha Tin
- Jennifer K Bizley
- The Ear Institute, University College London, London WC1X 8EE, United Kingdom
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford OX1 3PT, United Kingdom
2
Conway M, Oncul M, Allen K, Zhang Z, Johnston J. Perceptual constancy for an odor is acquired through changes in primary sensory neurons. Sci Adv 2024; 10:eado9205. PMID: 39661686; PMCID: PMC11633753; DOI: 10.1126/sciadv.ado9205.
Abstract
The ability to consistently recognize an object despite variable sensory input is termed perceptual constancy. This ability is not innate; rather, it develops with experience early in life. We show that, when mice are naïve to an odor object, perceptual constancy is absent across increasing concentrations. The perceptual change coincides with a rapid reduction in activity from a single olfactory receptor channel that is most sensitive to the odor. This drop in activity is not a property of circuit interactions within the olfactory bulb; instead, it is due to a sensitivity mismatch of olfactory receptor neurons within the nose. We show that, after forming an association of this odor with food, the sensitivity of the receptor channel is matched to the odor object, preventing transmission failure and promoting perceptual stability. These data show that plasticity of the primary sensory organ enables learning of perceptual constancy.
Affiliation(s)
- Mark Conway
- School of Biomedical Sciences, Faculty of Biological Sciences, University of Leeds, Leeds, UK
- Merve Oncul
- School of Biomedical Sciences, Faculty of Biological Sciences, University of Leeds, Leeds, UK
- Kate Allen
- School of Biomedical Sciences, Faculty of Biological Sciences, University of Leeds, Leeds, UK
- Zongqian Zhang
- School of Biomedical Sciences, Faculty of Biological Sciences, University of Leeds, Leeds, UK
- Jamie Johnston
- School of Biomedical Sciences, Faculty of Biological Sciences, University of Leeds, Leeds, UK
3
Chen C, Song S. Distinct Neuron Types Contribute to Hybrid Auditory Spatial Coding. J Neurosci 2024; 44:e0159242024. PMID: 39261006; PMCID: PMC11502229; DOI: 10.1523/jneurosci.0159-24.2024.
Abstract
Neural decoding is a tool for understanding how activities from a population of neurons inside the brain relate to the outside world and for engineering applications such as brain-machine interfaces. However, neural decoding studies have mainly focused on different decoding algorithms rather than on different neuron types, which could use different coding strategies. In this study, we used two-photon calcium imaging to assess three auditory spatial decoders (space map, opponent channel, and population pattern) in excitatory and inhibitory neurons in the dorsal inferior colliculus of male and female mice. Our findings revealed a clustering of excitatory neurons that prefer similar interaural level differences (ILDs), the primary spatial cue in mice, while inhibitory neurons showed random local ILD organization. We found that inhibitory neurons displayed lower decoding variability under the opponent channel decoder, while excitatory neurons achieved higher decoding accuracy under the space map and population pattern decoders. Further analysis revealed that the inhibitory neurons' preference for ILDs off the midline and the excitatory neurons' heterogeneous ILD tuning account for their decoding differences. Additionally, we discovered sharper ILD tuning in the inhibitory neurons. Our computational model, linking this to increased presynaptic inhibitory inputs, was corroborated using monaural and binaural stimuli. Overall, this study provides experimental and computational insight into how excitatory and inhibitory neurons uniquely contribute to the coding of sound locations.
Affiliation(s)
- Chenggang Chen
- Tsinghua Laboratory of Brain and Intelligence and School of Biomedical Engineering, McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China
- Sen Song
- Tsinghua Laboratory of Brain and Intelligence and School of Biomedical Engineering, McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China
4
Martin A, Souffi S, Huetz C, Edeline JM. Can Extensive Training Transform a Mouse into a Guinea Pig? An Evaluation Based on the Discriminative Abilities of Inferior Colliculus Neurons. Biology 2024; 13:92. PMID: 38392310; PMCID: PMC10886615; DOI: 10.3390/biology13020092.
Abstract
Humans and animals maintain accurate discrimination between communication sounds in the presence of loud sources of background noise. In previous studies performed in anesthetized guinea pigs, we showed that, in the auditory pathway, the highest discriminative abilities between conspecific vocalizations were found in the inferior colliculus (IC). Here, we trained CBA/J mice in a Go/No-Go task to discriminate between two similar guinea pig whistles, first in quiet conditions, then in two types of noise, a stationary noise and a chorus noise, at three signal-to-noise ratios (SNRs). Control mice were passively exposed to the same number of whistles as trained mice. After three months of extensive training, IC neurons were recorded under anesthesia and the responses were quantified as in our previous studies. In quiet, the mean values of the firing rate, the temporal reliability and the mutual information obtained from trained mice were higher than those from the exposed mice and the guinea pigs. In stationary and chorus noise, there were only a few differences between the trained mice and the guinea pigs, and the lowest mean values of the parameters were found in the exposed mice. These results suggest that behavioral training can trigger plasticity in the IC that allows mouse neurons to reach guinea pig-like discrimination abilities.
Affiliation(s)
- Alexandra Martin
- Paris-Saclay Institute of Neuroscience (Neuro-PSI, UMR 9197), CNRS & Université Paris-Saclay, 91400 Saclay, France
- Samira Souffi
- Paris-Saclay Institute of Neuroscience (Neuro-PSI, UMR 9197), CNRS & Université Paris-Saclay, 91400 Saclay, France
- Chloé Huetz
- Paris-Saclay Institute of Neuroscience (Neuro-PSI, UMR 9197), CNRS & Université Paris-Saclay, 91400 Saclay, France
- Jean-Marc Edeline
- Paris-Saclay Institute of Neuroscience (Neuro-PSI, UMR 9197), CNRS & Université Paris-Saclay, 91400 Saclay, France
5
Leonard MK, Gwilliams L, Sellers KK, Chung JE, Xu D, Mischler G, Mesgarani N, Welkenhuysen M, Dutta B, Chang EF. Large-scale single-neuron speech sound encoding across the depth of human cortex. Nature 2024; 626:593-602. PMID: 38093008; PMCID: PMC10866713; DOI: 10.1038/s41586-023-06839-2.
Abstract
Understanding the neural basis of speech perception requires that we study the human brain both at the scale of the fundamental computational unit of neurons and in their organization across the depth of cortex. Here we used high-density Neuropixels arrays to record from 685 neurons across cortical layers at nine sites in a high-level auditory region that is critical for speech, the superior temporal gyrus, while participants listened to spoken sentences. Single neurons encoded a wide range of speech sound cues, including features of consonants and vowels, relative vocal pitch, onsets, amplitude envelope and sequence statistics. Neurons at each cross-laminar recording site exhibited dominant tuning to a primary speech feature while also containing a substantial proportion of neurons that encoded other features, contributing to heterogeneous selectivity. Spatially, neurons at similar cortical depths tended to encode similar speech features. Activity across all cortical layers was predictive of high-frequency field potentials (electrocorticography), providing a neuronal origin for macroelectrode recordings from the cortical surface. Together, these results establish single-neuron tuning across the cortical laminae as an important dimension of speech encoding in the human superior temporal gyrus.
Affiliation(s)
- Matthew K Leonard
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA
- Laura Gwilliams
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA
- Kristin K Sellers
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA
- Jason E Chung
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA
- Duo Xu
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA
- Gavin Mischler
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Department of Electrical Engineering, Columbia University, New York, NY, USA
- Nima Mesgarani
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Department of Electrical Engineering, Columbia University, New York, NY, USA
- Edward F Chang
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA.
- Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA.
6
Funamizu A, Marbach F, Zador AM. Stable sound decoding despite modulated sound representation in the auditory cortex. Curr Biol 2023; 33:4470-4483.e7. PMID: 37802051; PMCID: PMC10665086; DOI: 10.1016/j.cub.2023.09.031.
Abstract
The activity of neurons in the auditory cortex is driven by both sounds and non-sensory context. To investigate the neuronal correlates of non-sensory context, we trained head-fixed mice to perform a two-alternative-choice auditory task in which either reward or stimulus expectation (prior) was manipulated in blocks. Using two-photon calcium imaging to record populations of single neurons in the auditory cortex, we found that both stimulus and reward expectation modulated the activity of these neurons. A linear decoder trained on this population activity could decode stimuli as well or better than predicted by the animal's performance. Interestingly, the optimal decoder was stable even in the face of variable sensory representations. Neither the context nor the mouse's choice could be reliably decoded from the recorded neural activity. Our findings suggest that, in spite of modulation of auditory cortical activity by task priors, the auditory cortex does not represent sufficient information about these priors to exploit them optimally. Thus, the combination of rapidly changing sensory information with more slowly varying task information required for decisions in this task might be represented in brain regions other than the auditory cortex.
Affiliation(s)
- Akihiro Funamizu
- Cold Spring Harbor Laboratory, 1 Bungtown Rd, Cold Spring Harbor, NY 11724, USA.
- Fred Marbach
- Cold Spring Harbor Laboratory, 1 Bungtown Rd, Cold Spring Harbor, NY 11724, USA
- Anthony M Zador
- Cold Spring Harbor Laboratory, 1 Bungtown Rd, Cold Spring Harbor, NY 11724, USA
7
Paraouty N, Yao JD, Varnet L, Chou CN, Chung S, Sanes DH. Sensory cortex plasticity supports auditory social learning. Nat Commun 2023; 14:5828. PMID: 37730696; PMCID: PMC10511464; DOI: 10.1038/s41467-023-41641-8.
Abstract
Social learning (SL) through experience with conspecifics can facilitate the acquisition of many behaviors. Thus, when Mongolian gerbils are exposed to a demonstrator performing an auditory discrimination task, their subsequent task acquisition is facilitated, even in the absence of visual cues. Here, we show that transient inactivation of auditory cortex (AC) during exposure caused a significant delay in task acquisition during the subsequent practice phase, suggesting that AC activity is necessary for SL. Moreover, social exposure induced an improvement in AC neuron sensitivity to auditory task cues. The magnitude of neural change during exposure correlated with task acquisition during practice. In contrast, exposure to only auditory task cues led to poorer neurometric and behavioral outcomes. Finally, social information during exposure was encoded in the AC of observer animals. Together, our results suggest that auditory SL is supported by AC neuron plasticity occurring during social exposure and prior to behavioral performance.
Affiliation(s)
- Nihaad Paraouty
- Center for Neural Science, New York University, New York, NY, 10003, USA.
- Justin D Yao
- Department of Otolaryngology, Rutgers University, New Brunswick, NJ, 08901, USA
- Léo Varnet
- Laboratoire des Systèmes Perceptifs, UMR 8248, Ecole Normale Supérieure, PSL University, Paris, 75005, France
- Chi-Ning Chou
- Center for Computational Neuroscience, Flatiron Institute, Simons Foundation, New York, NY, USA
- School of Engineering & Applied Sciences, Harvard University, Cambridge, MA, 02138, USA
- SueYeon Chung
- Center for Neural Science, New York University, New York, NY, 10003, USA
- Center for Computational Neuroscience, Flatiron Institute, Simons Foundation, New York, NY, USA
- Dan H Sanes
- Center for Neural Science, New York University, New York, NY, 10003, USA
- Department of Psychology, New York University, New York, NY, 10003, USA
- Department of Biology, New York University, New York, NY, 10003, USA
- Neuroscience Institute, NYU Langone Medical Center, New York, NY, 10003, USA
8
Funamizu A, Marbach F, Zador AM. Stable sound decoding despite modulated sound representation in the auditory cortex. bioRxiv [Preprint] 2023:2023.01.31.526457. PMID: 37745428; PMCID: PMC10515783; DOI: 10.1101/2023.01.31.526457.
Abstract
The activity of neurons in the auditory cortex is driven by both sounds and non-sensory context. To investigate the neuronal correlates of non-sensory context, we trained head-fixed mice to perform a two-alternative choice auditory task in which either reward or stimulus expectation (prior) was manipulated in blocks. Using two-photon calcium imaging to record populations of single neurons in auditory cortex, we found that both stimulus and reward expectation modulated the activity of these neurons. A linear decoder trained on this population activity could decode stimuli as well or better than predicted by the animal's performance. Interestingly, the optimal decoder was stable even in the face of variable sensory representations. Neither the context nor the mouse's choice could be reliably decoded from the recorded neural activity. Our findings suggest that, in spite of modulation of auditory cortical activity by task priors, the auditory cortex does not represent sufficient information about these priors to exploit them optimally, and that decisions in this task require rapidly changing sensory information to be combined with more slowly varying task information extracted and represented in brain regions other than the auditory cortex.
Affiliation(s)
- Akihiro Funamizu
- Cold Spring Harbor Laboratory, 1 Bungtown Rd, Cold Spring Harbor, NY 11724, USA
- Present address: Institute for Quantitative Biosciences, the University of Tokyo, 1-1-1 Yayoi, Bunkyo-ku, Tokyo, 1130032, Japan
- Present address: Department of Life Sciences, Graduate School of Arts and Sciences, the University of Tokyo, 3-8-1 Komaba, Meguro-ku, Tokyo, 1538902, Japan
- Fred Marbach
- Cold Spring Harbor Laboratory, 1 Bungtown Rd, Cold Spring Harbor, NY 11724, USA
- Present address: The Francis Crick Institute, 1 Midland Rd, NW1 4AT London, UK
- Anthony M Zador
- Cold Spring Harbor Laboratory, 1 Bungtown Rd, Cold Spring Harbor, NY 11724, USA
9
Liu W, Vicario DS. Dynamic encoding of phonetic categories in zebra finch auditory forebrain. Sci Rep 2023; 13:11172. PMID: 37430030; DOI: 10.1038/s41598-023-37982-5.
Abstract
Vocal communication requires the formation of acoustic categories to enable invariant representations of sounds despite superficial variations. Humans form acoustic categories for speech phonemes, enabling the listener to recognize words independent of speakers; animals can also discriminate speech phonemes. We investigated the neural mechanisms of this process using electrophysiological recordings from the zebra finch secondary auditory area, caudomedial nidopallium (NCM), during passive exposure to human speech stimuli consisting of two naturally spoken words produced by multiple speakers. Analysis of neural distance and decoding accuracy showed improvements in neural discrimination between word categories over the course of exposure, and this improved representation transferred to the same words by novel speakers. We conclude that NCM neurons formed generalized representations of word categories independent of speaker-specific variations that became more refined over the course of passive exposure. The discovery of this dynamic encoding process in NCM suggests a general processing mechanism for forming categorical representations of complex acoustic signals that humans share with other animals.
Affiliation(s)
- Wanyi Liu
- Department of Psychology, Rutgers, The State University of New Jersey, Piscataway, NJ, 08854, USA.
- David S Vicario
- Department of Psychology, Rutgers, The State University of New Jersey, Piscataway, NJ, 08854, USA.
10
Yao JD, Zemlianova KO, Hocker DL, Savin C, Constantinople CM, Chung S, Sanes DH. Transformation of acoustic information to sensory decision variables in the parietal cortex. Proc Natl Acad Sci U S A 2023; 120:e2212120120. PMID: 36598952; PMCID: PMC9926273; DOI: 10.1073/pnas.2212120120.
Abstract
The process by which sensory evidence contributes to perceptual choices requires an understanding of its transformation into decision variables. Here, we address this issue by evaluating the neural representation of acoustic information in the auditory cortex-recipient parietal cortex while gerbils either performed a two-alternative forced-choice auditory discrimination task or passively listened to identical acoustic stimuli. During task engagement, stimulus identity decoding performance from simultaneously recorded parietal neurons significantly correlated with psychometric sensitivity. In contrast, decoding performance during passive listening was significantly reduced. Principal component and geometric analyses revealed the emergence of low-dimensional encoding of linearly separable manifolds with respect to stimulus identity and decision, but only during task engagement. These findings confirm that the parietal cortex mediates a transition of acoustic representations into decision-related variables. Finally, using a clustering analysis, we identified three functionally distinct subpopulations of neurons that each encoded task-relevant information during separate temporal segments of a trial. Taken together, our findings demonstrate how parietal cortex neurons integrate and transform encoded auditory information to guide sound-driven perceptual decisions.
Affiliation(s)
- Justin D. Yao
- Center for Neural Science, New York University, New York, NY 10003
- Department of Otolaryngology, Head and Neck Surgery, Rutgers Robert Wood Johnson Medical School, New Brunswick, NJ 08901
- Brain Health Institute, Rutgers University, Piscataway, NJ 08854
- David L. Hocker
- Center for Neural Science, New York University, New York, NY 10003
- Cristina Savin
- Center for Neural Science, New York University, New York, NY 10003
- Neuroscience Institute, New York University Langone School of Medicine, New York, NY 10016
- Center for Data Science, New York University, New York, NY 10011
- Christine M. Constantinople
- Center for Neural Science, New York University, New York, NY 10003
- Neuroscience Institute, New York University Langone School of Medicine, New York, NY 10016
- SueYeon Chung
- Center for Neural Science, New York University, New York, NY 10003
- Flatiron Institute, Simons Foundation, New York, NY 10010
- Dan H. Sanes
- Center for Neural Science, New York University, New York, NY 10003
- Neuroscience Institute, New York University Langone School of Medicine, New York, NY 10016
- Department of Psychology, New York University, New York, NY 10003
- Department of Biology, New York University, New York, NY 10003
11
Suri H, Rothschild G. Enhanced stability of complex sound representations relative to simple sounds in the auditory cortex. eNeuro 2022; 9:ENEURO.0031-22.2022. PMID: 35868858; PMCID: PMC9347310; DOI: 10.1523/eneuro.0031-22.2022.
Abstract
Typical everyday sounds, such as those of speech or running water, are spectrotemporally complex. The ability to recognize complex sounds (CxS) and their associated meaning is presumed to rely on their stable neural representations across time. The auditory cortex is critical for processing of CxS, yet little is known of the degree of stability of auditory cortical representations of CxS across days. Previous studies have shown that the auditory cortex represents CxS identity with a substantial degree of invariance to basic sound attributes such as frequency. We therefore hypothesized that auditory cortical representations of CxS are more stable across days than those of sounds that lack spectrotemporal structure, such as pure tones (PTs). To test this hypothesis, we recorded responses of identified L2/3 auditory cortical excitatory neurons to both PTs and CxS across days using two-photon calcium imaging in awake mice. Auditory cortical neurons showed significant daily changes of responses to both types of sounds, yet responses to CxS exhibited significantly lower rates of daily change than those of PTs. Furthermore, daily changes in response profiles to PTs tended to be more stimulus-specific, reflecting changes in sound selectivity, as compared to changes of CxS responses. Lastly, the enhanced stability of responses to CxS was evident across longer time intervals as well. Together, these results suggest that spectrotemporally complex sounds are more stably represented in the auditory cortex across time than pure tones. These findings support a role of the auditory cortex in representing CxS identity across time.
Significance Statement
The ability to recognize everyday complex sounds such as those of speech or running water is presumed to rely on their stable neural representations. Yet, little is known of the degree of stability of single-neuron sound responses across days. As the auditory cortex is critical for complex sound perception, we hypothesized that the auditory cortical representations of complex sounds are relatively stable across days. To test this, we recorded sound responses of identified auditory cortical neurons across days in awake mice. We found that auditory cortical responses to complex sounds are significantly more stable across days as compared to those of simple pure tones. These findings support a role of the auditory cortex in representing complex sound identity across time.
Affiliation(s)
- Harini Suri
- Department of Psychology, University of Michigan, Ann Arbor, MI, 48109, USA
- Gideon Rothschild
- Department of Psychology, University of Michigan, Ann Arbor, MI, 48109, USA
- Kresge Hearing Research Institute and Department of Otolaryngology - Head and Neck Surgery, University of Michigan, Ann Arbor, MI 48109, USA
12
Slonina ZA, Poole KC, Bizley JK. What can we learn from inactivation studies? Lessons from auditory cortex. Trends Neurosci 2021; 45:64-77. PMID: 34799134; PMCID: PMC8897194; DOI: 10.1016/j.tins.2021.10.005.
Abstract
Inactivation experiments in auditory cortex (AC) produce widely varying results that complicate interpretations regarding the precise role of AC in auditory perception and ensuing behaviour. The advent of optogenetic methods in neuroscience offers previously unachievable insight into the mechanisms transforming brain activity into behaviour. With a view to aiding the design and interpretation of future studies in and outside AC, here we discuss the methodological challenges faced in manipulating neural activity. While considering AC's role in auditory behaviour through the prism of inactivation experiments, we consider the factors that confound the interpretation of the effects of inactivation on behaviour, including the species, the type of inactivation, the behavioural task employed, and the exact location of the inactivation.
Wide variation in the outcome of auditory cortex inactivation has been an impediment to clear conclusions regarding the roles of the auditory cortex in behaviour. Inactivation methods differ in their efficacy and specificity. The likelihood of observing a behavioural deficit is additionally influenced by factors such as the species being used, task design and reward. A synthesis of previous results suggests that auditory cortex involvement is critical for tasks that require integrating across multiple stimulus features, and less likely to be critical for simple feature discriminations. New methods of neural silencing provide opportunities for spatially and temporally precise manipulation of activity, allowing perturbation of individual subfields and specific circuits.
13
Melchor J, Vergara J, Figueroa T, Morán I, Lemus L. Formant-Based Recognition of Words and Other Naturalistic Sounds in Rhesus Monkeys. Front Neurosci 2021; 15:728686. PMID: 34776842; PMCID: PMC8586527; DOI: 10.3389/fnins.2021.728686.
Abstract
In social animals, identifying sounds is critical for communication. In humans, the acoustic parameters involved in speech recognition, such as the formant frequencies derived from the resonances of the supralaryngeal vocal tract, have been well documented. However, how formants contribute to recognizing learned sounds in non-human primates remains unclear. To address this, we trained two rhesus monkeys to discriminate target from non-target sounds presented in sequences of 1–3 sounds. After training, we performed three experiments, testing: (1) the monkeys’ accuracy and reaction times during discrimination of various acoustic categories; (2) their ability to discriminate morphed sounds; and (3) their ability to identify sounds filtered to contain only formant 1 (F1), formant 2 (F2), or both (F1F2). Our results indicate that macaques can learn diverse sounds and can discriminate morphs as well as the formants F1 and F2, suggesting that information from a few acoustic parameters suffices for recognizing complex sounds. We anticipate that future neurophysiological experiments using this paradigm may help elucidate how formants contribute to sound recognition.
Affiliation(s)
- Jonathan Melchor
- Department of Cognitive Neuroscience, Institute of Cell Physiology, Universidad Nacional Autónoma de México, Mexico City, Mexico
- José Vergara
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, United States
- Tonatiuh Figueroa
- Department of Cognitive Neuroscience, Institute of Cell Physiology, Universidad Nacional Autónoma de México, Mexico City, Mexico
- Isaac Morán
- Department of Cognitive Neuroscience, Institute of Cell Physiology, Universidad Nacional Autónoma de México, Mexico City, Mexico
- Luis Lemus
- Department of Cognitive Neuroscience, Institute of Cell Physiology, Universidad Nacional Autónoma de México, Mexico City, Mexico
14
Amaro D, Ferreiro DN, Grothe B, Pecka M. Source identity shapes spatial preference in primary auditory cortex during active navigation. Curr Biol 2021; 31:3875-3883.e5. [PMID: 34192513] [DOI: 10.1016/j.cub.2021.06.025]
Abstract
Information about the position of sensory objects, and about their current behavioral relevance, is vital for navigating the environment. In the auditory system, spatial information is computed in the brain from the position of the sound source relative to the observer, and is thus assumed to be egocentric throughout the auditory pathway. This assumption is largely based on studies conducted in either anesthetized or head-fixed, passively listening animals, and therefore lacking self-motion and selective listening. Yet these factors are fundamental components of natural sensing that may crucially shape spatial coding and sensory object representation. How individual objects are neuronally represented during unrestricted self-motion and active sensing remains mostly unexplored. Here, we trained gerbils on a behavioral foraging paradigm that required localization and identification of sound sources during free navigation. Chronic tetrode recordings in primary auditory cortex during task performance revealed previously unreported sensory object representations. Strikingly, the egocentric angle preference of the majority of spatially sensitive neurons changed significantly depending on the task-specific identity (outcome association) of the sound source. Spatial tuning also exhibited large temporal complexity. Moreover, we encountered egocentrically untuned neurons whose response magnitude differed between source identities. Using a neural network decoder, we show that, together, these neuronal response ensembles provide spatiotemporally coexistent information about both the egocentric location and the identity of individual sensory objects during self-motion, revealing a novel cortical computational principle for naturalistic sensing.
Affiliation(s)
- Diana Amaro
- Division of Neurobiology, Department Biology II, Ludwig-Maximilians-Universität München, Planegg-Martinsried, Germany; Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München, Planegg-Martinsried, Germany
- Dardo N Ferreiro
- Division of Neurobiology, Department Biology II, Ludwig-Maximilians-Universität München, Planegg-Martinsried, Germany; Department of General Psychology and Education, Ludwig-Maximilians-Universität München, Germany
- Benedikt Grothe
- Division of Neurobiology, Department Biology II, Ludwig-Maximilians-Universität München, Planegg-Martinsried, Germany; Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München, Planegg-Martinsried, Germany; Max Planck Institute of Neurobiology, Planegg-Martinsried, Germany
- Michael Pecka
- Division of Neurobiology, Department Biology II, Ludwig-Maximilians-Universität München, Planegg-Martinsried, Germany
15
Souffi S, Nodal FR, Bajo VM, Edeline JM. When and How Does the Auditory Cortex Influence Subcortical Auditory Structures? New Insights About the Roles of Descending Cortical Projections. Front Neurosci 2021; 15:690223. [PMID: 34413722] [PMCID: PMC8369261] [DOI: 10.3389/fnins.2021.690223]
Abstract
For decades, the corticofugal descending projections have been anatomically well described, but their functional role remains a puzzling question. In this review, we first describe how neuronal networks from the cochlear nucleus to the primary and secondary auditory cortex represent communication sounds under various types of degraded acoustic conditions. In such situations, the discrimination abilities of collicular and thalamic neurons are clearly better than those of cortical neurons, although the latter remain little affected by degraded acoustic conditions. Second, we report the effects of activating or inactivating corticofugal projections on the functional properties of subcortical neurons. In general, modest effects have been observed in anesthetized and in awake, passively listening animals. In contrast, in behavioral tasks involving challenging conditions, behavioral performance was severely reduced by removing or transiently silencing the corticofugal descending projections. This suggests that the discriminative abilities of subcortical neurons may be sufficient in many acoustic situations, and that it is only in particularly challenging situations, whether due to task difficulty and/or degraded acoustic conditions, that the corticofugal descending connections confer additional abilities. We propose that it is both the top-down influences from the prefrontal cortex and those from the neuromodulatory systems that allow the cortical descending projections to affect behavioral performance by reshaping the functional circuitry of subcortical structures, and we outline potential scenarios to explain how, and under which circumstances, these projections impact subcortical processing and behavioral responses.
Affiliation(s)
- Samira Souffi
- Department of Integrative and Computational Neurosciences, Paris-Saclay Institute of Neuroscience (NeuroPSI), UMR CNRS 9197, Paris-Saclay University, Orsay, France
- Fernando R. Nodal
- Department of Physiology, Anatomy and Genetics, Medical Sciences Division, University of Oxford, Oxford, United Kingdom
- Victoria M. Bajo
- Department of Physiology, Anatomy and Genetics, Medical Sciences Division, University of Oxford, Oxford, United Kingdom
- Jean-Marc Edeline
- Department of Integrative and Computational Neurosciences, Paris-Saclay Institute of Neuroscience (NeuroPSI), UMR CNRS 9197, Paris-Saclay University, Orsay, France
16
Morán I, Perez-Orive J, Melchor J, Figueroa T, Lemus L. Auditory decisions in the supplementary motor area. Prog Neurobiol 2021; 202:102053. [PMID: 33957182] [DOI: 10.1016/j.pneurobio.2021.102053]
Abstract
In human speech and in communication across various species, recognizing and categorizing sounds is fundamental to selecting appropriate behaviors. But how does the brain decide which action to perform in response to a sound? We explored whether the supplementary motor area (SMA), responsible for linking sensory information to motor programs, also contributes to auditory-driven decision making. To this end, we trained two rhesus monkeys to discriminate between numerous naturalistic sounds and words learned as target (T) or non-target (nT) categories. We found that the SMA, at both the single-neuron and population levels, performs decision-related computations that transition from auditory to movement representations during this task. Moreover, we demonstrated that the neural population is organized orthogonally during the auditory and movement periods, implying that the SMA performs distinct computations in each. In conclusion, our results suggest that the SMA integrates acoustic information to form categorical signals that drive behavior.
Affiliation(s)
- Isaac Morán
- Department of Cognitive Neuroscience, Institute of Cell Physiology, Universidad Nacional Autónoma de México (UNAM), 04510, Mexico City, Mexico
- Javier Perez-Orive
- Instituto Nacional de Rehabilitacion "Luis Guillermo Ibarra Ibarra", Mexico City, Mexico
- Jonathan Melchor
- Department of Cognitive Neuroscience, Institute of Cell Physiology, Universidad Nacional Autónoma de México (UNAM), 04510, Mexico City, Mexico
- Tonatiuh Figueroa
- Department of Cognitive Neuroscience, Institute of Cell Physiology, Universidad Nacional Autónoma de México (UNAM), 04510, Mexico City, Mexico
- Luis Lemus
- Department of Cognitive Neuroscience, Institute of Cell Physiology, Universidad Nacional Autónoma de México (UNAM), 04510, Mexico City, Mexico
17
Robustness to Noise in the Auditory System: A Distributed and Predictable Property. eNeuro 2021; 8:ENEURO.0043-21.2021. [PMID: 33632813] [PMCID: PMC7986545] [DOI: 10.1523/eneuro.0043-21.2021]
Abstract
Background noise strongly penalizes auditory perception of speech in humans and of vocalizations in animals. Despite this, auditory neurons can still detect communication sounds against considerable levels of background noise. We recorded neuronal responses in the cochlear nucleus (CN), inferior colliculus (IC), auditory thalamus, and primary and secondary auditory cortex of anesthetized guinea pigs to vocalizations presented against either a stationary or a chorus noise at three signal-to-noise ratios (SNRs; −10, 0, and 10 dB). We provide evidence that, at each level of the auditory system, five response behaviors in noise exist along a continuum, from neurons with high-fidelity representations of the signal, mostly found in IC and thalamus, to neurons with high-fidelity representations of the noise, mostly found in CN for the stationary noise and in similar proportions in each structure for the chorus noise. The two cortical areas displayed fewer robust responses than the IC and thalamus. Furthermore, between 21% and 72% of the neurons (depending on the structure) switched categories from one background noise to the other, even though the initial assignment of these neurons to a category was confirmed by a strict bootstrap procedure. Importantly, supervised learning showed that the category of a recording can be predicted with up to 70% accuracy from its responses to the signal alone and the noise alone.
18
Zempeltzi MM, Kisse M, Brunk MGK, Glemser C, Aksit S, Deane KE, Maurya S, Schneider L, Ohl FW, Deliano M, Happel MFK. Task rule and choice are reflected by layer-specific processing in rodent auditory cortical microcircuits. Commun Biol 2020; 3:345. [PMID: 32620808] [PMCID: PMC7335110] [DOI: 10.1038/s42003-020-1073-3]
Abstract
The primary auditory cortex (A1) is an essential, integrative node that encodes the behavioral relevance of acoustic stimuli, predictions, and auditory-guided decision making. However, how this integration is realized within the cortical microcircuitry is not well understood. Here, we characterize layer-specific, spatiotemporal synaptic population activity with chronic, laminar current source density analysis in Mongolian gerbils (Meriones unguiculatus) trained in an auditory Go/NoGo shuttle-box decision-making task. We demonstrate that not only sensory but also task- and choice-related information is represented in the mesoscopic neuronal population code of A1. Using generalized linear mixed-effects models, we found a layer-specific and multiplexed representation of the task rule, action selection, and the animal's behavioral options as accumulating evidence in preparation for correct choices. These findings expand our understanding of how individual layers contribute to the integrative circuitry of sensory cortex in coding task-relevant information and guiding sensory-based decisions.
Affiliation(s)
- Martin Kisse
- Leibniz Institute for Neurobiology, D-39118, Magdeburg, Germany
- Claudia Glemser
- Leibniz Institute for Neurobiology, D-39118, Magdeburg, Germany
- Sümeyra Aksit
- Leibniz Institute for Neurobiology, D-39118, Magdeburg, Germany
- Katrina E Deane
- Leibniz Institute for Neurobiology, D-39118, Magdeburg, Germany
- Shivam Maurya
- Leibniz Institute for Neurobiology, D-39118, Magdeburg, Germany
- Lina Schneider
- Leibniz Institute for Neurobiology, D-39118, Magdeburg, Germany
- Frank W Ohl
- Leibniz Institute for Neurobiology, D-39118, Magdeburg, Germany
- Institute of Biology, Otto von Guericke University, D-39120, Magdeburg, Germany
- Center for Behavioral Brain Sciences (CBBS), 39106, Magdeburg, Germany
- Max F K Happel
- Leibniz Institute for Neurobiology, D-39118, Magdeburg, Germany
- Center for Behavioral Brain Sciences (CBBS), 39106, Magdeburg, Germany
19
Noise-Sensitive But More Precise Subcortical Representations Coexist with Robust Cortical Encoding of Natural Vocalizations. J Neurosci 2020; 40:5228-5246. [PMID: 32444386] [DOI: 10.1523/jneurosci.2731-19.2020]
Abstract
Humans and animals maintain accurate sound discrimination in the presence of loud background noise. It is commonly assumed that this ability relies on the robustness of auditory cortex responses. However, few attempts have been made to characterize neural discrimination of communication sounds masked by noise at each stage of the auditory system, or to quantify noise effects on neuronal discrimination in terms of alterations in amplitude modulations. Here, we measured neural discrimination between communication sounds masked by a vocalization-shaped stationary noise from multiunit responses recorded in the cochlear nucleus, inferior colliculus, auditory thalamus, and primary and secondary auditory cortex at several signal-to-noise ratios (SNRs) in anesthetized male and female guinea pigs. Masking noise decreased sound discrimination by neuronal populations in each auditory structure, but collicular and thalamic populations performed better than cortical populations at each SNR. In contrast, in each auditory structure, discrimination by neuronal populations was only slightly decreased when tone-vocoded vocalizations were tested. These results shed new light on the specific contributions of subcortical structures to robust sound encoding and suggest that distortion of the slow amplitude modulation cues conveyed by communication sounds is one of the factors constraining neuronal discrimination at subcortical and cortical levels. SIGNIFICANCE STATEMENT Dissecting how auditory neurons discriminate communication sounds in noise is a major goal in auditory neuroscience. Robust sound coding in noise is often viewed as a specific property of cortical networks, although this remains to be demonstrated. Here, we tested the discrimination performance of neuronal populations at five levels of the auditory system in response to conspecific vocalizations masked by noise. In each acoustic condition, subcortical neurons discriminated target vocalizations better than cortical neurons, and in each structure the reduction in discrimination performance was related to the reduction in slow amplitude modulation cues.
20
Lieber JD, Bensmaia SJ. Emergence of an Invariant Representation of Texture in Primate Somatosensory Cortex. Cereb Cortex 2019; 30:3228-3239. [PMID: 31813989] [PMCID: PMC7197205] [DOI: 10.1093/cercor/bhz305]
Abstract
A major function of sensory processing is to achieve neural representations of objects that are stable across changes in context and perspective. Small changes in exploratory behavior can lead to large changes in signals at the sensory periphery, thus resulting in ambiguous neural representations of objects. Overcoming this ambiguity is a hallmark of human object recognition across sensory modalities. Here, we investigate how the perception of tactile texture remains stable across exploratory movements of the hand, including changes in scanning speed, despite the concomitant changes in afferent responses. To this end, we scanned a wide range of everyday textures across the fingertips of rhesus macaques at multiple speeds and recorded the responses evoked in tactile nerve fibers and somatosensory cortical neurons (from Brodmann areas 3b, 1, and 2). We found that individual cortical neurons exhibit a wider range of speed-sensitivities than do nerve fibers. The resulting representations of speed and texture in cortex are more independent than are their counterparts in the nerve and account for speed-invariant perception of texture. We demonstrate that this separation of speed and texture information is a natural consequence of previously described cortical computations.
Affiliation(s)
- Justin D Lieber
- Committee on Computational Neuroscience, University of Chicago, Chicago, IL, 60637, USA
- Sliman J Bensmaia
- Committee on Computational Neuroscience, University of Chicago, Chicago, IL, 60637, USA
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL, 60637, USA
21
Elie JE, Theunissen FE. Invariant neural responses for sensory categories revealed by the time-varying information for communication calls. PLoS Comput Biol 2019; 15:e1006698. [PMID: 31557151] [PMCID: PMC6762074] [DOI: 10.1371/journal.pcbi.1006698]
Abstract
Although information theoretic approaches have been used extensively in the analysis of the neural code, they have yet to be used to describe how information is accumulated in time while sensory systems are categorizing dynamic sensory stimuli such as speech sounds or visual objects. Here, we present a novel method to estimate the cumulative information for stimuli or categories. We further define a time-varying categorical information index that, by comparing the information obtained for stimuli versus categories of these same stimuli, quantifies invariant neural representations. We use these methods to investigate the dynamic properties of avian cortical auditory neurons recorded in zebra finches that were listening to a large set of call stimuli sampled from the complete vocal repertoire of this species. We found that the time-varying rates carry 5 times more information than the mean firing rates even in the first 100 ms. We also found that cumulative information has slow time constants (100–600 ms) relative to the typical integration time of single neurons, reflecting the fact that the behaviorally informative features of auditory objects are time-varying sound patterns. When we correlated firing rates and information values, we found that average information correlates with average firing rate but that higher-rates found at the onset response yielded similar information values as the lower-rates found in the sustained response: the onset and sustained response of avian cortical auditory neurons provide similar levels of independent information about call identity and call-type. Finally, our information measures allowed us to rigorously define categorical neurons; these categorical neurons show a high degree of invariance for vocalizations within a call-type. Peak invariance is found around 150 ms after stimulus onset. Surprisingly, call-type invariant neurons were found in both primary and secondary avian auditory areas. 
Just as the recognition of faces requires neural representations that are invariant to scale and rotation, the recognition of behaviorally relevant auditory objects, such as spoken words, requires neural representations that are invariant to the speaker uttering the word and to his or her location. Here, we used information theory to investigate the time course of the neural representation of bird communication calls and of behaviorally relevant categories of these same calls: the call-types of the bird’s repertoire. We found that neurons in both the primary and secondary avian auditory cortex exhibit invariant responses to call renditions within a call-type, suggestive of a potential role for extracting the meaning of these communication calls. We also found that time plays an important role: first, neural responses carry significantly more information when represented by temporal patterns calculated at the small time scale of 10 ms than when measured as average rates and, second, this information accumulates in a non-redundant fashion up to long integration times of 600 ms. This rich temporal neural representation is matched to the temporal richness found in the communication calls of this species.
Affiliation(s)
- Julie E. Elie
- Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, California, United States of America
- Department of Bioengineering, University of California Berkeley, Berkeley, California, United States of America
- Frédéric E. Theunissen
- Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, California, United States of America
- Department of Psychology, University of California Berkeley, Berkeley, California, United States of America
22
Neurons in primary auditory cortex represent sound source location in a cue-invariant manner. Nat Commun 2019; 10:3019. [PMID: 31289272] [PMCID: PMC6616358] [DOI: 10.1038/s41467-019-10868-9]
Abstract
Auditory cortex is required for sound localisation, but how neural firing in auditory cortex underlies our perception of sound sources in space remains unclear. Specifically, whether neurons in auditory cortex represent spatial cues or an integrated representation of auditory space across cues is not known. Here, we measured the spatial receptive fields of neurons in primary auditory cortex (A1) while ferrets performed a relative localisation task. Manipulating the availability of binaural and spectral localisation cues had little impact on ferrets’ performance, or on neural spatial tuning. A subpopulation of neurons encoded spatial position consistently across localisation cue type. Furthermore, neural firing pattern decoders outperformed two-channel model decoders using population activity. Together, these observations suggest that A1 encodes the location of sound sources, as opposed to spatial cue values. The brain's auditory cortex is involved not just in detection of sounds, but also in localizing them. Here, the authors show that neurons in ferret primary auditory cortex (A1) encode the location of sound sources, as opposed to merely reflecting spatial cues.
23
Liu ST, Montes-Lourido P, Wang X, Sadagopan S. Optimal features for auditory categorization. Nat Commun 2019; 10:1302. [PMID: 30899018] [PMCID: PMC6428858] [DOI: 10.1038/s41467-019-09115-y]
Abstract
Humans and vocal animals use vocalizations to communicate with members of their species. A necessary function of auditory perception is to generalize across the high variability inherent in vocalization production and classify them into behaviorally distinct categories ('words' or 'call types'). Here, we demonstrate that detecting mid-level features in calls achieves production-invariant classification. Starting from randomly chosen marmoset call features, we use a greedy search algorithm to determine the most informative and least redundant features necessary for call classification. High classification performance is achieved using only 10-20 features per call type. Predictions of tuning properties of putative feature-selective neurons accurately match some observed auditory cortical responses. This feature-based approach also succeeds for call categorization in other species, and for other complex classification tasks such as caller identification. Our results suggest that high-level neural representations of sounds are based on task-dependent features optimized for specific computational goals.
Affiliation(s)
- Shi Tong Liu
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, 15213, PA, USA
- Pilar Montes-Lourido
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, 15213, PA, USA
- Xiaoqin Wang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, 21205, MD, USA
- Srivatsun Sadagopan
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, 15213, PA, USA
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, 15213, PA, USA
- Department of Otolaryngology, University of Pittsburgh, Pittsburgh, 15213, PA, USA