1. Sweeney CG, Thomas ME, Liu CJ, Vattino LG, Smith KE, Takesian AE. Reliable sensory processing of superficial cortical interneurons is modulated by behavioral state. Cell Rep 2025; 44:115678. PMID: 40349343. DOI: 10.1016/j.celrep.2025.115678.
Abstract
GABAergic interneurons in cortical layer 1 (L1) integrate sensory and top-down inputs to modulate network activity and support learning-related plasticity. However, little is known about how sensory inputs drive L1 interneuron activity. We used two-photon calcium imaging to measure sound-evoked responses in two L1 interneuron populations expressing vasoactive intestinal peptide (VIP) or neuron-derived neurotrophic factor (NDNF) in mouse auditory cortex. We found that L1 interneurons respond to both simple and complex sounds, but their responses are highly variable across trials. Despite this variability, these interneurons respond reliably to a narrow range of stimuli, reflecting selectivity for specific spectrotemporal sound features. Response reliability was modulated by behavioral state and predicted by the activity of neighboring interneurons. These findings reveal that L1 interneurons exhibit sensory tuning and identify the modulation of response reliability as a potential mechanism by which L1 relays state-dependent cues to shape sensory representations.
Affiliation(s)
- Carolyn G Sweeney
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA, USA; Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, MA, USA
- Maryse E Thomas
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA, USA; Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, MA, USA
- Christine Junhui Liu
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA, USA; Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, MA, USA; Graduate Program in Speech and Hearing and Bioscience and Technologies, Harvard Medical School, Boston, MA, USA
- Lucas G Vattino
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA, USA; Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, MA, USA
- Kasey E Smith
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA, USA
- Anne E Takesian
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA, USA; Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, MA, USA.
2. Dipoppa M, Nogueira R, Bugeon S, Friedman Y, Reddy CB, Harris KD, Ringach DL, Miller KD, Carandini M, Fusi S. Adaptation shapes the representational geometry in mouse V1 to efficiently encode the environment. bioRxiv 2025:2024.12.11.628035. PMID: 39896460. PMCID: PMC11785004. DOI: 10.1101/2024.12.11.628035.
Abstract
Sensory adaptation dynamically changes neural responses as a function of previous stimuli, profoundly impacting perception. The response changes induced by adaptation have been characterized in detail in individual neurons and at the population level after averaging across trials. However, it is not clear how adaptation modifies the aspects of the representations that relate more directly to the ability to perceive stimuli, such as their geometry and the noise structure in individual trials. To address this question, we recorded from a population of neurons in the mouse visual cortex and presented one stimulus (an oriented grating) more frequently than the others. We then analyzed these data in terms of representational geometry and studied the ability of a linear decoder to discriminate between similar visual stimuli based on the single-trial population responses. Surprisingly, the discriminability of stimuli near the adaptor increased, even though the responses of individual neurons to these stimuli decreased. Similar changes were observed in artificial neural networks trained to reconstruct the visual stimulus under metabolic constraints. We conclude that the paradoxical effects of adaptation are consistent with the efficient coding framework, allowing the brain to improve the representation of frequent stimuli while limiting the associated metabolic cost.
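The decoding analysis summarized above (a linear decoder discriminating similar stimuli from single-trial population responses) can be sketched in a few lines. This is a hypothetical illustration on simulated data, not the authors' code; the neuron counts, trial counts, and noise model are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trials(n_trials, n_neurons, mean_response, noise_sd=1.0):
    """Single-trial population responses: a mean vector plus trial-to-trial noise."""
    return mean_response + rng.normal(0.0, noise_sd, size=(n_trials, n_neurons))

def loo_decoder_accuracy(trials_a, trials_b):
    """Leave-one-out accuracy of a nearest-centroid (linear) two-stimulus decoder."""
    X = np.vstack([trials_a, trials_b])
    y = np.array([0] * len(trials_a) + [1] * len(trials_b))
    correct = 0
    for i in range(len(X)):
        keep = np.arange(len(X)) != i  # hold out trial i
        mu_a = X[keep & (y == 0)].mean(axis=0)
        mu_b = X[keep & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - mu_b) < np.linalg.norm(X[i] - mu_a))
        correct += pred == y[i]
    return correct / len(X)

n_neurons = 50
base = rng.normal(size=n_neurons)                         # shared response component
delta = rng.normal(size=n_neurons) / np.sqrt(n_neurons)   # small stimulus difference
acc = loo_decoder_accuracy(
    simulate_trials(60, n_neurons, base + delta),
    simulate_trials(60, n_neurons, base - delta),
)
print(f"single-trial decoding accuracy: {acc:.2f}")
```

Under this framing, adaptation can change discriminability (the decoder's accuracy) even while individual mean responses shrink, because accuracy depends on the geometry of the two response distributions, not on overall response magnitude.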
Affiliation(s)
- Mario Dipoppa
- Department of Neurobiology, University of California, Los Angeles, CA, USA
- Center for Theoretical Neuroscience, Zuckerman Institute for Brain Mind and Behavior, Columbia University, NY, USA
- Institute of Neurology, University College London, UK
- Ramon Nogueira
- Center for Theoretical Neuroscience, Zuckerman Institute for Brain Mind and Behavior, Columbia University, NY, USA
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL, USA
- Department of Neurobiology, University of Chicago, Chicago, IL, USA
- Yoni Friedman
- Center for Theoretical Neuroscience, Zuckerman Institute for Brain Mind and Behavior, Columbia University, NY, USA
- Massachusetts Institute of Technology, MA, USA
- Dario L. Ringach
- Department of Neurobiology, University of California, Los Angeles, CA, USA
- Department of Psychology, University of California, Los Angeles, CA, USA
- Kenneth D. Miller
- Center for Theoretical Neuroscience, Zuckerman Institute for Brain Mind and Behavior, Columbia University, NY, USA
- Kavli Institute for Brain Science, Columbia University, NY, USA
- Stefano Fusi
- Center for Theoretical Neuroscience, Zuckerman Institute for Brain Mind and Behavior, Columbia University, NY, USA
- Kavli Institute for Brain Science, Columbia University, NY, USA
3. den Brinker AC, Ouweltjes O, Rietman R, Thackray-Nocera S, Crooks MG, Morice AH. Nighttime Cough Characteristics in Chronic Obstructive Pulmonary Disease Patients. Sensors (Basel) 2025; 25:404. PMID: 39860774. PMCID: PMC11768643. DOI: 10.3390/s25020404.
Abstract
Coughing is a symptom of many respiratory diseases. An increased number of coughs may signal an (upcoming) health issue, while a decreasing number may indicate improved health status. The presence of a cough can be identified by a cough classifier. Cough density fluctuates considerably over the course of a day, with a pattern that is highly subject-dependent. This paper provides a case study of cough patterns from chronic obstructive pulmonary disease (COPD) patients, as determined by a stationary semi-automated cough monitor. It clearly demonstrates the variability of cough density over the observation time, its patient specificity, and its dependence on health status. Furthermore, a previously established empirical finding of a linear relation between the mean and standard deviation of a session's cough count is validated. An alert mechanism incorporating these findings is described.
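The alert idea in the abstract (judging a session's cough count against a patient baseline, with the session-to-session standard deviation predicted linearly from the mean) can be sketched as follows. The rule, slope, intercept, and counts below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical nightly cough counts for one patient (illustrative values only).
history = np.array([12, 15, 9, 14, 11, 13, 10, 16, 12, 14], dtype=float)

# Empirical finding validated in the paper: the standard deviation of a
# session's cough count scales roughly linearly with its mean.
# Slope and intercept here are made-up placeholders.
SLOPE, INTERCEPT = 0.35, 1.0

def cough_alert(history, new_count, k=2.0):
    """Flag a session whose count exceeds the baseline mean by k predicted SDs."""
    mu = history.mean()
    predicted_sd = SLOPE * mu + INTERCEPT
    return bool(new_count > mu + k * predicted_sd)

print(cough_alert(history, 25))  # far above baseline -> True
print(cough_alert(history, 13))  # within normal range -> False
```

Predicting the SD from the mean, rather than estimating it per patient, is what the linear mean-SD relation buys: a usable alert threshold even from short monitoring histories.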
Affiliation(s)
- Okke Ouweltjes
- Philips Digital Standardization & Licensing Research, 5656 AE Eindhoven, The Netherlands;
- Ronald Rietman
- Philips I&S, Innovation Engineering, Data Science and AI, 5656 AE Eindhoven, The Netherlands;
- Susannah Thackray-Nocera
- Department of Academic Respiratory Medicine, Centre for Cardiovascular and Metabolic Research, Hull York Medical School, Cottingham HU16 5JQ, UK; (S.T.-N.); (M.G.C.); (A.H.M.)
- Michael G. Crooks
- Department of Academic Respiratory Medicine, Centre for Cardiovascular and Metabolic Research, Hull York Medical School, Cottingham HU16 5JQ, UK; (S.T.-N.); (M.G.C.); (A.H.M.)
- Alyn H. Morice
- Department of Academic Respiratory Medicine, Centre for Cardiovascular and Metabolic Research, Hull York Medical School, Cottingham HU16 5JQ, UK; (S.T.-N.); (M.G.C.); (A.H.M.)
4. Vogler NW, Chen R, Virkler A, Tu VY, Gottfried JA, Geffen MN. Direct Piriform-to-Auditory Cortical Projections Shape Auditory-Olfactory Integration. J Neurosci 2024; 44:e1140242024. PMID: 39510831. PMCID: PMC11622214. DOI: 10.1523/jneurosci.1140-24.2024.
Abstract
In a real-world environment, the brain must integrate information from multiple sensory modalities, including the auditory and olfactory systems. However, little is known about the neuronal circuits governing how odors influence and modulate sound processing. Here, we investigated the mechanisms underlying auditory-olfactory integration using anatomical, electrophysiological, and optogenetic approaches, focusing on the auditory cortex as a key locus for cross-modal integration. First, retrograde and anterograde viral tracing strategies revealed a direct projection from the piriform cortex to the auditory cortex. Next, using in vivo electrophysiological recordings of neuronal activity in the auditory cortex of awake male or female mice, we found that odors modulate auditory cortical responses to sound. Finally, we used in vivo optogenetic manipulations during electrophysiology to demonstrate that olfactory modulation in the auditory cortex, specifically, odor-driven enhancement of sound responses, depends on direct input from the piriform cortex. Together, our results identify a novel role of piriform-to-auditory cortical circuitry in shaping olfactory modulation in the auditory cortex, shedding new light on the neuronal mechanisms underlying auditory-olfactory integration.
Affiliation(s)
- Nathan W Vogler
- Departments of Otorhinolaryngology, Perelman School of Medicine, University of Pennsylvania
- Ruoyi Chen
- Departments of Otorhinolaryngology, Perelman School of Medicine, University of Pennsylvania
- Alister Virkler
- Neurology, Perelman School of Medicine, University of Pennsylvania
- Violet Y Tu
- Departments of Otorhinolaryngology, Perelman School of Medicine, University of Pennsylvania
- Jay A Gottfried
- Neurology, Perelman School of Medicine, University of Pennsylvania
- Maria N Geffen
- Departments of Otorhinolaryngology, Perelman School of Medicine, University of Pennsylvania
- Neurology, Perelman School of Medicine, University of Pennsylvania
- Neuroscience, Perelman School of Medicine, University of Pennsylvania
5. De A, Agarwalla S, Kaushik R, Mandal D, Bandyopadhyay S. Differential Encoding of Two-Tone Harmonics in the Male and Female Mouse Auditory Cortex. J Neurosci 2024; 44:e0364242024. PMID: 39299802. PMCID: PMC11529816. DOI: 10.1523/jneurosci.0364-24.2024.
Abstract
Harmonics are an integral part of music, speech, and animal vocalizations. Since the rest of the auditory environment is primarily made up of nonharmonic sounds, the auditory system needs to perceptually separate these two kinds of sounds. In mice, harmonics, generally with two tone components (two-tone harmonic complexes, TTHCs), form an important component of vocal communication. Communication by pups during isolation from the mother and by adult males during courtship elicits typical behaviors in female mice, in dams and adult courting females, respectively. Our study shows that the processing of TTHCs is specialized in mice, providing a neural basis for perceptual differences between tones, TTHCs, and nonharmonic sounds. In vivo extracellular recordings and two-photon Ca2+ imaging of excitatory and inhibitory neurons in the primary auditory cortex (Au1) show that responses to TTHCs exhibit enhancement, suppression, or no effect relative to tones. Irrespective of neuron type, harmonic enhancement is maximized, and suppression is minimized, when the fundamental frequency (F0) matches the neuron's best fundamental frequency (BF0). Sex-specific processing of TTHCs is evident from differences in the distributions of neurons' best frequency (BF) and best fundamental frequency (BF0) in single units, from differences in harmonic-suppressed cases relative to BF0, independent of neuron type, and from pairwise noise correlations among excitatory and parvalbumin inhibitory interneurons. Furthermore, TTHCs elicit a higher response than two-tone nonharmonics in females, but not in males. Thus, our study shows specialized neural processing of TTHCs over tones and nonharmonics, highlighting local network specialization among different neuronal types.
Affiliation(s)
- Amiyangshu De
- Information Processing Laboratory, Department of Electronics and Electrical Communication Engineering, IIT Kharagpur, Kharagpur 721302, India
- Advanced Technology Development Centre, IIT Kharagpur, Kharagpur 721302, India
- Swapna Agarwalla
- Information Processing Laboratory, Department of Electronics and Electrical Communication Engineering, IIT Kharagpur, Kharagpur 721302, India
- Raghavendra Kaushik
- Information Processing Laboratory, Department of Electronics and Electrical Communication Engineering, IIT Kharagpur, Kharagpur 721302, India
- Mechanical Engineering Department, IIT Kharagpur, Kharagpur 721302, India
- Debdut Mandal
- Information Processing Laboratory, Department of Electronics and Electrical Communication Engineering, IIT Kharagpur, Kharagpur 721302, India
- Sharba Bandyopadhyay
- Information Processing Laboratory, Department of Electronics and Electrical Communication Engineering, IIT Kharagpur, Kharagpur 721302, India
- Advanced Technology Development Centre, IIT Kharagpur, Kharagpur 721302, India
6. Norman-Haignere SV, Keshishian MK, Devinsky O, Doyle W, McKhann GM, Schevon CA, Flinker A, Mesgarani N. Temporal integration in human auditory cortex is predominantly yoked to absolute time, not structure duration. bioRxiv 2024:2024.09.23.614358. PMID: 39386565. PMCID: PMC11463558. DOI: 10.1101/2024.09.23.614358.
Abstract
Sound structures such as phonemes and words have highly variable durations. Thus, there is a fundamental difference between integrating across absolute time (e.g., 100 ms) vs. sound structure (e.g., phonemes). Auditory and cognitive models have traditionally cast neural integration in terms of time and structure, respectively, but the extent to which cortical computations reflect time or structure remains unknown. To answer this question, we rescaled the duration of all speech structures using time stretching/compression and measured integration windows in the human auditory cortex using a new experimental/computational method applied to spatiotemporally precise intracranial recordings. We observed significantly longer integration windows for stretched speech, but this lengthening was very small (~5%) relative to the change in structure durations, even in non-primary regions strongly implicated in speech-specific processing. These findings demonstrate that time-yoked computations dominate throughout the human auditory cortex, placing important constraints on neurocomputational models of structure processing.
Affiliation(s)
- Sam V Norman-Haignere
- University of Rochester Medical Center, Department of Biostatistics and Computational Biology
- University of Rochester Medical Center, Department of Neuroscience
- University of Rochester, Department of Brain and Cognitive Sciences
- University of Rochester, Department of Biomedical Engineering
- Zuckerman Institute for Mind Brain and Behavior, Columbia University
- Menoua K. Keshishian
- Zuckerman Institute for Mind Brain and Behavior, Columbia University
- Department of Electrical Engineering, Columbia University
- Orrin Devinsky
- Department of Neurology, NYU Langone Medical Center
- Comprehensive Epilepsy Center, NYU Langone Medical Center
- Werner Doyle
- Comprehensive Epilepsy Center, NYU Langone Medical Center
- Department of Neurosurgery, NYU Langone Medical Center
- Guy M. McKhann
- Department of Neurological Surgery, Columbia University Irving Medical Center
- Adeen Flinker
- Department of Neurology, NYU Langone Medical Center
- Comprehensive Epilepsy Center, NYU Langone Medical Center
- Department of Biomedical Engineering, NYU Tandon School of Engineering
- Nima Mesgarani
- Zuckerman Institute for Mind Brain and Behavior, Columbia University
- Department of Electrical Engineering, Columbia University
7. Vogler NW, Chen R, Virkler A, Tu VY, Gottfried JA, Geffen MN. Direct piriform-to-auditory cortical projections shape auditory-olfactory integration. bioRxiv 2024:2024.07.11.602976. PMID: 39071445. PMCID: PMC11275881. DOI: 10.1101/2024.07.11.602976.
Abstract
In a real-world environment, the brain must integrate information from multiple sensory modalities, including the auditory and olfactory systems. However, little is known about the neuronal circuits governing how odors influence and modulate sound processing. Here, we investigated the mechanisms underlying auditory-olfactory integration using anatomical, electrophysiological, and optogenetic approaches, focusing on the auditory cortex as a key locus for cross-modal integration. First, retrograde and anterograde viral tracing strategies revealed a direct projection from the piriform cortex to the auditory cortex. Next, using in vivo electrophysiological recordings of neuronal activity in the auditory cortex of awake male or female mice, we found that odors modulate auditory cortical responses to sound. Finally, we used in vivo optogenetic manipulations during electrophysiology to demonstrate that olfactory modulation in auditory cortex, specifically, odor-driven enhancement of sound responses, depends on direct input from the piriform cortex. Together, our results identify a novel role of piriform-to-auditory cortical circuitry in shaping olfactory modulation in the auditory cortex, shedding new light on the neuronal mechanisms underlying auditory-olfactory integration.
Affiliation(s)
- Nathan W. Vogler
- Department of Otorhinolaryngology, Perelman School of Medicine, University of Pennsylvania
- Ruoyi Chen
- Department of Otorhinolaryngology, Perelman School of Medicine, University of Pennsylvania
- Alister Virkler
- Department of Neurology, Perelman School of Medicine, University of Pennsylvania
- Violet Y. Tu
- Department of Otorhinolaryngology, Perelman School of Medicine, University of Pennsylvania
- Jay A. Gottfried
- Department of Neurology, Perelman School of Medicine, University of Pennsylvania
- Maria N. Geffen
- Department of Otorhinolaryngology, Perelman School of Medicine, University of Pennsylvania
8. Vattino LG, MacGregor CP, Liu CJ, Sweeney CG, Takesian AE. Primary auditory thalamus relays directly to cortical layer 1 interneurons. bioRxiv 2024:2024.07.16.603741. PMID: 39071266. PMCID: PMC11275971. DOI: 10.1101/2024.07.16.603741.
Abstract
Inhibitory interneurons within cortical layer 1 (L1-INs) integrate inputs from diverse brain regions to modulate sensory processing and plasticity, but the sensory inputs that recruit these interneurons have not been identified. Here we used monosynaptic retrograde tracing and whole-cell electrophysiology to characterize the thalamic inputs onto two major subpopulations of L1-INs in the mouse auditory cortex. We find that the vast majority of auditory thalamic inputs to these L1-INs unexpectedly arise from the ventral subdivision of the medial geniculate body (MGBv), the tonotopically-organized primary auditory thalamus. Moreover, these interneurons receive robust functional monosynaptic MGBv inputs that are comparable to those recorded in the L4 excitatory pyramidal neurons. Our findings identify a direct pathway from the primary auditory thalamus to the L1-INs, suggesting that these interneurons are uniquely positioned to integrate thalamic inputs conveying precise sensory information with top-down inputs carrying information about brain states and learned associations.
Affiliation(s)
- Lucas G. Vattino
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA, USA
- Department of Otolaryngology – Head and Neck Surgery, Harvard Medical School, Boston, MA, USA
- Cathryn P. MacGregor
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA, USA
- These authors contributed equally to this work
- Christine Junhui Liu
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA, USA
- Department of Otolaryngology – Head and Neck Surgery, Harvard Medical School, Boston, MA, USA
- Graduate Program in Speech and Hearing and Bioscience and Technologies, Harvard Medical School, Boston, MA, USA
- These authors contributed equally to this work
- Carolyn G. Sweeney
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA, USA
- Department of Otolaryngology – Head and Neck Surgery, Harvard Medical School, Boston, MA, USA
- Anne E. Takesian
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA, USA
- Department of Otolaryngology – Head and Neck Surgery, Harvard Medical School, Boston, MA, USA
9. Wikman P, Salmela V, Sjöblom E, Leminen M, Laine M, Alho K. Attention to audiovisual speech shapes neural processing through feedback-feedforward loops between different nodes of the speech network. PLoS Biol 2024; 22:e3002534. PMID: 38466713. PMCID: PMC10957087. DOI: 10.1371/journal.pbio.3002534.
Abstract
Selective attention-related top-down modulation plays a significant role in separating relevant speech from irrelevant background speech when vocal attributes separating concurrent speakers are small and continuously evolving. Electrophysiological studies have shown that such top-down modulation enhances neural tracking of attended speech. Yet, the specific cortical regions involved remain unclear due to the limited spatial resolution of most electrophysiological techniques. To overcome such limitations, we collected both electroencephalography (EEG) (high temporal resolution) and functional magnetic resonance imaging (fMRI) (high spatial resolution), while human participants selectively attended to speakers in audiovisual scenes containing overlapping cocktail party speech. To utilise the advantages of the respective techniques, we analysed neural tracking of speech using the EEG data and performed representational dissimilarity-based EEG-fMRI fusion. We observed that attention enhanced neural tracking and modulated EEG correlates throughout the latencies studied. Further, attention-related enhancement of neural tracking fluctuated in predictable temporal profiles. We discuss how such temporal dynamics could arise from a combination of interactions between attention and prediction as well as plastic properties of the auditory cortex. EEG-fMRI fusion revealed attention-related iterative feedforward-feedback loops between hierarchically organised nodes of the ventral auditory object related processing stream. Our findings support models where attention facilitates dynamic neural changes in the auditory cortex, ultimately aiding discrimination of relevant sounds from irrelevant ones while conserving neural resources.
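The representational-dissimilarity-based EEG-fMRI fusion mentioned above correlates, at each EEG time point, the EEG representational dissimilarity matrix (RDM) with the RDM of an fMRI region, tracing when a region's representational geometry emerges in time. A minimal sketch on simulated data follows; the condition counts, dimensionalities, noise model, and use of Pearson correlation are assumptions for illustration, not the study's parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson r between condition patterns."""
    return 1.0 - np.corrcoef(patterns)

def upper(m):
    """Vectorize the upper triangle (the unique pairwise dissimilarities)."""
    return m[np.triu_indices_from(m, k=1)]

def fuse(eeg_rdms, fmri_rdm):
    """Correlate each EEG time point's RDM with one region's fMRI RDM."""
    v = upper(fmri_rdm)
    return np.array([np.corrcoef(upper(r), v)[0, 1] for r in eeg_rdms])

n_cond, n_chan, n_vox, n_time = 8, 32, 200, 5
shared = rng.normal(size=(n_cond, 16))  # latent stimulus geometry common to both modalities
fmri_rdm = rdm(shared @ rng.normal(size=(16, n_vox)) + 0.5 * rng.normal(size=(n_cond, n_vox)))

eeg_rdms = []
for t in range(n_time):
    w = t / (n_time - 1)  # shared structure ramps up over "time" in this simulation
    eeg = w * (shared @ rng.normal(size=(16, n_chan))) + rng.normal(size=(n_cond, n_chan))
    eeg_rdms.append(rdm(eeg))

r = fuse(eeg_rdms, fmri_rdm)
print(np.round(r, 2))  # fusion time course; rises as the shared geometry strengthens
```

Because RDMs abstract away from measurement units, the same fusion curve can be computed against each node of a processing hierarchy, which is how feedforward-feedback ordering between regions can be inferred from the timing of the peaks.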
Affiliation(s)
- Patrik Wikman
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Advanced Magnetic Imaging Centre, Aalto NeuroImaging, Aalto University, Espoo, Finland
- Viljami Salmela
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Advanced Magnetic Imaging Centre, Aalto NeuroImaging, Aalto University, Espoo, Finland
- Eetu Sjöblom
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Miika Leminen
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- AI and Analytics Unit, Helsinki University Hospital, Helsinki, Finland
- Matti Laine
- Department of Psychology, Åbo Akademi University, Turku, Finland
- Kimmo Alho
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Advanced Magnetic Imaging Centre, Aalto NeuroImaging, Aalto University, Espoo, Finland
10. The Grossberg Code: Universal Neural Network Signatures of Perceptual Experience. Information 2023. DOI: 10.3390/info14020082.
Abstract
Two universal functional principles of Grossberg’s Adaptive Resonance Theory decipher the brain code of all biological learning and adaptive intelligence. Low-level representations of multisensory stimuli in their immediate environmental context are formed on the basis of bottom-up activation and under the control of top-down matching rules that integrate high-level, long-term traces of contextual configuration. These universal coding principles lead to the establishment of lasting brain signatures of perceptual experience in all living species, from aplysiae to primates. They are re-visited in this concept paper on the basis of examples drawn from the original code and from some of the most recent related empirical findings on contextual modulation in the brain, highlighting the potential of Grossberg’s pioneering insights and groundbreaking theoretical work for intelligent solutions in the domain of developmental and cognitive robotics.
11. Audette NJ, Zhou W, La Chioma A, Schneider DM. Precise movement-based predictions in the mouse auditory cortex. Curr Biol 2022; 32:4925-4940.e6. PMID: 36283411. PMCID: PMC9691550. DOI: 10.1016/j.cub.2022.09.064.
Abstract
Many of the sensations experienced by an organism are caused by their own actions, and accurately anticipating both the sensory features and timing of self-generated stimuli is crucial to a variety of behaviors. In the auditory cortex, neural responses to self-generated sounds exhibit frequency-specific suppression, suggesting that movement-based predictions may be implemented early in sensory processing. However, it remains unknown whether this modulation results from a behaviorally specific and temporally precise prediction, nor is it known whether corresponding expectation signals are present locally in the auditory cortex. To address these questions, we trained mice to expect the precise acoustic outcome of a forelimb movement using a closed-loop sound-generating lever. Dense neuronal recordings in the auditory cortex revealed suppression of responses to self-generated sounds that was specific to the expected acoustic features, to a precise position within the movement, and to the movement that was coupled to sound during training. Prediction-based suppression was concentrated in L2/3 and L5, where deviations from expectation also recruited a population of prediction-error neurons that was otherwise unresponsive. Recording in the absence of sound revealed abundant movement signals in deep layers that were biased toward neurons tuned to the expected sound, as well as expectation signals that were present throughout the cortex and peaked at the time of expected auditory feedback. Together, these findings identify distinct populations of auditory cortical neurons with movement, expectation, and error signals consistent with a learned internal model linking an action to its specific acoustic outcome.
Affiliation(s)
- Nicholas J Audette
- Center for Neural Science, New York University, 4 Washington Place, New York, NY 10003, USA
- WenXi Zhou
- Center for Neural Science, New York University, 4 Washington Place, New York, NY 10003, USA
- Alessandro La Chioma
- Center for Neural Science, New York University, 4 Washington Place, New York, NY 10003, USA
- David M Schneider
- Center for Neural Science, New York University, 4 Washington Place, New York, NY 10003, USA.
12. Chai X, Liu M, Huang T, Wu M, Li J, Zhao X, Yan T, Song Y, Zhang YX. Neurophysiological evidence for goal-oriented modulation of speech perception. Cereb Cortex 2022; 33:3910-3921. PMID: 35972410. DOI: 10.1093/cercor/bhac315.
Abstract
Speech perception depends on the dynamic interplay of bottom-up and top-down information along a hierarchically organized cortical network. Here, we test, for the first time in the human brain, whether neural processing of attended speech is dynamically modulated by task demand using a context-free discrimination paradigm. Electroencephalographic signals were recorded during 3 parallel experiments that differed only in the phonological feature of discrimination (word, vowel, and lexical tone, respectively). The event-related potentials (ERPs) revealed the task modulation of speech processing at approximately 200 ms (P2) after stimulus onset, probably influencing what phonological information to retain in memory. For the phonological comparison of sequential words, task modulation occurred later at approximately 300 ms (N3 and P3), reflecting the engagement of task-specific cognitive processes. The ERP results were consistent with the changes in delta-theta neural oscillations, suggesting the involvement of cortical tracking of speech envelopes. The study thus provides neurophysiological evidence for goal-oriented modulation of attended speech and calls for speech perception models incorporating limited memory capacity and goal-oriented optimization mechanisms.
Affiliation(s)
- Xiaoke Chai
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
| | - Min Liu
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
| | - Ting Huang
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
| | - Meiyun Wu
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
| | - Jinhong Li
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
| | - Xue Zhao
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
| | - Tingting Yan
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
| | - Yan Song
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
| | - Yu-Xuan Zhang
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
| |
13
Suri H, Rothschild G. Enhanced stability of complex sound representations relative to simple sounds in the auditory cortex. eNeuro 2022; 9:ENEURO.0031-22.2022. [PMID: 35868858 PMCID: PMC9347310 DOI: 10.1523/eneuro.0031-22.2022] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2022] [Revised: 06/29/2022] [Accepted: 06/30/2022] [Indexed: 11/29/2022] Open
Abstract
Typical everyday sounds, such as those of speech or running water, are spectrotemporally complex. The ability to recognize complex sounds (CxS) and their associated meaning is presumed to rely on their stable neural representations across time. The auditory cortex is critical for processing of CxS, yet little is known of the degree of stability of auditory cortical representations of CxS across days. Previous studies have shown that the auditory cortex represents CxS identity with a substantial degree of invariance to basic sound attributes such as frequency. We therefore hypothesized that auditory cortical representations of CxS are more stable across days than those of sounds that lack spectrotemporal structure, such as pure tones (PTs). To test this hypothesis, we recorded responses of identified L2/3 auditory cortical excitatory neurons to both PTs and CxS across days using two-photon calcium imaging in awake mice. Auditory cortical neurons showed significant daily changes of responses to both types of sounds, yet responses to CxS exhibited significantly lower rates of daily change than those of PTs. Furthermore, daily changes in response profiles to PTs tended to be more stimulus-specific, reflecting changes in sound selectivity, as compared to changes of CxS responses. Lastly, the enhanced stability of responses to CxS was evident across longer time intervals as well. Together, these results suggest that spectrotemporally complex sounds are more stably represented in the auditory cortex across time than PTs. These findings support a role of the auditory cortex in representing CxS identity across time.
Significance Statement: The ability to recognize everyday complex sounds such as those of speech or running water is presumed to rely on their stable neural representations. Yet, little is known of the degree of stability of single-neuron sound responses across days. As the auditory cortex is critical for complex sound perception, we hypothesized that the auditory cortical representations of complex sounds are relatively stable across days. To test this, we recorded sound responses of identified auditory cortical neurons across days in awake mice. We found that auditory cortical responses to complex sounds are significantly more stable across days as compared to those of simple pure tones. These findings support a role of the auditory cortex in representing complex sound identity across time.
Affiliation(s)
- Harini Suri: Department of Psychology, University of Michigan, Ann Arbor, MI 48109, USA
- Gideon Rothschild: Department of Psychology, University of Michigan, Ann Arbor, MI 48109, USA; Kresge Hearing Research Institute and Department of Otolaryngology-Head and Neck Surgery, University of Michigan, Ann Arbor, MI 48109, USA
14
Rice A, Širović A, Hildebrand JA, Wood M, Carbaugh-Rutland A, Baumann-Pickering S. Update on frequency decline of Northeast Pacific blue whale (Balaenoptera musculus) calls. PLoS One 2022; 17:e0266469. [PMID: 35363831 PMCID: PMC8975115 DOI: 10.1371/journal.pone.0266469] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2021] [Accepted: 03/21/2022] [Indexed: 12/31/2022] Open
Abstract
Worldwide, the frequency (pitch) of blue whale (Balaenoptera musculus) calls has been decreasing since first recorded in the 1960s. This frequency decline occurs over annual and inter-annual timescales and has recently been documented in other baleen whale species, yet it remains unexplained. In the Northeast Pacific, blue whales produce two calls, or units, that, when regularly repeated, are referred to as song: A and B calls. In this population, frequency decline has thus far only been examined in B calls. In this work, passive acoustic data collected in the Southern California Bight from 2006 to 2019 were examined to determine if A calls are also declining in frequency and whether the call pulse rate was similarly impacted. Additionally, frequency measurements were made for B calls to determine whether the rate of frequency decline is the same as was calculated when this phenomenon was first reported in 2009. We found that A calls decreased at a rate of 0.32 Hz yr⁻¹ during this period and that B calls were still decreasing, albeit at a slower rate (0.27 Hz yr⁻¹) than reported previously. The A call pulse rate also declined over the course of the study, at a rate of 0.006 pulses/s yr⁻¹. With this updated information, we consider the various theories that have been proposed to explain frequency decline in blue whales. We conclude that no current theory adequately accounts for all aspects of this phenomenon and consider the role that individual perception of song frequency may play. To understand the cause behind call frequency decline, future studies might want to explore the function of these songs and the mechanism for their synchronization. The ubiquitous nature of the frequency shift phenomenon may indicate a consistent level of vocal plasticity and fine auditory processing abilities across baleen whale species.
Affiliation(s)
- Ally Rice: Scripps Institution of Oceanography, University of California San Diego, La Jolla, CA, United States of America
- Ana Širović: Texas A&M University at Galveston, Galveston, TX, United States of America
- John A. Hildebrand: Scripps Institution of Oceanography, University of California San Diego, La Jolla, CA, United States of America
- Megan Wood: Texas A&M University at Galveston, Galveston, TX, United States of America
- Simone Baumann-Pickering: Scripps Institution of Oceanography, University of California San Diego, La Jolla, CA, United States of America
15
Norman-Haignere SV, Long LK, Devinsky O, Doyle W, Irobunda I, Merricks EM, Feldstein NA, McKhann GM, Schevon CA, Flinker A, Mesgarani N. Multiscale temporal integration organizes hierarchical computation in human auditory cortex. Nat Hum Behav 2022; 6:455-469. [PMID: 35145280 PMCID: PMC8957490 DOI: 10.1038/s41562-021-01261-y] [Citation(s) in RCA: 29] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2020] [Accepted: 11/18/2021] [Indexed: 01/11/2023]
Abstract
To derive meaning from sound, the brain must integrate information across many timescales. What computations underlie multiscale integration in human auditory cortex? Evidence suggests that auditory cortex analyses sound using both generic acoustic representations (for example, spectrotemporal modulation tuning) and category-specific computations, but the timescales over which these putatively distinct computations integrate remain unclear. To answer this question, we developed a general method to estimate sensory integration windows (the time window when stimuli alter the neural response) and applied our method to intracranial recordings from neurosurgical patients. We show that human auditory cortex integrates hierarchically across diverse timescales spanning from ~50 to 400 ms. Moreover, we find that neural populations with short and long integration windows exhibit distinct functional properties: short-integration electrodes (less than ~200 ms) show prominent spectrotemporal modulation selectivity, while long-integration electrodes (greater than ~200 ms) show prominent category selectivity. These findings reveal how multiscale integration organizes auditory computation in the human brain.
Affiliation(s)
- Sam V Norman-Haignere: Zuckerman Mind, Brain, Behavior Institute, Columbia University; HHMI Postdoctoral Fellow of the Life Sciences Research Foundation
- Laura K. Long: Zuckerman Mind, Brain, Behavior Institute, Columbia University; Doctoral Program in Neurobiology and Behavior, Columbia University
- Orrin Devinsky: Department of Neurology, NYU Langone Medical Center; Comprehensive Epilepsy Center, NYU Langone Medical Center
- Werner Doyle: Comprehensive Epilepsy Center, NYU Langone Medical Center; Department of Neurosurgery, NYU Langone Medical Center
- Ifeoma Irobunda: Department of Neurology, Columbia University Irving Medical Center
- Neil A. Feldstein: Department of Neurological Surgery, Columbia University Irving Medical Center
- Guy M. McKhann: Department of Neurological Surgery, Columbia University Irving Medical Center
- Adeen Flinker: Department of Neurology, NYU Langone Medical Center; Comprehensive Epilepsy Center, NYU Langone Medical Center; Department of Biomedical Engineering, NYU Tandon School of Engineering
- Nima Mesgarani: Zuckerman Mind, Brain, Behavior Institute, Columbia University; Doctoral Program in Neurobiology and Behavior, Columbia University; Department of Electrical Engineering, Columbia University
16
Auerbach BD, Gritton HJ. Hearing in Complex Environments: Auditory Gain Control, Attention, and Hearing Loss. Front Neurosci 2022; 16:799787. [PMID: 35221899 PMCID: PMC8866963 DOI: 10.3389/fnins.2022.799787] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2021] [Accepted: 01/18/2022] [Indexed: 12/12/2022] Open
Abstract
Listening in noisy or complex sound environments is difficult for individuals with normal hearing and can be a debilitating impairment for those with hearing loss. Extracting meaningful information from a complex acoustic environment requires the ability to accurately encode specific sound features under highly variable listening conditions and segregate distinct sound streams from multiple overlapping sources. The auditory system employs a variety of mechanisms to achieve this auditory scene analysis. First, neurons across levels of the auditory system exhibit compensatory adaptations to their gain and dynamic range in response to prevailing sound stimulus statistics in the environment. These adaptations allow for robust representations of sound features that are to a large degree invariant to the level of background noise. Second, listeners can selectively attend to a desired sound target in an environment with multiple sound sources. This selective auditory attention is another form of sensory gain control, enhancing the representation of an attended sound source while suppressing responses to unattended sounds. This review will examine both “bottom-up” gain alterations in response to changes in environmental sound statistics as well as “top-down” mechanisms that allow for selective extraction of specific sound features in a complex auditory scene. Finally, we will discuss how hearing loss interacts with these gain control mechanisms, and the adaptive and/or maladaptive perceptual consequences of this plasticity.
Affiliation(s)
- Benjamin D. Auerbach: Department of Molecular and Integrative Physiology, Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL, United States; Neuroscience Program, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Howard J. Gritton: Neuroscience Program, University of Illinois at Urbana-Champaign, Urbana, IL, United States; Department of Comparative Biosciences, University of Illinois at Urbana-Champaign, Urbana, IL, United States; Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, IL, United States
17
Lee J, Rothschild G. Encoding of acquired sound-sequence salience by auditory cortical offset responses. Cell Rep 2021; 37:109927. [PMID: 34731615 DOI: 10.1016/j.celrep.2021.109927] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2021] [Revised: 08/19/2021] [Accepted: 10/12/2021] [Indexed: 11/25/2022] Open
Abstract
Behaviorally relevant sounds are often composed of distinct acoustic units organized into specific temporal sequences. The meaning of such sound sequences can therefore be fully recognized only when they have terminated. However, the neural mechanisms underlying the perception of sound sequences remain unclear. Here, we use two-photon calcium imaging in the auditory cortex of behaving mice to test the hypothesis that neural responses to termination of sound sequences ("Off-responses") encode their acoustic history and behavioral salience. We find that auditory cortical Off-responses encode preceding sound sequences and that learning to associate a sound sequence with a reward induces enhancement of Off-responses relative to responses during the sound sequence ("On-responses"). Furthermore, learning enhances network-level discriminability of sound sequences by Off-responses. Last, learning-induced plasticity of Off-responses but not On-responses lasts to the next day. These findings identify auditory cortical Off-responses as a key neural signature of acquired sound-sequence salience.
Affiliation(s)
- Joonyeup Lee: Department of Psychology, University of Michigan, Ann Arbor, MI 48109, USA
- Gideon Rothschild: Department of Psychology, University of Michigan, Ann Arbor, MI 48109, USA; Kresge Hearing Research Institute and Department of Otolaryngology-Head and Neck Surgery, University of Michigan, Ann Arbor, MI 48109, USA
18
Downer JD, Verhein JR, Rapone BC, O'Connor KN, Sutter ML. An Emergent Population Code in Primary Auditory Cortex Supports Selective Attention to Spectral and Temporal Sound Features. J Neurosci 2021; 41:7561-7577. [PMID: 34210783 PMCID: PMC8425978 DOI: 10.1523/jneurosci.0693-20.2021] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2020] [Revised: 05/19/2021] [Accepted: 05/28/2021] [Indexed: 11/21/2022] Open
Abstract
Textbook descriptions of primary sensory cortex (PSC) revolve around single neurons' representation of low-dimensional sensory features, such as visual object orientation in primary visual cortex (V1), location of somatic touch in primary somatosensory cortex (S1), and sound frequency in primary auditory cortex (A1). Typically, studies of PSC measure neurons' responses along few (one or two) stimulus and/or behavioral dimensions. However, real-world stimuli usually vary along many feature dimensions and behavioral demands change constantly. In order to illuminate how A1 supports flexible perception in rich acoustic environments, we recorded from A1 neurons while rhesus macaques (one male, one female) performed a feature-selective attention task. We presented sounds that varied along spectral and temporal feature dimensions (carrier bandwidth and temporal envelope, respectively). Within a block, subjects attended to one feature of the sound in a selective change detection task. We found that single neurons tend to be high-dimensional, in that they exhibit substantial mixed selectivity for both sound features, as well as task context. We found no overall enhancement of single-neuron coding of the attended feature, as attention could either diminish or enhance this coding. However, a population-level analysis reveals that ensembles of neurons exhibit enhanced encoding of attended sound features, and this population code tracks subjects' performance. Importantly, surrogate neural populations with intact single-neuron tuning but shuffled higher-order correlations among neurons fail to yield attention-related effects observed in the intact data. These results suggest that an emergent population code not measurable at the single-neuron level might constitute the functional unit of sensory representation in PSC.
Significance Statement: The ability to adapt to a dynamic sensory environment promotes a range of important natural behaviors. We recorded from single neurons in monkey primary auditory cortex (A1), while subjects attended to either the spectral or temporal features of complex sounds. Surprisingly, we found no average increase in responsiveness to, or encoding of, the attended feature across single neurons. However, when we pooled the activity of the sampled neurons via targeted dimensionality reduction (TDR), we found enhanced population-level representation of the attended feature and suppression of the distractor feature. This dissociation of the effects of attention at the level of single neurons versus the population highlights the synergistic nature of cortical sound encoding and enriches our understanding of sensory cortical function.
Affiliation(s)
- Joshua D Downer: Center for Neuroscience, University of California, Davis, Davis, California 95618; Department of Otolaryngology, Head and Neck Surgery, University of California, San Francisco, California 94143
- Jessica R Verhein: Center for Neuroscience, University of California, Davis, Davis, California 95618; School of Medicine, Stanford University, Stanford, California 94305
- Brittany C Rapone: Center for Neuroscience, University of California, Davis, Davis, California 95618; School of Social Sciences, Oxford Brookes University, Oxford OX4 0BP, United Kingdom
- Kevin N O'Connor: Center for Neuroscience, University of California, Davis, Davis, California 95618; Department of Neurobiology, Physiology and Behavior, University of California, Davis, Davis, California 95618
- Mitchell L Sutter: Center for Neuroscience, University of California, Davis, Davis, California 95618; Department of Neurobiology, Physiology and Behavior, University of California, Davis, Davis, California 95618
19
Mohn JL, Downer JD, O'Connor KN, Johnson JS, Sutter ML. Choice-related activity and neural encoding in primary auditory cortex and lateral belt during feature-selective attention. J Neurophysiol 2021; 125:1920-1937. [PMID: 33788616 DOI: 10.1152/jn.00406.2020] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Selective attention is necessary to sift through, form a coherent percept of, and make behavioral decisions on the vast amount of information present in most sensory environments. How and where selective attention is employed in cortex and how this perceptual information then informs the relevant behavioral decisions are still not well understood. Studies probing selective attention and decision-making in visual cortex have been enlightening as to how sensory attention might work in that modality; whether or not similar mechanisms are employed in auditory attention is not yet clear. Therefore, we trained rhesus macaques on a feature-selective attention task, where they switched between reporting changes in temporal (amplitude modulation, AM) and spectral (carrier bandwidth) features of a broadband noise stimulus. We investigated how the encoding of these features by single neurons in primary (A1) and secondary (middle lateral belt, ML) auditory cortex was affected by the different attention conditions. We found that neurons in A1 and ML showed mixed selectivity to the sound and task features. We found no difference in AM encoding between the attention conditions. We found that choice-related activity in both A1 and ML neurons shifts between attentional conditions. This finding suggests that choice-related activity in auditory cortex does not simply reflect motor preparation or action and supports the relationship between reported choice-related activity and the decision and perceptual process.
New & Noteworthy: We recorded from primary and secondary auditory cortex while monkeys performed a nonspatial feature attention task. Both areas exhibited rate-based choice-related activity. The manifestation of choice-related activity was attention dependent, suggesting that choice-related activity in auditory cortex does not simply reflect arousal or motor influences but relates to the specific perceptual choice.
Affiliation(s)
- Jennifer L Mohn: Center for Neuroscience, University of California, Davis, California; Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
- Joshua D Downer: Center for Neuroscience, University of California, Davis, California; Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California
- Kevin N O'Connor: Center for Neuroscience, University of California, Davis, California; Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
- Jeffrey S Johnson: Center for Neuroscience, University of California, Davis, California; Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
- Mitchell L Sutter: Center for Neuroscience, University of California, Davis, California; Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
20
Wikman P, Sahari E, Salmela V, Leminen A, Leminen M, Laine M, Alho K. Breaking down the cocktail party: Attentional modulation of cerebral audiovisual speech processing. Neuroimage 2020; 224:117365. [PMID: 32941985 DOI: 10.1016/j.neuroimage.2020.117365] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2020] [Revised: 08/19/2020] [Accepted: 09/07/2020] [Indexed: 12/20/2022] Open
Abstract
Recent studies utilizing electrophysiological speech envelope reconstruction have sparked renewed interest in the cocktail party effect by showing that auditory neurons entrain to selectively attended speech. Yet, the neural networks of attention to speech in naturalistic audiovisual settings with multiple sound sources remain poorly understood. We collected functional brain imaging data while participants viewed audiovisual video clips of lifelike dialogues with concurrent distracting speech in the background. Dialogues were presented in a full-factorial design, comprising task (listen to the dialogues vs. ignore them), audiovisual quality and semantic predictability. We used univariate analyses in combination with multivariate pattern analysis (MVPA) to study modulations of brain activity related to attentive processing of audiovisual speech. We found attentive speech processing to cause distinct spatiotemporal modulation profiles in distributed cortical areas including sensory and frontal-control networks. Semantic coherence modulated attention-related activation patterns in the earliest stages of auditory cortical processing, suggesting that the auditory cortex is involved in high-level speech processing. Our results corroborate views that emphasize the dynamic nature of attention, with task-specificity and context as cornerstones of the underlying neuro-cognitive mechanisms.
Affiliation(s)
- Patrik Wikman: Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Elisa Sahari: Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Viljami Salmela: Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland; Advanced Magnetic Imaging Centre, Aalto NeuroImaging, Aalto University, Espoo, Finland
- Alina Leminen: Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland; Department of Digital Humanities, University of Helsinki, Helsinki, Finland
- Miika Leminen: Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland; Department of Phoniatrics, Helsinki University Hospital, Helsinki, Finland
- Matti Laine: Department of Psychology, Åbo Akademi University, Turku, Finland
- Kimmo Alho: Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland; Advanced Magnetic Imaging Centre, Aalto NeuroImaging, Aalto University, Espoo, Finland
21
Harun R, Jun E, Park HH, Ganupuru P, Goldring AB, Hanks TD. Timescales of Evidence Evaluation for Decision Making and Associated Confidence Judgments Are Adapted to Task Demands. Front Neurosci 2020; 14:826. [PMID: 32903672 PMCID: PMC7438826 DOI: 10.3389/fnins.2020.00826] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2020] [Accepted: 07/15/2020] [Indexed: 01/29/2023] Open
Abstract
Decision making often involves choosing actions based on relevant evidence. This can benefit from focussing evidence evaluation on the timescale of greatest relevance based on the situation. Here, we use an auditory change detection task to determine how people adjust their timescale of evidence evaluation depending on task demands for detecting changes in their environment and assessing their internal confidence in those decisions. We confirm previous results that people adopt shorter timescales of evidence evaluation for detecting changes in contexts with shorter signal durations, while bolstering those results with model-free analyses not previously used and extending the results to the auditory domain. We also extend these results to show that in contexts with shorter signal durations, people also adopt correspondingly shorter timescales of evidence evaluation for assessing confidence in their decision about detecting a change. These results provide important insights into adaptability and flexible control of evidence evaluation for decision making.
Affiliation(s)
- Rashed Harun, Elizabeth Jun, Heui Hye Park, Preetham Ganupuru, Adam B Goldring, Timothy D Hanks: Department of Neurology and Center for Neuroscience, University of California, Davis, Davis, CA, United States
22
Kaya EM, Huang N, Elhilali M. Pitch, Timbre and Intensity Interdependently Modulate Neural Responses to Salient Sounds. Neuroscience 2020; 440:1-14. [PMID: 32445938 DOI: 10.1016/j.neuroscience.2020.05.018] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2019] [Revised: 04/28/2020] [Accepted: 05/10/2020] [Indexed: 01/31/2023]
Abstract
As we listen to everyday sounds, auditory perception is heavily shaped by interactions between acoustic attributes such as pitch, timbre and intensity, though it is not clear how such interactions affect judgments of acoustic salience in dynamic soundscapes. Salience perception is believed to rely on an internal brain model that tracks the evolution of acoustic characteristics of a scene and flags events that do not fit this model as salient. The current study explores how the interdependency between attributes of dynamic scenes affects the neural representation of this internal model and shapes encoding of salient events. Specifically, the study examines how deviations along combinations of acoustic attributes interact to modulate brain responses, and subsequently guide perception of certain sound events as salient given their context. Human volunteers focus their attention on a visual task and ignore acoustic melodies playing in the background while their brain activity is recorded using electroencephalography. Ambient sounds consist of musical melodies with probabilistically varying acoustic attributes. Salient notes embedded in these scenes deviate from the melody's statistical distribution along pitch, timbre and/or intensity. Recordings of brain responses to salient notes reveal that neural power in response to the melodic rhythm as well as cross-trial phase alignment in the theta band are modulated by degree of salience of the notes, estimated across all acoustic attributes given their probabilistic context. These neural nonlinear effects across attributes strongly parallel behavioral nonlinear interactions observed in perceptual judgments of auditory salience using similar dynamic melodies, suggesting a neural underpinning of nonlinear interactions that underlie salience perception.
Affiliation(s)
- Emine Merve Kaya, Nicolas Huang, Mounya Elhilali: Laboratory for Computational Audio Perception, Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
23
Maor I, Shwartz-Ziv R, Feigin L, Elyada Y, Sompolinsky H, Mizrahi A. Neural Correlates of Learning Pure Tones or Natural Sounds in the Auditory Cortex. Front Neural Circuits 2020; 13:82. [PMID: 32047424 PMCID: PMC6997498 DOI: 10.3389/fncir.2019.00082] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2019] [Accepted: 12/17/2019] [Indexed: 11/17/2022] Open
Abstract
Associative learning of pure tones is known to cause tonotopic map expansion in the auditory cortex (ACx), but the function this plasticity subserves is unclear. We developed an automated training platform called the “Educage,” which was used to train mice on a go/no-go auditory discrimination task to their perceptual limits, for difficult discriminations among pure tones or natural sounds. Spiking responses of excitatory neurons and inhibitory parvalbumin-expressing (PV+) neurons in L2/3 of mouse ACx revealed learning-induced overrepresentation of the learned frequencies, as expected from previous literature. The coordinated plasticity of excitatory and inhibitory neurons supports a role for PV+ neurons in the homeostatic maintenance of excitation–inhibition balance within the circuit. Using a novel computational model to study auditory tuning curves, we show that overrepresentation of the learned tones does not necessarily improve the network's discrimination performance for these tones. In a separate set of experiments, we trained mice to discriminate among natural sounds. Perceptual learning of natural sounds induced “sparsening” and decorrelation of the neural response, consequently improving discrimination of these complex sounds. This signature of plasticity in A1 highlights its role in coding natural sounds.
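The “sparsening” the abstract reports can be quantified with a standard population-sparseness measure. The Treves-Rolls index and the synthetic response vectors below are illustrative, not the authors' analysis.

```python
import numpy as np

def treves_rolls_sparseness(r):
    """Treves-Rolls sparseness of a nonnegative response vector.

    Returns values in (0, 1]: near 1 for dense, uniform population
    responses and near 0 when a few neurons carry most of the response.
    """
    r = np.asarray(r, dtype=float)
    return float((r.mean() ** 2) / (r ** 2).mean())

dense = np.ones(50)                          # every neuron responds equally
sparse = np.zeros(50)
sparse[:5] = 10.0                            # only 5 of 50 neurons respond
```

A sparser, more decorrelated code spreads stimuli apart in population-response space, which is the intuition behind the improved discrimination of natural sounds after learning.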
Affiliation(s)
- Ido Maor
- Department of Neurobiology, Alexander Silberman Institute of Life Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel; The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
| | - Ravid Shwartz-Ziv
- The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
| | - Libi Feigin
- The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
| | - Yishai Elyada
- The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
| | - Haim Sompolinsky
- The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel; The Racah Institute of Physics, The Hebrew University of Jerusalem, Jerusalem, Israel
| | - Adi Mizrahi
- Department of Neurobiology, Alexander Silberman Institute of Life Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel; The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
| |
|
24
|
Roach JP, Eniwaye B, Booth V, Sander LM, Zochowski MR. Acetylcholine Mediates Dynamic Switching Between Information Coding Schemes in Neuronal Networks. Front Syst Neurosci 2019; 13:64. [PMID: 31780905 PMCID: PMC6861375 DOI: 10.3389/fnsys.2019.00064] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2018] [Accepted: 10/14/2019] [Indexed: 11/23/2022] Open
Abstract
Rate coding and phase coding are the two major coding modes seen in the brain. For these two modes, network dynamics must have either a wide distribution of frequencies for rate coding, or a narrow one to achieve the stability in phase dynamics needed for phase coding. Acetylcholine (ACh) is a potent regulator of neural excitability. Acting through the muscarinic receptor, ACh reduces the magnitude of the potassium M-current, a hyperpolarizing current that builds up as neurons fire. The M-current contributes to several excitability features of neurons, becoming a major player in facilitating the transition between Type 1 (integrator) and Type 2 (resonator) excitability. In this paper we argue that this transition enables a dynamic switch between rate coding and phase coding as levels of ACh release change. When a network is in a high ACh state, variations in synaptic inputs will lead to a wider distribution of firing rates across the network, and this distribution will reflect the network structure or the pattern of external input to the network. When ACh is low, network frequencies become narrowly distributed, and the structure of the network or the pattern of external inputs will be represented through phase relationships between firing neurons. This work provides insights into how modulation of neuronal features influences network dynamics and information processing across brain states.
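The two regimes can be contrasted with a simple phase-locking statistic, the vector strength (mean resultant length of phases). The two phase distributions below are toy stand-ins for the high- and low-ACh network states, not simulation output from the paper.

```python
import numpy as np

def vector_strength(phases):
    """Mean resultant length of phases (in radians): 1 means perfect
    phase alignment; values near 0 mean phases are spread uniformly,
    leaving no usable phase code."""
    return float(np.abs(np.mean(np.exp(1j * np.asarray(phases)))))

rng = np.random.default_rng(1)
# Low-ACh sketch: narrow frequency spread keeps firing phases aligned,
# so structure can be read out from phase relationships.
low_ach = rng.normal(0.0, 0.2, size=1000)
# High-ACh sketch: a broad rate distribution lets phases drift apart,
# so structure is carried by the rates instead.
high_ach = rng.uniform(-np.pi, np.pi, size=1000)
```

High phase alignment with narrow rate spread favors phase coding; dispersed phases with a wide rate distribution favor rate coding, matching the ACh-dependent switch argued for above.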
Affiliation(s)
- James P Roach
- Neuroscience Graduate Program, University of Michigan, Ann Arbor, MI, United States
| | - Bolaji Eniwaye
- Department of Physics, University of Michigan, Ann Arbor, MI, United States
| | - Victoria Booth
- Neuroscience Graduate Program, University of Michigan, Ann Arbor, MI, United States; Department of Mathematics, University of Michigan, Ann Arbor, MI, United States; Department of Anesthesiology, University of Michigan, Ann Arbor, MI, United States
| | - Leonard M Sander
- Department of Physics, University of Michigan, Ann Arbor, MI, United States; Center for the Study of Complex Systems, University of Michigan, Ann Arbor, MI, United States
| | - Michal R Zochowski
- Neuroscience Graduate Program, University of Michigan, Ann Arbor, MI, United States; Department of Physics, University of Michigan, Ann Arbor, MI, United States; Center for the Study of Complex Systems, University of Michigan, Ann Arbor, MI, United States; Biophysics Program, University of Michigan, Ann Arbor, MI, United States
| |
|
25
|
Lopez Espejo M, Schwartz ZP, David SV. Spectral tuning of adaptation supports coding of sensory context in auditory cortex. PLoS Comput Biol 2019; 15:e1007430. [PMID: 31626624 PMCID: PMC6821137 DOI: 10.1371/journal.pcbi.1007430] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2019] [Revised: 10/30/2019] [Accepted: 09/23/2019] [Indexed: 12/19/2022] Open
Abstract
Perception of vocalizations and other behaviorally relevant sounds requires integrating acoustic information over hundreds of milliseconds. Sound-evoked activity in auditory cortex typically has much shorter latency, but the acoustic context, i.e., sound history, can modulate sound-evoked activity over longer periods. Contextual effects are attributed to modulatory phenomena, such as stimulus-specific adaptation and contrast gain control. However, an encoding model that links context to natural sound processing has yet to be established. We tested whether a model in which spectrally tuned inputs undergo adaptation mimicking short-term synaptic plasticity (STP) can account for contextual effects during natural sound processing. Single-unit activity was recorded from primary auditory cortex of awake ferrets during presentation of noise with natural temporal dynamics and fully natural sounds. Encoding properties were characterized by a standard linear-nonlinear spectro-temporal receptive field (LN) model and variants that incorporated STP-like adaptation. In the adapting models, STP was applied either globally across all input spectral channels or locally to subsets of channels. For most neurons, models incorporating local STP predicted neural activity as well as or better than the LN and global STP models. The strength of nonlinear adaptation varied across neurons. Within neurons, adaptation was generally stronger for spectral channels with excitatory than inhibitory gain. Neurons showing improved STP model performance also tended to undergo stimulus-specific adaptation, suggesting a common mechanism for these phenomena. When STP models were compared between passive and active behavior conditions, response gain often changed, but average STP parameters were stable. Thus, spectrally and temporally heterogeneous adaptation, subserved by a mechanism with STP-like dynamics, may support representation of the complex spectro-temporal patterns that comprise natural sounds across wide-ranging sensory contexts.
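A minimal sketch of the model family described above: each spectral channel passes through a resource-depletion nonlinearity (Tsodyks-Markram-style short-term depression) before a linear-nonlinear readout. The parameter values, the depression rule, and the instantaneous (lag-free) linear stage are simplifying assumptions for illustration, not the fitted models from the paper.

```python
import numpy as np

def ln_stp_response(spectrogram, strf, tau=0.1, u=0.5, dt=0.01):
    """LN response with per-channel short-term depression.

    `spectrogram` is (channels, time); `strf` is (channels, 1). Each
    channel's input depletes a synaptic "resource" that recovers toward
    1 with time constant `tau`. The linear stage here is instantaneous;
    a real STRF would also integrate over temporal lags.
    """
    n_ch, n_t = spectrogram.shape
    resources = np.ones(n_ch)
    adapted = np.zeros_like(spectrogram)
    for t in range(n_t):
        adapted[:, t] = resources * spectrogram[:, t]   # depressed input
        resources += dt * ((1.0 - resources) / tau - u * adapted[:, t])
        resources = np.clip(resources, 0.0, 1.0)
    drive = (strf * adapted).sum(axis=0)    # linear (spectral) stage
    return np.maximum(drive, 0.0)           # rectifying output nonlinearity

# A sustained tone in channel 0 adapts over time; channel 1 is silent.
spec = np.zeros((2, 200))
spec[0, :] = 1.0
strf = np.array([[1.0], [0.0]])
resp = ln_stp_response(spec, strf)
```

Because depression here is applied per channel, the model adapts to sound history only in the channels that were driven, which is the "local STP" idea that outperformed global adaptation for most neurons.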
Affiliation(s)
- Mateo Lopez Espejo
- Neuroscience Graduate Program, Oregon Health and Science University, Portland, OR, United States of America
| | - Zachary P. Schwartz
- Neuroscience Graduate Program, Oregon Health and Science University, Portland, OR, United States of America
| | - Stephen V. David
- Oregon Hearing Research Center, Oregon Health and Science University, Portland, OR, United States of America
| |
|
26
|
Schicknick H, Henschke JU, Budinger E, Ohl FW, Gundelfinger ED, Tischmeyer W. β-adrenergic modulation of discrimination learning and memory in the auditory cortex. Eur J Neurosci 2019; 50:3141-3163. [PMID: 31162753 PMCID: PMC6900137 DOI: 10.1111/ejn.14480] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2018] [Revised: 05/27/2019] [Accepted: 05/31/2019] [Indexed: 01/11/2023]
Abstract
Despite the vast literature on catecholaminergic neuromodulation of auditory cortex functioning in general, knowledge about its role in long-term memory formation is scarce. Our previous pharmacological studies on cortex-dependent frequency-modulated tone-sweep discrimination learning of Mongolian gerbils showed that auditory-cortical D1/5-dopamine receptor activity facilitates memory consolidation and anterograde memory formation. Considering the overlapping functions of D1/5-dopamine receptors and β-adrenoceptors, we hypothesised a role of β-adrenergic signalling in the auditory cortex in sweep discrimination learning and memory. Supporting this hypothesis, the β1/2-adrenoceptor antagonist propranolol, bilaterally applied to the gerbil auditory cortex after task acquisition, prevented the discrimination increment normally observed 1 day later. The increment in the total number of hurdle crossings performed in response to the sweeps per se was normal. Propranolol infusion after the seventh training session suppressed the previously established sweep discrimination. The suppressive effect required antagonist injection in a narrow post-session time window. When applied to the auditory cortex 1 day before initial conditioning, β1-adrenoceptor-antagonising and β1-adrenoceptor-stimulating agents retarded and facilitated, respectively, sweep discrimination learning, whereas β2-selective drugs were ineffective. In contrast, single-sweep detection learning was normal after propranolol infusion. By immunohistochemistry, β1- and β2-adrenoceptors were identified on the neuropil and somata of pyramidal and non-pyramidal neurons of the gerbil auditory cortex. The present findings suggest that β-adrenergic signalling in the auditory cortex has task-related importance for discrimination learning of complex sounds: as previously shown for D1/5-dopamine receptor signalling, β-adrenoceptor activity supports long-term memory consolidation and reconsolidation; additionally, tonic input through β1-adrenoceptors may control mechanisms permissive for memory acquisition.
Affiliation(s)
- Horst Schicknick
- Special Lab Molecular Biological Techniques, Leibniz Institute for Neurobiology, Magdeburg, Germany
| | - Julia U Henschke
- Department Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg, Germany; Institute of Cognitive Neurology and Dementia Research, Otto von Guericke University Magdeburg, Magdeburg, Germany
| | - Eike Budinger
- Department Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany
| | - Frank W Ohl
- Department Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany; Institute of Biology, Otto von Guericke University Magdeburg, Magdeburg, Germany
| | - Eckart D Gundelfinger
- Center for Behavioral Brain Sciences, Magdeburg, Germany; Department Neurochemistry and Molecular Biology, Leibniz Institute for Neurobiology, Magdeburg, Germany; Molecular Neurobiology, Medical Faculty, Otto von Guericke University Magdeburg, Magdeburg, Germany
| | - Wolfgang Tischmeyer
- Special Lab Molecular Biological Techniques, Leibniz Institute for Neurobiology, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany
| |
|
27
|
Dong M, Vicario DS. Neural Correlate of Transition Violation and Deviance Detection in the Songbird Auditory Forebrain. Front Syst Neurosci 2018; 12:46. [PMID: 30356811 PMCID: PMC6190688 DOI: 10.3389/fnsys.2018.00046] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2018] [Accepted: 09/18/2018] [Indexed: 12/21/2022] Open
Abstract
Deviants are stimuli that violate one's prediction about incoming stimuli. Studying deviance detection helps us understand how the nervous system learns temporal patterns between stimuli and forms predictions about the future. Detecting deviant stimuli is also critical for animals' survival in natural environments filled with complex sounds and patterns. Using natural songbird vocalizations as stimuli, we recorded multi-unit and single-unit activity from the zebra finch auditory forebrain while presenting rare repeated stimuli after regular alternating stimuli (alternating oddball experiment) or a rare deviant among multiple different common stimuli (context oddball experiment). The alternating oddball experiment showed that neurons were sensitive to rare repetitions in regular alternations. In the absence of expectation, repetition suppresses neural responses to the second stimulus in the repetition. When repetition violates expectation, neural responses to the second stimulus in the repetition were stronger than expected. The context oddball experiment showed that a stimulus elicits stronger neural responses when it is presented infrequently as a deviant among multiple common stimuli. As the acoustic differences between deviant and common stimuli increase, the response enhancement also increases. Together, these results showed that neural encoding of a stimulus depends not only on the acoustic features of the stimulus but also on the preceding stimuli and the transition patterns between them. They also imply that the classical oddball effect may result from a combination of repetition suppression and deviance enhancement. Classification analyses showed that the difficulty of decoding the stimulus responsible for the neural responses differed for deviants across experimental conditions. These findings suggest that learning transition patterns and detecting deviants in natural sequences may depend on a hierarchy of neural mechanisms, which may be involved in more complex forms of auditory processing that depend on the transition patterns between stimuli, such as speech processing.
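The alternating-oddball logic, where surprise comes from transition statistics rather than item frequency, can be sketched with an empirical bigram model. The sequence and the 1 - p surprise measure below are illustrative choices, not the authors' analysis.

```python
from collections import Counter

def transition_surprise(sequence):
    """Surprise (1 - empirical probability) of each transition, with
    bigram counts accumulated from the sequence seen so far."""
    counts = Counter()
    scores = [0.0]                          # first item has no transition
    for i in range(1, len(sequence)):
        prev, cur = sequence[i - 1], sequence[i]
        context = sum(n for (p, _), n in counts.items() if p == prev)
        p = counts[(prev, cur)] / context if context else 0.0
        scores.append(1.0 - p)
        counts[(prev, cur)] += 1
    return scores

# Regular alternation A-B-A-B..., then a rare repetition (B after B).
seq = ["A", "B"] * 20 + ["B"]
scores = transition_surprise(seq)
```

Under this toy model, a repeat after regular alternation is maximally surprising even though both items are individually common, mirroring the finding that responses depend on transition patterns rather than stimulus probability alone.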
Affiliation(s)
- Mingwen Dong
- Behavior and Systems Neuroscience, Psychology Department, Rutgers, the State University of New Jersey, New Brunswick, NJ, United States
| | - David S Vicario
- Behavior and Systems Neuroscience, Psychology Department, Rutgers, the State University of New Jersey, New Brunswick, NJ, United States
| |
|