1. Ying R, Stolzberg DJ, Caras ML. Neural Correlates of Perceptual Plasticity in the Auditory Midbrain and Thalamus. J Neurosci 2025; 45:e0691242024. PMID: 39753303; PMCID: PMC11884394; DOI: 10.1523/jneurosci.0691-24.2024.
Abstract
Hearing is an active process in which listeners must detect and identify sounds, segregate and discriminate stimulus features, and extract their behavioral relevance. Adaptive changes in sound detection can emerge rapidly, during sudden shifts in acoustic or environmental context, or more slowly as a result of practice. Although we know that context- and learning-dependent changes in the sensitivity of auditory cortical (ACX) neurons support many aspects of perceptual plasticity, the contribution of subcortical auditory regions to this process is less understood. Here, we recorded single- and multiunit activity from the central nucleus of the inferior colliculus (ICC) and the ventral subdivision of the medial geniculate nucleus (MGV) of male and female Mongolian gerbils under two different behavioral contexts: as animals performed an amplitude modulation (AM) detection task and as they were passively exposed to AM sounds. Using a signal detection framework to estimate neurometric sensitivity, we found that neural thresholds in both regions improve during task performance, and this improvement is largely driven by changes in the firing rate rather than phase locking. We also found that ICC and MGV neurometric thresholds improve as animals learn to detect small AM depths during a multiday perceptual training paradigm. Finally, we revealed that in the MGV, but not the ICC, context-dependent enhancements in AM sensitivity grow stronger during perceptual training, mirroring prior observations in the ACX. Together, our results suggest that the auditory midbrain and thalamus contribute to changes in sound processing and perception over rapid and slow timescales.
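As a concrete illustration of the signal detection framework mentioned in this abstract, the sketch below derives a neurometric AM-depth threshold from simulated spike counts using an ROC (ideal observer) analysis. The Poisson firing rates, depth values, and the 0.76 proportion-correct criterion are illustrative assumptions, not values taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative spike counts: 200 trials of an unmodulated "noise" stimulus and
# 200 trials at each AM depth (dB re: 100%); rates are hypothetical.
am_depths = np.array([-15, -12, -9, -6, -3, 0])
unmodulated = rng.poisson(lam=20, size=200)
modulated = {d: rng.poisson(lam=20 + 2 * (d + 15), size=200) for d in am_depths}

def roc_area(signal_counts, noise_counts):
    """Area under the ROC curve: P(signal count > noise count) + 0.5 * P(tie)."""
    s = signal_counts[:, None]
    n = noise_counts[None, :]
    return np.mean((s > n) + 0.5 * (s == n))

# Neurometric function: ideal-observer proportion correct at each depth.
neurometric = np.array([roc_area(modulated[d], unmodulated) for d in am_depths])

# Threshold: depth at which the neurometric function crosses a 0.76 criterion
# (roughly d' = 1), found by linear interpolation (assumes the function rises
# monotonically with depth).
threshold = np.interp(0.76, neurometric, am_depths)
print(f"neurometric threshold ~ {threshold:.1f} dB re: 100% depth")
```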
Affiliation(s)
- Rose Ying
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland 20742
- Department of Biology, University of Maryland, College Park, Maryland 20742
- Center for Comparative and Evolutionary Biology of Hearing, University of Maryland, College Park, Maryland 20742
- Daniel J Stolzberg
- Department of Biology, University of Maryland, College Park, Maryland 20742
- Melissa L Caras
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland 20742
- Department of Biology, University of Maryland, College Park, Maryland 20742
- Center for Comparative and Evolutionary Biology of Hearing, University of Maryland, College Park, Maryland 20742
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742
2. Lee TY, Weissenberger Y, King AJ, Dahmen JC. Midbrain encodes sound detection behavior without auditory cortex. eLife 2024; 12:RP89950. PMID: 39688376; DOI: 10.7554/elife.89950.
Abstract
Hearing involves analyzing the physical attributes of sounds and integrating the results of this analysis with other sensory, cognitive, and motor variables in order to guide adaptive behavior. The auditory cortex is considered crucial for the integration of acoustic and contextual information and is thought to share the resulting representations with subcortical auditory structures via its vast descending projections. By imaging cellular activity in the corticorecipient shell of the inferior colliculus of mice engaged in a sound detection task, we show that the majority of neurons encode information beyond the physical attributes of the stimulus and that the animals' behavior can be decoded from the activity of those neurons with a high degree of accuracy. Surprisingly, this was also the case in mice in which auditory cortical input to the midbrain had been removed by bilateral cortical lesions. This illustrates that subcortical auditory structures have access to a wealth of non-acoustic information and can, independently of the auditory cortex, carry much richer neural representations than previously thought.
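The decoding result described here can be illustrated with a minimal, hypothetical sketch: a cross-validated logistic-regression classifier predicting trial-by-trial behavior from a trials-by-neurons activity matrix. The simulated dF/F values, the choice of logistic regression, and all parameters are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Hypothetical data: trial-averaged dF/F for each neuron on every trial,
# plus a binary behavioral report (e.g., lick / no lick) per trial.
n_trials, n_neurons = 300, 80
choice = rng.integers(0, 2, size=n_trials)
activity = rng.normal(size=(n_trials, n_neurons))
activity[:, :10] += 0.8 * choice[:, None]  # a subset of neurons carries choice information

# Cross-validated decoding accuracy (chance = 0.5 for balanced classes).
decoder = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
accuracy = cross_val_score(decoder, activity, choice, cv=5, scoring="accuracy")
print(f"decoding accuracy: {accuracy.mean():.2f} +/- {accuracy.std():.2f}")
```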
Affiliation(s)
- Tai-Ying Lee
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Yves Weissenberger
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Andrew J King
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Johannes C Dahmen
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
3. Mackey CA, Hauser S, Schoenhaut AM, Temghare N, Ramachandran R. Hierarchical differences in the encoding of amplitude modulation in the subcortical auditory system of awake nonhuman primates. J Neurophysiol 2024; 132:1098-1114. PMID: 39140590; PMCID: PMC11427057; DOI: 10.1152/jn.00329.2024.
Abstract
Sinusoidal amplitude modulation (SAM) is a key feature of complex sounds. Although psychophysical studies have characterized SAM perception, and neurophysiological studies in anesthetized animals report a transformation from the cochlear nucleus' (CN; brainstem) temporal code to the inferior colliculus' (IC; midbrain's) rate code, none have used awake animals or nonhuman primates to compare CN and IC's coding strategies to modulation-frequency perception. To address this, we recorded single-unit responses and compared derived neurometric measures in the CN and IC to psychometric measures of modulation frequency (MF) discrimination in macaques. IC and CN neurons often exhibited tuned responses to SAM in rate and spike-timing measures of modulation coding. Neurometric thresholds spanned a large range (2-200 Hz ΔMF). The lowest 40% of IC thresholds were less than or equal to psychometric thresholds, regardless of which code was used, whereas CN thresholds were greater than psychometric thresholds. Discrimination at 10-20 Hz could be explained by indiscriminately pooling 30 units in either structure, whereas discrimination at higher MFs was best explained by more selective pooling. This suggests that pooled CN activity was sufficient for AM discrimination. Psychometric and neurometric thresholds decreased as stimulus duration increased, but IC and CN thresholds were higher and more variable than behavior at short durations. This slower subcortical temporal integration compared with behavior was consistent with a drift diffusion model that reproduced individual differences in performance and can constrain future neurophysiological studies of temporal integration. These measures provide an account of AM perception at the neurophysiological, computational, and behavioral levels.
NEW & NOTEWORTHY In everyday environments, the brain is tasked with extracting information from sound envelopes, which involves both sensory encoding and perceptual decision-making. Different neural codes for envelope representation have been characterized in midbrain and cortex, but studies of brainstem nuclei such as the cochlear nucleus (CN) have usually been conducted under anesthesia in nonprimate species. Here, we found that subcortical activity in awake monkeys and a biologically plausible perceptual decision-making model accounted for sound envelope discrimination behavior.
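The abstract invokes a drift diffusion model to account for temporal integration; the sketch below simulates a basic two-boundary drift diffusion process to show how accuracy grows with stimulus duration. The drift, bound, and noise parameters are arbitrary placeholders rather than the fitted values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def ddm_trial(drift, bound=1.0, noise=1.0, dt=1e-3, max_t=1.0):
    """Simulate one drift diffusion trial; return whether the upper (correct) bound was hit."""
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return x >= bound

# Longer stimulus durations allow longer evidence accumulation, so the proportion of
# trials reaching the correct bound rises, qualitatively mirroring the duration effect
# on thresholds described in the abstract.
for max_t in (0.2, 0.5, 1.0):
    hits = [ddm_trial(drift=1.5, max_t=max_t) for _ in range(500)]
    print(f"duration {max_t:.1f} s: P(correct) ~ {np.mean(hits):.2f}")
```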
Affiliation(s)
- Chase A Mackey
- Neuroscience Graduate Program, Vanderbilt University, Nashville, Tennessee, United States
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, United States
- Samantha Hauser
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, United States
- Adriana M Schoenhaut
- Neuroscience Graduate Program, Vanderbilt University, Nashville, Tennessee, United States
- Namrata Temghare
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, United States
- Ramnarayan Ramachandran
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, United States
4. Quass GL, Rogalla MM, Ford AN, Apostolides PF. Mixed Representations of Sound and Action in the Auditory Midbrain. J Neurosci 2024; 44:e1831232024. PMID: 38918064; PMCID: PMC11270520; DOI: 10.1523/jneurosci.1831-23.2024.
Abstract
Linking sensory input and its consequences is a fundamental brain operation. During behavior, the neural activity of neocortical and limbic systems often reflects dynamic combinations of sensory and task-dependent variables, and these "mixed representations" are suggested to be important for perception, learning, and plasticity. However, the extent to which such integrative computations might occur outside of the forebrain is less clear. Here, we conduct cellular-resolution two-photon Ca2+ imaging in the superficial "shell" layers of the inferior colliculus (IC), as head-fixed mice of either sex perform a reward-based psychometric auditory task. We find that the activity of individual shell IC neurons jointly reflects auditory cues, mice's actions, and behavioral trial outcomes, such that trajectories of neural population activity diverge depending on mice's behavioral choice. Consequently, simple classifier models trained on shell IC neuron activity can predict trial-by-trial outcomes, even when training data are restricted to neural activity occurring prior to mice's instrumental actions. Thus, in behaving mice, auditory midbrain neurons transmit a population code that reflects a joint representation of sound, actions, and task-dependent variables.
Affiliation(s)
- Gunnar L Quass
- Department of Otolaryngology-Head & Neck Surgery, Kresge Hearing Research Institute, University of Michigan Medical School, Ann Arbor, Michigan 48109
- Meike M Rogalla
- Department of Otolaryngology-Head & Neck Surgery, Kresge Hearing Research Institute, University of Michigan Medical School, Ann Arbor, Michigan 48109
- Alexander N Ford
- Department of Otolaryngology-Head & Neck Surgery, Kresge Hearing Research Institute, University of Michigan Medical School, Ann Arbor, Michigan 48109
- Pierre F Apostolides
- Department of Otolaryngology-Head & Neck Surgery, Kresge Hearing Research Institute, University of Michigan Medical School, Ann Arbor, Michigan 48109
- Department of Molecular and Integrative Physiology, University of Michigan Medical School, Ann Arbor, Michigan 48109
5. Ying R, Stolzberg DJ, Caras ML. Neural correlates of flexible sound perception in the auditory midbrain and thalamus. bioRxiv [Preprint] 2024:2024.04.12.589266. PMID: 38645241; PMCID: PMC11030403; DOI: 10.1101/2024.04.12.589266.
Abstract
Hearing is an active process in which listeners must detect and identify sounds, segregate and discriminate stimulus features, and extract their behavioral relevance. Adaptive changes in sound detection can emerge rapidly, during sudden shifts in acoustic or environmental context, or more slowly as a result of practice. Although we know that context- and learning-dependent changes in the spectral and temporal sensitivity of auditory cortical neurons support many aspects of flexible listening, the contribution of subcortical auditory regions to this process is less understood. Here, we recorded single- and multi-unit activity from the central nucleus of the inferior colliculus (ICC) and the ventral subdivision of the medial geniculate nucleus (MGV) of Mongolian gerbils under two different behavioral contexts: as animals performed an amplitude modulation (AM) detection task and as they were passively exposed to AM sounds. Using a signal detection framework to estimate neurometric sensitivity, we found that neural thresholds in both regions improved during task performance, and this improvement was driven by changes in firing rate rather than phase locking. We also found that ICC and MGV neurometric thresholds improved and correlated with behavioral performance as animals learned to detect small AM depths during a multi-day perceptual training paradigm. Finally, we revealed that in the MGV, but not the ICC, context-dependent enhancements in AM sensitivity grew stronger during perceptual training, mirroring prior observations in the auditory cortex. Together, our results suggest that the auditory midbrain and thalamus contribute to flexible sound processing and perception over rapid and slow timescales.
Affiliation(s)
- Rose Ying
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland, 20742
- Department of Biology, University of Maryland, College Park, Maryland, 20742
- Center for Comparative and Evolutionary Biology of Hearing, University of Maryland, College Park, Maryland, 20742
- Daniel J. Stolzberg
- Department of Biology, University of Maryland, College Park, Maryland, 20742
- Melissa L. Caras
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland, 20742
- Department of Biology, University of Maryland, College Park, Maryland, 20742
- Center for Comparative and Evolutionary Biology of Hearing, University of Maryland, College Park, Maryland, 20742
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland, 20742
6. Ford AN, Czarny JE, Rogalla MM, Quass GL, Apostolides PF. Auditory Corticofugal Neurons Transmit Auditory and Non-auditory Information During Behavior. J Neurosci 2024; 44:e1190232023. PMID: 38123993; PMCID: PMC10869159; DOI: 10.1523/jneurosci.1190-23.2023.
Abstract
Layer 5 pyramidal neurons of sensory cortices project "corticofugal" axons to myriad sub-cortical targets, thereby broadcasting high-level signals important for perception and learning. Recent studies suggest dendritic Ca2+ spikes as key biophysical mechanisms supporting corticofugal neuron function: these long-lasting events drive burst firing, thereby initiating uniquely powerful signals to modulate sub-cortical representations and trigger learning-related plasticity. However, the behavioral relevance of corticofugal dendritic spikes is poorly understood. We shed light on this issue using 2-photon Ca2+ imaging of auditory corticofugal dendrites as mice of either sex engage in a GO/NO-GO sound-discrimination task. Unexpectedly, only a minority of dendritic spikes were triggered by behaviorally relevant sounds under our conditions. Task-related dendritic activity instead mostly followed sound cue termination and co-occurred with mice's instrumental licking during the answer period of behavioral trials, irrespective of reward consumption. Temporally selective, optogenetic silencing of corticofugal neurons during the trial answer period impaired auditory discrimination learning. Thus, auditory corticofugal systems' contribution to learning and plasticity may be partially nonsensory in nature.
Affiliation(s)
- Alexander N Ford
- Department of Otolaryngology/Head and Neck Surgery, Kresge Hearing Research Institute, Ann Arbor, Michigan 48109
- Jordyn E Czarny
- Department of Otolaryngology/Head and Neck Surgery, Kresge Hearing Research Institute, Ann Arbor, Michigan 48109
- Meike M Rogalla
- Department of Otolaryngology/Head and Neck Surgery, Kresge Hearing Research Institute, Ann Arbor, Michigan 48109
- Gunnar L Quass
- Department of Otolaryngology/Head and Neck Surgery, Kresge Hearing Research Institute, Ann Arbor, Michigan 48109
- Pierre F Apostolides
- Department of Otolaryngology/Head and Neck Surgery, Kresge Hearing Research Institute, Ann Arbor, Michigan 48109
- Department of Molecular and Integrative Physiology, University of Michigan Medical School, Ann Arbor, Michigan 48109
7. Quass GL, Rogalla MM, Ford AN, Apostolides PF. Mixed representations of sound and action in the auditory midbrain. bioRxiv [Preprint] 2023:2023.09.19.558449. PMID: 37786676; PMCID: PMC10541616; DOI: 10.1101/2023.09.19.558449.
Abstract
Linking sensory input and its consequences is a fundamental brain operation. Accordingly, neural activity of neocortical and limbic systems often reflects dynamic combinations of sensory and behaviorally relevant variables, and these "mixed representations" are suggested to be important for perception, learning, and plasticity. However, the extent to which such integrative computations might occur in brain regions upstream of the forebrain is less clear. Here, we conduct cellular-resolution 2-photon Ca2+ imaging in the superficial "shell" layers of the inferior colliculus (IC), as head-fixed mice of either sex perform a reward-based psychometric auditory task. We find that the activity of individual shell IC neurons jointly reflects auditory cues and mice's actions, such that trajectories of neural population activity diverge depending on mice's behavioral choice. Consequently, simple classifier models trained on shell IC neuron activity can predict trial-by-trial outcomes, even when training data are restricted to neural activity occurring prior to mice's instrumental actions. Thus, in behaving animals, auditory midbrain neurons transmit a population code that reflects a joint representation of sound and action.
Affiliation(s)
- GL Quass
- Kresge Hearing Research Institute, Department of Otolaryngology – Head & Neck Surgery, University of Michigan Medical School, Ann Arbor, Michigan 48109, United States
- MM Rogalla
- Kresge Hearing Research Institute, Department of Otolaryngology – Head & Neck Surgery, University of Michigan Medical School, Ann Arbor, Michigan 48109, United States
- AN Ford
- Kresge Hearing Research Institute, Department of Otolaryngology – Head & Neck Surgery, University of Michigan Medical School, Ann Arbor, Michigan 48109, United States
- PF Apostolides
- Kresge Hearing Research Institute, Department of Otolaryngology – Head & Neck Surgery, University of Michigan Medical School, Ann Arbor, Michigan 48109, United States
- Department of Molecular and Integrative Physiology, University of Michigan Medical School, Ann Arbor, Michigan 48109, United States
8. Hancock KE, Delgutte B. Neural coding of dichotic pitches in auditory midbrain. J Neurophysiol 2023; 129:872-893. PMID: 36921210; PMCID: PMC10085564; DOI: 10.1152/jn.00511.2022.
Abstract
Dichotic pitches such as the Huggins pitch (HP) and the binaural edge pitch (BEP) are perceptual illusions whereby binaural noise that exhibits abrupt changes in interaural phase differences (IPDs) across frequency creates a tonelike pitch percept when presented to both ears, even though it does not produce a pitch when presented monaurally. At the perceptual and cortical levels, dichotic pitches behave as if an actual tone had been presented to the ears, yet investigations of neural correlates of dichotic pitch in single-unit responses at subcortical levels are lacking. We tested for cues to HP and BEP in the responses of binaural neurons in the auditory midbrain of anesthetized cats by varying the expected pitch frequency around each neuron's best frequency (BF). Neuronal firing rates showed specific features (peaks, troughs, or edges) when the pitch frequency crossed the BF, and the type of feature was consistent with a well-established model of binaural processing comprising frequency tuning, internal delays, and firing rates sensitive to interaural correlation. A Jeffress-like neural population model in which the behavior of individual neurons was governed by the cross-correlation model and the neurons were independently distributed along BF and best IPD predicted trends in human psychophysical HP detection but only when the model incorporated physiological BF and best IPD distributions. These results demonstrate the existence of a rate-place code for HP and BEP in the auditory midbrain and provide a firm physiological basis for models of dichotic pitches.
NEW & NOTEWORTHY Dichotic pitches are perceptual illusions created centrally through binaural interactions that offer an opportunity to test theories of pitch and binaural hearing. Here we show that binaural neurons in auditory midbrain encode the frequency of two salient types of dichotic pitches via specific features in the pattern of firing rates along the tonotopic axis. This is the first combined single-unit and modeling study of responses of auditory neurons to stimuli evoking a dichotic pitch.
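To make the binaural analysis concrete, the sketch below builds a Huggins-pitch-like stimulus (a pi interaural phase transition in a narrow band) and computes band-limited interaural correlation across best frequencies, producing a dip in correlation (a rate "feature") where BF crosses the pitch frequency. The stimulus parameters and the simple FFT band-pass stand in for the paper's full cross-correlation model with internal delays, so treat this as a conceptual approximation.

```python
import numpy as np

fs = 20000
duration = 0.5
n = int(fs * duration)
rng = np.random.default_rng(3)

# Huggins-pitch-like stimulus: identical broadband noise at both ears except for a
# narrow band around 600 Hz where the interaural phase is shifted by pi.
pitch_f, bw = 600.0, 30.0
noise = rng.normal(size=n)
spec = np.fft.rfft(noise)
freqs = np.fft.rfftfreq(n, 1 / fs)
spec_right = spec.copy()
band = np.abs(freqs - pitch_f) < bw / 2
spec_right[band] *= np.exp(1j * np.pi)        # pi phase transition -> dichotic pitch
left, right = noise, np.fft.irfft(spec_right, n)

def interaural_correlation(left_sig, right_sig, bf, bandwidth=80.0):
    """Normalized interaural correlation within a narrow band centred on bf."""
    spec_l, spec_r = np.fft.rfft(left_sig), np.fft.rfft(right_sig)
    mask = np.abs(freqs - bf) < bandwidth / 2
    l = np.fft.irfft(spec_l * mask, n)
    r = np.fft.irfft(spec_r * mask, n)
    return np.dot(l, r) / np.sqrt(np.dot(l, l) * np.dot(r, r))

# Rate-place profile: correlation-sensitive neurons with BF near the pitch frequency
# see decorrelated inputs, producing a localized feature along the tonotopic axis.
for bf in (400, 500, 600, 700, 800):
    print(bf, round(interaural_correlation(left, right, bf), 2))
```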
Affiliation(s)
- Kenneth E Hancock
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, Massachusetts, United States
- Department of Otolaryngology, Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts, United States
- Bertrand Delgutte
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, Massachusetts, United States
- Department of Otolaryngology, Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts, United States
9. Mackey CA, Dylla M, Bohlen P, Grigsby J, Hrnicek A, Mayfield J, Ramachandran R. Hierarchical differences in the encoding of sound and choice in the subcortical auditory system. J Neurophysiol 2023; 129:591-608. PMID: 36651913; PMCID: PMC9988536; DOI: 10.1152/jn.00439.2022.
Abstract
Detection of sounds is a fundamental function of the auditory system. Although studies of auditory cortex have gained substantial insight into detection performance using behaving animals, previous subcortical studies have mostly taken place under anesthesia, in passively listening animals, or have not measured performance at threshold. These limitations preclude direct comparisons between neuronal responses and behavior. To address this, we simultaneously measured auditory detection performance and single-unit activity in the inferior colliculus (IC) and cochlear nucleus (CN) in macaques. The spontaneous activity and response variability of CN neurons were higher than those observed for IC neurons. Signal detection theoretic methods revealed that the magnitude of responses of IC neurons provided more reliable estimates of psychometric threshold and slope compared with the responses of single CN neurons. However, pooling small populations of CN neurons provided reliable estimates of psychometric threshold and slope, suggesting sufficient information in CN population activity. Trial-by-trial correlations between spike count and behavioral response emerged 50-75 ms after sound onset for most IC neurons, but for few neurons in the CN. These results highlight hierarchical differences between neurometric-psychometric correlations in CN and IC and have important implications for how subcortical information could be decoded.
NEW & NOTEWORTHY The cerebral cortex is widely recognized to play a role in sensory processing and decision-making. Accounts of the neural basis of auditory perception and its dysfunction are based on this idea. However, significantly less attention has been paid to midbrain and brainstem structures in this regard. Here, we find that subcortical auditory neurons represent stimulus information sufficient for detection and predict behavioral choice on a trial-by-trial basis.
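A toy illustration of the pooling argument above: summing spike counts across independent, Poisson-like units raises an ideal observer's d' roughly with the square root of pool size, so a weak single-neuron signal can approach behavioral sensitivity once a few tens of units are pooled. The firing rates and pool sizes below are invented for the example, not the recorded CN statistics.

```python
import numpy as np

rng = np.random.default_rng(4)

def pooled_dprime(n_pool, rate_noise=15.0, rate_signal=17.0, n_trials=500):
    """d' for an observer reading the summed spike count of n_pool independent neurons."""
    noise = rng.poisson(rate_noise, size=(n_trials, n_pool)).sum(axis=1)
    signal = rng.poisson(rate_signal, size=(n_trials, n_pool)).sum(axis=1)
    pooled_sd = np.sqrt(0.5 * (noise.var() + signal.var()))
    return (signal.mean() - noise.mean()) / pooled_sd

for n_pool in (1, 5, 10, 30):
    print(f"pool of {n_pool:3d} neurons: d' ~ {pooled_dprime(n_pool):.2f}")
```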
Affiliation(s)
- Chase A Mackey
- Neuroscience Graduate Program, Vanderbilt University, Nashville, Tennessee, United States
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, United States
- Margit Dylla
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, United States
- Peter Bohlen
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, United States
- Jason Grigsby
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, United States
- Andrew Hrnicek
- Department of Neurobiology and Anatomy, Wake Forest University Health Sciences, Winston-Salem, North Carolina, United States
- Jackson Mayfield
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, United States
- Ramnarayan Ramachandran
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, United States
10. Lai J, Price CN, Bidelman GM. Brainstem speech encoding is dynamically shaped online by fluctuations in cortical α state. Neuroimage 2022; 263:119627. PMID: 36122686; PMCID: PMC10017375; DOI: 10.1016/j.neuroimage.2022.119627.
Abstract
Experimental evidence in animals demonstrates cortical neurons innervate subcortex bilaterally to tune brainstem auditory coding. Yet, the role of the descending (corticofugal) auditory system in modulating earlier sound processing in humans during speech perception remains unclear. Here, we measured EEG activity as listeners performed speech identification tasks in different noise backgrounds designed to tax perceptual and attentional processing. We hypothesized brainstem speech coding might be tied to attention and arousal states (indexed by cortical α power) that actively modulate the interplay of brainstem-cortical signal processing. When speech-evoked brainstem frequency-following responses (FFRs) were categorized according to cortical α states, we found low α FFRs in noise were weaker, correlated positively with behavioral response times, and were more "decodable" via neural classifiers. Our data provide new evidence for online corticofugal interplay in humans and establish that brainstem sensory representations are continuously yoked to (i.e., modulated by) the ebb and flow of cortical states to dynamically update perceptual processing.
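One way to sketch the alpha-state sorting described here: band-pass the cortical channel at 8-12 Hz, compute trial-wise alpha power, median-split trials into low- and high-alpha states, and average the brainstem FFR within each state. The sampling rate, filter order, and simulated data below are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 2000                        # assumed sampling rate
n_trials, n_samples = 400, fs    # one-second epochs (illustrative)
rng = np.random.default_rng(5)
eeg = rng.normal(size=(n_trials, n_samples))  # cortical channel
ffr = rng.normal(size=(n_trials, n_samples))  # brainstem (FFR) channel

# Trial-wise cortical alpha power: band-pass 8-12 Hz, then mean squared Hilbert envelope.
b, a = butter(4, [8, 12], btype="bandpass", fs=fs)
alpha_env = np.abs(hilbert(filtfilt(b, a, eeg, axis=1), axis=1))
alpha_power = (alpha_env ** 2).mean(axis=1)

# Median split into low- vs high-alpha states and average the FFR within each state.
low_state = alpha_power < np.median(alpha_power)
ffr_low = ffr[low_state].mean(axis=0)
ffr_high = ffr[~low_state].mean(axis=0)
print("FFR RMS, low vs high alpha:",
      np.sqrt((ffr_low ** 2).mean()), np.sqrt((ffr_high ** 2).mean()))
```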
Affiliation(s)
- Jesyin Lai
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA; Diagnostic Imaging Department, St. Jude Children's Research Hospital, Memphis, TN, USA.
- Caitlin N Price
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA; Department of Audiology and Speech Pathology, University of Arkansas for Medical Sciences, Little Rock, AR, USA
- Gavin M Bidelman
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA; Department of Speech, Language and Hearing Sciences, Indiana University, 2631 East Discovery Parkway, Bloomington, IN 47408, USA; Program in Neuroscience, Indiana University, 1101 E 10th St, Bloomington, IN 47405, USA.
11. Cheng FY, Xu C, Gold L, Smith S. Rapid Enhancement of Subcortical Neural Responses to Sine-Wave Speech. Front Neurosci 2022; 15:747303. PMID: 34987356; PMCID: PMC8721138; DOI: 10.3389/fnins.2021.747303.
Abstract
The efferent auditory nervous system may be a potent force in shaping how the brain responds to behaviorally significant sounds. Previous human experiments using the frequency following response (FFR) have shown efferent-induced modulation of subcortical auditory function online and over short- and long-term time scales; however, a contemporary understanding of FFR generation presents new questions about whether previous effects were constrained solely to the auditory subcortex. The present experiment used sine-wave speech (SWS), an acoustically-sparse stimulus in which dynamic pure tones represent speech formant contours, to evoke FFRSWS. Due to the higher stimulus frequencies used in SWS, this approach biased neural responses toward brainstem generators and allowed for three stimuli (/bɔ/, /bu/, and /bo/) to be used to evoke FFRSWS before and after listeners in a training group were made aware that they were hearing a degraded speech stimulus. All SWS stimuli were rapidly perceived as speech when presented with a SWS carrier phrase, and average token identification reached ceiling performance during a perceptual training phase. Compared to a control group which remained naïve throughout the experiment, training group FFRSWS amplitudes were enhanced post-training for each stimulus. Further, linear support vector machine classification of training group FFRSWS significantly improved post-training compared to the control group, indicating that training-induced neural enhancements were sufficient to bolster machine learning classification accuracy. These results suggest that the efferent auditory system may rapidly modulate auditory brainstem representation of sounds depending on their context and perception as non-speech or speech.
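A minimal sketch of the linear SVM decoding step mentioned above: cross-validated classification of which SWS token evoked each single-trial FFR epoch, where higher accuracy would indicate more separable neural representations. The simulated waveforms, token "signatures", and regularization settings are illustrative assumptions, not the study's recordings or parameters.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)

# Hypothetical FFR epochs: 30 trials per SWS token, 500 samples per epoch.
n_per_token, n_samples, fs = 30, 500, 2000
tokens = np.repeat([0, 1, 2], n_per_token)            # stands in for /bɔ/, /bu/, /bo/
ffr = rng.normal(size=(3 * n_per_token, n_samples))
t = np.arange(n_samples) / fs
for k in range(3):                                     # give each token a weak periodic signature
    ffr[tokens == k] += 0.3 * np.sin(2 * np.pi * (100 + 50 * k) * t)

# Cross-validated token classification from single-trial waveforms (chance = 1/3).
clf = make_pipeline(StandardScaler(), LinearSVC(C=0.1, max_iter=5000))
scores = cross_val_score(clf, ffr, tokens, cv=5)
print(f"classification accuracy: {scores.mean():.2f} (chance 0.33)")
```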
Affiliation(s)
- Fan-Yin Cheng
- Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, Austin, TX, United States
- Can Xu
- Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, Austin, TX, United States
- Lisa Gold
- Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, Austin, TX, United States
- Spencer Smith
- Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, Austin, TX, United States
12. O'Reilly JA. Roving oddball paradigm elicits sensory gating, frequency sensitivity, and long-latency response in common marmosets. IBRO Neurosci Rep 2021; 11:128-136. PMID: 34622244; PMCID: PMC8482433; DOI: 10.1016/j.ibneur.2021.09.003.
Abstract
Mismatch negativity (MMN) is a candidate biomarker for neuropsychiatric disease. Understanding the extent to which it reflects cognitive deviance-detection or purely sensory processes will assist practitioners in making informed clinical interpretations. This study compares the utility of deviance-detection and sensory-processing theories for describing MMN-like auditory responses of a common marmoset monkey during roving oddball stimulation. The following exploratory analyses were performed on an existing dataset: responses during the transition and repetition sequence of the roving oddball paradigm (standard -> deviant/S1 -> S2 -> S3) were compared; long-latency potentials evoked by deviant stimuli were examined using a double-epoch waveform subtraction; effects of increasing stimulus repetitions on standard and deviant responses were analyzed; and transitions between standard and deviant stimuli were divided into ascending and descending frequency changes to explore contributions of frequency-sensitivity. An enlarged auditory response to deviant stimuli was observed. This decreased exponentially with stimulus repetition, characteristic of sensory gating. A slow positive deflection occurred approximately 300–800 ms after the deviant stimulus, which is more difficult to ascribe to afferent sensory mechanisms. When split into ascending and descending frequency transitions, the resulting difference waveforms were disproportionately influenced by descending frequency deviant stimuli. This asymmetry is inconsistent with the general deviance-detection theory of MMN. These findings tentatively suggest that MMN-like responses from common marmosets are predominantly influenced by rapid sensory adaptation and frequency preference of the auditory cortex, while deviance-detection may play a role in long-latency activity.
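The repetition-dependent decrease noted above is the kind of effect that can be summarized by fitting an exponential decay to response amplitude as a function of repetition number. The sketch below does this with invented amplitude values; it is not the marmoset dataset.

```python
import numpy as np
from scipy.optimize import curve_fit

# Made-up mean response amplitudes (a.u.) over successive stimulus repetitions
# (deviant/S1, S2, S3, ...), loosely shaped like a sensory-gating profile.
repetition = np.arange(1, 9)
amplitude = np.array([5.1, 3.4, 2.6, 2.1, 1.9, 1.8, 1.7, 1.7])

def exp_decay(n, a, k, c):
    """Amplitude vs repetition number: a * exp(-k * (n - 1)) + c."""
    return a * np.exp(-k * (n - 1)) + c

params, _ = curve_fit(exp_decay, repetition, amplitude, p0=(4.0, 0.5, 1.5))
a, k, c = params
print(f"decay constant k ~ {k:.2f} per repetition; asymptote ~ {c:.2f}")
```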
Affiliation(s)
- Jamie A O'Reilly
- College of Biomedical Engineering, Rangsit University, 52/347 Muang-Ake, Phaholyothin Road, Pathumthani 12000, Thailand
13. Hernández-Pérez H, Mikiel-Hunter J, McAlpine D, Dhar S, Boothalingam S, Monaghan JJM, McMahon CM. Understanding degraded speech leads to perceptual gating of a brainstem reflex in human listeners. PLoS Biol 2021; 19:e3001439. PMID: 34669696; PMCID: PMC8559948; DOI: 10.1371/journal.pbio.3001439.
Abstract
The ability to navigate "cocktail party" situations by focusing on sounds of interest over irrelevant, background sounds is often considered in terms of cortical mechanisms. However, subcortical circuits such as the pathway underlying the medial olivocochlear (MOC) reflex modulate the activity of the inner ear itself, supporting the extraction of salient features from the auditory scene prior to any cortical processing. To understand the contribution of auditory subcortical nuclei and the cochlea in complex listening tasks, we made physiological recordings along the auditory pathway while listeners engaged in detecting non(sense) words in lists of words. Both naturally spoken speech and intrinsically noisy, vocoded speech (filtering that mimics processing by a cochlear implant, CI) significantly activated the MOC reflex, but this was not the case for speech in background noise, which more engaged midbrain and cortical resources. A model of the initial stages of auditory processing reproduced specific effects of each form of speech degradation, providing a rationale for goal-directed gating of the MOC reflex based on enhancing the representation of the energy envelope of the acoustic waveform. Our data reveal the coexistence of 2 strategies in the auditory system that may facilitate speech understanding in situations where the signal is either intrinsically degraded or masked by extrinsic acoustic energy. Whereas intrinsically degraded streams recruit the MOC reflex to improve representation of speech cues peripherally, extrinsically masked streams rely more on higher auditory centres to denoise signals.
Affiliation(s)
- Heivet Hernández-Pérez
- Department of Linguistics, The Australian Hearing Hub, Macquarie University, Sydney, Australia
- Jason Mikiel-Hunter
- Department of Linguistics, The Australian Hearing Hub, Macquarie University, Sydney, Australia
- David McAlpine
- Department of Linguistics, The Australian Hearing Hub, Macquarie University, Sydney, Australia
- Sumitrajit Dhar
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, Illinois, United States of America
- Sriram Boothalingam
- University of Wisconsin-Madison, Madison, Wisconsin, United States of America
- Jessica J. M. Monaghan
- Department of Linguistics, The Australian Hearing Hub, Macquarie University, Sydney, Australia
- National Acoustic Laboratories, Sydney, Australia
- Catherine M. McMahon
- Department of Linguistics, The Australian Hearing Hub, Macquarie University, Sydney, Australia
14. Souffi S, Nodal FR, Bajo VM, Edeline JM. When and How Does the Auditory Cortex Influence Subcortical Auditory Structures? New Insights About the Roles of Descending Cortical Projections. Front Neurosci 2021; 15:690223. PMID: 34413722; PMCID: PMC8369261; DOI: 10.3389/fnins.2021.690223.
Abstract
For decades, the corticofugal descending projections have been anatomically well described, but their functional role remains a puzzling question. In this review, we will first describe the contributions of neuronal networks in representing communication sounds in various types of degraded acoustic conditions from the cochlear nucleus to the primary and secondary auditory cortex. In such situations, the discrimination abilities of collicular and thalamic neurons are clearly better than those of cortical neurons, although the latter remain very little affected by degraded acoustic conditions. Second, we will report the functional effects resulting from activating or inactivating corticofugal projections on the functional properties of subcortical neurons. In general, modest effects have been observed in anesthetized animals and in awake, passively listening animals. In contrast, in behavioral tasks including challenging conditions, behavioral performance was severely reduced by removing or transiently silencing the corticofugal descending projections. This suggests that the discriminative abilities of subcortical neurons may be sufficient in many acoustic situations. It is only in particularly challenging situations, due to task difficulty and/or degraded acoustic conditions, that the corticofugal descending connections provide additional benefits. Here, we propose that it is both the top-down influences from the prefrontal cortex and those from the neuromodulatory systems that allow the cortical descending projections to impact behavioral performance by reshaping the functional circuitry of subcortical structures. We aim to propose potential scenarios to explain how, and under which circumstances, these projections affect subcortical processing and behavioral responses.
Affiliation(s)
- Samira Souffi
- Department of Integrative and Computational Neurosciences, Paris-Saclay Institute of Neuroscience (NeuroPSI), UMR CNRS 9197, Paris-Saclay University, Orsay, France
- Fernando R. Nodal
- Department of Physiology, Anatomy and Genetics, Medical Sciences Division, University of Oxford, Oxford, United Kingdom
- Victoria M. Bajo
- Department of Physiology, Anatomy and Genetics, Medical Sciences Division, University of Oxford, Oxford, United Kingdom
- Jean-Marc Edeline
- Department of Integrative and Computational Neurosciences, Paris-Saclay Institute of Neuroscience (NeuroPSI), UMR CNRS 9197, Paris-Saclay University, Orsay, France
15. Robustness to Noise in the Auditory System: A Distributed and Predictable Property. eNeuro 2021; 8:ENEURO.0043-21.2021. PMID: 33632813; PMCID: PMC7986545; DOI: 10.1523/eneuro.0043-21.2021.
Abstract
Background noise strongly penalizes auditory perception of speech in humans or vocalizations in animals. Despite this, auditory neurons are still able to detect communication sounds against considerable levels of background noise. We collected neuronal recordings in the cochlear nucleus (CN), inferior colliculus (IC), auditory thalamus, and primary and secondary auditory cortex in response to vocalizations presented against either a stationary or a chorus noise in anesthetized guinea pigs at three signal-to-noise ratios (SNRs; −10, 0, and 10 dB). We provide evidence that, at each level of the auditory system, five behaviors in noise exist within a continuum, from neurons with high-fidelity representations of the signal, mostly found in the IC and thalamus, to neurons with high-fidelity representations of the noise, mostly found in the CN for the stationary noise and in similar proportions in each structure for the chorus noise. The two cortical areas displayed fewer robust responses than the IC and thalamus. Furthermore, between 21% and 72% of the neurons (depending on the structure) switched categories from one background noise to the other, even when the initial assignment of these neurons to a category was confirmed by a strict bootstrap procedure. Importantly, supervised learning showed that the category of a recording could be predicted with up to 70% accuracy from its responses to the signal alone and to the noise alone.