1
Johnson JS, Niwa M, O'Connor KN, Malone BJ, Sutter ML. Hierarchical emergence of opponent coding in auditory belt cortex. J Neurophysiol 2025; 133:944-964. PMID: 39963949. DOI: 10.1152/jn.00519.2024.
Abstract
We recorded from neurons in primary auditory cortex (A1) and middle-lateral belt area (ML) while rhesus macaques either discriminated amplitude-modulated noise (AM) from unmodulated noise or passively heard the same stimuli. We used several post hoc pooling models to investigate the ability of auditory cortex to leverage population coding for AM detection. We find that pooled-response AM detection is better in the active condition than the passive condition, and better using rate-based coding than synchrony-based coding. Neurons can be segregated into two classes based on whether they increase (INC) or decrease (DEC) their firing rate in response to increasing modulation depth. In these samples, A1 had relatively fewer DEC neurons (26%) than ML (45%). When responses were pooled without segregating these classes, AM detection using rate-based coding was much better in A1 than in ML, but when pooling only INC neurons, AM detection in ML approached that found in A1. Pooling only DEC neurons resulted in impaired AM detection in both areas. To investigate the role of DEC neurons, we devised two pooling methods that opposed DEC and INC neurons: a direct subtractive method and a two-pool push-pull opponent method. Only the push-pull opponent method resulted in superior AM detection relative to indiscriminate pooling. In the active condition, the opponent method was superior to pooling only INC neurons during the late portion of the response in ML. These results suggest that the increasing prevalence of the DEC response type in ML can be leveraged by appropriate methods to improve AM detection.

NEW & NOTEWORTHY: We used several post hoc pooling models to investigate the ability of primate auditory cortex to leverage population coding in the detection of amplitude-modulated sounds. When cells are indiscriminately pooled, primary auditory cortex (A1) detects amplitude-modulated sounds better than middle-lateral belt (ML).
When cells that decrease firing rate with increasing modulation depth are excluded, or used in a push-pull opponent fashion, detection is similar in the two areas, and macaque behavior can be approximated using reasonably sized pools.
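The push-pull opponent readout described above can be illustrated with a toy simulation. All numbers below (pool rates, trial counts, effect sizes) are invented for illustration and are not taken from the paper; the point is only that opposing an INC pool against a DEC pool turns cancelling rate changes into additive ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def dprime(a, b):
    """Detectability of the difference between two response distributions."""
    return abs(a.mean() - b.mean()) / np.sqrt(0.5 * (a.var() + b.var()))

# Hypothetical pooled spike counts per trial: INC cells rise with AM depth,
# DEC cells fall (values invented for illustration).
n_trials = 2000
inc_noise = rng.poisson(10, n_trials)  # INC pool, unmodulated noise
inc_am = rng.poisson(14, n_trials)     # INC pool, AM stimulus
dec_noise = rng.poisson(10, n_trials)  # DEC pool, unmodulated noise
dec_am = rng.poisson(7, n_trials)      # DEC pool, AM stimulus

# Indiscriminate pooling: the opposite-signed changes partially cancel.
d_pooled = dprime(inc_am + dec_am, inc_noise + dec_noise)

# Push-pull opponent pooling: subtracting the DEC pool makes the changes add.
d_opponent = dprime(inc_am - dec_am, inc_noise - dec_noise)
```

With these toy rates the opponent readout separates AM from noise far better than indiscriminate summation, mirroring the qualitative result reported in the abstract.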
Affiliation(s)
- Jeffrey S Johnson: Center for Neuroscience, University of California at Davis, Davis, California, United States
- Mamiko Niwa: Center for Neuroscience, University of California at Davis, Davis, California, United States
- Kevin N O'Connor: Center for Neuroscience and Department of Neurobiology, Physiology and Behavior, University of California at Davis, Davis, California, United States
- Brian J Malone: Department of Neurobiology, Physiology and Behavior, University of California at Davis, Davis, California, United States
- Mitchell L Sutter: Center for Neuroscience and Department of Neurobiology, Physiology and Behavior, University of California at Davis, Davis, California, United States
2
van den Berg MM, Busscher E, Borst JGG, Wong AB. Neuronal responses in mouse inferior colliculus correlate with behavioral detection of amplitude-modulated sound. J Neurophysiol 2023; 130:524-546. PMID: 37465872. DOI: 10.1152/jn.00048.2023.
Abstract
Amplitude modulation (AM) is a common feature of natural sounds, including speech and animal vocalizations. Here, we used operant conditioning and in vivo electrophysiology to determine the AM detection threshold of mice as well as its underlying neuronal encoding. Mice were trained in a Go-NoGo task to detect the transition to AM within a noise stimulus designed to prevent the use of spectral side-bands or a change in intensity as alternative cues. Our results indicate that mice, compared with other species, detect high modulation frequencies up to 512 Hz well, but show much poorer performance at low frequencies. Our in vivo multielectrode recordings in the inferior colliculus (IC) of both anesthetized and awake mice revealed a few single units with remarkable phase-locking ability to 512 Hz modulation, but not sufficient to explain the good behavioral detection at that frequency. Using a model of the population response that combined dimensionality reduction with threshold detection, we reproduced the general band-pass characteristics of behavioral detection based on a subset of neurons showing the largest firing rate change (both increase and decrease) in response to AM, suggesting that these neurons are instrumental in the behavioral detection of AM stimuli by the mice.

NEW & NOTEWORTHY: The amplitude of natural sounds, including speech and animal vocalizations, often shows characteristic modulations. We examined the relationship between neuronal responses in the mouse inferior colliculus and the behavioral detection of amplitude modulation (AM) in sound and modeled how the former can give rise to the latter. Our model suggests that behavioral detection can be well explained by the activity of a subset of neurons showing the largest firing rate changes in response to AM.
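A "dimensionality reduction plus threshold detection" population model of the kind described here can be sketched in a few lines. This is a minimal stand-in, not the authors' model: the pool size, tuning deltas, and noise level are invented, and PCA via SVD is used as the reduction step.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical trial-by-neuron firing-rate matrices; rates, pool size, and
# tuning are invented, not taken from the paper.
n_neurons, n_trials = 20, 200
delta = np.zeros(n_neurons)
delta[:5] = 3.0    # neurons that increase their rate during AM
delta[5:8] = -2.0  # neurons that decrease their rate during AM
baseline = rng.normal(10.0, 1.0, (n_trials, n_neurons))    # unmodulated noise
am = rng.normal(10.0, 1.0, (n_trials, n_neurons)) + delta  # AM segment

# Dimensionality reduction: project every trial onto the first principal
# component of the combined response matrix.
X = np.vstack([baseline, am])
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
proj = Xc @ vt[0]

# Threshold detection: report "AM" whenever the projection falls on the AM
# side of the midpoint between the two class means.
b_proj, a_proj = proj[:n_trials], proj[n_trials:]
thresh = 0.5 * (b_proj.mean() + a_proj.mean())
sign = 1.0 if a_proj.mean() > thresh else -1.0
accuracy = 0.5 * ((sign * (a_proj - thresh) > 0).mean()
                  + (sign * (b_proj - thresh) < 0).mean())
```

Because the first principal component aligns with the neurons showing the largest rate changes (in either direction), the thresholded projection detects AM almost perfectly in this toy setting.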
Affiliation(s)
- Maurits M van den Berg: Department of Neuroscience, Erasmus MC, University Medical Center Rotterdam, Rotterdam, The Netherlands
- Esmée Busscher: Department of Neuroscience, Erasmus MC, University Medical Center Rotterdam, Rotterdam, The Netherlands
- J Gerard G Borst: Department of Neuroscience, Erasmus MC, University Medical Center Rotterdam, Rotterdam, The Netherlands
- Aaron B Wong: Department of Neuroscience, Erasmus MC, University Medical Center Rotterdam, Rotterdam, The Netherlands
3
Bigelow J, Morrill RJ, Olsen T, Hasenstaub AR. Visual modulation of firing and spectrotemporal receptive fields in mouse auditory cortex. Curr Res Neurobiol 2022; 3:100040. PMID: 36518337. PMCID: PMC9743056. DOI: 10.1016/j.crneur.2022.100040.
Abstract
Recent studies have established significant anatomical and functional connections between visual areas and primary auditory cortex (A1), which may be important for cognitive processes such as communication and spatial perception. These studies have raised two important questions: First, which cell populations in A1 respond to visual input and/or are influenced by visual context? Second, which aspects of sound encoding are affected by visual context? To address these questions, we recorded single-unit activity across cortical layers in awake mice during exposure to auditory and visual stimuli. Neurons responsive to visual stimuli were most prevalent in the deep cortical layers and included both excitatory and inhibitory cells. The overwhelming majority of these neurons also responded to sound, indicating unimodal visual neurons are rare in A1. Other neurons for which sound-evoked responses were modulated by visual context were similarly excitatory or inhibitory but more evenly distributed across cortical layers. These modulatory influences almost exclusively affected sustained sound-evoked firing rate (FR) responses or spectrotemporal receptive fields (STRFs); transient FR changes at stimulus onset were rarely modified by visual context. Neuron populations with visually modulated STRFs and sustained FR responses were mostly non-overlapping, suggesting spectrotemporal feature selectivity and overall excitability may be differentially sensitive to visual context. The effects of visual modulation were heterogeneous, increasing and decreasing STRF gain in roughly equal proportions of neurons. Our results indicate visual influences are surprisingly common and diversely expressed throughout layers and cell types in A1, affecting nearly one in five neurons overall.
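STRFs of the kind whose gain is modulated in this study are commonly estimated by reverse correlation. The sketch below is a generic spike-triggered-average estimator run on synthetic data with a known "true" STRF, so the recovery can be checked; it is not the authors' analysis pipeline, and every parameter value is invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear-nonlinear neuron; stimulus statistics and the "true" STRF are
# invented so the estimator can be checked against a known answer.
n_freq, n_time, n_lags = 12, 5000, 8
spec = rng.normal(0.0, 1.0, (n_freq, n_time))  # z-scored spectrogram frames

true_strf = np.zeros((n_freq, n_lags))
true_strf[4, 2] = 1.0                          # one excitatory bin: band 4, lag 2

# Drive = spectrogram convolved with the STRF; spikes drawn per frame.
drive = np.zeros(n_time)
for lag in range(n_lags):
    drive[lag:] += true_strf[:, lag] @ spec[:, :n_time - lag]
p_spike = np.clip(0.1 + 0.3 * drive, 0.0, 1.0)
spikes = rng.random(n_time) < p_spike

# Spike-triggered average of the preceding spectrogram patch recovers the STRF.
sta = np.zeros((n_freq, n_lags))
spike_frames = np.flatnonzero(spikes)
spike_frames = spike_frames[spike_frames >= n_lags]
for frame in spike_frames:
    sta += spec[:, frame - n_lags + 1 : frame + 1][:, ::-1]  # column j = lag j
sta /= spike_frames.size
```

A visual-context gain change of the sort reported in the abstract would show up here as a multiplicative scaling of the recovered STRF between the two stimulus conditions.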
Affiliation(s)
- James Bigelow: Coleman Memorial Laboratory and Department of Otolaryngology–Head and Neck Surgery, University of California, San Francisco, CA 94143, USA
- Ryan J. Morrill: Coleman Memorial Laboratory, Neuroscience Graduate Program, and Department of Otolaryngology–Head and Neck Surgery, University of California, San Francisco, CA 94143, USA
- Timothy Olsen: Coleman Memorial Laboratory and Department of Otolaryngology–Head and Neck Surgery, University of California, San Francisco, CA 94143, USA
- Andrea R. Hasenstaub: Coleman Memorial Laboratory, Neuroscience Graduate Program, and Department of Otolaryngology–Head and Neck Surgery, University of California, San Francisco, CA 94143, USA
4
Kommajosyula SP, Bartlett EL, Cai R, Ling L, Caspary DM. Corticothalamic projections deliver enhanced responses to medial geniculate body as a function of the temporal reliability of the stimulus. J Physiol 2021; 599:5465-5484. PMID: 34783016. PMCID: PMC10630908. DOI: 10.1113/jp282321.
Abstract
Ageing and challenging signal-in-noise conditions are known to engage the use of cortical resources to help maintain speech understanding. Extensive corticothalamic projections are thought to provide attentional, mnemonic and cognitive-related inputs in support of sensory inferior colliculus (IC) inputs to the medial geniculate body (MGB). Here we show that a decrease in modulation depth, a temporally less distinct periodic acoustic signal, leads to a jittered ascending temporal code, changing MGB unit responses from adapting responses to responses showing repetition enhancement, posited to aid identification of important communication and environmental sounds. Young-adult male Fischer Brown Norway rats, injected with the inhibitory opsin archaerhodopsin T (ArchT) into the primary auditory cortex (A1), were subsequently studied using optetrodes to record single units in MGB. Decreasing the modulation depth of acoustic stimuli significantly increased repetition enhancement. Repetition enhancement was blocked by optical inactivation of corticothalamic terminals in MGB. These data support a role for corticothalamic projections in repetition enhancement, implying that predictive anticipation could be used to improve neural representation of weakly modulated sounds.

KEY POINTS:
- In response to a less temporally distinct repeating sound with low modulation depth, medial geniculate body (MGB) single units show a switch from adaptation towards repetition enhancement.
- Repetition enhancement was reversed by blockade of MGB inputs from the auditory cortex.
- Collectively, these data argue that diminished acoustic temporal cues such as weak modulation engage cortical processes to enhance coding of those cues in auditory thalamus.
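The stimulus manipulation at the heart of this study, varying the modulation depth of a periodic sound, is easy to make concrete. The sketch below generates sinusoidally amplitude-modulated noise at two depths; the sample rate, modulation frequency, and duration are arbitrary choices for the illustration, not the study's stimulus parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sinusoidally amplitude-modulated (SAM) noise; parameter values are
# arbitrary choices for the sketch.
fs, dur, fm = 44100, 0.5, 10.0
t = np.arange(int(fs * dur)) / fs
carrier = rng.normal(0.0, 1.0, t.size)  # broadband noise carrier

def sam(depth):
    """SAM noise; depth=0 is unmodulated, depth=1 is fully modulated."""
    return (1.0 + depth * np.sin(2 * np.pi * fm * t)) * carrier

shallow, deep = sam(0.2), sam(1.0)  # less vs. more temporally distinct envelope
```

Lowering `depth` toward zero flattens the envelope, producing the "temporally less distinct" signal that, per the abstract, shifts MGB responses from adaptation toward repetition enhancement.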
Affiliation(s)
- Srinivasa P Kommajosyula: Department of Pharmacology, Southern Illinois University School of Medicine, Springfield, IL, USA
- Edward L Bartlett: Department of Biological Sciences and the Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- Rui Cai: Department of Pharmacology, Southern Illinois University School of Medicine, Springfield, IL, USA
- Lynne Ling: Department of Pharmacology, Southern Illinois University School of Medicine, Springfield, IL, USA
- Donald M Caspary: Department of Pharmacology, Southern Illinois University School of Medicine, Springfield, IL, USA
5
Downer JD, Bigelow J, Runfeldt MJ, Malone BJ. Temporally precise population coding of dynamic sounds by auditory cortex. J Neurophysiol 2021; 126:148-169. PMID: 34077273. DOI: 10.1152/jn.00709.2020.
Abstract
Fluctuations in the amplitude envelope of complex sounds provide critical cues for hearing, particularly for speech and animal vocalizations. Responses to amplitude modulation (AM) in the ascending auditory pathway have chiefly been described for single neurons. How neural populations might collectively encode and represent information about AM remains poorly characterized, even in primary auditory cortex (A1). We modeled population responses to AM based on data recorded from A1 neurons in awake squirrel monkeys and evaluated how accurately single trial responses to modulation frequencies from 4 to 512 Hz could be decoded as functions of population size, composition, and correlation structure. We found that a population-based decoding model that simulated convergent, equally weighted inputs was highly accurate and remarkably robust to the inclusion of neurons that were individually poor decoders. By contrast, average rate codes based on convergence performed poorly; effective decoding using average rates was only possible when the responses of individual neurons were segregated, as in classical population decoding models using labeled lines. The relative effectiveness of dynamic rate coding in auditory cortex was explained by shared modulation phase preferences among cortical neurons, despite heterogeneity in rate-based modulation frequency tuning. Our results indicate significant population-based synchrony in primary auditory cortex and suggest that robust population coding of the sound envelope information present in animal vocalizations and speech can be reliably achieved even with indiscriminate pooling of cortical responses. These findings highlight the importance of firing rate dynamics in population-based sensory coding.

NEW & NOTEWORTHY: Fundamental questions remain about population coding in primary auditory cortex (A1). In particular, issues of spike timing in models of neural populations have been largely ignored.
We find that spike-timing in response to sound envelope fluctuations is highly similar across neuron populations in A1. This property of shared envelope phase preference allows for a simple population model involving unweighted convergence of neuronal responses to classify amplitude modulation frequencies with high accuracy.
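The "unweighted convergence" idea, summing spike trains across the whole population and reading out the envelope-locked structure of the pooled signal, can be sketched with synthetic phase-locked neurons. The pool size, firing rates, and candidate frequencies below are invented, and the phase-sensitive readout is a generic Fourier projection rather than the paper's decoder.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy population in which every neuron phase-locks to the stimulus envelope;
# all parameter values are invented for the sketch.
fs, dur = 1000, 1.0
t = np.arange(int(fs * dur)) / fs
freqs = (8.0, 16.0)  # candidate modulation frequencies (Hz)

def pooled_response(fm, n_neurons=30):
    """Unweighted convergence: sum spike trains across the population."""
    p = 20.0 * (1.0 + np.sin(2 * np.pi * fm * t)) / fs  # per-bin spike prob.
    return (rng.random((n_neurons, t.size)) < p).sum(axis=0)

def classify(pooled):
    """Pick the candidate with the most envelope-locked (phase-sensitive) power."""
    power = [np.abs(np.exp(2j * np.pi * f * t) @ pooled) for f in freqs]
    return freqs[int(np.argmax(power))]

acc = np.mean([classify(pooled_response(f)) == f
               for f in freqs for _ in range(20)])
```

Because the simulated neurons share a modulation phase preference, indiscriminate summation preserves the envelope timing, and single-trial classification from the pooled train is nearly perfect, echoing the abstract's central claim.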
Affiliation(s)
- Joshua D Downer: Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California
- James Bigelow: Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California
- Melissa J Runfeldt: Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California
- Brian J Malone: Department of Otolaryngology-Head and Neck Surgery and Kavli Institute for Fundamental Neuroscience, University of California, San Francisco, California
6
Yao JD, Sanes DH. Temporal Encoding is Required for Categorization, But Not Discrimination. Cereb Cortex 2021; 31:2886-2897. PMID: 33429423. DOI: 10.1093/cercor/bhaa396.
Abstract
Core auditory cortex (AC) neurons encode slow fluctuations of acoustic stimuli with temporally patterned activity. However, whether temporal encoding is necessary to explain auditory perceptual skills remains uncertain. Here, we recorded from gerbil AC neurons while they discriminated between a 4-Hz amplitude modulation (AM) broadband noise and AM rates >4 Hz. We found that a proportion of neurons possessed neural thresholds, based on spike pattern or spike count, that were better than the recorded session's behavioral threshold, suggesting that spike count could provide sufficient information for this perceptual task. A population decoder that relied on temporal information outperformed a decoder that relied on spike count alone, but the spike count decoder still remained sufficient to explain average behavioral performance. This leaves open the possibility that more demanding perceptual judgments require temporal information. Thus, we asked whether accurate classification of different AM rates between 4 and 12 Hz required the information contained in AC temporal discharge patterns. Indeed, accurate classification of these AM stimuli depended on the inclusion of temporal information rather than spike count alone. Overall, our results compare two different representations of time-varying acoustic features that can be accessed by downstream circuits required for perceptual judgments.
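The count-versus-pattern contrast in this abstract can be made concrete with a toy case where two AM rates evoke the same mean firing rate, so only spike timing distinguishes them. Both decoders below are generic stand-ins (nearest class-mean PSTH by correlation, nearest class-mean total count), and all stimulus and rate parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two AM rates with identical mean firing rate, so only spike timing carries
# the distinction; all numbers are invented for the sketch.
fs, dur = 1000, 1.0
t = np.arange(int(fs * dur)) / fs
fms = [4.0, 8.0]

def trial(fm):
    p = 50.0 * (1.0 + np.sin(2 * np.pi * fm * t)) / fs
    return (rng.random(t.size) < p).astype(float)

def make_set(n=40):
    X = np.array([trial(f) for f in fms for _ in range(n)])
    y = np.repeat(fms, n)
    return X, y

Xtr, ytr = make_set()
Xte, yte = make_set()

# Spike-pattern decoder: nearest class-mean PSTH by correlation (uses timing).
templates = {f: Xtr[ytr == f].mean(axis=0) for f in fms}
def decode_pattern(x):
    return max(fms, key=lambda f: np.corrcoef(x, templates[f])[0, 1])

# Spike-count decoder: nearest class-mean total count (timing discarded).
mean_counts = {f: Xtr[ytr == f].sum(axis=1).mean() for f in fms}
def decode_count(x):
    return min(fms, key=lambda f: abs(x.sum() - mean_counts[f]))

acc_pattern = np.mean([decode_pattern(x) == y for x, y in zip(Xte, yte)])
acc_count = np.mean([decode_count(x) == y for x, y in zip(Xte, yte)])
```

With matched mean rates the count decoder sits near chance while the pattern decoder classifies well, the situation the abstract identifies for AM-rate categorization.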
Affiliation(s)
- Justin D Yao: Center for Neural Science, New York University, New York, NY 10003, USA
- Dan H Sanes: Center for Neural Science, Department of Psychology, and Department of Biology, New York University, New York, NY 10003, USA; Neuroscience Institute, NYU Langone Medical Center, New York University, New York, NY 10016, USA
7
Bigelow J, Malone B. Extracellular voltage thresholds for maximizing information extraction in primate auditory cortex: implications for a brain computer interface. J Neural Eng 2020; 18. PMID: 32126540. DOI: 10.1088/1741-2552/ab7c19.
Abstract
OBJECTIVE: Research by Oby et al (2016) demonstrated that the optimal threshold for extracting information from visual and motor cortices may differ from the optimal threshold for identifying single neurons via spike sorting methods. The optimal threshold for extracting information from auditory cortex has yet to be identified, nor has the optimal temporal scale for representing auditory cortical activity. Here, we describe a procedure to jointly optimize the extracellular threshold and bin size with respect to the decoding accuracy achieved by a linear classifier for a diverse set of auditory stimuli.

APPROACH: We used linear multichannel arrays to record extracellular neural activity from the auditory cortex of awake squirrel monkeys passively listening to both simple and complex sounds. We executed a grid search of the coordinate space defined by the voltage threshold (in units of standard deviation) and the bin size (in units of milliseconds), and computed decoding accuracy at each point.

MAIN RESULTS: The optimal threshold for information extraction was consistently near two standard deviations below the voltage trace mean, which falls significantly below the range of three to five standard deviations typically used as inputs to spike sorting algorithms in basic research and in brain-computer interface (BCI) applications. The optimal binwidth was minimized at the optimal voltage threshold, particularly for acoustic stimuli dominated by temporally dynamic features, indicating that permissive thresholding permits readout of cortical responses with temporal precision on the order of a few milliseconds.

SIGNIFICANCE: The improvements in decoding accuracy we observed for optimal readout parameters suggest that standard thresholding methods substantially underestimate the information present in auditory cortical spiking patterns.
The fact that optimal thresholds were relatively low indicates that local populations of cortical neurons exhibit high temporal coherence that could be leveraged in service of future auditory BCI applications.
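The joint threshold-by-binwidth grid search described in the APPROACH section has a simple skeleton. The sketch below runs it on synthetic voltage traces (unit Gaussian noise plus spike deflections) with a nearest-class-mean decoder; the spike amplitude, rates, and grid values are all invented, and the decoder is a generic stand-in for the paper's linear classifier.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy extracellular traces: unit Gaussian noise plus downward spike
# deflections; all parameter values are invented for the sketch.
fs, dur = 10000, 0.5
n_samp = int(fs * dur)

def voltage_trace(rate_hz, amp=4.0):
    v = rng.normal(0.0, 1.0, n_samp)
    v[rng.random(n_samp) < rate_hz / fs] -= amp
    return v

def features(v, thresh_sd, bin_ms):
    """Per-bin counts of negative threshold crossings (thresh_sd is negative)."""
    crossings = v < thresh_sd * v.std()
    bin_samp = int(fs * bin_ms / 1000)
    n_bins = n_samp // bin_samp
    return crossings[:n_bins * bin_samp].reshape(n_bins, bin_samp).sum(axis=1)

def accuracy(thresh_sd, bin_ms, n=30):
    """Nearest-class-mean decoding of two stimuli that differ in driven rate."""
    A = np.array([features(voltage_trace(50.0), thresh_sd, bin_ms) for _ in range(n)])
    B = np.array([features(voltage_trace(150.0), thresh_sd, bin_ms) for _ in range(n)])
    mA, mB = A.mean(axis=0), B.mean(axis=0)
    hits = [np.linalg.norm(x - mA) < np.linalg.norm(x - mB) for x in A]
    hits += [np.linalg.norm(x - mB) < np.linalg.norm(x - mA) for x in B]
    return float(np.mean(hits))

# Joint grid search over extraction threshold (in SDs) and bin size (ms).
grid = {(th, bw): accuracy(th, bw)
        for th in (-1.5, -2.0, -3.0, -4.0) for bw in (5, 20)}
best_thresh, best_bin = max(grid, key=grid.get)
```

In the real procedure the accuracy landscape over this grid, rather than spike-sorting quality, determines the operating point, which is how the study arrives at thresholds more permissive than the conventional three-to-five SDs.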
Affiliation(s)
- James Bigelow: Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California, USA
- Brian Malone: Department of Otolaryngology-Head and Neck Surgery, 675 Nelson Rising Lane (Room 535), University of California, San Francisco, California, 94158, USA
8
Oganian Y, Chang EF. A speech envelope landmark for syllable encoding in human superior temporal gyrus. Sci Adv 2019; 5:eaay6279. PMID: 31976369. PMCID: PMC6957234. DOI: 10.1126/sciadv.aay6279.
Abstract
The most salient acoustic features in speech are the modulations in its intensity, captured by the amplitude envelope. Perceptually, the envelope is necessary for speech comprehension. Yet, the neural computations that represent the envelope and their linguistic implications are heavily debated. We used high-density intracranial recordings, while participants listened to speech, to determine how the envelope is represented in human speech cortical areas on the superior temporal gyrus (STG). We found that a well-defined zone in middle STG detects acoustic onset edges (local maxima in the envelope rate of change). Acoustic analyses demonstrated that timing of acoustic onset edges cues syllabic nucleus onsets, while their slope cues syllabic stress. Synthesized amplitude-modulated tone stimuli showed that steeper slopes elicited greater responses, confirming cortical encoding of amplitude change, not absolute amplitude. Overall, STG encoding of the timing and magnitude of acoustic onset edges underlies the perception of speech temporal structure.
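The acoustic-onset-edge landmark defined here, local maxima in the envelope's rate of change, is straightforward to compute. The sketch below applies that definition to a toy two-"syllable" envelope; the envelope shape, sample rate, and peak-selection threshold are invented for the illustration.

```python
import numpy as np

# Toy speech-like envelope: two "syllables" as Gaussian bumps; all timing and
# amplitude values are invented for the sketch.
fs = 100.0
t = np.arange(0.0, 2.0, 1.0 / fs)
env = (np.exp(-((t - 0.5) ** 2) / 0.005)
       + 0.6 * np.exp(-((t - 1.3) ** 2) / 0.005))

# Acoustic onset edges = local maxima in the envelope rate of change.
rate = np.gradient(env, 1.0 / fs)
peaks = [i for i in range(1, rate.size - 1)
         if rate[i - 1] < rate[i] >= rate[i + 1] and rate[i] > 0.5 * rate.max()]
edges = t[peaks]
```

Each detected edge lands on the rising flank of a bump, and the steeper first bump yields the larger rate-of-change peak, the two quantities (timing and slope) that the abstract links to syllable onsets and stress.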
9
Xu N, Luo L, Wang Q, Li L. Binaural unmasking of the accuracy of envelope-signal representation in rat auditory cortex but not auditory midbrain. Hear Res 2019; 377:224-233. PMID: 30991272. DOI: 10.1016/j.heares.2019.04.003.
Abstract
Accurate neural representations of acoustic signals under noisy conditions are critical for animals' survival. Detecting signal against background noise can be improved by binaural hearing particularly when an interaural-time-difference (ITD) disparity is introduced between the signal and the noise, a phenomenon known as binaural unmasking. Previous studies have mainly focused on the binaural unmasking effect on response magnitudes, and it is not clear whether binaural unmasking affects the accuracy of central representations of target acoustic signals and the relative contributions of different central auditory structures to this accuracy. Frequency following responses (FFRs), which are sustained phase-locked neural activities, can be used for measuring the accuracy of the representation of signals. Using intracranial recordings of local field potentials, this study aimed to assess whether the binaural unmasking effects include an improvement of the accuracy of neural representations of sound-envelope signals in the rat inferior colliculus (IC) and/or auditory cortex (AC). The results showed that (1) when a narrow-band noise was presented binaurally, the stimulus-response (S-R) coherence of the FFRs to the envelope (FFRenvelope) of the narrow-band noise recorded in the IC was higher than that recorded in the AC. (2) Presenting a broad-band masking noise caused a larger reduction of the S-R coherence for FFRenvelope in the IC than that in the AC. (3) Introducing an ITD disparity between the narrow-band signal noise and the broad-band masking noise did not affect the IC S-R coherence, but enhanced both the AC S-R coherence and the coherence between the IC FFRenvelope and AC FFRenvelope. Thus, although the accuracy of representing envelope signals in the AC is lower than that in the IC, it can be binaurally unmasked, indicating a binaural-unmasking mechanism that is formed during the signal transmission from the IC to the AC.
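The study's accuracy metric, stimulus-response (S-R) coherence between the stimulus envelope and the recorded FFR, can be sketched with a standard segment-averaged (Welch-style) coherence estimate. The "FFR" below is a toy response (attenuated envelope plus additive noise), and all signal parameters are invented; the point is only that added masking noise lowers the coherence at the envelope frequency.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy frequency-following response (FFR): attenuated copy of the stimulus
# envelope plus additive "neural" noise; all values are invented.
fs, dur, fm = 1000, 10.0, 40.0
t = np.arange(int(fs * dur)) / fs
stim_env = np.sin(2 * np.pi * fm * t)

def welch_coherence(x, y, nperseg=1024):
    """Magnitude-squared coherence from segment-averaged spectra."""
    nseg = x.size // nperseg
    X = np.fft.rfft(x[:nseg * nperseg].reshape(nseg, nperseg), axis=1)
    Y = np.fft.rfft(y[:nseg * nperseg].reshape(nseg, nperseg), axis=1)
    sxy = (np.conj(X) * Y).mean(axis=0)
    sxx = (np.abs(X) ** 2).mean(axis=0)
    syy = (np.abs(Y) ** 2).mean(axis=0)
    return np.fft.rfftfreq(nperseg, 1.0 / fs), np.abs(sxy) ** 2 / (sxx * syy + 1e-20)

def sr_coherence(noise_sd):
    """S-R coherence between the stimulus envelope and a noisy response."""
    resp = 0.5 * stim_env + rng.normal(0.0, noise_sd, t.size)
    f, coh = welch_coherence(stim_env, resp)
    return coh[np.argmin(np.abs(f - fm))]

unmasked = sr_coherence(0.5)  # little masking noise
masked = sr_coherence(2.0)    # strong masking noise degrades the representation
```

In the study, binaural unmasking corresponds to the ITD disparity restoring part of this lost coherence in AC but not in IC.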
Affiliation(s)
- Na Xu: School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, 100080, China
- Lu Luo: School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, 100080, China
- Qian Wang: School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, 100080, China; Beijing Key Laboratory of Epilepsy, Epilepsy Center, Department of Functional Neurosurgery, Sanbo Brain Hospital, Capital Medical University, Beijing, 100093, China
- Liang Li: School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, 100080, China; Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing, 100871, China; Beijing Institute for Brain Disorders, Beijing, 100096, China
10
Metzen MG, Huang CG, Chacron MJ. Descending pathways generate perception of and neural responses to weak sensory input. PLoS Biol 2018; 16:e2005239. PMID: 29939982. PMCID: PMC6040869. DOI: 10.1371/journal.pbio.2005239.
Abstract
Natural sensory stimuli frequently consist of a fast time-varying waveform whose amplitude or contrast varies more slowly. While changes in contrast carry behaviorally relevant information necessary for sensory perception, their processing by the brain remains poorly understood to this day. Here, we investigated the mechanisms that enable neural responses to and perception of low-contrast stimuli in the electrosensory system of the weakly electric fish Apteronotus leptorhynchus. We found that fish reliably detected such stimuli via robust behavioral responses. Recordings from peripheral electrosensory neurons revealed stimulus-induced changes in firing activity (i.e., phase locking) but not in their overall firing rate. However, central electrosensory neurons receiving input from the periphery responded robustly via both phase locking and increases in firing rate. Pharmacological inactivation of feedback input onto central electrosensory neurons eliminated increases in firing rate but did not affect phase locking for central electrosensory neurons in response to low-contrast stimuli. As feedback inactivation eliminated behavioral responses to these stimuli as well, our results show that it is changes in central electrosensory neuron firing rate that are relevant for behavior, rather than phase locking. Finally, recordings from neurons projecting directly via feedback to central electrosensory neurons revealed that they provide the necessary input to cause increases in firing rate. Our results thus provide the first experimental evidence that feedback generates both neural and behavioral responses to low-contrast stimuli that are commonly found in the natural environment.

Feedback input from more central to more peripheral brain areas is found ubiquitously in the central nervous system of vertebrates.
In this study, we used a combination of electrophysiological, behavioral, and pharmacological approaches to reveal a novel function for feedback pathways in generating neural and behavioral responses to weak sensory input in the weakly electric fish. We first determined that weak sensory input gives rise to responses that are phase locked in both peripheral sensory neurons and in the central neurons that are their downstream targets. However, central neurons also responded to weak sensory inputs that were not relayed via a feedforward input from the periphery, because complete inactivation of the feedback pathway abolished increases in firing rate but not the phase locking in response to weak sensory input. Because such inactivation also abolished the behavioral responses, our results show that the increases in firing rate in central neurons, and not the phase locking, are decoded downstream to give rise to perception. Finally, we discovered that the neurons providing feedback input were also activated by weak sensory input, thereby offering further evidence that feedback is necessary to elicit increases in firing rate that are needed for perception.
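The study's central distinction, phase locking without a firing-rate change, rests on a standard metric: vector strength, the resultant length of spike phases on the stimulus cycle. The sketch below computes it for two toy neurons with identical mean rates but different locking; the rate, frequency, and von Mises concentration values are invented.

```python
import numpy as np

rng = np.random.default_rng(8)

# Phase locking vs. firing rate: two toy neurons fire at the same mean rate,
# but only one locks to the stimulus cycle. Parameter values are invented.
fm, dur = 20.0, 5.0  # stimulus (contrast) frequency in Hz, duration in s

def spike_times(rate_hz, kappa):
    """Spike times with von Mises phase locking (kappa=0 -> no locking)."""
    n = rng.poisson(rate_hz * dur)
    phase = rng.vonmises(0.0, kappa, n)  # radians, preferred phase at 0
    cycle = rng.integers(0, int(fm * dur), n)
    return (cycle + (phase + np.pi) / (2 * np.pi)) / fm

def vector_strength(ts):
    """Resultant length of spike phases; 1 = perfect locking, ~0 = none."""
    return np.abs(np.mean(np.exp(2j * np.pi * fm * ts)))

locked = vector_strength(spike_times(30.0, kappa=5.0))
unlocked = vector_strength(spike_times(30.0, kappa=0.0))
```

Both toy neurons average 30 spikes/s, so a rate readout cannot tell them apart, while vector strength separates them cleanly; per the abstract, it is nonetheless the rate change (generated by feedback) that is decoded for behavior.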
Affiliation(s)
- Michael G. Metzen: Department of Physiology, McGill University, Montreal, Quebec, Canada
- Chengjie G. Huang: Department of Physiology, McGill University, Montreal, Quebec, Canada
- Maurice J. Chacron: Department of Physiology, McGill University, Montreal, Quebec, Canada
11
Hoglen NEG, Larimer P, Phillips EAK, Malone BJ, Hasenstaub AR. Amplitude modulation coding in awake mice and squirrel monkeys. J Neurophysiol 2018; 119:1753-1766. PMID: 29364073. DOI: 10.1152/jn.00101.2017.
Abstract
Both mice and primates are used to model the human auditory system. The primate order possesses unique cortical specializations that govern auditory processing. Given the power of molecular and genetic tools available in the mouse model, it is essential to understand the similarities and differences in auditory cortical processing between mice and primates. To address this issue, we directly compared temporal encoding properties of neurons in the auditory cortex of awake mice and awake squirrel monkeys (SQMs). Stimuli were drawn from a sinusoidal amplitude modulation (SAM) paradigm, which has been used previously both to characterize temporal precision and to model the envelopes of natural sounds. Neural responses were analyzed with linear template-based decoders. In both species, spike timing information supported better modulation frequency discrimination than rate information, and multiunit responses generally supported more accurate discrimination than single-unit responses from the same site. However, cortical responses in SQMs supported better discrimination overall, reflecting superior temporal precision and greater rate modulation relative to the spontaneous baseline and suggesting that spiking activity in mouse cortex was less strictly regimented by incoming acoustic information. The quantitative differences we observed between SQM and mouse cortex support the idea that SQMs offer advantages for modeling precise responses to fast envelope dynamics relevant to human auditory processing. Nevertheless, our results indicate that cortical temporal processing is qualitatively similar in mice and SQMs and thus recommend the mouse model for mechanistic questions, such as development and circuit function, where its substantial methodological advantages can be exploited.

NEW & NOTEWORTHY: To understand the advantages of different model organisms, it is necessary to directly compare sensory responses across species.
Contrasting temporal processing in auditory cortex of awake squirrel monkeys and mice, with parametrically matched amplitude-modulated tone stimuli, reveals a similar role of timing information in stimulus encoding. However, disparities in response precision and strength suggest that anatomical and biophysical differences between squirrel monkeys and mice produce quantitative but not qualitative differences in processing strategy.
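The linear template-based decoding described in this abstract can be illustrated with a minimal sketch; this is a hypothetical reconstruction, not the authors' code, and the binning conventions, toy data, and names are invented for illustration. Each single-trial response is classified by correlating its binned spike counts against per-stimulus templates (mean responses):

```python
import numpy as np

def template_decode(train_trials, test_trial):
    """Classify a binned spike-count vector by the most correlated class template.

    train_trials: dict mapping modulation frequency -> array (n_trials, n_bins)
                  of binned spike counts. test_trial: array (n_bins,).
    Returns the modulation frequency whose mean response best matches."""
    best_mf, best_r = None, -np.inf
    for mf, trials in train_trials.items():
        template = trials.mean(axis=0)              # mean response = template
        r = np.corrcoef(template, test_trial)[0, 1]
        if r > best_r:
            best_mf, best_r = mf, r
    return best_mf

# Toy example: two modulation frequencies with distinct temporal patterns.
rng = np.random.default_rng(0)
t = np.arange(50)
resp_8hz = np.maximum(0, np.sin(2 * np.pi * 8 * t / 50)) * 5
resp_16hz = np.maximum(0, np.sin(2 * np.pi * 16 * t / 50)) * 5
train = {8: rng.poisson(resp_8hz, size=(20, 50)),
         16: rng.poisson(resp_16hz, size=(20, 50))}
decoded = template_decode(train, rng.poisson(resp_8hz))
```

Because the template preserves the temporal pattern of the response, this decoder exploits spike timing information; collapsing each trial to a single total count would yield the rate-only comparison.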
Collapse
Affiliation(s)
- Nerissa E G Hoglen
- Center for Integrative Neuroscience, University of California, San Francisco, California; Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California; Coleman Memorial Laboratory, University of California, San Francisco, California; Kavli Institute for Fundamental Neuroscience, University of California, San Francisco, California; Department of Psychiatry, University of California, San Francisco, California; Neuroscience Graduate Program, University of California, San Francisco, California
| | - Phillip Larimer
- Center for Integrative Neuroscience, University of California, San Francisco, California; Coleman Memorial Laboratory, University of California, San Francisco, California; Department of Neurology, University of California, San Francisco, California
| | - Elizabeth A K Phillips
- Center for Integrative Neuroscience, University of California, San Francisco, California; Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California; Coleman Memorial Laboratory, University of California, San Francisco, California; Neuroscience Graduate Program, University of California, San Francisco, California
| | - Brian J Malone
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California; Coleman Memorial Laboratory, University of California, San Francisco, California; Kavli Institute for Fundamental Neuroscience, University of California, San Francisco, California
| | - Andrea R Hasenstaub
- Center for Integrative Neuroscience, University of California, San Francisco, California; Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California; Coleman Memorial Laboratory, University of California, San Francisco, California; Kavli Institute for Fundamental Neuroscience, University of California, San Francisco, California
| |
Collapse
|
12
|
Cluster-based analysis improves predictive validity of spike-triggered receptive field estimates. PLoS One 2017; 12:e0183914. [PMID: 28877194 PMCID: PMC5587334 DOI: 10.1371/journal.pone.0183914] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2016] [Accepted: 08/14/2017] [Indexed: 11/19/2022] Open
Abstract
Spectrotemporal receptive field (STRF) characterization is a central goal of auditory physiology. STRFs are often approximated by the spike-triggered average (STA), which reflects the average stimulus preceding a spike. In many cases, the raw STA is subjected to a threshold defined by gain values expected by chance. However, such correction methods have not been universally adopted, and the consequences of specific gain-thresholding approaches have not been investigated systematically. Here, we evaluate two classes of statistical correction techniques, using the resulting STRF estimates to predict responses to a novel validation stimulus. The first, more traditional technique eliminated STRF pixels (time-frequency bins) with gain values expected by chance. This correction method yielded significant increases in prediction accuracy, including when the threshold setting was optimized for each unit. The second technique was a two-step thresholding procedure wherein clusters of contiguous pixels surviving an initial gain threshold were then subjected to a cluster mass threshold based on summed pixel values. This approach significantly improved upon even the best gain-thresholding techniques. Additional analyses suggested that allowing threshold settings to vary independently for excitatory and inhibitory subfields of the STRF resulted in only marginal additional gains, at best. In summary, augmenting reverse correlation techniques with principled statistical correction choices increased prediction accuracy by over 80% for multi-unit STRFs and by over 40% for single-unit STRFs, furthering the interpretational relevance of the recovered spectrotemporal filters for auditory systems analysis.
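The two-step cluster-based correction described above can be sketched as follows. This is a simplified illustration, not the published pipeline: the thresholds here are free parameters, whereas in practice both would be derived from a null (e.g., spike-shuffled) STA distribution:

```python
import numpy as np
from scipy import ndimage

def cluster_corrected_sta(sta, gain_thresh, mass_thresh):
    """Keep only contiguous time-frequency clusters of STA pixels that
    (1) survive an initial gain threshold and (2) have summed absolute
    gain exceeding a cluster-mass threshold."""
    corrected = np.zeros_like(sta)
    for sign in (1, -1):                        # excitatory, then inhibitory subfields
        mask = (sign * sta) > gain_thresh
        labels, n = ndimage.label(mask)         # contiguous pixel clusters
        for k in range(1, n + 1):
            cluster = labels == k
            if np.abs(sta[cluster]).sum() >= mass_thresh:
                corrected[cluster] = sta[cluster]
    return corrected

# Toy STA: one compact, genuine subfield embedded in speckle noise.
rng = np.random.default_rng(1)
sta = rng.normal(0, 0.2, size=(16, 40))
sta[4:7, 10:14] += 2.0                          # genuine excitatory subfield
clean = cluster_corrected_sta(sta, gain_thresh=1.0, mass_thresh=5.0)
```

Isolated noise pixels rarely form contiguous clusters with large summed mass, so the second step suppresses speckle that survives the pixelwise gain threshold.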
Collapse
|
13
|
Malone BJ, Heiser MA, Beitel RE, Schreiner CE. Background noise exerts diverse effects on the cortical encoding of foreground sounds. J Neurophysiol 2017; 118:1034-1054. [PMID: 28490644 PMCID: PMC5547268 DOI: 10.1152/jn.00152.2017] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2017] [Revised: 05/05/2017] [Accepted: 05/05/2017] [Indexed: 11/22/2022] Open
Abstract
In natural listening conditions, many sounds must be detected and identified in the context of competing sound sources, which function as background noise. Traditionally, noise is thought to degrade the cortical representation of sounds by suppressing responses and increasing response variability. However, recent studies of neural network models and brain slices have shown that background synaptic noise can improve the detection of signals. Because acoustic noise affects the synaptic background activity of cortical networks, it may improve the cortical responses to signals. We used spike train decoding techniques to determine the functional effects of a continuous white noise background on the responses of clusters of neurons in auditory cortex to foreground signals, specifically frequency-modulated sweeps (FMs) of different velocities, directions, and amplitudes. Whereas the addition of noise progressively suppressed the FM responses of some cortical sites in the core fields with decreasing signal-to-noise ratios (SNRs), the stimulus representation remained robust or was even significantly enhanced at specific SNRs in many others. Even though the background noise level was typically not explicitly encoded in cortical responses, significant information about noise context could be decoded from cortical responses on the basis of how the neural representation of the foreground sweeps was affected. These findings demonstrate significant diversity in signal-in-noise processing even within the core auditory fields that could support noise-robust hearing across a wide range of listening conditions. NEW & NOTEWORTHY The ability to detect and discriminate sounds in background noise is critical for our ability to communicate. The neural basis of robust perceptual performance in noise is not well understood.
We identified neuronal populations in core auditory cortex of squirrel monkeys that differ in how they process foreground signals in background noise and that may contribute to robust signal representation and discrimination in acoustic environments with prominent background noise.
Collapse
Affiliation(s)
- B J Malone
- Coleman Memorial Laboratory, Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California
| | - Marc A Heiser
- Department of Psychiatry, Child and Adolescent Division, UCLA Semel Institute for Neuroscience and Behavior, Los Angeles, California
| | - Ralph E Beitel
- Coleman Memorial Laboratory, Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California
| | - Christoph E Schreiner
- Coleman Memorial Laboratory, Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California; Center for Integrative Neuroscience, Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California; Departments of Bioengineering & Therapeutic Sciences and Physiology, University of California, San Francisco, California
| |
Collapse
|
14
|
Downer JD, Niwa M, Sutter ML. Hierarchical differences in population coding within auditory cortex. J Neurophysiol 2017; 118:717-731. [PMID: 28446588 PMCID: PMC5539454 DOI: 10.1152/jn.00899.2016] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2016] [Revised: 04/21/2017] [Accepted: 04/21/2017] [Indexed: 01/04/2023] Open
Abstract
Most models of auditory cortical (AC) population coding have focused on primary auditory cortex (A1). Thus our understanding of how neural coding for sounds progresses along the cortical hierarchy remains obscure. To illuminate this, we recorded from two AC fields: A1 and middle lateral belt (ML) of rhesus macaques. We presented amplitude-modulated (AM) noise during both passive listening and while the animals performed an AM detection task ("active" condition). In both fields, neurons exhibit monotonic AM-depth tuning, with A1 neurons mostly exhibiting increasing rate-depth functions and ML neurons approximately evenly distributed between increasing and decreasing functions. We measured noise correlation (rnoise) between simultaneously recorded neurons and found that whereas engagement decreased average rnoise in A1, engagement increased average rnoise in ML. This finding surprised us, because attentive states are commonly reported to decrease average rnoise. We analyzed the effect of rnoise on AM coding in both A1 and ML and found that whereas engagement-related shifts in rnoise in A1 enhance AM coding, rnoise shifts in ML have little effect. These results imply that the effect of rnoise differs between sensory areas, based on the distribution of tuning properties among the neurons within each population. A possible explanation of this is that higher areas need to encode nonsensory variables (e.g., attention, choice, and motor preparation), which impart common noise, thus increasing rnoise. Therefore, the hierarchical emergence of rnoise-robust population coding (e.g., as we observed in ML) enhances the ability of sensory cortex to integrate cognitive and sensory information without a loss of sensory fidelity. NEW & NOTEWORTHY Prevailing models of population coding of sensory information are based on a limited subset of neural structures.
An important and under-explored question in neuroscience is how distinct areas of sensory cortex differ in their population coding strategies. In this study, we compared population coding between primary and secondary auditory cortex. Our findings demonstrate striking differences between the two areas and highlight the importance of considering the diversity of neural structures as we develop models of population coding.
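The noise correlation (rnoise) measure used above can be sketched in a few lines; this is a generic illustration with invented toy data, not the authors' analysis code. Stimulus-driven covariation is removed by z-scoring spike counts within each stimulus condition before correlating trial-to-trial fluctuations:

```python
import numpy as np

def noise_correlation(counts_a, counts_b, stim_ids):
    """Pearson correlation of trial-to-trial fluctuations between two
    simultaneously recorded neurons, after z-scoring spike counts
    within each stimulus condition to remove signal correlation."""
    za = np.empty(counts_a.size, float)
    zb = np.empty(counts_b.size, float)
    for s in np.unique(stim_ids):
        idx = stim_ids == s
        za[idx] = (counts_a[idx] - counts_a[idx].mean()) / counts_a[idx].std()
        zb[idx] = (counts_b[idx] - counts_b[idx].mean()) / counts_b[idx].std()
    return np.corrcoef(za, zb)[0, 1]

# Toy example: a shared trial-by-trial gain state induces positive rnoise.
rng = np.random.default_rng(2)
stim = np.repeat([0, 1, 2], 100)                # 3 AM depths, 100 trials each
gain = rng.normal(0, 1, stim.size)              # shared fluctuation (e.g., arousal)
rate = np.array([5.0, 10.0, 20.0])[stim]
a = rng.poisson(rate * np.exp(0.2 * gain))
b = rng.poisson(rate * np.exp(0.2 * gain))
r = noise_correlation(a, b, stim)
```

Shared nonsensory variables (attention, choice, motor preparation) act like the common `gain` term here, which is one way such signals can inflate rnoise in higher areas.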
Collapse
Affiliation(s)
- Joshua D Downer
- Center for Neuroscience and Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
| | - Mamiko Niwa
- Center for Neuroscience and Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
| | - Mitchell L Sutter
- Center for Neuroscience and Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
| |
Collapse
|
15
|
Abstract
The neural mechanisms that support the robust processing of acoustic signals in the presence of background noise in the auditory system remain largely unresolved. Psychophysical experiments have shown that signal detection is influenced by the signal-to-noise ratio (SNR) and the overall stimulus level, but this relationship has not been fully characterized. We evaluated the neural representation of frequency in rat primary auditory cortex by constructing tonal frequency response areas (FRAs) for different SNRs, tone levels, and noise levels. We show that response strength and selectivity for frequency and sound level depend on interactions between SNRs and tone levels. At low SNRs, jointly increasing the tone and noise levels reduced firing rates and narrowed FRA bandwidths; at higher SNRs, however, increasing the tone and noise levels increased firing rates and expanded bandwidths, as is usually seen for FRAs obtained without background noise. These changes in frequency and intensity tuning decreased tone level and tone frequency discriminability at low SNRs. By contrast, neither response onset latencies nor noise-driven steady-state firing rates meaningfully interacted with SNRs or overall sound levels. Speech detection performance in humans was also shown to depend on the interaction between overall sound level and SNR. Together, these results indicate that signal processing difficulties imposed by high noise levels are quite general and suggest that the neurophysiological changes we see for simple sounds generalize to more complex stimuli. SIGNIFICANCE STATEMENT Effective processing of sounds in background noise is an important feature of the mammalian auditory system and a necessary feature for successful hearing in many listening conditions. Even mild hearing loss strongly affects this ability in humans, seriously degrading the ability to communicate.
The mechanisms involved in achieving high performance in background noise are not well understood. We investigated the effects of SNR and overall stimulus level on the frequency tuning of neurons in rat primary auditory cortex. We found that the effects of noise on frequency selectivity are not determined solely by the SNR but depend also on the levels of the foreground tones and background noise. These observations can lead to improvement in therapeutic approaches for hearing-impaired patients.
Collapse
|
16
|
Willmore BDB, Schoppe O, King AJ, Schnupp JWH, Harper NS. Incorporating Midbrain Adaptation to Mean Sound Level Improves Models of Auditory Cortical Processing. J Neurosci 2016; 36:280-9. [PMID: 26758822 PMCID: PMC4710761 DOI: 10.1523/jneurosci.2441-15.2016] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2015] [Revised: 11/03/2015] [Accepted: 11/10/2015] [Indexed: 11/21/2022] Open
Abstract
Adaptation to stimulus statistics, such as the mean level and contrast of recently heard sounds, has been demonstrated at various levels of the auditory pathway. It allows the nervous system to operate over the wide range of intensities and contrasts found in the natural world. Yet current standard models of the response properties of auditory neurons do not incorporate such adaptation. Here we present a model of neural responses in the ferret auditory cortex (the IC Adaptation model), which takes into account adaptation to mean sound level at a lower level of processing: the inferior colliculus (IC). The model performs high-pass filtering with frequency-dependent time constants on the sound spectrogram, followed by half-wave rectification, and passes the output to a standard linear-nonlinear (LN) model. We find that the IC Adaptation model consistently predicts cortical responses better than the standard LN model for a range of synthetic and natural stimuli. The IC Adaptation model introduces no extra free parameters, so it improves predictions without sacrificing parsimony. Furthermore, the time constants of adaptation in the IC appear to be matched to the statistics of natural sounds, suggesting that neurons in the auditory midbrain predict the mean level of future sounds and adapt their responses appropriately. SIGNIFICANCE STATEMENT An ability to accurately predict how sensory neurons respond to novel stimuli is critical if we are to fully characterize their response properties. Attempts to model these responses have had a distinguished history, but it has proven difficult to improve their predictive power significantly beyond that of simple, mostly linear receptive field models. Here we show that auditory cortex receptive field models benefit from a nonlinear preprocessing stage that replicates known adaptation properties of the auditory midbrain. 
This improves their predictive power across a wide range of stimuli but keeps model complexity low as it introduces no new free parameters. Incorporating the adaptive coding properties of neurons will likely improve receptive field models in other sensory modalities too.
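The adaptation stage described above can be sketched as a per-channel high-pass filter followed by half-wave rectification. This is a schematic reading of the model, with invented time constants and toy input, not the published implementation: each frequency channel subtracts an exponential running estimate of its own mean level before the output is passed to a standard LN stage:

```python
import numpy as np

def ic_adaptation_stage(spec, time_consts, dt=0.005):
    """Mean-level adaptation: each frequency channel of the spectrogram is
    high-pass filtered by subtracting an exponential running mean with its
    own time constant, then half-wave rectified."""
    n_freq, n_time = spec.shape
    out = np.zeros_like(spec)
    for f in range(n_freq):
        alpha = dt / time_consts[f]             # per-channel smoothing factor
        mean = spec[f, 0]
        for t in range(n_time):
            mean += alpha * (spec[f, t] - mean)     # running mean-level estimate
            out[f, t] = max(spec[f, t] - mean, 0)   # high-pass + rectify
    return out

# Toy spectrogram: a sustained level step adapts away; a transient survives.
spec = np.zeros((2, 200))
spec[0, 50:] = 1.0                              # sustained step in level
spec[1, 50:55] = 1.0                            # brief transient
taus = np.array([0.2, 0.2])                     # hypothetical time constants (s)
adapted = ic_adaptation_stage(spec, taus)
```

Because the stage only redescribes the input (no fitted weights), it adds no free parameters to the downstream LN model, consistent with the parsimony point in the abstract.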
Collapse
Affiliation(s)
- Ben D B Willmore
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3PT, United Kingdom
| | - Oliver Schoppe
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3PT, United Kingdom; Bio-Inspired Information Processing, Technische Universität München, 85748 Garching, Germany
| | - Andrew J King
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3PT, United Kingdom
| | - Jan W H Schnupp
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3PT, United Kingdom
| | - Nicol S Harper
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3PT, United Kingdom
| |
Collapse
|
17
|
Abstract
Vertebrate audition is a dynamic process, capable of exhibiting both short- and long-term adaptations to varying listening conditions. Precise spike timing has long been known to play an important role in auditory encoding, but its role in sensory plasticity remains largely unexplored. We addressed this issue in Gambel's white-crowned sparrow (Zonotrichia leucophrys gambelii), a songbird that shows pronounced seasonal fluctuations in circulating levels of sex-steroid hormones, which are known to be potent neuromodulators of auditory function. We recorded extracellular single-unit activity in the auditory forebrain of males and females under different breeding conditions and used a computational approach to explore two potential strategies for the neural discrimination of sound level: one based on spike counts and one based on spike timing reliability. We report that breeding condition has robust sex-specific effects on spike timing. Specifically, in females, breeding condition increases the proportion of cells that rely solely on spike timing information and increases the temporal resolution required for optimal intensity encoding. Furthermore, in a functionally distinct subset of cells that are particularly well suited for amplitude encoding, female breeding condition enhances spike timing-based discrimination accuracy. No effects of breeding condition were observed in males. Our results suggest that high-resolution temporal discharge patterns may provide a plastic neural substrate for sensory coding.
Collapse
|
18
|
Abstract
Amplitude modulations are fundamental features of natural signals, including human speech and nonhuman primate vocalizations. Because natural signals frequently occur in the context of other competing signals, we used a forward-masking paradigm to investigate how the modulation context of a prior signal affects cortical responses to subsequent modulated sounds. Psychophysical "modulation masking," in which the presentation of a modulated "masker" signal elevates the threshold for detecting the modulation of a subsequent stimulus, has been interpreted as evidence of a central modulation filterbank and modeled accordingly. Whether cortical modulation tuning is compatible with such models remains unknown. By recording responses to pairs of sinusoidally amplitude modulated (SAM) tones in the auditory cortex of awake squirrel monkeys, we show that the prior presentation of the SAM masker elicited persistent and tuned suppression of the firing rate to subsequent SAM signals. Population averages of these effects are compatible with adaptation in broadly tuned modulation channels. In contrast, modulation context had little effect on the synchrony of the cortical representation of the second SAM stimuli, and the tuning of such effects did not match that observed for firing rate. Our results suggest that, although the temporal representation of modulated signals is more robust to changes in stimulus context than representations based on average firing rate, this representation is not fully exploited: psychophysical modulation masking more closely mirrors physiological rate suppression, and rate tuning for a given stimulus feature in a given neuron's signal pathway appears sufficient to engender context-sensitive cortical adaptation.
Collapse
|
19
|
Bibikov NG. Some features of the sound-signal envelope extracted by cochlear nucleus neurons in grass frog. Biophysics (Nagoya-shi) 2015. [DOI: 10.1134/s0006350915030045] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
|
20
|
Malone BJ, Scott BH, Semple MN. Diverse cortical codes for scene segmentation in primate auditory cortex. J Neurophysiol 2015; 113:2934-52. [PMID: 25695655 DOI: 10.1152/jn.01054.2014] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2014] [Accepted: 02/04/2015] [Indexed: 11/22/2022] Open
Abstract
The temporal coherence of amplitude fluctuations is a critical cue for segmentation of complex auditory scenes. The auditory system must accurately demarcate the onsets and offsets of acoustic signals. We explored how and how well the timing of onsets and offsets of gated tones is encoded by auditory cortical neurons in awake rhesus macaques. Temporal features of this representation were isolated by presenting otherwise identical pure tones of differing durations. Cortical response patterns were diverse, including selective encoding of onset and offset transients, tonic firing, and sustained suppression. Spike train classification methods revealed that many neurons robustly encoded tone duration despite substantial diversity in the encoding process. Excellent discrimination performance was achieved by neurons whose responses were primarily phasic at tone offset and by those that responded robustly while the tone persisted. Although diverse cortical response patterns converged on effective duration discrimination, this diversity significantly constrained the utility of decoding models referenced to a spiking pattern averaged across all responses or averaged within the same response category. Using maximum likelihood-based decoding models, we demonstrated that the spike train recorded in a single trial could support direct estimation of stimulus onset and offset. Comparisons between different decoding models established the substantial contribution of bursts of activity at sound onset and offset to demarcating the temporal boundaries of gated tones. Our results indicate that relatively few neurons suffice to provide temporally precise estimates of such auditory "edges," particularly for models that assume and exploit the heterogeneity of neural responses in awake cortex.
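The maximum likelihood-based decoding mentioned above can be sketched under a standard Poisson assumption; the response profiles, baseline rate, and names below are hypothetical, and the published models need not match this form. A single-trial binned spike train is assigned to whichever class-conditional rate profile (e.g., derived from PSTHs) maximizes its Poisson log-likelihood:

```python
import numpy as np

def ml_decode(spike_counts, rate_models):
    """Maximum-likelihood classification of a single-trial spike train.
    rate_models: dict label -> expected spike counts per bin.
    Returns the label maximizing the Poisson log-likelihood of the trial."""
    def loglik(counts, rates):
        r = np.clip(rates, 1e-6, None)          # guard against log(0)
        return np.sum(counts * np.log(r) - r)   # Poisson LL up to a constant
    return max(rate_models, key=lambda lab: loglik(spike_counts, rate_models[lab]))

# Toy example: decode tone duration from onset/offset-burst response profiles.
bins = np.arange(100)
short = 3.0 * ((bins < 5) | ((bins >= 40) & (bins < 45)))   # offset burst at bin 40
long_ = 3.0 * ((bins < 5) | ((bins >= 80) & (bins < 85)))   # offset burst at bin 80
models = {"short": short + 0.1, "long": long_ + 0.1}        # 0.1 = baseline rate
rng = np.random.default_rng(3)
decoded = ml_decode(rng.poisson(models["long"]), models)
```

The shared onset burst carries no duration information; discrimination is driven by the timing of the offset burst, mirroring the contribution of onset/offset transients described in the abstract.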
Collapse
Affiliation(s)
- Brian J Malone
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California
| | - Brian H Scott
- Laboratory of Neuropsychology, National Institute of Mental Health/National Institutes of Health, Bethesda, Maryland
| | - Malcolm N Semple
- Center for Neural Science at New York University, New York, New York
| |
Collapse
|
21
|
|
22
|
Schreiner CE, Malone BJ. Representation of loudness in the auditory cortex. HANDBOOK OF CLINICAL NEUROLOGY 2015; 129:73-84. [PMID: 25726263 DOI: 10.1016/b978-0-444-62630-1.00004-4] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/05/2022]
Abstract
Changes in stimulus intensity are reflected in changes in the fundamental perceptual attribute of loudness. Stimulus intensity changes also profoundly impact the evoked neural responses throughout the auditory system. A fundamental question is how measurements of neural activity, from the single-neuron level to mass-activity metrics such as functional magnetic resonance imaging or magnetoencephalography, reflect the physical properties of stimulus intensity as opposed to perceived loudness. In this chapter we discuss findings from psychophysics and animal neurophysiology as well as human brain activity measurements to clarify our current understanding of the neural mechanisms that contribute to the perceptual correlate of stimulus intensity.
Collapse
Affiliation(s)
- Christoph E Schreiner
- Center for Integrative Neuroscience and Coleman Memorial Laboratory, Department of Otolaryngology - Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA.
| | - Brian J Malone
- Center for Integrative Neuroscience and Coleman Memorial Laboratory, Department of Otolaryngology - Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
| |
Collapse
|
23
|
Tabansky I, Quinkert AW, Rahman N, Muller SZ, Lofgren J, Rudling J, Goodman A, Wang Y, Pfaff DW. Temporally-patterned deep brain stimulation in a mouse model of multiple traumatic brain injury. Behav Brain Res 2014; 273:123-32. [PMID: 25072520 DOI: 10.1016/j.bbr.2014.07.026] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2014] [Revised: 07/16/2014] [Accepted: 07/19/2014] [Indexed: 10/25/2022]
Abstract
We report that mice with closed-head multiple traumatic brain injury (TBI) show a decrease in the motoric aspects of generalized arousal, as measured by automated, quantitative behavioral assays. Further, we found that temporally-patterned deep brain stimulation (DBS) can increase generalized arousal and spontaneous motor activity in this mouse model of TBI. This arousal increase is input-pattern-dependent, as changing the temporal pattern of DBS can modulate its effect on motor activity. Finally, an extensive examination of mouse behavioral capacities, looking for deficits in this model of TBI, suggest that the strongest effects of TBI in this model are found in the initiation of any kind of movement.
Collapse
Affiliation(s)
- Inna Tabansky
- Laboratory of Neurobiology and Behavior, Rockefeller University, 1230 York Ave, Box 275, New York, NY 10065, United States.
| | - Amy Wells Quinkert
- Laboratory of Neurobiology and Behavior, Rockefeller University, 1230 York Ave, Box 275, New York, NY 10065, United States
| | - Nadera Rahman
- Laboratory of Neurobiology and Behavior, Rockefeller University, 1230 York Ave, Box 275, New York, NY 10065, United States
| | - Salomon Zev Muller
- Laboratory of Neurobiology and Behavior, Rockefeller University, 1230 York Ave, Box 275, New York, NY 10065, United States
| | - Jesper Lofgren
- Laboratory of Neurobiology and Behavior, Rockefeller University, 1230 York Ave, Box 275, New York, NY 10065, United States; Linkoping University, Faculty of Health Sciences, Hälsouniversitetet Kansliet 581 83 Linköping, Sweden
| | - Johan Rudling
- Laboratory of Neurobiology and Behavior, Rockefeller University, 1230 York Ave, Box 275, New York, NY 10065, United States; Linkoping University, Faculty of Health Sciences, Hälsouniversitetet Kansliet 581 83 Linköping, Sweden
| | - Alyssa Goodman
- Laboratory of Neurobiology and Behavior, Rockefeller University, 1230 York Ave, Box 275, New York, NY 10065, United States
| | - Yingping Wang
- Laboratory of Neurobiology and Behavior, Rockefeller University, 1230 York Ave, Box 275, New York, NY 10065, United States
| | - Donald W Pfaff
- Laboratory of Neurobiology and Behavior, Rockefeller University, 1230 York Ave, Box 275, New York, NY 10065, United States
| |
Collapse
|
24
|
Bohlen P, Dylla M, Timms C, Ramachandran R. Detection of modulated tones in modulated noise by non-human primates. J Assoc Res Otolaryngol 2014; 15:801-21. [PMID: 24899380 DOI: 10.1007/s10162-014-0467-7] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2014] [Accepted: 05/08/2014] [Indexed: 10/25/2022] Open
Abstract
In natural environments, many sounds are amplitude-modulated. Amplitude modulation is thought to be a signal that aids auditory object formation. A previous study of the detection of signals in noise found that when tones or noise were amplitude-modulated, the noise was a less effective masker, and detection thresholds for tones in noise were lowered. These results suggest that the detection of modulated signals in modulated noise would be enhanced. This paper describes the results of experiments investigating how detection is modified when both signal and noise were amplitude-modulated. Two monkeys (Macaca mulatta) were trained to detect amplitude-modulated tones in continuous, amplitude-modulated broadband noise. When the phase difference of otherwise similarly amplitude-modulated tones and noise was varied, detection thresholds were highest when the modulations were in phase and lowest when the modulations were anti-phase. When the depth of the modulation of tones or noise was varied, detection thresholds decreased if the modulations were anti-phase. When the modulations were in phase, increasing the depth of tone modulation caused an increase in tone detection thresholds, but increasing depth of noise modulations did not affect tone detection thresholds. Changing the modulation frequency of tone or noise caused changes in threshold that saturated at modulation frequencies higher than 20 Hz; thresholds increased when the tone and noise modulations were in phase and decreased when they were anti-phase. The relationship between reaction times and tone level was not modified by manipulations to the nature of temporal variations in the signal or noise. The changes in behavioral threshold were consistent with a model where the brain subtracted noise from signal. These results suggest that the parameters of the modulation of signals and maskers heavily influence detection in very predictable ways.
These results are consistent with some results in humans and avians and form the baseline for neurophysiological studies of mechanisms of detection in noise.
Collapse
Affiliation(s)
- Peter Bohlen
- Department of Hearing and Speech Sciences, Vanderbilt University School of Medicine, Nashville, TN, 37232, USA
| | | | | | | |
Collapse
|
25
|
Malone BJ, Scott BH, Semple MN. Encoding frequency contrast in primate auditory cortex. J Neurophysiol 2014; 111:2244-63. [PMID: 24598525 DOI: 10.1152/jn.00878.2013] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Changes in amplitude and frequency jointly determine much of the communicative significance of complex acoustic signals, including human speech. We have previously described responses of neurons in the core auditory cortex of awake rhesus macaques to sinusoidal amplitude modulation (SAM) signals. Here we report a complementary study of sinusoidal frequency modulation (SFM) in the same neurons. Responses to SFM were analogous to SAM responses in that changes in multiple parameters defining SFM stimuli (e.g., modulation frequency, modulation depth, carrier frequency) were robustly encoded in the temporal dynamics of the spike trains. For example, changes in the carrier frequency produced highly reproducible changes in shapes of the modulation period histogram, consistent with the notion that the instantaneous probability of discharge mirrors the moment-by-moment spectrum at low modulation rates. The upper limit for phase locking was similar across SAM and SFM within neurons, suggesting shared biophysical constraints on temporal processing. Using spike train classification methods, we found that neural thresholds for modulation depth discrimination are typically far lower than would be predicted from frequency tuning to static tones. This "dynamic hyperacuity" suggests a substantial central enhancement of the neural representation of frequency changes relative to the auditory periphery. Spike timing information was superior to average rate information when discriminating among SFM signals, and even when discriminating among static tones varying in frequency. This finding held even when differences in total spike count across stimuli were normalized, indicating both the primacy and generality of temporal response dynamics in cortical auditory processing.
Affiliation(s)
- Brian J Malone
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California
- Brian H Scott
- Laboratory of Neuropsychology, National Institute of Mental Health/National Institutes of Health, Bethesda, Maryland
- Malcolm N Semple
- Center for Neural Science, New York University, New York, New York
26
Ding N, Simon JZ. Adaptive temporal encoding leads to a background-insensitive cortical representation of speech. J Neurosci 2013; 33:5728-35. [PMID: 23536086 DOI: 10.1523/jneurosci.5297-12.2013] [Citation(s) in RCA: 208] [Impact Index Per Article: 17.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Speech recognition is remarkably robust to the listening background, even when the energy of background sounds strongly overlaps with that of speech. How the brain transforms the corrupted acoustic signal into a reliable neural representation suitable for speech recognition, however, remains elusive. Here, we hypothesize that this transformation is performed at the level of auditory cortex through adaptive neural encoding, and we test the hypothesis by recording, using MEG, the neural responses of human subjects listening to a narrated story. Spectrally matched stationary noise, which has maximal acoustic overlap with the speech, is mixed in at various intensity levels. Despite the severe acoustic interference caused by this noise, it is here demonstrated that low-frequency auditory cortical activity is reliably synchronized to the slow temporal modulations of speech, even when the noise is twice as strong as the speech. Such a reliable neural representation is maintained by intensity contrast gain control and by adaptive processing of temporal modulations at different time scales, corresponding to the neural δ and θ bands. Critically, the precision of this neural synchronization predicts how well a listener can recognize speech in noise, indicating that the precision of the auditory cortical representation limits the performance of speech recognition in noise. Together, these results suggest that, in a complex listening environment, auditory cortex can selectively encode a speech stream in a background insensitive manner, and this stable neural representation of speech provides a plausible basis for background-invariant recognition of speech.
27
Perez CA, Engineer CT, Jakkamsetti V, Carraway RS, Perry MS, Kilgard MP. Different timescales for the neural coding of consonant and vowel sounds. Cereb Cortex 2013; 23:670-83. [PMID: 22426334 PMCID: PMC3563339 DOI: 10.1093/cercor/bhs045] [Citation(s) in RCA: 55] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
Psychophysical, clinical, and imaging evidence suggests that consonant and vowel sounds have distinct neural representations. This study tests the hypothesis that consonant and vowel sounds are represented on different timescales within the same population of neurons by comparing behavioral discrimination with neural discrimination based on activity recorded in rat inferior colliculus and primary auditory cortex. Performance on 9 vowel discrimination tasks was highly correlated with neural discrimination based on spike count and was not correlated when spike timing was preserved. In contrast, performance on 11 consonant discrimination tasks was highly correlated with neural discrimination when spike timing was preserved and not when spike timing was eliminated. These results suggest that in the early stages of auditory processing, spike count encodes vowel sounds and spike timing encodes consonant sounds. These distinct coding strategies likely contribute to the robust nature of speech sound representations and may help explain some aspects of developmental and acquired speech processing disorders.
Affiliation(s)
- Claudia A Perez
- Cognition and Neuroscience Program, School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, TX 75080, USA
28
Abstract
Auditory neurons are often described in terms of their spectrotemporal receptive fields (STRFs). These map the relationship between features of the sound spectrogram and firing rates of neurons. Recently, we showed that neurons in the primary fields of the ferret auditory cortex are also subject to gain control: when sounds undergo smaller fluctuations in their level over time, the neurons become more sensitive to small-level changes (Rabinowitz et al., 2011). Just as STRFs measure the spectrotemporal features of a sound that lead to changes in the firing rates of neurons, in this study, we sought to estimate the spectrotemporal regions in which sound statistics lead to changes in the gain of neurons. We designed a set of stimuli with complex contrast profiles to characterize these regions. This allowed us to estimate the STRFs of cortical neurons alongside a set of spectrotemporal contrast kernels. We find that these two sets of integration windows match up: the extent to which a stimulus feature causes the firing rate of a neuron to change is strongly correlated with the extent to which the contrast of that feature modulates the gain of the neuron. Adding contrast kernels to STRF models also yields considerable improvements in the ability to capture and predict how auditory cortical neurons respond to statistically complex sounds.
29
Niwa M, Johnson JS, O'Connor KN, Sutter ML. Active engagement improves primary auditory cortical neurons' ability to discriminate temporal modulation. J Neurosci 2012; 32:9323-34. [PMID: 22764239 PMCID: PMC3410753 DOI: 10.1523/jneurosci.5832-11.2012] [Citation(s) in RCA: 79] [Impact Index Per Article: 6.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2011] [Revised: 05/07/2012] [Accepted: 05/12/2012] [Indexed: 11/21/2022] Open
Abstract
The effect of attention on single neuron responses in the auditory system is unresolved. We found that when monkeys discriminated temporally amplitude modulated (AM) from unmodulated sounds, primary auditory cortical (A1) neurons better discriminated those sounds than when the monkeys were not discriminating them. This was observed for both average firing rate and vector strength (VS), a measure of how well neurons temporally follow the stimulus' temporal modulation. When data were separated by nonsynchronized and synchronized responses, the firing rate of nonsynchronized responses best distinguished AM noise from unmodulated noise, followed by VS for synchronized responses, with firing rate for synchronized neurons providing the poorest AM discrimination. Firing rate-based AM discrimination for synchronized neurons, however, improved most with task engagement, showing that the least sensitive code in the passive condition improves the most with task engagement. Rate coding improved due to larger increases in absolute firing rate at higher modulation depths than for lower depths and unmodulated sounds. Relative to spontaneous activity (which increased with engagement), the response to unmodulated sounds decreased substantially. The temporal coding improvement (responses more precisely temporally following a stimulus when animals were required to attend to it) expands the framework of possible mechanisms of attention to include increasing temporal precision of stimulus following. These findings provide a crucial step to understanding the coding of temporal modulation and support a model in which rate and temporal coding work in parallel, permitting a multiplexed code for temporal modulation, and for a complementary representation of rate and temporal coding.
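The vector strength (VS) measure referenced in this abstract is a standard circular statistic: each spike time is mapped to a phase of the modulation cycle, the resulting unit vectors are averaged, and the magnitude of that average is taken. A minimal sketch, for orientation only (not the study's analysis code):

```python
import math

def vector_strength(spike_times, mod_freq_hz):
    """Vector strength: map each spike time to a phase of the
    modulation cycle, average the unit vectors, and take the
    magnitude. 1.0 = perfect phase locking, ~0 = no locking."""
    if not spike_times:
        return 0.0
    x = sum(math.cos(2 * math.pi * mod_freq_hz * t) for t in spike_times)
    y = sum(math.sin(2 * math.pi * mod_freq_hz * t) for t in spike_times)
    return math.hypot(x, y) / len(spike_times)

# Spikes at a fixed phase of every 100-ms cycle of a 10-Hz envelope:
locked = [0.01 + 0.1 * k for k in range(20)]
print(vector_strength(locked, 10.0))  # ~1.0
```

Spikes spread uniformly across the cycle give a value near zero, which is why VS can distinguish synchronized from nonsynchronized responses.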
Affiliation(s)
- Mamiko Niwa
- Center for Neuroscience and Department of Neurobiology, Physiology, and Behavior, University of California, Davis, Davis, California 95618
- Jeffrey S. Johnson
- Center for Neuroscience and Department of Neurobiology, Physiology, and Behavior, University of California, Davis, Davis, California 95618
- Kevin N. O'Connor
- Center for Neuroscience and Department of Neurobiology, Physiology, and Behavior, University of California, Davis, Davis, California 95618
- Mitchell L. Sutter
- Center for Neuroscience and Department of Neurobiology, Physiology, and Behavior, University of California, Davis, Davis, California 95618
30
Alves-Pinto A, Sollini J, Sumner CJ. Signal detection in animal psychoacoustics: analysis and simulation of sensory and decision-related influences. Neuroscience 2012; 220:215-27. [PMID: 22698686 PMCID: PMC3422536 DOI: 10.1016/j.neuroscience.2012.06.001] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2011] [Revised: 06/01/2012] [Accepted: 06/01/2012] [Indexed: 11/12/2022]
Abstract
Signal detection theory (SDT) provides a framework for interpreting psychophysical experiments, separating the putative internal sensory representation and the decision process. SDT was used to analyse ferret behavioural responses in a (yes–no) tone-in-noise detection task. Instead of measuring the receiver-operating characteristic (ROC), we tested SDT by comparing responses collected using two common psychophysical data collection methods. These (Constant Stimuli, Limits) differ in the set of signal levels presented within and across behavioural sessions. The results support the use of SDT as a method of analysis: the SDT sensory component was unchanged between the two methods, even though decisions depended on the stimuli presented within a behavioural session. Decision criterion varied trial-by-trial: a ‘yes’ response was more likely after a correct rejection trial than a hit trial. Simulation using an SDT model with several decision components reproduced the experimental observations accurately, leaving only ∼10% of the variance unaccounted for. The model also showed that trial-by-trial dependencies were unlikely to influence measured psychometric functions or thresholds. An additional model component suggested that inattention did not contribute substantially. Further analysis showed that ferrets were changing their decision criteria, almost optimally, to maximise the reward obtained in a session. The data suggest trial-by-trial reward-driven optimization of the decision process. Understanding the factors determining behavioural responses is important for correlating neural activity and behaviour. SDT provides a good account of animal psychoacoustics, and can be validated using standard psychophysical methods and computer simulations, without recourse to ROC measurements.
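For reference, the yes-no SDT indices discussed here (sensitivity d′ and the decision criterion c) are computed from hit and false-alarm rates via the inverse standard-normal CDF. A minimal sketch, with a function name of my own choosing:

```python
from statistics import NormalDist

def sdt_indices(hit_rate, fa_rate):
    """Yes-no signal detection indices: sensitivity d' and decision
    criterion c (0 = unbiased, positive = conservative responding)."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Symmetric performance (84% hits, 16% false alarms) gives d' near 2
# with no response bias:
d, c = sdt_indices(0.84, 0.16)
print(round(d, 2), round(c, 2))
```

Because d′ and c are separable, a shift in criterion across sessions (as the ferrets showed) changes hit and false-alarm rates together without changing the estimated sensory component.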
Affiliation(s)
- A Alves-Pinto
- MRC Institute of Hearing Research, Science Road, University Park, Nottingham, NG7 2RD, United Kingdom.
31
Reavis KM, Rothholtz VS, Tang Q, Carroll JA, Djalilian H, Zeng FG. Temporary suppression of tinnitus by modulated sounds. J Assoc Res Otolaryngol 2012; 13:561-71. [PMID: 22526737 DOI: 10.1007/s10162-012-0331-6] [Citation(s) in RCA: 45] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2011] [Accepted: 04/03/2012] [Indexed: 12/21/2022] Open
Abstract
Despite the high prevalence of tinnitus and its impact on quality of life, there is no cure for tinnitus at present. Here, we report an effective means to temporarily suppress tinnitus by amplitude- and frequency-modulated tones. We systematically explored the interaction between subjective tinnitus and 17 external sounds in 20 chronic tinnitus sufferers. The external sounds included traditionally used unmodulated stimuli such as pure tones and white noise and dynamically modulated stimuli known to produce sustained neural synchrony in the central auditory pathway. All external sounds were presented in a random order to all subjects and at a loudness level that was just below tinnitus loudness. We found some tinnitus suppression in terms of reduced loudness by at least one of the 17 stimuli in 90% of the subjects, with the greatest suppression by amplitude-modulated tones with carrier frequencies near the tinnitus pitch for tinnitus sufferers with relatively normal loudness growth. Our results suggest that, in addition to a traditional masking approach using unmodulated pure tones and white noise, modulated sounds should be used for tinnitus suppression because they may be more effective in reducing hyperactive neural activities associated with tinnitus. The long-term effects of the modulated sounds on tinnitus and the underlying mechanisms remain to be investigated.
Affiliation(s)
- Kelly M Reavis
- Center for Hearing Research Department of Otolaryngology-Head and Neck Surgery, University of California, Irvine, 110 Medical Science E, Irvine, CA 92697-5320, USA
32
Johnson JS, Yin P, O'Connor KN, Sutter ML. Ability of primary auditory cortical neurons to detect amplitude modulation with rate and temporal codes: neurometric analysis. J Neurophysiol 2012; 107:3325-41. [PMID: 22422997 DOI: 10.1152/jn.00812.2011] [Citation(s) in RCA: 34] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Amplitude modulation (AM) is a common feature of natural sounds, and its detection is biologically important. Even though most sounds are not fully modulated, the majority of physiological studies have focused on fully modulated (100% modulation depth) sounds. We presented AM noise at a range of modulation depths to awake macaque monkeys while recording from neurons in primary auditory cortex (A1). The ability of neurons to detect partial AM with rate and temporal codes was assessed with signal detection methods. On average, single-cell synchrony was as or more sensitive than spike count in modulation detection. Cells are less sensitive to modulation depth if tested away from their best modulation frequency, particularly for temporal measures. Mean neural modulation detection thresholds in A1 are not as sensitive as behavioral thresholds, but with phase locking the most sensitive neurons are more sensitive, suggesting that for temporal measures the lower-envelope principle cannot account for thresholds. Three methods of preanalysis pooling of spike trains (multiunit, similar to convergence from a cortical column; within cell, similar to convergence of cells with matched response properties; across cell, similar to indiscriminate convergence of cells) all result in an increase in neural sensitivity to modulation depth for both temporal and rate codes. For the across-cell method, pooling of a few dozen cells can result in detection thresholds that approximate those of the behaving animal. With synchrony measures, indiscriminate pooling results in sensitive detection of modulation frequencies between 20 and 60 Hz, suggesting that differences in AM response phase are minor in A1.
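The across-cell pooling described above (indiscriminately summing spike counts over cells before applying a detection statistic) can be illustrated with toy data. The rates and cell counts below are invented for illustration, not the study's recordings:

```python
import random

def roc_area(signal_counts, noise_counts):
    """Area under the ROC: the probability that a randomly drawn
    signal-trial count exceeds a noise-trial count (ties score 0.5).
    0.5 = chance detection, 1.0 = perfect detection."""
    pairs = [(s, n) for s in signal_counts for n in noise_counts]
    score = sum(1.0 if s > n else 0.5 if s == n else 0.0 for s, n in pairs)
    return score / len(pairs)

def trial(rate):
    # Toy spike count for one trial: 100 time bins, spike prob rate/100.
    return sum(random.random() < rate / 100 for _ in range(100))

random.seed(1)
n_cells, n_trials = 30, 100
# Each cell fires slightly more on AM trials (rate 12) than noise (rate 10).
single_am = [trial(12) for _ in range(n_trials)]
single_un = [trial(10) for _ in range(n_trials)]
pooled_am = [sum(trial(12) for _ in range(n_cells)) for _ in range(n_trials)]
pooled_un = [sum(trial(10) for _ in range(n_cells)) for _ in range(n_trials)]

# Indiscriminate pooling raises detectability far above a single cell.
print(roc_area(single_am, single_un), roc_area(pooled_am, pooled_un))
```

The pooled distributions separate because the mean difference grows linearly with the number of cells while the spread grows only as its square root, which is consistent with a few dozen pooled cells approximating behavioral thresholds.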
Affiliation(s)
- Jeffrey S Johnson
- Center for Neuroscience, Univ. of California at Davis, Davis, CA 95618, USA
33
Sarro EC, Rosen MJ, Sanes DH. Taking advantage of behavioral changes during development and training to assess sensory coding mechanisms. Ann N Y Acad Sci 2011; 1225:142-54. [PMID: 21535001 DOI: 10.1111/j.1749-6632.2011.06023.x] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Abstract
The relationship between behavioral and neural performance has been explored in adult animals, but rarely during the developmental period when perceptual abilities emerge. We used these naturally occurring changes in auditory perception to evaluate underlying encoding mechanisms. Performance of juvenile and adult gerbils on an amplitude modulation (AM) detection task was compared with response properties from auditory cortex of age-matched animals. When tested with an identical behavioral procedure, juveniles display poorer AM detection thresholds than adults. Two neurometric analyses indicate that the most sensitive juvenile and adult neurons have equivalent AM thresholds. However, a pooling neurometric revealed that adult cortex encodes smaller AM depths. By each measure, neural sensitivity was superior to psychometric thresholds. However, juvenile training improved adult behavioral thresholds, such that they verged on the best sensitivity of adult neurons. Thus, periods of training may allow an animal to use the encoded information already present in cortex.
Affiliation(s)
- Emma C Sarro
- Center for Neural Science, New York University, New York, New York, USA.
34
Abstract
Auditory signals are decomposed into discrete frequency elements early in the transduction process, yet somehow these signals are recombined into the rich acoustic percepts that we readily identify and are familiar with. The cerebral cortex is necessary for the perception of these signals, and studies from several laboratories over the past decade have made significant advances in our understanding of the neuronal mechanisms underlying auditory perception. This review will concentrate on recent studies in the macaque monkey that indicate that the activity of populations of neurons better accounts for the perceptual abilities compared to the activity of single neurons. The best examples address whether the acoustic space is represented along the "where" pathway in the caudal regions of auditory cortex. Our current understanding of how such population activity could also underlie the perception of the nonspatial features of acoustic stimuli is reviewed, as is how multisensory interactions can influence our auditory perception.
Affiliation(s)
- Gregg H Recanzone
- Center for Neuroscience and Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
35
Abstract
Slow envelope fluctuations in the range of 2-20 Hz provide important segmental cues for processing communication sounds. For a successful segmentation, a neural processor must capture envelope features associated with the rise and fall of signal energy, a process that is often challenged by the interference of background noise. This study investigated the neural representations of slowly varying envelopes in quiet and in background noise in the primary auditory cortex (A1) of awake marmoset monkeys. We characterized envelope features based on the local average and rate of change of sound level in envelope waveforms and identified envelope features to which neurons were selective by reverse correlation. Our results showed that envelope feature selectivity of A1 neurons was correlated with the degree of nonmonotonicity in their static rate-level functions. Nonmonotonic neurons exhibited greater feature selectivity than monotonic neurons in quiet and in background noise. The diverse envelope feature selectivity decreased spike-timing correlation among A1 neurons in response to the same envelope waveforms. As a result, the variability, but not the average, of the ensemble responses of A1 neurons represented more faithfully the dynamic transitions in low-frequency sound envelopes both in quiet and in background noise.
36
Quinkert AW, Schiff ND, Pfaff DW. Temporal patterning of pulses during deep brain stimulation affects central nervous system arousal. Behav Brain Res 2010; 214:377-85. [PMID: 20558210 DOI: 10.1016/j.bbr.2010.06.009] [Citation(s) in RCA: 24] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2010] [Accepted: 06/05/2010] [Indexed: 11/29/2022]
Abstract
Regulation of CNS arousal is important for a wide variety of functions, including the initiation of all motivated behaviors. Usually studied with pharmacological or hormonal tools, CNS arousal can also be elevated by deep brain stimulation (DBS), in the human brain and in animals. The effectiveness of DBS is conventionally held to depend on pulse width, frequency, amplitude and stimulation duration. We demonstrate a novel approach for testing the effectiveness of DBS to increase arousal in intact female mice: all of the foregoing parameters are held constant. Only the temporal patterning of the pulses within the stimulation is varied. To create differentially patterned pulse trains, a deterministic nonlinear dynamic equation was used to generate a series of pulses with a predetermined average frequency. Three temporal patterns of stimulation were defined: two nonlinear patterns, Nonlinear1 (NL1) and Nonlinear2 (NL2), and the conventional pattern, Fixed Frequency (FF). Female mice with bilateral monopolar electrodes were observed before, during and after hippocampal or medial thalamic stimulation. NL1 hippocampal stimulation was significantly more effective at increasing behavioral arousal than either FF or NL2; however, FF and NL2 stimulation of the medial thalamus were more effective than NL1. During the same experiments, we recorded an unpredicted increase in the spectral power of slow waves in the cortical EEG. Our data comprise the first demonstration that the temporal pattern of DBS can be used to elevate its effectiveness, and also point the way toward the use of nonlinear dynamics in the exploration of means to optimize DBS.
Affiliation(s)
- Amy Wells Quinkert
- Laboratory of Neurobiology and Behavior, Rockefeller University, 1230 York Ave, New York, NY 10065, United States.
37
Abstract
During development, detection for many percepts matures gradually. This provides a natural system in which to investigate the neural mechanisms underlying performance differences: those aspects of neural activity that mature in conjunction with behavioral performance are more likely to subserve detection. In principle, the limitations on performance could be attributable to either immature sensory encoding mechanisms or an immature decoding of an already-mature sensory representation. To evaluate these alternatives in awake gerbil auditory cortex, we measured neural detection of sinusoidally amplitude-modulated (sAM) stimuli, for which behavioral detection thresholds display a prolonged maturation. A comparison of single-unit responses in juveniles and adults revealed that encoding of static tones was adult like in juveniles, but responses to sAM depth were immature. Since perceptual performance may reflect the activity of an animal's most sensitive neurons, we analyzed the d prime curves of single neurons and found an equivalent percentage with highly sensitive thresholds in juvenile and adult animals. In contrast, perceptual performance may reflect the pooling of information from neurons with a range of sensitivities. We evaluated a pooling model that assumes convergence of a population of inputs at a downstream target neuron and found poorer sAM detection thresholds for juveniles. Thus, if sAM detection is based on the most sensitive neurons, then immature behavioral performance is best explained by an immature decoding mechanism. However, if sAM detection is based on a population response, then immature detection thresholds are more likely caused by an inadequate sensory representation.
38
Scott BH, Malone BJ, Semple MN. Transformation of temporal processing across auditory cortex of awake macaques. J Neurophysiol 2010; 105:712-30. [PMID: 21106896 DOI: 10.1152/jn.01120.2009] [Citation(s) in RCA: 62] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The anatomy and connectivity of the primate auditory cortex has been modeled as a core region receiving direct thalamic input surrounded by a belt of secondary fields. The core contains multiple tonotopic fields (including the primary auditory cortex, AI, and the rostral field, R), but available data only partially address the degree to which those fields are functionally distinct. This report, based on single-unit recordings across four hemispheres in awake macaques, argues that the functional organization of auditory cortex is best understood in terms of temporal processing. Frequency tuning, response threshold, and strength of activation are similar between AI and R, validating their inclusion as a unified core, but the temporal properties of the fields clearly differ. Onset latencies to pure tones are longer in R (median, 33 ms) than in AI (20 ms); moreover, synchronization of spike discharges to dynamic modulations of stimulus amplitude and frequency, similar to those present in macaque and human vocalizations, suggest distinctly different windows of temporal integration in AI (20-30 ms) and R (100 ms). Incorporating data from the adjacent auditory belt reveals that the divergence of temporal properties within the core is in some cases greater than the temporal differences between core and belt.
Affiliation(s)
- Brian H Scott
- Center for Neural Science, New York University, New York, New York, USA.
39
Wohlgemuth S, Vogel A, Ronacher B. Encoding of amplitude modulations by auditory neurons of the locust: influence of modulation frequency, rise time, and modulation depth. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2010; 197:61-74. [PMID: 20865417 PMCID: PMC3016238 DOI: 10.1007/s00359-010-0587-4] [Citation(s) in RCA: 13] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2010] [Revised: 09/02/2010] [Accepted: 09/06/2010] [Indexed: 11/24/2022]
Abstract
Using modulation transfer functions (MTF), we investigated how sound patterns are processed within the auditory pathway of grasshoppers. Spike rates of auditory receptors and primary-like local neurons did not depend on modulation frequencies while other local and ascending neurons had lowpass, bandpass or bandstop properties. Local neurons exhibited broader dynamic ranges of their rate MTF that extended to higher modulation frequencies than those of most ascending neurons. We found no indication that a filter bank for modulation frequencies may exist in grasshoppers as has been proposed for the auditory system of mammals. The filter properties of half of the neurons changed to an allpass type with a 50% reduction of modulation depths. Contrasting to reports for mammals, the sensitivity to small modulation depths was not enhanced at higher processing stages. In ascending neurons, a focus on the range of low modulation frequencies was visible in the temporal MTFs, which describe the temporal locking of spikes to the signal envelope. To investigate the influence of stimulus rise time, we used rectangularly modulated stimuli instead of sinusoidally modulated ones. Unexpectedly, steep stimulus onsets had only small influence on the shape of MTF curves of 70% of neurons in our sample.
Affiliation(s)
- Sandra Wohlgemuth
- Department of Biology, Humboldt-Universität zu Berlin, Invalidenstrasse 43, 10115 Berlin, Germany
- Present Address: Department of Animal Behaviour, Institute of Biology, Freie Universität, Berlin, Germany
- Astrid Vogel
- Department of Biology, Humboldt-Universität zu Berlin, Invalidenstrasse 43, 10115 Berlin, Germany
- Bernhard Ronacher
- Department of Biology, Humboldt-Universität zu Berlin, Invalidenstrasse 43, 10115 Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Unter den Linden 6, 10099 Berlin, Germany