1
Yue L, Bao C, Zhang L, Zhang F, Zhou W, Iannetti GD, Hu L. Neuronal mechanisms of nociceptive-evoked gamma-band oscillations in rodents. Neuron 2025; 113:769-784.e6. [PMID: 39809278] [DOI: 10.1016/j.neuron.2024.12.011]
Abstract
Gamma-band oscillations (GBOs) in the primary somatosensory cortex (S1) play key roles in nociceptive processing. Yet, one crucial question remains unaddressed: what neuronal mechanisms underlie nociceptive-evoked GBOs? Here, we addressed this question using a range of somatosensory stimuli (nociceptive and non-nociceptive), neural recording techniques (electroencephalography in humans and silicon probes and calcium imaging in rodents), and optogenetics (alone or simultaneously with electrophysiology in mice). We found that (1) GBOs encoded pain intensity independent of stimulus intensity in humans, (2) GBOs in S1 encoded pain intensity and were triggered by spiking of S1 interneurons, (3) parvalbumin (PV)-positive interneurons preferentially tracked pain intensity, and critically, (4) PV S1 interneurons causally modulated GBOs and pain-related behaviors for both thermal and mechanical pain. These findings provide causal evidence that nociceptive-evoked GBOs preferentially encoding pain intensity are generated by PV interneurons in S1, thereby laying a solid foundation for developing GBO-based targeted pain therapies.
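For readers who want to see what such a GBO measurement looks like in practice, the quantity reported in studies like this is typically obtained by band-pass filtering the recorded signal in the gamma band and taking the envelope power relative to a pre-stimulus baseline. Below is a minimal, illustrative Python sketch on synthetic data; the 60-90 Hz band, filter order, and analysis windows are assumptions for demonstration, not the authors' exact pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                      # sampling rate (Hz), illustrative
t = np.arange(0, 2.0, 1 / fs)    # 2 s trial, "stimulus" at t = 1 s

# Synthetic LFP: broadband noise plus a brief 70 Hz burst after stimulus onset
rng = np.random.default_rng(0)
lfp = rng.normal(0, 1, t.size)
burst = (t > 1.05) & (t < 1.25)
lfp[burst] += 2.0 * np.sin(2 * np.pi * 70 * t[burst])

# Band-pass in a gamma band (60-90 Hz here; the exact band varies by study)
b, a = butter(4, [60 / (fs / 2), 90 / (fs / 2)], btype="band")
gamma = filtfilt(b, a, lfp)

# Instantaneous gamma power via the Hilbert envelope
power = np.abs(hilbert(gamma)) ** 2

# Compare post-stimulus gamma power with the pre-stimulus baseline
baseline = power[(t > 0.5) & (t < 1.0)].mean()
evoked = power[(t > 1.0) & (t < 1.5)].mean()
print(f"gamma power increase: {evoked / baseline:.2f}x baseline")
```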
Affiliation(s)
- Lupeng Yue
- State Key Laboratory of Cognitive Science and Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Chongyu Bao
- State Key Laboratory of Cognitive Science and Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China; Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- Libo Zhang
- State Key Laboratory of Cognitive Science and Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Fengrui Zhang
- State Key Laboratory of Cognitive Science and Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Wenqian Zhou
- State Key Laboratory of Cognitive Science and Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Gian Domenico Iannetti
- Neuroscience and Behaviour Laboratory, Italian Institute of Technology, Rome, Italy; Department of Neuroscience, Physiology, and Pharmacology, University College London, London, UK
- Li Hu
- State Key Laboratory of Cognitive Science and Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
2
Chen C, Song S. Distinct Neuron Types Contribute to Hybrid Auditory Spatial Coding. J Neurosci 2024; 44:e0159242024. [PMID: 39261006] [PMCID: PMC11502229] [DOI: 10.1523/jneurosci.0159-24.2024]
Abstract
Neural decoding is a tool for understanding how the activity of a population of neurons inside the brain relates to the outside world, and for engineering applications such as brain-machine interfaces. However, neural decoding studies have mainly focused on different decoding algorithms rather than on different neuron types, which could use different coding strategies. In this study, we used two-photon calcium imaging to assess three auditory spatial decoders (space map, opponent channel, and population pattern) in excitatory and inhibitory neurons in the dorsal inferior colliculus of male and female mice. Our findings revealed a clustering of excitatory neurons that prefer similar interaural level differences (ILDs), the primary spatial cue in mice, whereas inhibitory neurons showed random local ILD organization. We found that inhibitory neurons displayed lower decoding variability under the opponent-channel decoder, while excitatory neurons achieved higher decoding accuracy under the space map and population pattern decoders. Further analysis revealed that the inhibitory neurons' preference for ILDs off the midline and the excitatory neurons' heterogeneous ILD tuning account for these decoding differences. Additionally, we discovered sharper ILD tuning in the inhibitory neurons; a computational model linking this sharper tuning to increased presynaptic inhibitory input was corroborated using monaural and binaural stimuli. Overall, this study provides experimental and computational insight into how excitatory and inhibitory neurons uniquely contribute to the coding of sound location.
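The opponent-channel decoder named in this abstract is conventionally implemented as the difference between the summed activity of the two hemifield-preferring channels. The sketch below illustrates that read-out on a synthetic population with sigmoidal ILD tuning; all tuning parameters are invented for demonstration and are not the authors' fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)
ilds = np.linspace(-20, 20, 9)           # test ILDs (dB)

# Synthetic population: sigmoidal ILD tuning with random preferred sides/slopes
n_neurons = 100
signs = rng.choice([-1, 1], n_neurons)   # preferred hemifield per neuron
slopes = rng.uniform(0.1, 0.4, n_neurons)

def population_response(ild):
    """Trial rates plus noise for one sound location."""
    rates = 1 / (1 + np.exp(-slopes * signs * ild))   # sigmoid tuning
    return rates + rng.normal(0, 0.05, n_neurons)     # response variability

# Opponent-channel read-out: difference between the two hemifield channels,
# which is monotonic in ILD and can be mapped back to location by regression.
for ild in ilds:
    r = population_response(ild)
    left = r[signs < 0].sum()
    right = r[signs > 0].sum()
    print(f"ILD {ild:+6.1f} dB -> opponent signal {right - left:+7.2f}")
```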
Affiliation(s)
- Chenggang Chen
- Tsinghua Laboratory of Brain and Intelligence and School of Biomedical Engineering, McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China
- Sen Song
- Tsinghua Laboratory of Brain and Intelligence and School of Biomedical Engineering, McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China
3
Liu S, Leung VCH, Dragotti PL. First-spike coding promotes accurate and efficient spiking neural networks for discrete events with rich temporal structures. Front Neurosci 2023; 17:1266003. [PMID: 37849889] [PMCID: PMC10577212] [DOI: 10.3389/fnins.2023.1266003]
Abstract
Spiking neural networks (SNNs) are well suited to processing asynchronous event-based data. Most existing SNNs use rate-coding schemes that focus on firing rate (FR) and therefore generally ignore the timing of individual spikes. In contrast, methods based on temporal coding, particularly time-to-first-spike (TTFS) coding, can be accurate and efficient, but they are difficult to train. There is currently limited research on applying TTFS coding to real events, since traditional TTFS-based methods impose a one-spike constraint, which is unrealistic for event-based data. In this study, we present a novel decision-making strategy based on first-spike (FS) coding that encodes the FS timings of the output neurons, allowing us to investigate the role of first-spike timing in classifying real-world event sequences with complex temporal structure. To achieve FS coding, we propose a novel surrogate gradient learning method for discrete spike trains. In the forward pass, output spikes are encoded into discrete times to generate FS times. In the backward pass, we develop an error-assignment method that propagates error from FS times to spikes through a Gaussian window, and supervised learning for spikes is then implemented through a surrogate gradient approach. Additional strategies are introduced to facilitate the training of FS timings, such as adding empty sequences and employing different parameters for different layers. In our experiments, we make a comprehensive comparison between FS and FR coding. Our results show that FS coding achieves accuracy comparable to FR coding while yielding superior energy efficiency and distinct neuronal dynamics on data sequences with very rich temporal structure. Additionally, a longer delay before the first spike leads to higher accuracy, indicating that important information is encoded in the timing of the first spike.
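The decision rule at the heart of FS coding is simple to state: each output neuron contributes the time of its first spike, and the earliest-firing neuron determines the class. A minimal sketch of that read-out is shown below; it deliberately omits the Gaussian-window error assignment and layer-specific training strategies described in the abstract.

```python
import numpy as np

def first_spike_times(spikes, dt=1.0, no_spike=np.inf):
    """First-spike time per output neuron from a (neurons x timesteps)
    binary spike raster; neurons that never fire get `no_spike`."""
    fired = spikes.any(axis=1)
    # argmax returns the first index at which each row attains its maximum,
    # i.e., the first 1 for rows that fired
    t_first = spikes.argmax(axis=1) * dt
    return np.where(fired, t_first, no_spike)

def fs_decision(spikes, dt=1.0):
    """Classify by the output neuron that fires earliest (ties -> lowest index)."""
    return int(np.argmin(first_spike_times(spikes, dt)))

# Toy example: 3 output neurons over 10 timesteps; neuron 1 fires first
raster = np.zeros((3, 10), dtype=int)
raster[1, 3] = 1
raster[2, 7] = 1
print(fs_decision(raster))   # -> 1
```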
Affiliation(s)
- Siying Liu
- Communications and Signal Processing Group, Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom
4
Chen C, Remington ED, Wang X. Sound localization acuity of the common marmoset (Callithrix jacchus). Hear Res 2023; 430:108722. [PMID: 36863289] [DOI: 10.1016/j.heares.2023.108722]
Abstract
The common marmoset (Callithrix jacchus) is a small arboreal New World primate that has emerged as a promising model in auditory neuroscience. One potentially useful application of this model system is the study of the neural mechanisms underlying spatial hearing in primates, as marmosets need to localize sounds to orient their heads toward events of interest and to identify vocalizing conspecifics that are not visible. However, interpreting neurophysiological data on sound localization requires an understanding of perceptual abilities, and the sound localization behavior of marmosets has not been well studied. The present experiment measured sound localization acuity using an operant conditioning procedure in which marmosets were trained to discriminate changes in sound location in the horizontal (azimuth) or vertical (elevation) dimension. Our results showed that the minimum audible angle (MAA) for horizontal and vertical discrimination was 13.17° and 12.53°, respectively, for 2-32 kHz Gaussian noise. Removing monaural spectral cues tended to improve horizontal localization acuity (11.31°). Marmosets had a larger horizontal MAA in the rear (15.54°) than in the front. Removing the high-frequency (>26 kHz) region of the head-related transfer function (HRTF) affected vertical acuity only mildly (15.76°), but removing the first-notch (12-26 kHz) region of the HRTF substantially reduced vertical acuity (89.01°). In summary, our findings indicate that marmosets' spatial acuity is on par with that of other species of similar head size and field of best vision, and that they do not appear to use monaural spectral cues for horizontal discrimination but rely heavily on the first-notch region of the HRTF for vertical discrimination.
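In operant tasks of this kind, the MAA is typically read off a psychometric function fit to proportion correct as a function of angular separation. The sketch below fits a logistic psychometric function to synthetic data and extracts a threshold; the 76%-correct criterion and the data values are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic discrimination data: angular separation (deg) vs proportion correct
angles = np.array([2, 4, 8, 16, 32, 64], dtype=float)
p_correct = np.array([0.52, 0.55, 0.64, 0.81, 0.95, 0.99])

def psychometric(x, alpha, beta):
    """Logistic rising from 0.5 (chance in a two-alternative task) to 1.0."""
    return 0.5 + 0.5 / (1 + np.exp(-(x - alpha) / beta))

params, _ = curve_fit(psychometric, angles, p_correct, p0=[10.0, 5.0])
alpha, beta = params

# MAA as the separation yielding 76% correct (inverting the logistic)
target = 0.76
maa = alpha - beta * np.log(0.5 / (target - 0.5) - 1)
print(f"estimated MAA: {maa:.1f} deg")
```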
Affiliation(s)
- Chenggang Chen
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, 720 Rutland Ave., Traylor 410, Baltimore, MD 21205, United States
- Evan D Remington
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, 720 Rutland Ave., Traylor 410, Baltimore, MD 21205, United States
- Xiaoqin Wang
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, 720 Rutland Ave., Traylor 410, Baltimore, MD 21205, United States
5
Sadagopan S, Kar M, Parida S. Quantitative models of auditory cortical processing. Hear Res 2023; 429:108697. [PMID: 36696724] [PMCID: PMC9928778] [DOI: 10.1016/j.heares.2023.108697]
Abstract
To generate insight from experimental data, it is critical to understand the inter-relationships between individual data points and place them in context within a structured framework. Quantitative modeling can provide the scaffolding for such an endeavor. Our main objective in this review is to provide a primer on the range of quantitative tools available to experimental auditory neuroscientists. Quantitative modeling is advantageous because it can provide a compact summary of observed data, make underlying assumptions explicit, and generate predictions for future experiments. Quantitative models may be developed to characterize or fit observed data, to test theories of how a task may be solved by neural circuits, to determine how observed biophysical details might contribute to measured activity patterns, or to predict how an experimental manipulation would affect neural activity. In complexity, quantitative models can range from those that are highly biophysically realistic and that include detailed simulations at the level of individual synapses, to those that use abstract and simplified neuron models to simulate entire networks. Here, we survey the landscape of recently developed models of auditory cortical processing, highlighting a small selection of models to demonstrate how they help generate insight into the mechanisms of auditory processing. We discuss examples ranging from models that use details of synaptic properties to explain the temporal pattern of cortical responses to those that use modern deep neural networks to gain insight into human fMRI data. We conclude by discussing a biologically realistic and interpretable model that our laboratory has developed to explore aspects of vocalization categorization in the auditory pathway.
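As a concrete instance of the first category the review describes (models that characterize or fit observed data), a linear-nonlinear (LN) model predicts a neuron's firing rate by convolving the stimulus spectrogram with a spectrotemporal receptive field (STRF) and passing the result through a static nonlinearity. The sketch below is a generic textbook LN model on synthetic data, not any specific model from the review.

```python
import numpy as np

rng = np.random.default_rng(2)
n_freq, n_time, n_lags = 16, 500, 10

# Stimulus spectrogram (freq x time) and a random STRF (freq x lags)
spec = rng.normal(0, 1, (n_freq, n_time))
strf = rng.normal(0, 0.3, (n_freq, n_lags))

def ln_rate(spec, strf, baseline=0.1):
    """LN prediction: linear filtering of the spectrogram with the STRF,
    then a rectifying output nonlinearity to keep rates non-negative."""
    n_freq, n_lags = strf.shape
    n_time = spec.shape[1]
    drive = np.zeros(n_time)
    for lag in range(n_lags):
        # contribution of the stimulus `lag` time steps in the past
        drive[lag:] += strf[:, lag] @ spec[:, :n_time - lag]
    return np.maximum(drive + baseline, 0)       # static nonlinearity

rate = ln_rate(spec, strf)
print(rate.shape, float(rate.min()), float(rate.max()))
```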
Affiliation(s)
- Srivatsun Sadagopan
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA; Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA
- Manaswini Kar
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA
- Satyabrata Parida
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
6
A Redundant Cortical Code for Speech Envelope. J Neurosci 2023; 43:93-112. [PMID: 36379706] [PMCID: PMC9838705] [DOI: 10.1523/jneurosci.1616-21.2022]
Abstract
Animal communication sounds exhibit complex temporal structure because of the amplitude fluctuations that constitute the sound envelope. In human speech, envelope modulations drive synchronized activity in auditory cortex (AC), which correlates strongly with comprehension (Giraud and Poeppel, 2012; Peelle and Davis, 2012; Haegens and Zion Golumbic, 2018). Studies of envelope coding in single neurons, performed in nonhuman animals, have focused on periodic amplitude modulation (AM) stimuli and used response metrics that are not easy to juxtapose with data from humans. In this study, we sought to bridge these fields. Specifically, we looked directly at the temporal relationship between stimulus envelope and spiking, and we assessed whether the apparent diversity across neurons' AM responses contributes to the population representation of speech-like sound envelopes. We gathered responses from single neurons to vocoded speech stimuli and compared them to sinusoidal AM responses in the AC of alert, freely moving Mongolian gerbils of both sexes. While AC neurons displayed heterogeneous tuning to AM rate, their temporal dynamics were stereotyped. Preferred response phases accumulated near the onsets of sinusoidal AM periods for slower rates (<8 Hz), and an over-representation of amplitude edges was apparent in population responses to both sinusoidal AM and vocoded speech envelopes. Crucially, this encoding bias imparted a decoding benefit: a classifier could discriminate vocoded speech stimuli using summed population activity, while higher-frequency modulations required a more sophisticated decoder that tracked spiking responses from individual cells. Together, our results imply that the envelope structure relevant to parsing an acoustic stream could be read out from a distributed, redundant population code.

SIGNIFICANCE STATEMENT: Animal communication sounds have rich temporal structure and are often produced in extended sequences, including the syllabic structure of human speech. Although the auditory cortex (AC) is known to play a crucial role in representing speech syllables, the contribution of individual neurons remains uncertain. Here, we characterized the representations of both simple, amplitude-modulated sounds and complex, speech-like stimuli within a broad population of cortical neurons, and we found an over-representation of amplitude edges. Thus, a phasic, redundant code in auditory cortex can provide a mechanistic explanation for segmenting acoustic streams like human speech.
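The summed-population read-out described in this abstract can be made concrete as template matching on pooled peristimulus activity. The sketch below classifies synthetic trials by correlating summed population responses against per-stimulus templates; the data generation and the correlation-based classifier are illustrative assumptions, not the authors' exact decoder.

```python
import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_bins, n_stim, n_trials = 50, 100, 3, 20

# Synthetic data: each stimulus evokes a distinct population rate envelope
templates_true = rng.uniform(0.5, 2.0, (n_stim, n_bins))

def trial(stim):
    """One trial of Poisson spike counts (neurons x bins) for a stimulus."""
    rates = templates_true[stim] * rng.uniform(0.5, 1.5, (n_neurons, 1))
    return rng.poisson(rates)

# Build templates from summed population activity on training trials
templates = np.stack([
    np.mean([trial(s).sum(axis=0) for _ in range(n_trials)], axis=0)
    for s in range(n_stim)
])

def classify(summed):
    """Assign a test trial's summed activity to the best-correlated template."""
    scores = [np.corrcoef(summed, tmpl)[0, 1] for tmpl in templates]
    return int(np.argmax(scores))

correct = sum(classify(trial(s).sum(axis=0)) == s
              for s in range(n_stim) for _ in range(n_trials))
print(f"accuracy: {correct / (n_stim * n_trials):.2f}")
```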