1
Moseley SM, Meliza CD. A Complex Acoustical Environment During Development Enhances Auditory Perception and Coding Efficiency in the Zebra Finch. J Neurosci 2025; 45:e1269242024. PMID: 39730206; PMCID: PMC11823350; DOI: 10.1523/jneurosci.1269-24.2024.
Abstract
Sensory experience during development has lasting effects on perception and neural processing. Exposing juvenile animals to artificial stimuli influences the tuning and functional organization of the auditory cortex, but less is known about how the rich acoustical environments experienced by vocal communicators affect the processing of complex vocalizations. Here, we show that in zebra finches (Taeniopygia guttata), a colonial-breeding songbird species, exposure to a naturalistic social-acoustical environment during development has a profound impact on auditory perceptual behavior and on cortical-level auditory responses to conspecific song. Compared to birds raised by pairs in acoustic isolation, male and female birds raised in a breeding colony were better in an operant discrimination task at recognizing conspecific songs with and without masking colony noise. Neurons in colony-reared birds had higher average firing rates, selectivity, and discriminability, especially in the narrow-spiking, putatively inhibitory neurons of a higher-order auditory area, the caudomedial nidopallium (NCM). Neurons in colony-reared birds were also less correlated in their tuning, more efficient at encoding the spectrotemporal structure of conspecific song, and better at filtering out masking noise. These results suggest that the auditory cortex adapts to noisy, complex acoustical environments by strengthening inhibitory circuitry, functionally decoupling excitatory neurons while maintaining overall excitatory-inhibitory balance.
Affiliation(s)
- Samantha M Moseley: Department of Psychology, University of Virginia, Charlottesville, Virginia 22904
- C Daniel Meliza: Department of Psychology, University of Virginia, Charlottesville, Virginia 22904; Neuroscience Graduate Program, University of Virginia, Charlottesville, Virginia 22904
2
Moseley SM, Meliza CD. A complex acoustical environment during development enhances auditory perception and coding efficiency in the zebra finch. bioRxiv (preprint) 2024: 2024.06.25.600670. PMID: 38979160; PMCID: PMC11230381; DOI: 10.1101/2024.06.25.600670.
Abstract
Sensory experience during development has lasting effects on perception and neural processing. Exposing juvenile animals to artificial stimuli influences the tuning and functional organization of the auditory cortex, but less is known about how the rich acoustical environments experienced by vocal communicators affect the processing of complex vocalizations. Here, we show that in zebra finches (Taeniopygia guttata), a colonial-breeding songbird species, exposure to a naturalistic social-acoustical environment during development has a profound impact on auditory perceptual behavior and on cortical-level auditory responses to conspecific song. Compared to birds raised by pairs in acoustic isolation, male and female birds raised in a breeding colony were better in an operant discrimination task at recognizing conspecific songs with and without masking colony noise. Neurons in colony-reared birds had higher average firing rates, selectivity, and discriminability, especially in the narrow-spiking, putatively inhibitory neurons of a higher-order auditory area, the caudomedial nidopallium (NCM). Neurons in colony-reared birds were also less correlated in their tuning and more efficient at encoding the spectrotemporal structure of conspecific song, and better at filtering out masking noise. These results suggest that the auditory cortex adapts to noisy, complex acoustical environments by strengthening inhibitory circuitry, functionally decoupling excitatory neurons while maintaining overall excitatory-inhibitory balance.
Affiliation(s)
- Samantha M Moseley: Department of Psychology, University of Virginia, Charlottesville, VA 22904, USA
- C Daniel Meliza: Department of Psychology, University of Virginia, Charlottesville, VA 22904, USA; Neuroscience Graduate Program, University of Virginia, Charlottesville, VA 22904, USA
3
van den Berg MM, Busscher E, Borst JGG, Wong AB. Neuronal responses in mouse inferior colliculus correlate with behavioral detection of amplitude-modulated sound. J Neurophysiol 2023; 130:524-546. PMID: 37465872; DOI: 10.1152/jn.00048.2023.
Abstract
Amplitude modulation (AM) is a common feature of natural sounds, including speech and animal vocalizations. Here, we used operant conditioning and in vivo electrophysiology to determine the AM detection threshold of mice as well as its underlying neuronal encoding. Mice were trained in a Go-NoGo task to detect the transition to AM within a noise stimulus designed to prevent the use of spectral side-bands or a change in intensity as alternative cues. Our results indicate that mice, compared with other species, detect high modulation frequencies up to 512 Hz well, but show much poorer performance at low frequencies. Our in vivo multielectrode recordings in the inferior colliculus (IC) of both anesthetized and awake mice revealed a few single units with remarkable phase-locking ability to 512 Hz modulation, but not sufficient to explain the good behavioral detection at that frequency. Using a model of the population response that combined dimensionality reduction with threshold detection, we reproduced the general band-pass characteristics of behavioral detection based on a subset of neurons showing the largest firing rate change (both increase and decrease) in response to AM, suggesting that these neurons are instrumental in the behavioral detection of AM stimuli by the mice.
NEW & NOTEWORTHY The amplitude of natural sounds, including speech and animal vocalizations, often shows characteristic modulations. We examined the relationship between neuronal responses in the mouse inferior colliculus and the behavioral detection of amplitude modulation (AM) in sound and modeled how the former can give rise to the latter. Our model suggests that behavioral detection can be well explained by the activity of a subset of neurons showing the largest firing rate changes in response to AM.
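The population-decoding approach summarized above (dimensionality reduction followed by threshold detection) can be sketched on simulated data. This is only a schematic under assumed parameters (neuron count, rate changes, a single principal component, and a mean + 2 SD criterion), not the authors' implementation:

```python
# Minimal sketch of a population decoder combining dimensionality reduction
# with threshold detection. All parameters are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_neurons, n_trials = 50, 200

# Simulated trial-by-neuron spike counts: unmodulated-noise trials vs. AM trials,
# where a subset of neurons increases or decreases its rate in response to AM.
gain = np.zeros(n_neurons)
gain[:10] = 5        # rate increases
gain[10:20] = -4     # rate decreases
baseline = rng.poisson(10, size=(n_trials, n_neurons))
am_trials = rng.poisson(np.clip(10 + gain, 1, None), size=(n_trials, n_neurons))

# Reduce the population response to one dimension with PCA fit on all trials.
scores = PCA(n_components=1).fit_transform(np.vstack([baseline, am_trials]))
base_scores, am_scores = scores[:n_trials, 0], scores[n_trials:, 0]

# Align the component sign so AM trials project to larger values, then apply a
# detection criterion derived from the baseline distribution (mean + 2 SD).
if am_scores.mean() < base_scores.mean():
    base_scores, am_scores = -base_scores, -am_scores
criterion = base_scores.mean() + 2 * base_scores.std()
print(f"hit rate {np.mean(am_scores > criterion):.2f}, "
      f"false-alarm rate {np.mean(base_scores > criterion):.2f}")
```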
Affiliation(s)
- Maurits M van den Berg: Department of Neuroscience, Erasmus MC, University Medical Center Rotterdam, Rotterdam, The Netherlands
- Esmée Busscher: Department of Neuroscience, Erasmus MC, University Medical Center Rotterdam, Rotterdam, The Netherlands
- J Gerard G Borst: Department of Neuroscience, Erasmus MC, University Medical Center Rotterdam, Rotterdam, The Netherlands
- Aaron B Wong: Department of Neuroscience, Erasmus MC, University Medical Center Rotterdam, Rotterdam, The Netherlands
4
Nocon JC, Witter J, Gritton H, Han X, Houghton C, Sen K. A robust and compact population code for competing sounds in auditory cortex. J Neurophysiol 2023; 130:775-787. PMID: 37646080; PMCID: PMC10642980; DOI: 10.1152/jn.00148.2023.
Abstract
Cortical circuits encoding sensory information consist of populations of neurons, yet how information aggregates via pooling individual cells remains poorly understood. Such pooling may be particularly important in noisy settings where single-neuron encoding is degraded. One example is the cocktail party problem, with competing sounds from multiple spatial locations. How populations of neurons in auditory cortex code competing sounds has not been previously investigated. Here, we apply a novel information-theoretic approach to estimate information in populations of neurons in mouse auditory cortex about competing sounds from multiple spatial locations, including both summed population (SP) and labeled line (LL) codes. We find that a small subset of neurons is sufficient to nearly maximize mutual information over different spatial configurations, with the labeled line code outperforming the summed population code and approaching information levels attained in the absence of competing stimuli. Finally, information in the labeled line code increases with spatial separation between target and masker, in correspondence with behavioral results on spatial release from masking in humans and animals. Taken together, our results reveal that a compact population of neurons in auditory cortex provides a robust code for competing sounds from different spatial locations.
NEW & NOTEWORTHY Little is known about how populations of neurons within cortical circuits encode sensory stimuli in the presence of competing stimuli at other spatial locations. Here, we investigate this problem in auditory cortex using a recently proposed information-theoretic approach. We find a small subset of neurons nearly maximizes information about target sounds in the presence of competing maskers, approaching information levels for isolated stimuli, and provides a noise-robust code for sounds in a complex auditory scene.
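The contrast between summed population (SP) and labeled line (LL) codes can be illustrated with a toy decoder-based information estimate. The two-neuron scenario, nearest-mean decoder, and confusion-matrix information measure below are illustrative assumptions, not the estimator used in the study:

```python
# Toy contrast between a summed-population (SP) and a labeled-line (LL) code.
# Two neurons respond to two stimuli with opposite rate changes, so pooling
# (summing) spike counts discards information that the per-neuron code keeps.
import numpy as np

rng = np.random.default_rng(1)
n_trials = 2000
rates = np.array([[20, 10],    # stimulus A: neuron 1 high, neuron 2 low
                  [10, 20]])   # stimulus B: neuron 1 low, neuron 2 high
stim = rng.integers(0, 2, n_trials)
counts = rng.poisson(rates[stim]).astype(float)      # shape (n_trials, 2)

def mutual_information(stim, decoded):
    """Mutual information (bits) between stimulus and decoded label."""
    joint = np.zeros((2, 2))
    for s, d in zip(stim, decoded):
        joint[s, d] += 1
    joint /= joint.sum()
    ps, pd = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return np.sum(joint[nz] * np.log2(joint[nz] / (ps @ pd)[nz]))

def decode(features):
    # Nearest-class-mean decoding (trained on the same trials for brevity).
    means = np.stack([features[stim == s].mean(0) for s in (0, 1)])
    dists = np.linalg.norm(features[:, None, :] - means[None, :, :], axis=2)
    return dists.argmin(1)

ll = decode(counts)                          # labeled line: keep neuron identity
sp = decode(counts.sum(1, keepdims=True))    # summed population: pool counts
print(f"LL information: {mutual_information(stim, ll):.2f} bits")
print(f"SP information: {mutual_information(stim, sp):.2f} bits")
```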
Affiliation(s)
- Jian Carlo Nocon: Neurophotonics Center, Center for Systems Neuroscience, Hearing Research Center, and Department of Biomedical Engineering, Boston University, Boston, Massachusetts, United States
- Jake Witter: Department of Computer Science, University of Bristol, Bristol, United Kingdom
- Howard Gritton: Department of Comparative Biosciences and Department of Bioengineering, University of Illinois, Urbana, Illinois, United States
- Xue Han: Neurophotonics Center, Center for Systems Neuroscience, Hearing Research Center, and Department of Biomedical Engineering, Boston University, Boston, Massachusetts, United States
- Conor Houghton: Department of Computer Science, University of Bristol, Bristol, United Kingdom
- Kamal Sen: Neurophotonics Center, Center for Systems Neuroscience, Hearing Research Center, and Department of Biomedical Engineering, Boston University, Boston, Massachusetts, United States
5
Nocon JC, Gritton HJ, James NM, Mount RA, Qu Z, Han X, Sen K. Parvalbumin neurons enhance temporal coding and reduce cortical noise in complex auditory scenes. Commun Biol 2023; 6:751. PMID: 37468561; PMCID: PMC10356822; DOI: 10.1038/s42003-023-05126-0.
Abstract
Cortical representations supporting many cognitive abilities emerge from underlying circuits comprised of several different cell types. However, cell type-specific contributions to rate and timing-based cortical coding are not well-understood. Here, we investigated the role of parvalbumin neurons in cortical complex scene analysis. Many complex scenes contain sensory stimuli which are highly dynamic in time and compete with stimuli at other spatial locations. Parvalbumin neurons play a fundamental role in balancing excitation and inhibition in cortex and sculpting cortical temporal dynamics; yet their specific role in encoding complex scenes via timing-based coding, and the robustness of temporal representations to spatial competition, has not been investigated. Here, we address these questions in auditory cortex of mice using a cocktail party-like paradigm, integrating electrophysiology, optogenetic manipulations, and a family of spike-distance metrics, to dissect parvalbumin neurons' contributions towards rate and timing-based coding. We find that suppressing parvalbumin neurons degrades cortical discrimination of dynamic sounds in a cocktail party-like setting via changes in rapid temporal modulations in rate and spike timing, and over a wide range of time-scales. Our findings suggest that parvalbumin neurons play a critical role in enhancing cortical temporal coding and reducing cortical noise, thereby improving representations of dynamic stimuli in complex scenes.
Affiliation(s)
- Jian Carlo Nocon: Neurophotonics Center, Center for Systems Neuroscience, Hearing Research Center, and Department of Biomedical Engineering, Boston University, Boston, MA 02215, USA
- Howard J Gritton: Department of Comparative Biosciences and Department of Bioengineering, University of Illinois, Urbana, IL 61820, USA
- Nicholas M James: Neurophotonics Center, Center for Systems Neuroscience, Hearing Research Center, and Department of Biomedical Engineering, Boston University, Boston, MA 02215, USA
- Rebecca A Mount: Neurophotonics Center, Center for Systems Neuroscience, Hearing Research Center, and Department of Biomedical Engineering, Boston University, Boston, MA 02215, USA
- Zhili Qu: Department of Comparative Biosciences and Department of Bioengineering, University of Illinois, Urbana, IL 61820, USA
- Xue Han: Neurophotonics Center, Center for Systems Neuroscience, Hearing Research Center, and Department of Biomedical Engineering, Boston University, Boston, MA 02215, USA
- Kamal Sen: Neurophotonics Center, Center for Systems Neuroscience, Hearing Research Center, and Department of Biomedical Engineering, Boston University, Boston, MA 02215, USA
6
Robotka H, Thomas L, Yu K, Wood W, Elie JE, Gahr M, Theunissen FE. Sparse ensemble neural code for a complete vocal repertoire. Cell Rep 2023; 42:112034. PMID: 36696266; PMCID: PMC10363576; DOI: 10.1016/j.celrep.2023.112034.
Abstract
The categorization of animal vocalizations into distinct behaviorally relevant groups for communication is an essential operation that must be performed by the auditory system. This auditory object recognition is a difficult task that requires selectivity to the group-identifying acoustic features and invariance to renditions within each group. We find that small ensembles of auditory neurons in the forebrain of a social songbird can code the bird's entire vocal repertoire (∼10 call types). Ensemble neural discrimination is not, however, correlated with single unit selectivity, but instead with how well the joint single unit tunings to characteristic spectro-temporal modulations span the acoustic subspace optimized for the discrimination of call types. Thus, akin to face recognition in the visual system, call type recognition in the auditory system is based on a sparse code representing a small number of high-level features and not on highly selective grandmother neurons.
Affiliation(s)
- H Robotka: Max Planck Institute for Ornithology, Seewiesen, Germany
- L Thomas: University of California, Berkeley, Helen Wills Neuroscience Institute, Berkeley, CA, USA
- K Yu: University of California, Berkeley, Helen Wills Neuroscience Institute, Berkeley, CA, USA
- W Wood: University of California, Berkeley, Helen Wills Neuroscience Institute, Berkeley, CA, USA
- J E Elie: University of California, Berkeley, Helen Wills Neuroscience Institute, Berkeley, CA, USA
- M Gahr: Max Planck Institute for Ornithology, Seewiesen, Germany
- F E Theunissen: Max Planck Institute for Ornithology, Seewiesen, Germany; University of California, Berkeley, Helen Wills Neuroscience Institute, Berkeley, CA, USA; Department of Psychology and Integrative Biology, University of California, Berkeley, Berkeley, CA, USA
7
Yao JD, Sanes DH. Temporal Encoding is Required for Categorization, But Not Discrimination. Cereb Cortex 2021; 31:2886-2897. PMID: 33429423; DOI: 10.1093/cercor/bhaa396.
Abstract
Core auditory cortex (AC) neurons encode slow fluctuations of acoustic stimuli with temporally patterned activity. However, whether temporal encoding is necessary to explain auditory perceptual skills remains uncertain. Here, we recorded from gerbil AC neurons while the animals discriminated between a 4-Hz amplitude modulation (AM) broadband noise and AM rates >4 Hz. We found that a proportion of neurons possessed neural thresholds based on spike pattern or spike count that were better than the recorded session's behavioral threshold, suggesting that spike count could provide sufficient information for this perceptual task. A population decoder that relied on temporal information outperformed a decoder that relied on spike count alone, but the spike count decoder still remained sufficient to explain average behavioral performance. This leaves open the possibility that more demanding perceptual judgments require temporal information. Thus, we asked whether accurate classification of different AM rates between 4 and 12 Hz required the information contained in AC temporal discharge patterns. Indeed, accurate classification of these AM stimuli depended on the inclusion of temporal information rather than spike count alone. Overall, our results compare two different representations of time-varying acoustic features that can be accessed by downstream circuits required for perceptual judgments.
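The comparison between spike-count and temporally patterned codes can be sketched with a simple template-matching classifier on simulated AM responses. Bin size, firing rates, and the nearest-template rule are assumptions chosen only to show why matched mean rates force a count decoder to chance while a temporal decoder succeeds:

```python
# Sketch comparing decoders that use only spike count versus the temporal
# pattern of spikes (binned, PSTH-like vectors). Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
dur, dt = 1.0, 0.01                      # 1 s trials, 10 ms bins
t = np.arange(0, dur, dt)

def trial(am_rate, mean_rate=30.0, depth=0.8):
    """Inhomogeneous-Poisson spike counts per bin, modulated at am_rate (Hz)."""
    rate = mean_rate * (1 + depth * np.sin(2 * np.pi * am_rate * t))
    return rng.poisson(rate * dt)

classes = (4.0, 12.0)                    # two AM rates with identical mean rate
train = {c: np.array([trial(c) for _ in range(100)]) for c in classes}
test = [(c, trial(c)) for c in classes for _ in range(100)]

templates = {c: train[c].mean(0) for c in classes}           # temporal templates
count_means = {c: train[c].sum(1).mean() for c in classes}   # spike-count templates

def classify_temporal(x):
    return min(classes, key=lambda c: np.sum((x - templates[c]) ** 2))

def classify_count(x):
    return min(classes, key=lambda c: abs(x.sum() - count_means[c]))

acc_t = np.mean([classify_temporal(x) == c for c, x in test])
acc_c = np.mean([classify_count(x) == c for c, x in test])
print(f"temporal decoder: {acc_t:.2f}, spike-count decoder: {acc_c:.2f}")
```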
Affiliation(s)
- Justin D Yao: Center for Neural Science, New York University, New York, NY 10003, USA
- Dan H Sanes: Center for Neural Science, New York University, New York, NY 10003, USA; Department of Psychology, New York University, New York, NY 10003, USA; Department of Biology, New York University, New York, NY 10003, USA; Neuroscience Institute, NYU Langone Medical Center, New York University, New York, NY 10016, USA
8
Mohn JL, Downer JD, O'Connor KN, Johnson JS, Sutter ML. Choice-related activity and neural encoding in primary auditory cortex and lateral belt during feature-selective attention. J Neurophysiol 2021; 125:1920-1937. PMID: 33788616; DOI: 10.1152/jn.00406.2020.
Abstract
Selective attention is necessary to sift through, form a coherent percept of, and make behavioral decisions on the vast amount of information present in most sensory environments. How and where selective attention is employed in cortex and how this perceptual information then informs the relevant behavioral decisions is still not well understood. Studies probing selective attention and decision-making in visual cortex have been enlightening as to how sensory attention might work in that modality; whether or not similar mechanisms are employed in auditory attention is not yet clear. Therefore, we trained rhesus macaques on a feature-selective attention task, where they switched between reporting changes in temporal (amplitude modulation, AM) and spectral (carrier bandwidth) features of a broadband noise stimulus. We investigated how the encoding of these features by single neurons in primary (A1) and secondary (middle lateral belt, ML) auditory cortex was affected by the different attention conditions. We found that neurons in A1 and ML showed mixed selectivity to the sound and task features. We found no difference in AM encoding between the attention conditions. We found that choice-related activity in both A1 and ML neurons shifts between attentional conditions. This finding suggests that choice-related activity in auditory cortex does not simply reflect motor preparation or action and supports the relationship between reported choice-related activity and the decision and perceptual process.
NEW & NOTEWORTHY We recorded from primary and secondary auditory cortex while monkeys performed a nonspatial feature attention task. Both areas exhibited rate-based choice-related activity. The manifestation of choice-related activity was attention dependent, suggesting that choice-related activity in auditory cortex does not simply reflect arousal or motor influences but relates to the specific perceptual choice.
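Choice-related activity of the kind reported here is commonly summarized as a choice probability: the area under an ROC curve comparing firing-rate distributions grouped by the animal's report. A minimal sketch on simulated rates (the rates and grouping are assumptions):

```python
# Minimal choice-probability sketch: ROC area comparing firing-rate
# distributions for trials grouped by the animal's report. A value of 0.5
# means no choice-related activity; values above 0.5 mean higher rates
# precede "change" reports. The simulated rates are assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
rates_choice_yes = rng.normal(22, 5, size=300)   # trials reported as "change"
rates_choice_no = rng.normal(20, 5, size=300)    # trials reported as "no change"

rates = np.concatenate([rates_choice_yes, rates_choice_no])
choice = np.concatenate([np.ones(300), np.zeros(300)])
print(f"choice probability: {roc_auc_score(choice, rates):.2f}")
```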
Affiliation(s)
- Jennifer L Mohn: Center for Neuroscience, University of California, Davis, California; Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
- Joshua D Downer: Center for Neuroscience, University of California, Davis, California; Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California
- Kevin N O'Connor: Center for Neuroscience, University of California, Davis, California; Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
- Jeffrey S Johnson: Center for Neuroscience, University of California, Davis, California; Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
- Mitchell L Sutter: Center for Neuroscience, University of California, Davis, California; Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
9
Gupta P, Balasubramaniam N, Chang HY, Tseng FG, Santra TS. A Single-Neuron: Current Trends and Future Prospects. Cells 2020; 9:E1528. PMID: 32585883; PMCID: PMC7349798; DOI: 10.3390/cells9061528.
Abstract
The brain is an intricate network with complex organizational principles facilitating concerted communication between single neurons, distinct neuron populations, and remote brain areas. The communication between single neurons, technically referred to as connectivity, is the center of many investigations aimed at elucidating pathophysiology, anatomical differences, and structural and functional features. In comparison with bulk analysis, single-neuron analysis can provide precise information about neuron- or even sub-neuron-level electrophysiology, anatomical differences, pathophysiology, and structural and functional features, in addition to their communications with other neurons, and can provide essential information for understanding the brain and its activity. This review highlights various single-neuron models and their behaviors, followed by different analysis methods. To elucidate cellular dynamics in terms of electrophysiology at the single-neuron level, we emphasize in detail the role of single-neuron mapping and electrophysiological recording. We also elaborate on recent developments in single-neuron isolation, manipulation, and therapeutic progress using advanced micro/nanofluidic devices, as well as microinjection, electroporation, microelectrode arrays, optical transfection, and optogenetic techniques. Further, developments in the field of artificial intelligence in relation to single neurons are highlighted. The review concludes with the limitations and future prospects of single-neuron analyses.
Affiliation(s)
- Pallavi Gupta: Department of Engineering Design, Indian Institute of Technology Madras, Tamil Nadu 600036, India
- Nandhini Balasubramaniam: Department of Engineering Design, Indian Institute of Technology Madras, Tamil Nadu 600036, India
- Hwan-You Chang: Department of Medical Science, National Tsing Hua University, Hsinchu 30013, Taiwan
- Fan-Gang Tseng: Department of Engineering and System Science, National Tsing Hua University, Hsinchu 30013, Taiwan
- Tuhin Subhra Santra: Department of Engineering Design, Indian Institute of Technology Madras, Tamil Nadu 600036, India
10
Sihn D, Kim SP. A Spike Train Distance Robust to Firing Rate Changes Based on the Earth Mover's Distance. Front Comput Neurosci 2020; 13:82. PMID: 31920607; PMCID: PMC6914768; DOI: 10.3389/fncom.2019.00082.
Abstract
Neural spike train analysis methods are mainly used for understanding the temporal aspects of neural information processing. One approach is to measure the dissimilarity between the spike trains of a pair of neurons, often referred to as the spike train distance. The spike train distance has often been used to classify neuronal units with similar temporal patterns. Several methods to compute spike train distance have been developed so far. Intuitively, a desirable distance should be the shortest length between two objects. The Earth Mover’s Distance (EMD) can compute spike train distance by measuring the shortest length between two spike trains via shifting a fraction of spikes from one spike train to another. The EMD could accurately measure spike timing differences, temporal similarity, and spike time synchrony. It is also robust to firing rate changes. The Victor-Purpura (1996) distance measures the minimum cost of transforming one spike train into the other. Although it also measures the shortest path between spike trains, its output can vary with the time-scale parameter. In contrast, the EMD measures distance in a unique way by calculating the genuine shortest length between spike trains. The EMD also outperforms other existing spike train distance methods in measuring various aspects of the temporal characteristics of spike trains and in robustness to firing rate changes. The EMD can effectively measure the shortest length between spike trains without being considerably affected by the overall firing rate difference between them. Hence, it is suitable for assessing pure temporal coding exclusively, which is the premise underlying the present study.
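A minimal sketch of the idea, treating each spike train as a distribution of spike times and using SciPy's one-dimensional Wasserstein distance (the EMD for normalized mass); the spike times are made up and this is not the authors' implementation:

```python
# Earth Mover's Distance between two spike trains, treating each train as an
# empirical distribution of spike times. The example spike times are invented.
import numpy as np
from scipy.stats import wasserstein_distance

train_a = np.array([0.05, 0.21, 0.40, 0.63, 0.85])        # spike times (s)
train_b = train_a + 0.01                                    # same pattern, jittered by 10 ms
train_c = np.array([0.05, 0.21, 0.40, 0.63, 0.85, 0.95])    # one extra spike (rate change)

print(f"EMD(a, b) = {wasserstein_distance(train_a, train_b):.4f}")  # small: pure timing shift
print(f"EMD(a, c) = {wasserstein_distance(train_a, train_c):.4f}")  # modest despite the rate change
```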
Affiliation(s)
- Duho Sihn: Department of Human Factors Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan, South Korea
- Sung-Phil Kim: Department of Human Factors Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan, South Korea
11
The Neuroethology of Vocal Communication in Songbirds: Production and Perception of a Call Repertoire. In: The Neuroethology of Birdsong. 2020. DOI: 10.1007/978-3-030-34683-6_7.
12
Yi Z, Zhang Y. A spike train distance-based method to evaluate the response of mechanoreceptive afferents. Neural Comput Appl 2019. DOI: 10.1007/s00521-018-3465-6.
13
Satuvuori E, Mulansky M, Daffertshofer A, Kreuz T. Using spike train distances to identify the most discriminative neuronal subpopulation. J Neurosci Methods 2018; 308:354-365. PMID: 30213547; DOI: 10.1016/j.jneumeth.2018.09.008.
Abstract
BACKGROUND: Spike trains of multiple neurons can be analyzed following the summed population (SP) or the labeled line (LL) hypothesis: responses to external stimuli are either generated by the neuronal population as a whole, or the individual neurons have encoding capacities of their own. The SPIKE-distance, estimated either for a single spike train pooled over the population or for each neuron separately, can serve to quantify these responses.
NEW METHOD: For the SP case we compare three algorithms that search for the most discriminative subpopulation over all stimulus pairs. For the LL case we introduce a new algorithm that combines the neurons that individually best separate different pairs of stimuli.
RESULTS: The best approach for SP is a brute-force search over all possible subpopulations, but it is only feasible for small populations. For more realistic settings, simulated annealing clearly outperforms gradient algorithms with only a limited increase in computational load. Our novel LL approach can handle very involved coding scenarios despite its computational ease.
COMPARISON WITH EXISTING METHODS: Spike train distances have been extended to the analysis of neural populations by interpolating between SP and LL coding, which includes parametrizing the importance of distinguishing spikes fired in different neurons. Yet these approaches only consider the population as a whole. The explicit focus on subpopulations renders our algorithms complementary.
CONCLUSIONS: The spectrum of encoding possibilities in neural populations is broad. The SP and LL cases are two extremes for which our algorithms provide correct identification results.
Affiliation(s)
- Eero Satuvuori: Institute for Complex Systems, CNR, Sesto Fiorentino, Italy; Department of Physics and Astronomy, University of Florence, Sesto Fiorentino, Italy; Amsterdam Movement Sciences (AMS) & Institute for Brain and Behaviour Amsterdam (iBBA), Faculty of Behavioural and Movement Sciences, Department of Human Movement Sciences, Vrije Universiteit Amsterdam, The Netherlands
- Mario Mulansky: Institute for Complex Systems, CNR, Sesto Fiorentino, Italy
- Andreas Daffertshofer: Amsterdam Movement Sciences (AMS) & Institute for Brain and Behaviour Amsterdam (iBBA), Faculty of Behavioural and Movement Sciences, Department of Human Movement Sciences, Vrije Universiteit Amsterdam, The Netherlands
- Thomas Kreuz: Institute for Complex Systems, CNR, Sesto Fiorentino, Italy
14
Mattingly MM, Donell BM, Rosen MJ. Late maturation of backward masking in auditory cortex. J Neurophysiol 2018; 120:1558-1571. PMID: 29995598; DOI: 10.1152/jn.00114.2018.
Abstract
Speech perception relies on the accurate resolution of brief, successive sounds that change rapidly over time. Deficits in the perception of such sounds, indicated by a reduced ability to detect signals during auditory backward masking, strongly relate to language processing difficulties in children. Backward masking during normal development has a longer maturational trajectory than many other auditory percepts, implicating the involvement of central auditory neural mechanisms with protracted developmental time courses. Despite the importance of this percept, its neural correlates are not well described at any developmental stage. We therefore measured auditory cortical responses to masked signals in juvenile and adult Mongolian gerbils and quantified the detection ability of individual neurons and neural populations in a manner comparable with psychoacoustic measurements. Perceptually, auditory backward masking manifests as higher thresholds for detection of a short signal followed by a masker than for the same signal in silence. Cortical masking was driven by a combination of suppressed responses to the signal and a reduced dynamic range available for signal detection in the presence of the masker. Both coding elements contributed to greater masked threshold shifts in juveniles compared with adults, but signal-evoked firing suppression was more pronounced in juveniles. Neural threshold shifts were a better match to human psychophysical threshold shifts when quantified with a longer temporal window that included the response to the delayed masker, suggesting that temporally selective listening may contribute to age-related differences in backward masking. NEW & NOTEWORTHY In children, auditory detection of backward masked signals is immature well into adolescence, and detection deficits correlate with problems in speech processing. Our auditory cortical recordings reveal immature backward masking in adolescent animals that mirrors the prolonged development seen in children. This is driven by both signal-evoked suppression and dynamic range reduction. An extended window of analysis suggests that differences in temporally focused listening may contribute to late maturing thresholds for backward masked signals.
Affiliation(s)
- Michelle M Mattingly: Department of Anatomy & Neurobiology, Northeast Ohio Medical University, Rootstown, Ohio
- Brittany M Donell: Department of Anatomy & Neurobiology, Northeast Ohio Medical University, Rootstown, Ohio
- Merri J Rosen: Department of Anatomy & Neurobiology, Northeast Ohio Medical University, Rootstown, Ohio
15
Viriyopase A, Memmesheimer RM, Gielen S. Analyzing the competition of gamma rhythms with delayed pulse-coupled oscillators in phase representation. Phys Rev E 2018; 98:022217. PMID: 30253475; DOI: 10.1103/physreve.98.022217.
Abstract
Networks of neurons can generate oscillatory activity as a result of various types of coupling that lead to synchronization. A prominent type of oscillatory activity is the gamma (30-80 Hz) rhythm, which may play an important role in neuronal information processing. Two mechanisms have mainly been proposed for their generation: (1) interneuron network gamma (ING) and (2) pyramidal-interneuron network gamma (PING). In vitro and in vivo experiments have shown that both mechanisms can exist in the same cortical circuits. This raises the questions: How do ING and PING interact when both can in principle occur? Are the network dynamics a superposition, or do ING and PING interact in a nonlinear way, and if so, how? In this article, we first generalize the phase representation for nonlinear one-dimensional pulse-coupled oscillators as introduced by Mirollo and Strogatz to type II oscillators whose phase response curve (PRC) has zero crossings. We then give a full theoretical analysis for the regular gamma-like oscillations of simple networks consisting of two neural oscillators, an "E neuron" mimicking a synchronized group of pyramidal cells, and an "I neuron" representing such a group of interneurons. Motivated by experimental findings, we choose the E neuron to have a type I PRC [leaky integrate-and-fire (LIF) neuron], while the I neuron has either a type I or type II PRC (LIF or "sine" neuron). The phase representation allows us to define in a simple manner scenarios of interaction between the two neurons, which are independent of the types and the details of the neuron models. The presence of delay in the couplings leads to an increased number of scenarios relevant for gamma-like oscillatory patterns. We analytically derive the set of such scenarios and describe their occurrence in terms of parameter values such as synaptic connectivity and drive to the E and I neurons. The networks can be tuned to oscillate in an ING or PING mode. We focus particularly on the transition region where both rhythms compete to govern the network dynamics and compare with oscillations in reduced networks, which can only generate either ING or PING. Our analytically derived oscillation frequency diagrams indicate that except for small coexistence regions, the networks generate ING if the oscillation frequency of the reduced ING network exceeds that of the reduced PING network, and vice versa. For networks with the LIF I neuron, the network oscillation frequency slightly exceeds the frequencies of corresponding reduced networks, while it lies between them for networks with the sine I neuron. In networks oscillating in ING (PING) mode, the oscillation frequency responds faster to changes in the drive to the I (E) neuron than to changes in the drive to the E (I) neuron. This finding suggests a method to analyze which mechanism governs an observed network oscillation. Notably, also when the network operates in ING mode, the E neuron can spike before the I neuron such that relative spike times of the pyramidal cells and the interneurons alone are not conclusive for distinguishing ING and PING.
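The phase representation described above can be sketched with two delayed pulse-coupled phase oscillators, an "E" and an "I" unit with type I and type II PRCs. The PRC shapes, delay, and coupling values below are arbitrary assumptions used only to show the mechanics, not the parameters analyzed in the paper:

```python
# Two delayed pulse-coupled phase oscillators: phases advance at a constant
# rate; a spike is delivered after a delay and shifts the receiver's phase by
# coupling * PRC(phase). All parameter values are illustrative assumptions.
import numpy as np

def prc_type1(phi):
    return np.sin(np.pi * phi) ** 2        # non-negative PRC (advances only), LIF-like

def prc_type2(phi):
    return -np.sin(2 * np.pi * phi)        # PRC with a zero crossing (delay or advance)

dt, delay, t_end = 1e-4, 2e-3, 0.5                  # time step, synaptic delay, duration (s)
period = {"E": 1 / 40, "I": 1 / 55}                 # intrinsic periods (s)
coupling = {("E", "I"): 0.15, ("I", "E"): -0.25}    # coupling sign: E->I positive, I->E negative
prc = {"E": prc_type1, "I": prc_type2}
phase = {"E": 0.0, "I": 0.3}
pending, spikes = [], {"E": [], "I": []}            # pulses in flight; spike times

t = 0.0
while t < t_end:
    # Deliver pulses whose delay has elapsed and apply the receiver's PRC.
    for pulse in [p for p in pending if p[0] <= t]:
        _, src, tgt = pulse
        shift = coupling[(src, tgt)] * prc[tgt](phase[tgt])
        phase[tgt] = float(np.clip(phase[tgt] + shift, 0.0, 1.0))
        pending.remove(pulse)
    # Advance both phases; a threshold crossing emits a delayed pulse to the partner.
    for unit, other in (("E", "I"), ("I", "E")):
        phase[unit] += dt / period[unit]
        if phase[unit] >= 1.0:
            phase[unit] -= 1.0
            spikes[unit].append(t)
            pending.append((t + delay, unit, other))
    t += dt

print(f"E spikes: {len(spikes['E'])}, I spikes: {len(spikes['I'])} in {t_end} s")
```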
Affiliation(s)
- Atthaphon Viriyopase: Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands; Department of Biophysics, Faculty of Science, Radboud University Nijmegen, Nijmegen, The Netherlands; Department of Neuroinformatics, Faculty of Science, Radboud University Nijmegen, Nijmegen, The Netherlands
- Raoul-Martin Memmesheimer: Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands; Department of Neuroinformatics, Faculty of Science, Radboud University Nijmegen, Nijmegen, The Netherlands; Center for Theoretical Neuroscience, Columbia University, New York, New York 10027, USA; FIAS-Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany; Neural Network Dynamics and Computation, Institute of Genetics, University of Bonn, Bonn, Germany
- Stan Gielen: Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands; Department of Biophysics, Faculty of Science, Radboud University Nijmegen, Nijmegen, The Netherlands
16
Yao JD, Sanes DH. Developmental deprivation-induced perceptual and cortical processing deficits in awake-behaving animals. eLife 2018; 7:e33891. PMID: 29873632; PMCID: PMC6005681; DOI: 10.7554/elife.33891.
Abstract
Sensory deprivation during development induces lifelong changes to central nervous system function that are associated with perceptual impairments. However, the relationship between neural and behavioral deficits is uncertain due to a lack of simultaneous measurements during task performance. Therefore, we telemetrically recorded from auditory cortex neurons in gerbils reared with developmental conductive hearing loss as they performed an auditory task in which rapid fluctuations in amplitude are detected. These data were compared to a measure of auditory brainstem temporal processing from each animal. We found that developmental hearing loss diminished behavioral performance, but did not alter brainstem temporal processing. However, the simultaneous assessment of neural and behavioral processing revealed that perceptual deficits were associated with a degraded cortical population code that could be explained by greater trial-to-trial response variability. Our findings suggest that the perceptual limitations that attend early hearing loss are best explained by an encoding deficit in auditory cortex.
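Trial-to-trial response variability of the kind invoked here is often summarized by the Fano factor (spike-count variance divided by the mean over repeated presentations). A brief sketch with simulated counts (the rates and dispersion are assumptions):

```python
# Fano factor sketch: spike-count variance / mean across repeated presentations.
# A Poisson process gives a Fano factor near 1; values above 1 indicate extra
# trial-to-trial variability of the kind that can degrade a population code.
import numpy as np

rng = np.random.default_rng(4)
n_trials, mean_count = 200, 12

poisson_counts = rng.poisson(mean_count, n_trials)
# Over-dispersed counts: the underlying rate itself fluctuates across trials.
noisy_rates = rng.gamma(shape=4.0, scale=mean_count / 4.0, size=n_trials)
overdispersed_counts = rng.poisson(noisy_rates)

def fano(counts):
    return counts.var(ddof=1) / counts.mean()

print(f"Poisson-like unit:     Fano = {fano(poisson_counts):.2f}")
print(f"High-variability unit: Fano = {fano(overdispersed_counts):.2f}")
```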
Affiliation(s)
- Justin D Yao: Center for Neural Science, New York University, New York, United States
- Dan H Sanes: Center for Neural Science, New York University, New York, United States; Department of Psychology, New York University, New York, United States; Department of Biology, New York University, New York, United States; Neuroscience Institute, NYU Langone Medical Center, New York, United States
17
Satuvuori E, Kreuz T. Which spike train distance is most suitable for distinguishing rate and temporal coding? J Neurosci Methods 2018; 299:22-33. PMID: 29462713; DOI: 10.1016/j.jneumeth.2018.02.009.
Abstract
BACKGROUND: It is commonly assumed in neuronal coding that repeated presentations of a stimulus to a coding neuron elicit similar responses. One common way to assess similarity is with spike train distances. These can be divided into spike-resolved measures, such as the Victor-Purpura and the van Rossum distance, and time-resolved measures, e.g., the ISI-, the SPIKE-, and the RI-SPIKE-distance.
NEW METHOD: We use independent steady-rate Poisson processes as surrogates for spike trains with fixed rate and no timing information to address two basic questions: how does the sensitivity of the different spike train distances to temporal coding depend on the rates of the two processes, and how do the distances deal with very low rates?
RESULTS: Spike-resolved distances always contain rate information, even for parameters indicating time coding. This is an issue for reasonably high rates but beneficial for very low rates. In contrast, the operational range for detecting time coding of the time-resolved distances is superior at normal rates, but these measures produce artefacts at very low rates. The RI-SPIKE-distance is the only measure that is sensitive to timing information only.
COMPARISON WITH EXISTING METHODS: While our results on rate-dependent expectation values for the spike-resolved distances agree with Chicharro et al. (2011), we here go one step further and specifically investigate applicability for very low rates.
CONCLUSIONS: The most appropriate measure depends on the rates of the data being analysed. Accordingly, we summarize our results in one table that allows an easy selection of the preferred measure for any kind of data.
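Of the spike-resolved measures named above, the Victor-Purpura distance can be written as a short edit-distance dynamic program in which the cost parameter q sets the time scale separating rate-like from timing-like sensitivity. The spike times are made up; libraries such as Elephant and PySpike provide reference implementations of these families of measures:

```python
# Victor-Purpura spike train distance: an edit distance where inserting or
# deleting a spike costs 1 and moving a spike by dt costs q*|dt|.
import numpy as np

def victor_purpura(s1, s2, q):
    """Victor-Purpura distance between two sorted arrays of spike times."""
    n, m = len(s1), len(s2)
    d = np.zeros((n + 1, m + 1))
    d[:, 0] = np.arange(n + 1)        # deleting all spikes of s1
    d[0, :] = np.arange(m + 1)        # inserting all spikes of s2
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i, j] = min(d[i - 1, j] + 1,                                   # delete s1[i-1]
                          d[i, j - 1] + 1,                                   # insert s2[j-1]
                          d[i - 1, j - 1] + q * abs(s1[i - 1] - s2[j - 1]))  # shift a spike
    return d[n, m]

a = np.array([0.10, 0.25, 0.50, 0.80])
b = np.array([0.12, 0.27, 0.55, 0.82])
print(victor_purpura(a, b, q=0.0))    # q = 0: pure rate code, counts only the spike-number difference
print(victor_purpura(a, b, q=50.0))   # larger q: increasingly sensitive to spike timing
```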
Affiliation(s)
- Eero Satuvuori: Institute for Complex Systems, CNR, Sesto Fiorentino, Italy; Department of Physics and Astronomy, University of Florence, Sesto Fiorentino, Italy; MOVE Research Institute, Department of Human Movement Sciences, Vrije Universiteit Amsterdam, The Netherlands
- Thomas Kreuz: Institute for Complex Systems, CNR, Sesto Fiorentino, Italy
18
Behavioral and Single-Neuron Sensitivity to Millisecond Variations in Temporally Patterned Communication Signals. J Neurosci 2017; 36:8985-9000. PMID: 27559179; DOI: 10.1523/jneurosci.0648-16.2016.
Abstract
In many sensory pathways, central neurons serve as temporal filters for timing patterns in communication signals. However, how a population of neurons with diverse temporal filtering properties codes for natural variation in communication signals is unknown. Here we addressed this question in the weakly electric fish Brienomyrus brachyistius, which varies the time intervals between successive electric organ discharges to communicate. These fish produce an individually stereotyped signal called a scallop, which consists of a distinctive temporal pattern of ∼8-12 electric pulses. We manipulated the temporal structure of natural scallops during behavioral playback and in vivo electrophysiology experiments to probe the temporal sensitivity of scallop encoding and recognition. We found that presenting time-reversed, randomized, or jittered scallops increased behavioral response thresholds, demonstrating that fish's electric signaling behavior was sensitive to the precise temporal structure of scallops. Next, using in vivo intracellular recordings and discriminant function analysis, we found that the responses of interval-selective midbrain neurons were also sensitive to the precise temporal structure of scallops. Subthreshold changes in membrane potential recorded from single neurons discriminated natural scallops from time-reversed, randomized, and jittered sequences. Pooling the responses of multiple neurons improved the discriminability of natural sequences from temporally manipulated sequences. Finally, we found that single-neuron responses were sensitive to interindividual variation in scallop sequences, raising the question of whether fish may analyze scallop structure to gain information about the sender. Collectively, these results demonstrate that a population of interval-selective neurons can encode behaviorally relevant temporal patterns with millisecond precision.
SIGNIFICANCE STATEMENT The timing patterns of action potentials, or spikes, play important roles in representing information in the nervous system. However, how these temporal patterns are recognized by downstream neurons is not well understood. Here we use the electrosensory system of mormyrid weakly electric fish to investigate how a population of neurons with diverse temporal filtering properties encodes behaviorally relevant input timing patterns, and how this relates to behavioral sensitivity. We show that fish are behaviorally sensitive to millisecond variations in natural, temporally patterned communication signals, and that the responses of individual midbrain neurons are also sensitive to variation in these patterns. In fact, the output of single neurons contains enough information to discriminate stereotyped communication signals produced by different individuals.
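The discriminant function analysis of single-trial responses described here can be sketched with a linear discriminant classifier applied to simulated, time-binned membrane-potential responses; the data, features, and use of scikit-learn's LDA are illustrative assumptions, not the authors' pipeline:

```python
# Linear discriminant analysis of simulated single-trial response vectors
# (time-binned subthreshold responses), classifying natural versus temporally
# manipulated sequences. Everything here is a made-up illustration.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_trials, n_bins = 80, 20

template_natural = np.sin(np.linspace(0, 3 * np.pi, n_bins))        # assumed depolarization pattern
template_jittered = template_natural + rng.normal(0, 0.4, n_bins)   # degraded temporal structure

X = np.vstack([template_natural + rng.normal(0, 0.5, (n_trials, n_bins)),
               template_jittered + rng.normal(0, 0.5, (n_trials, n_bins))])
y = np.r_[np.zeros(n_trials, dtype=int), np.ones(n_trials, dtype=int)]  # 0 = natural, 1 = jittered

acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
print(f"cross-validated discrimination accuracy: {acc:.2f}")
```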
19
A Decline in Response Variability Improves Neural Signal Detection during Auditory Task Performance. J Neurosci 2017; 36:11097-11106. PMID: 27798189; DOI: 10.1523/jneurosci.1302-16.2016.
Abstract
The detection of a sensory stimulus arises from a significant change in neural activity, but a sensory neuron's response is rarely identical to successive presentations of the same stimulus. Large trial-to-trial variability would limit the central nervous system's ability to reliably detect a stimulus, presumably affecting perceptual performance. However, if response variability were to decrease while firing rate remained constant, then neural sensitivity could improve. Here, we asked whether engagement in an auditory detection task can modulate response variability, thereby increasing neural sensitivity. We recorded telemetrically from the core auditory cortex of gerbils, both while they engaged in an amplitude-modulation detection task and while they sat quietly listening to the identical stimuli. Using a signal detection theory framework, we found that neural sensitivity was improved during task performance, and this improvement was closely associated with a decrease in response variability. Moreover, units with the greatest change in response variability had absolute neural thresholds most closely aligned with simultaneously measured perceptual thresholds. Our findings suggest that the limitations imposed by response variability diminish during task performance, thereby improving the sensitivity of neural encoding and potentially leading to better perceptual sensitivity. SIGNIFICANCE STATEMENT The detection of a sensory stimulus arises from a significant change in neural activity. However, trial-to-trial variability of the neural response may limit perceptual performance. If the neural response to a stimulus is quite variable, then the response on a given trial could be confused with the pattern of neural activity generated when the stimulus is absent. Therefore, a neural mechanism that served to reduce response variability would allow for better stimulus detection. By recording from the cortex of freely moving animals engaged in an auditory detection task, we found that variability of the neural response becomes smaller during task performance, thereby improving neural detection thresholds.
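The signal detection theory framework referred to above reduces, for a single unit, to a d' computed from spike-count distributions on stimulus and stimulus-absent trials. The sketch below uses simulated counts (an assumption) to show how a drop in trial-to-trial variability alone raises d' when mean rates are unchanged:

```python
# Neural sensitivity (d') from spike-count distributions on stimulus and
# catch trials; only the trial-to-trial variability differs between the two
# simulated behavioral states, yet d' improves. Values are assumptions.
import numpy as np

rng = np.random.default_rng(6)

def d_prime(signal, noise):
    pooled_sd = np.sqrt(0.5 * (signal.var(ddof=1) + noise.var(ddof=1)))
    return (signal.mean() - noise.mean()) / pooled_sd

n = 500
for label, sd in (("passive (high variability)", 6.0), ("task-engaged (low variability)", 3.0)):
    noise = rng.normal(20, sd, n)      # counts with the stimulus absent
    signal = rng.normal(26, sd, n)     # counts with AM present; means identical across states
    print(f"{label}: d' = {d_prime(signal, noise):.2f}")
```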
20
Engaging in a tone-detection task differentially modulates neural activity in the auditory cortex, amygdala, and striatum. Sci Rep 2017; 7:677. PMID: 28386101; PMCID: PMC5429729; DOI: 10.1038/s41598-017-00819-z.
Abstract
The relationship between attention and sensory coding is an area of active investigation. Previous studies have revealed that an animal’s behavioral state can play a crucial role in shaping the characteristics of neural responses in the auditory cortex (AC). However, behavioral modulation of auditory response in brain areas outside the AC is not well studied. In this study, we used the same experimental paradigm to examine the effects of attention on neural activity in multiple brain regions including the primary auditory cortex (A1), posterior auditory field (PAF), amygdala (AMY), and striatum (STR). Single-unit spike activity was recorded while cats were actively performing a tone-detection task or passively listening to the same tones. We found that tone-evoked neural responses in A1 were not significantly affected by task-engagement; however, those in PAF and AMY were enhanced, and those in STR were suppressed. The enhanced effect was associated with an improvement of accuracy of tone detection, which was estimated from the spike activity. Additionally, the firing rates of A1 and PAF neurons decreased upon motor response (licking) during the detection task. Our results suggest that attention may have different effects on auditory responsive brain areas depending on their physiological functions.
21
Single Neurons in the Avian Auditory Cortex Encode Individual Identity and Propagation Distance in Naturally Degraded Communication Calls. J Neurosci 2017; 37:3491-3510. PMID: 28235893; PMCID: PMC5373131; DOI: 10.1523/jneurosci.2220-16.2017.
Abstract
One of the most complex tasks performed by sensory systems is "scene analysis": the interpretation of complex signals as behaviorally relevant objects. The study of this problem, universal to species and sensory modalities, is particularly challenging in audition, where sounds from various sources and localizations, degraded by propagation through the environment, sum to form a single acoustical signal. Here we investigated in a songbird model, the zebra finch, the neural substrate for ranging and identifying a single source. We relied on ecologically and behaviorally relevant stimuli, contact calls, to investigate the neural discrimination of individual vocal signature as well as sound source distance when calls have been degraded through propagation in a natural environment. Performing electrophysiological recordings in anesthetized birds, we found neurons in the auditory forebrain that discriminate individual vocal signatures despite long-range degradation, as well as neurons discriminating propagation distance, with varying degrees of multiplexing between both information types. Moreover, the neural discrimination performance of individual identity was not affected by propagation-induced degradation beyond what was induced by the decreased intensity. For the first time, neurons with distance-invariant identity discrimination properties as well as distance-discriminant neurons are revealed in the avian auditory cortex. Because these neurons were recorded in animals that had prior experience neither with the vocalizers of the stimuli nor with long-range propagation of calls, we suggest that this neural population is part of a general-purpose system for vocalizer discrimination and ranging.
SIGNIFICANCE STATEMENT Understanding how the brain makes sense of the multitude of stimuli that it continually receives in natural conditions is a challenge for scientists. Here we provide a new understanding of how the auditory system extracts behaviorally relevant information, the vocalizer identity and its distance to the listener, from acoustic signals that have been degraded by long-range propagation in natural conditions. We show, for the first time, that single neurons, in the auditory cortex of zebra finches, are capable of discriminating the individual identity and sound source distance in conspecific communication calls. The discrimination of identity in propagated calls relies on a neural coding that is robust to intensity changes, signals' quality, and decreases in the signal-to-noise ratio.
22
Lyzwa D, Wörgötter F. Neural and Response Correlations to Complex Natural Sounds in the Auditory Midbrain. Front Neural Circuits 2016; 10:89. PMID: 27891078; PMCID: PMC5102906; DOI: 10.3389/fncir.2016.00089.
Abstract
How natural communication sounds are spatially represented across the inferior colliculus, the main center of convergence for auditory information in the midbrain, is not known. The neural representation of the acoustic stimuli results from the interplay of locally differing input and the organization of spectral and temporal neural preferences that change gradually across the nucleus. This raises the question of how similar the neural representation of the communication sounds is across these gradients of neural preferences, and whether it also changes gradually. The analyzed neural recordings were multi-unit cluster spike trains from guinea pigs presented with a spectrotemporally rich set of eleven species-specific communication sounds. Using cross-correlation, we analyzed the response similarity of spiking activity across a broad frequency range for neurons of similar and different frequency tuning. Furthermore, we separated the contribution of the stimulus to the correlations to investigate whether similarity is only attributable to the stimulus, or whether interactions exist between the multi-unit clusters that lead to neural correlations and whether these follow the same representation as the response correlations. We found that similarity of responses is dependent on the neurons' spatial distance for similarly and differently frequency-tuned neurons, and that similarity decreases gradually with spatial distance. Significant neural correlations exist, and contribute to the total response similarity. Our findings suggest that for multi-unit clusters in the mammalian inferior colliculus, the gradual response similarity with spatial distance to natural complex sounds is shaped by neural interactions and the gradual organization of neural preferences.
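Separating the stimulus contribution from neural correlations, as described above, is commonly done with a trial-shuffle (shift-predictor) correction: correlate simultaneously recorded single-trial responses, then subtract the correlation obtained after shuffling the trial order of one unit. A sketch on simulated responses (the shared-noise model is an assumption):

```python
# Shuffle-corrected correlation sketch: the correlation of simultaneously
# recorded single-trial responses minus the correlation after shuffling trial
# order for one unit, which preserves only the stimulus-locked component.
import numpy as np

rng = np.random.default_rng(7)
n_trials, n_bins = 100, 50
stimulus_drive = rng.normal(0, 1, n_bins)                # common time-varying drive

def unit_responses(shared_noise):
    # each trial = stimulus drive + shared trial-to-trial noise + private noise
    return (stimulus_drive
            + 0.8 * shared_noise
            + rng.normal(0, 1, (n_trials, n_bins)))

shared = rng.normal(0, 1, (n_trials, n_bins))
resp_a, resp_b = unit_responses(shared), unit_responses(shared)

def mean_corr(a, b):
    return np.mean([np.corrcoef(x, y)[0, 1] for x, y in zip(a, b)])

raw = mean_corr(resp_a, resp_b)                                   # stimulus + neural correlation
shuffled = mean_corr(resp_a, resp_b[rng.permutation(n_trials)])   # shift predictor: stimulus only
print(f"raw correlation {raw:.2f}, shift predictor {shuffled:.2f}, "
      f"neural correlation {raw - shuffled:.2f}")
```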
Affiliation(s)
- Dominika Lyzwa
- Department of Nonlinear Dynamics, Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
- Physics Department, Institute for Nonlinear Dynamics, Georg-August-University, Göttingen, Germany
- Bernstein Focus Neurotechnology, Göttingen, Germany
- Florentin Wörgötter
- Bernstein Focus Neurotechnology, Göttingen, Germany
- Institute for Physics-Biophysics, Georg-August University, Göttingen, Germany

23
Cortical Transformation of Spatial Processing for Solving the Cocktail Party Problem: A Computational Model. eNeuro 2016; 3:eN-NWR-0086-15. PMID: 26866056; PMCID: PMC4745179; DOI: 10.1523/eneuro.0086-15.2015.
Abstract
In multisource, “cocktail party” sound environments, human and animal auditory systems can use spatial cues to effectively separate and follow one source of sound over competing sources. While mechanisms to extract spatial cues such as interaural time differences (ITDs) are well understood in precortical areas, how such information is reused and transformed in higher cortical regions to represent segregated sound sources is not clear. We present a computational model describing a hypothesized neural network that spans spatial cue detection areas and the cortex. This network is based on recent physiological findings that cortical neurons selectively encode target stimuli in the presence of competing maskers based on source locations (Maddox et al., 2012). We demonstrate that key features of cortical responses can be generated by the model network, which exploits spatial interactions between inputs via lateral inhibition, enabling the spatial separation of target and interfering sources while allowing monitoring of a broader acoustic space when there is no competition. We present the model network along with testable experimental paradigms as a starting point for understanding the transformation and organization of spatial information from midbrain to cortex. This network is then extended to suggest engineering solutions that may be useful for hearing-assistive devices in solving the cocktail party problem.
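The model summarized above relies on lateral inhibition between spatially tuned inputs so that cortical units lock onto one source when another competes. The snippet below is only a toy steady-state rate model of that general mechanism (the published network is a spiking model with many more components); the weights and rectification here are our own illustrative assumptions.

```python
import numpy as np

def cortical_output(channel_drive, w_exc=1.0, w_inh=0.6):
    """Steady-state rates of cortical units, one per spatial channel.

    Each unit is excited by its own spatial channel and receives lateral
    inhibition from every other channel (toy rate model, not the published one).
    """
    drive = np.asarray(channel_drive, dtype=float)
    total = drive.sum()
    net = w_exc * drive - w_inh * (total - drive)  # lateral inhibition
    return np.maximum(net, 0.0)                    # rectification

# Single source at the middle channel: it is represented without suppression.
print(cortical_output([0.0, 1.0, 0.0]))
# Competing sources at the flanking channels: mutual inhibition suppresses the
# responses and silences the channel in between, sharpening spatial separation.
print(cortical_output([1.0, 0.4, 1.0]))
```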
24
Lyzwa D, Herrmann JM, Wörgötter F. Natural Vocalizations in the Mammalian Inferior Colliculus are Broadly Encoded by a Small Number of Independent Multi-Units. Front Neural Circuits 2016; 9:91. PMID: 26869890; PMCID: PMC4740783; DOI: 10.3389/fncir.2015.00091.
Abstract
How complex natural sounds are represented by the main converging center of the auditory midbrain, the central inferior colliculus, is an open question. We applied neural discrimination to determine the variation of detailed encoding of individual vocalizations across the best frequency gradient of the central inferior colliculus. The analysis was based on collective responses from several neurons. These multi-unit spike trains were recorded from guinea pigs exposed to a spectrotemporally rich set of eleven species-specific vocalizations. Spike trains of disparate units from the same recording were combined in order to investigate whether groups of multi-unit clusters represent the whole set of vocalizations more reliably than only one unit, and whether temporal response correlations between them facilitate an unambiguous neural representation of the vocalizations. We found a spatial distribution of the capability to accurately encode groups of vocalizations across the best frequency gradient. Different vocalizations are optimally discriminated at different locations of the best frequency gradient. Furthermore, groups of a few multi-unit clusters yield improved discrimination over only one multi-unit cluster between all tested vocalizations. However, temporal response correlations between units do not yield better discrimination. Our study is based on a large set of units of simultaneously recorded responses from several guinea pigs and electrode insertion positions. Our findings suggest a broadly distributed code for behaviorally relevant vocalizations in the mammalian inferior colliculus. Responses from a few non-interacting units are sufficient to faithfully represent the whole set of studied vocalizations with diverse spectrotemporal properties.
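Neural discrimination of the kind reported above is often computed with a template (nearest-centroid) classifier applied to binned responses, with pooling implemented by concatenating the response vectors of several multi-unit clusters. The sketch below shows that generic procedure on toy Poisson data; it is not the authors' analysis pipeline, and all names and numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def discrimination_accuracy(responses):
    """Leave-one-out nearest-template discrimination of stimulus identity.

    responses: array (n_stimuli, n_trials, n_features); the features may be the
    binned response of one unit or the concatenation of several units.
    """
    n_stim, n_trials, _ = responses.shape
    correct = 0
    for s in range(n_stim):
        for t in range(n_trials):
            test = responses[s, t]
            dists = []
            for s2 in range(n_stim):
                mask = np.ones(n_trials, bool)
                if s2 == s:
                    mask[t] = False          # exclude the held-out trial
                template = responses[s2, mask].mean(axis=0)
                dists.append(np.linalg.norm(test - template))
            correct += int(np.argmin(dists) == s)
    return correct / (n_stim * n_trials)

# Toy data: 11 "vocalizations", 10 trials, 3 multi-unit clusters x 40 bins each.
n_stim, n_trials, n_units, n_bins = 11, 10, 3, 40
signal = rng.poisson(3.0, (n_stim, 1, n_units, n_bins)).astype(float)
trials = rng.poisson(signal + 1.0, (n_stim, n_trials, n_units, n_bins)).astype(float)

one_unit = trials[:, :, 0, :]                    # a single multi-unit cluster
pooled = trials.reshape(n_stim, n_trials, -1)    # concatenate all clusters
print("single cluster:", discrimination_accuracy(one_unit))
print("pooled clusters:", discrimination_accuracy(pooled))
```

Pooling typically improves accuracy because noise that is independent across clusters averages out, which is the comparison the study quantifies across the best-frequency gradient.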
Affiliation(s)
- Dominika Lyzwa
- Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
- Institute for Nonlinear Dynamics, Physics Department, Georg-August-University, Göttingen, Germany
- Bernstein Focus Neurotechnology, Göttingen, Germany
- J. Michael Herrmann
- Bernstein Focus Neurotechnology, Göttingen, Germany
- Institute of Perception, Action and Behavior, School of Informatics, University of Edinburgh, Edinburgh, UK
- Florentin Wörgötter
- Bernstein Focus Neurotechnology, Göttingen, Germany
- Institute for Physics - Biophysics, Georg-August-University, Göttingen, Germany

25
Zhao Z, Sato Y, Qin L. Response properties of neurons in the cat's putamen during auditory discrimination. Behav Brain Res 2015; 292:448-62. PMID: 26162752; DOI: 10.1016/j.bbr.2015.07.002.
Abstract
The striatum integrates diverse convergent input and plays a critical role in goal-directed behaviors. To date, the auditory functions of the striatum have been studied far less. Recently, it was demonstrated that auditory cortico-striatal projections influence behavioral performance during a frequency discrimination task. To reveal the functions of striatal neurons in auditory discrimination, we recorded single-unit spike activity in the putamen (dorsal striatum) of freely moving cats performing a Go/No-go task to discriminate sounds with different modulation rates (12.5 Hz vs. 50 Hz) or envelopes (damped vs. ramped). We found that the putamen neurons can be broadly divided into four groups according to their contributions to sound discrimination. First, 40% of neurons showed vigorous responses synchronized to the sound envelope and could precisely discriminate different sounds. Second, 18% of neurons showed a strong preference for ramped over damped sounds, but no preference for modulation rate; they could only discriminate changes in the sound envelope. Third, 27% of neurons rapidly adapted to the sound stimuli and had no ability to discriminate sounds. Fourth, 15% of neurons discriminated the sounds in a manner dependent on reward prediction. Compared to the passive listening condition, the activity of putamen neurons was significantly enhanced by engagement in the auditory tasks, but was not modulated by the cat's behavioral choice. The coexistence of multiple types of neurons suggests that the putamen is involved in the transformation from auditory representation to stimulus-reward association.
Affiliation(s)
- Zhenling Zhao
- Department of Physiology, Interdisciplinary Graduate School of Medicine and Engineering, University of Yamanashi, Chuo, Yamanashi 409-3898, Japan; Jinan Biomedicine R&D Center, School of Life Science and Technology, Jinan University, Guangzhou 510632, People's Republic of China
- Yu Sato
- Department of Physiology, Interdisciplinary Graduate School of Medicine and Engineering, University of Yamanashi, Chuo, Yamanashi 409-3898, Japan
- Ling Qin
- Department of Physiology, China Medical University, Shenyang 110001, People's Republic of China

26
Abstract
Vertebrate audition is a dynamic process, capable of exhibiting both short- and long-term adaptations to varying listening conditions. Precise spike timing has long been known to play an important role in auditory encoding, but its role in sensory plasticity remains largely unexplored. We addressed this issue in Gambel's white-crowned sparrow (Zonotrichia leucophrys gambelii), a songbird that shows pronounced seasonal fluctuations in circulating levels of sex-steroid hormones, which are known to be potent neuromodulators of auditory function. We recorded extracellular single-unit activity in the auditory forebrain of males and females under different breeding conditions and used a computational approach to explore two potential strategies for the neural discrimination of sound level: one based on spike counts and one based on spike timing reliability. We report that breeding condition has robust sex-specific effects on spike timing. Specifically, in females, breeding condition increases the proportion of cells that rely solely on spike timing information and increases the temporal resolution required for optimal intensity encoding. Furthermore, in a functionally distinct subset of cells that are particularly well suited for amplitude encoding, female breeding condition enhances spike timing-based discrimination accuracy. No effects of breeding condition were observed in males. Our results suggest that high-resolution temporal discharge patterns may provide a plastic neural substrate for sensory coding.
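The two decoding strategies compared above, one based on spike counts and one based on spike timing, can be illustrated with a simple ROC analysis: how well do single-trial spike counts, versus single-trial correlations with a temporal response template, separate the responses to two sound levels? The toy example below is our own sketch of that comparison, not the authors' computational procedure; all parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def auroc(a, b):
    """Area under the ROC curve: P(random draw from a > random draw from b)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.mean(a[:, None] > b[None, :]) + 0.5 * np.mean(a[:, None] == b[None, :])

def smooth(x, width=5):
    """Boxcar-smooth each row (a crude estimate of the time-varying response)."""
    kernel = np.ones(width) / width
    return np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, x)

# Toy responses to two sound levels that differ mainly in temporal pattern.
n_trials, n_bins = 40, 200
rate_lo = np.full(n_bins, 0.05)
rate_hi = rate_lo.copy()
rate_hi[50:60] += 0.25    # extra spikes early ...
rate_hi[150:200] = 0.0    # ... removed later, so the mean count is matched
resp_lo = (rng.random((n_trials, n_bins)) < rate_lo).astype(float)
resp_hi = (rng.random((n_trials, n_bins)) < rate_hi).astype(float)

# (a) Spike-count code.
count_sep = auroc(resp_hi.sum(axis=1), resp_lo.sum(axis=1))

# (b) Spike-timing code: correlate each smoothed trial with the "hi" template.
template = smooth(resp_hi).mean(axis=0)
def timing_score(trials):
    return [np.corrcoef(row, template)[0, 1] for row in smooth(trials)]
timing_sep = auroc(timing_score(resp_hi), timing_score(resp_lo))

print(f"count-based discriminability (AUROC):  {count_sep:.2f}")
print(f"timing-based discriminability (AUROC): {timing_sep:.2f}")
```

Here the counts are matched by construction, so only the timing-based measure separates the two levels; the study asks how hormonal state shifts cells between these two regimes.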
27
Faghihi F, Moustafa AA. Impaired homeostatic regulation of feedback inhibition associated with system deficiency to detect fluctuation in stimulus intensity: a simulation study. Neurocomputing 2015. DOI: 10.1016/j.neucom.2014.11.008.

28
Gaucher Q, Edeline JM. Stimulus-specific effects of noradrenaline in auditory cortex: implications for the discrimination of communication sounds. J Physiol 2014; 593:1003-20. PMID: 25398527; DOI: 10.1113/jphysiol.2014.282855.
Abstract
KEY POINTS
- Many studies have described the action of noradrenaline (NA) on the properties of cortical receptive fields, but none has assessed how NA affects the ability of cortical cells to discriminate between natural stimuli.
- In the present study, we compared the consequences of topical NA application on spectro-temporal receptive fields (STRFs) and on responses to communication sounds in the primary auditory cortex.
- NA application reduced the STRFs (an effect replicated by the α1 agonist phenylephrine) but did not change, on average, the responses to communication sounds.
- For cells exhibiting increased evoked responses during NA application, discrimination abilities were enhanced, as quantified by mutual information.
- The changes induced by NA on parameters extracted from the STRFs and from responses to communication sounds were not related.

ABSTRACT
The alterations exerted by neuromodulators on neuronal selectivity have been the topic of a vast literature in the visual, somatosensory, auditory and olfactory cortices. However, very few studies have investigated to what extent the effects observed when testing these functional properties with artificial stimuli can be transferred to responses evoked by natural stimuli. Here, we tested the effect of noradrenaline (NA) application on the responses to pure tones and communication sounds in the guinea-pig primary auditory cortex. When pure tones were used to assess the spectro-temporal receptive field (STRF) of cortical cells, NA triggered a transient reduction of the STRFs in both the spectral and the temporal domain, an effect replicated by the α1 agonist phenylephrine, whereas α2 and β agonists induced STRF expansion. When tested with communication sounds, NA application did not produce significant effects on firing rate or spike timing reliability, despite the fact that α1, α2 and β agonists by themselves had significant effects on these measures. However, the cells whose evoked responses were increased by NA application displayed enhanced discriminative abilities. These cells had initially smaller STRFs than the rest of the population. A principal component analysis revealed that the variations of parameters extracted from the STRF and those extracted from the responses to natural stimuli were not correlated. These results suggest that probing the action of neuromodulators on cortical cells with artificial stimuli does not allow us to predict their action on responses to natural stimuli.
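Spectro-temporal receptive fields such as those discussed above are, in essence, estimates of the linear filter relating a spectrogram-like stimulus representation to a neuron's firing. The sketch below shows the generic reverse-correlation (spike-triggered average) estimator on simulated white-noise data; the authors derived their STRFs from pure-tone responses, so treat this only as an illustration of the concept, with all parameters assumed by us.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "spectrogram" stimulus: n_freq channels x n_time bins of Gaussian level.
n_freq, n_time, n_lags = 16, 5000, 20
stim = rng.normal(size=(n_freq, n_time))

# Ground-truth STRF used only to simulate spikes for this sketch.
true_strf = np.zeros((n_freq, n_lags))
true_strf[6:9, 3:7] = 1.0      # excitatory subfield
true_strf[10:12, 8:12] = -0.5  # inhibitory subfield

# Linear-nonlinear Poisson spike generation.
drive = np.zeros(n_time)
for lag in range(n_lags):
    drive[n_lags:] += true_strf[:, lag] @ stim[:, n_lags - lag: n_time - lag]
rate = np.clip(0.05 + 0.05 * drive, 0, None)
spikes = rng.poisson(rate)

# Spike-triggered average: accumulate the stimulus slice preceding each spike.
sta = np.zeros((n_freq, n_lags))
for t in np.nonzero(spikes)[0]:
    if t >= n_lags:
        sta += spikes[t] * stim[:, t - n_lags + 1: t + 1][:, ::-1]
sta /= spikes[n_lags:].sum()

# For white-noise stimuli the STA is proportional to the underlying STRF.
print("correlation with ground truth:",
      np.corrcoef(sta.ravel(), true_strf.ravel())[0, 1].round(2))
```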
Affiliation(s)
- Quentin Gaucher
- Centre de Neurosciences Paris-Sud (CNPS), CNRS UMR 8195, Université Paris-Sud, Bâtiment 446, 91405 Orsay cedex, France

29
Tang C, Chehayeb D, Srivastava K, Nemenman I, Sober SJ. Millisecond-scale motor encoding in a cortical vocal area. PLoS Biol 2014; 12:e1002018. PMID: 25490022; PMCID: PMC4260785; DOI: 10.1371/journal.pbio.1002018.
Abstract
Analyzing brain activity in songbirds suggests that the nervous system controls behavior by precisely modulating the timing pattern of electrical events. Studies of motor control have almost universally examined firing rates to investigate how the brain shapes behavior. In principle, however, neurons could encode information through the precise temporal patterning of their spike trains as well as (or instead of) through their firing rates. Although the importance of spike timing has been demonstrated in sensory systems, it is largely unknown whether timing differences in motor areas could affect behavior. We tested the hypothesis that significant information about trial-by-trial variations in behavior is represented by spike timing in the songbird vocal motor system. We found that neurons in motor cortex convey information via spike timing far more often than via spike rate and that the amount of information conveyed at the millisecond timescale greatly exceeds the information available from spike counts. These results demonstrate that information can be represented by spike timing in motor circuits and suggest that timing variations evoke differences in behavior. A central question in neuroscience is how neurons use patterns of electrical events to represent sensory information and control behavior. Neurons might use two different codes to transmit information. First, signals might be conveyed by the total number of electrical events (called “action potentials”) that a neuron produces. Alternately, the timing pattern of action potentials, as distinct from the total number of action potentials produced, might be used to transmit information. Although many studies have shown that timing can convey information about sensory inputs, such as visual scenery or sound waveforms, the role of action potential timing in the control of complex, learned behaviors is largely unknown. Here, by analyzing the pattern of action potentials produced in a songbird's brain as it precisely controls vocal behavior, we demonstrate that far more information about upcoming behavior is present in spike timing than in the total number of spikes fired. This work suggests that timing can be equally (or more) important in motor systems as in sensory systems.
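A common way to ask whether spike timing carries information beyond spike counts, as the study above does, is to compute mutual information between a behavioral variable and the response represented either as a count or as a binary "word" at progressively finer bin sizes. The sketch below uses a plain plug-in estimator on toy data; the authors use more careful, bias-corrected estimators, and the simulated data and names here are our own assumptions.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(4)

def mutual_information(symbols, labels):
    """Plug-in estimate of I(symbol; label) in bits (no bias correction)."""
    n = len(symbols)
    p_joint = Counter(zip(symbols, labels))
    p_s, p_l = Counter(symbols), Counter(labels)
    return sum(c / n * np.log2((c / n) / ((p_s[s] / n) * (p_l[l] / n)))
               for (s, l), c in p_joint.items())

def spike_words(trains, bin_size):
    """Collapse fine-binned spike trains into coarse binary words."""
    n_trials, n_bins = trains.shape
    n_coarse = n_bins // bin_size
    coarse = trains[:, :n_coarse * bin_size].reshape(n_trials, n_coarse, bin_size)
    return [tuple((coarse[t].sum(axis=1) > 0).astype(int)) for t in range(n_trials)]

# Toy data: two behavioral conditions that shift spike *timing*, not spike count.
n_trials, n_bins = 400, 40          # 40 one-millisecond bins per trial
labels = rng.integers(0, 2, n_trials)
trains = np.zeros((n_trials, n_bins))
for t in range(n_trials):
    spike_bin = 10 + 4 * labels[t] + rng.integers(0, 3)  # timing depends on label
    trains[t, spike_bin] = 1

counts = [int(tr.sum()) for tr in trains]
print("MI from spike counts: %.2f bits" % mutual_information(counts, list(labels)))
for bin_size in (20, 10, 5, 2):
    words = spike_words(trains, bin_size)
    print("MI from %2d ms words: %.2f bits"
          % (bin_size, mutual_information(words, list(labels))))
```

In this toy example the count carries no information at all, and the word-based information grows as the bins shrink toward millisecond resolution, which is the qualitative pattern the paper reports for motor cortical neurons.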
Affiliation(s)
- Claire Tang
- Neuroscience Graduate Program, University of California, San Francisco, San Francisco, California, United States of America
- Department of Biology, Emory University, Atlanta, Georgia, United States of America
- Diala Chehayeb
- Department of Biology, Emory University, Atlanta, Georgia, United States of America
- Kyle Srivastava
- Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, Georgia, United States of America
- Ilya Nemenman
- Department of Biology, Emory University, Atlanta, Georgia, United States of America
- Department of Physics, Emory University, Atlanta, Georgia, United States of America
- Samuel J. Sober
- Department of Biology, Emory University, Atlanta, Georgia, United States of America

30
Dimitrov AG, Cummins GI, Mayko ZM, Portfors CV. Inhibition does not affect the timing code for vocalizations in the mouse auditory midbrain. Front Physiol 2014; 5:140. PMID: 24795640; PMCID: PMC3997027; DOI: 10.3389/fphys.2014.00140.
Abstract
Many animals use a diverse repertoire of complex acoustic signals to convey different types of information to other animals. The information in each vocalization therefore must be coded by neurons in the auditory system. One way in which the auditory system may discriminate among different vocalizations is by having highly selective neurons, where only one or two different vocalizations evoke a strong response from a single neuron. Another strategy is to have specific spike timing patterns for particular vocalizations such that each neural response can be matched to a specific vocalization. Both of these strategies seem to occur in the auditory midbrain of mice. The neural mechanisms underlying rate and time coding are unclear; however, it is likely that inhibition plays a role. Here, we examined whether inhibition is involved in shaping neural selectivity to vocalizations via rate and/or time coding in the mouse inferior colliculus (IC). We examined extracellular single unit responses to vocalizations before and after iontophoretically blocking GABAA and glycine receptors in the IC of awake mice. We then applied a number of neurometrics to examine the rate and timing information of individual neurons. We initially evaluated the neuronal responses using inspection of the raster plots, spike-counting measures of response rate and stimulus preference, and a measure of maximum available stimulus-response mutual information. Subsequently, we used two different event sequence distance measures, one based on vector space embedding, and one derived from the Victor/Purpura D_q metric, to direct hierarchical clustering of responses. In general, we found that the most salient feature of pharmacologically blocking inhibitory receptors in the IC was the lack of major effects on the functional properties of IC neurons. Blocking inhibition did increase response rate to vocalizations, as expected. However, it did not significantly affect spike timing or stimulus selectivity of the studied neurons. We observed two main effects when inhibition was locally blocked: (1) Highly selective neurons maintained their selectivity and the information about the stimuli did not change, but response rate increased slightly. (2) Neurons that responded to multiple vocalizations in the control condition also responded to the same stimuli in the test condition, with similar timing and pattern, but with a greater number of spikes. For some neurons the information rate generally increased, but the information per spike decreased. In many of these neurons, vocalizations that generated no responses in the control condition generated some response in the test condition. Overall, we found that inhibition in the IC does not play a substantial role in creating the distinguishable and reliable neuronal temporal spike patterns in response to different vocalizations.
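The Victor/Purpura metric D_q referenced above is an edit distance between spike trains: inserting or deleting a spike costs 1, and moving a spike by Δt costs q·|Δt|, so q sets the temporal precision of the comparison. Below is a minimal, textbook implementation of the standard dynamic program; the spike times in the example are arbitrary, not data from the study.

```python
import numpy as np

def victor_purpura(s1, s2, q):
    """Victor-Purpura spike-train distance D_q.

    Minimal cost of transforming one spike train into the other, where adding or
    deleting a spike costs 1 and shifting a spike by dt costs q*|dt|.
    s1, s2: sorted arrays of spike times (s); q: cost per second of shift.
    """
    n1, n2 = len(s1), len(s2)
    d = np.zeros((n1 + 1, n2 + 1))
    d[:, 0] = np.arange(n1 + 1)   # delete all spikes of s1
    d[0, :] = np.arange(n2 + 1)   # insert all spikes of s2
    for i in range(1, n1 + 1):
        for j in range(1, n2 + 1):
            d[i, j] = min(d[i - 1, j] + 1,                                   # delete
                          d[i, j - 1] + 1,                                   # insert
                          d[i - 1, j - 1] + q * abs(s1[i - 1] - s2[j - 1]))  # shift
    return d[n1, n2]

a = np.array([0.010, 0.120, 0.300])
b = np.array([0.015, 0.320])
# q = 0: only spike counts matter; large q: precise timing matters.
for q in (0.0, 20.0, 1000.0):
    print(f"q = {q:7.1f} 1/s -> D_q = {victor_purpura(a, b, q):.3f}")
```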
Affiliation(s)
- Alexander G Dimitrov
- Department of Mathematics, Washington State University Vancouver, Vancouver, WA, USA
- Graham I Cummins
- Department of Mathematics, Washington State University Vancouver, Vancouver, WA, USA
- Zachary M Mayko
- School of Biological Sciences, Washington State University Vancouver, Vancouver, WA, USA
- Christine V Portfors
- School of Biological Sciences, Washington State University Vancouver, Vancouver, WA, USA

31
Kim SY, Lim W. Realistic thermodynamic and statistical-mechanical measures for neural synchronization. J Neurosci Methods 2014; 226:161-170. DOI: 10.1016/j.jneumeth.2013.12.013.

32
Maddox RK, Sen K, Billimoria CP. Auditory forebrain neurons track temporal features of time-warped natural stimuli. J Assoc Res Otolaryngol 2013; 15:131-8. PMID: 24129604; DOI: 10.1007/s10162-013-0418-8.
Abstract
A fundamental challenge for sensory systems is to recognize natural stimuli despite stimulus variations. A compelling example occurs in speech, where the auditory system can recognize words spoken at a wide range of speeds. To date, there have been more computational models for time-warp invariance than experimental studies that investigate responses to time-warped stimuli at the neural level. Here, we address this problem in the model system of zebra finches anesthetized with urethane. In behavioral experiments, we found high discrimination accuracy well beyond the observed natural range of song variations. We artificially sped up or slowed down songs (preserving pitch) and recorded auditory responses from neurons in field L, the avian primary auditory cortex homolog. We found that field L neurons responded robustly to time-warped songs, tracking the temporal features of the stimuli over a broad range of warp factors. Time-warp invariance was not observed per se, but there was sufficient information in the neural responses to reliably classify which of two songs was presented. Furthermore, the average spike rate was close to constant over the range of time warps, contrary to recent modeling predictions. We discuss how this response pattern is surprising given current computational models of time-warp invariance and how such a response could be decoded downstream to achieve time-warp-invariant recognition of sounds.
Affiliation(s)
- Ross K Maddox
- Institute for Learning and Brain Sciences, University of Washington, 1715 NE Columbia Rd, Box 357988, Seattle, WA, 98195, USA

33
Cortical inhibition reduces information redundancy at presentation of communication sounds in the primary auditory cortex. J Neurosci 2013; 33:10713-28. PMID: 23804094; DOI: 10.1523/jneurosci.0079-13.2013.
Abstract
In all sensory modalities, intracortical inhibition shapes the functional properties of cortical neurons but also influences the responses to natural stimuli. Studies performed in various species have revealed that auditory cortex neurons respond to conspecific vocalizations by temporal spike patterns displaying a high trial-to-trial reliability, which might result from precise timing between excitation and inhibition. Studying the guinea pig auditory cortex, we show that partial blockage of GABAA receptors by gabazine (GBZ) application (10 μm, a concentration that promotes expansion of cortical receptive fields) increased the evoked firing rate and the spike-timing reliability during presentation of communication sounds (conspecific and heterospecific vocalizations), whereas GABAB receptor antagonists [10 μm saclofen; 10-50 μm CGP55845 (p-3-aminopropyl-p-diethoxymethyl phosphoric acid)] had nonsignificant effects. Computing mutual information (MI) from the responses to vocalizations using either the evoked firing rate or the temporal spike patterns revealed that GBZ application increased the MI derived from the activity of single cortical site but did not change the MI derived from population activity. In addition, quantification of information redundancy showed that GBZ significantly increased redundancy at the population level. This result suggests that a potential role of intracortical inhibition is to reduce information redundancy during the processing of natural stimuli.
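Redundancy of the kind quantified above is commonly defined as the amount by which the information carried jointly by two recording sites falls short of the sum of their individual informations about the stimulus. The sketch below computes that quantity with a plug-in estimator on toy discretized responses; the paper's exact information and redundancy estimators may differ in detail, and the simulated data are ours.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(5)

def mi(x, y):
    """Plug-in mutual information (bits) between two discrete sequences."""
    n = len(x)
    pxy, px, py = Counter(zip(x, y)), Counter(x), Counter(y)
    return sum(c / n * np.log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

# Toy experiment: 8 vocalizations, 200 trials each; two recording sites whose
# discretized responses (e.g., spike-count bins) are partially redundant.
n_stim, n_trials = 8, 200
stimulus = np.repeat(np.arange(n_stim), n_trials)
site1 = (stimulus + rng.integers(0, 3, stimulus.size)) % n_stim   # noisy copy
site2 = (stimulus + rng.integers(0, 3, stimulus.size)) % n_stim   # another noisy copy

joint = list(zip(site1, site2))                  # pairwise population response
i1, i2 = mi(site1, stimulus), mi(site2, stimulus)
i_joint = mi(joint, stimulus)
redundancy = i1 + i2 - i_joint   # > 0: overlapping information; < 0: synergy

print(f"site 1: {i1:.2f} bits, site 2: {i2:.2f} bits")
print(f"pair:   {i_joint:.2f} bits, redundancy: {redundancy:.2f} bits")
```

The study's claim is that blocking GABAA receptors increases this redundancy term at the population level even though single-site information rises.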
34
Behavioral modulation of neural encoding of click-trains in the primary and nonprimary auditory cortex of cats. J Neurosci 2013. PMID: 23926266; DOI: 10.1523/jneurosci.1724-13.
Abstract
Neural representation of acoustic stimuli in the mammal auditory cortex (AC) has been extensively studied using anesthetized or awake nonbehaving animals. Recently, several studies have shown that active engagement in an auditory behavioral task can substantially change the neuron response properties compared with when animals were passively listening to the same sounds; however, these studies mainly investigated the effect of behavioral state on the primary auditory cortex and the reported effects were inconsistent. Here, we examined the single-unit spike activities in both the primary and nonprimary areas along the dorsal-to-ventral direction of the cat's AC, when the cat was actively discriminating click-trains at different repetition rates and when it was passively listening to the same stimuli. We found that the changes due to task engagement were heterogeneous in the primary AC; some neurons showed significant increases in driven firing rate, others showed decreases. But in the nonprimary AC, task engagement predominantly enhanced the neural responses, resulting in a substantial improvement of the neural discriminability of click-trains. Additionally, our results revealed that neural responses synchronizing to click-trains gradually decreased along the dorsal-to-ventral direction of cat AC, while nonsynchronizing responses remained less changed. The present study provides new insights into the hierarchical organization of AC along the dorsal-to-ventral direction and highlights the importance of using behavioral animals to investigate the later stages of cortical processing.
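Synchronization of spiking to a periodic click train, as contrasted above with nonsynchronized responses, is conventionally quantified by vector strength. The snippet below computes it for a phase-locked and a non-locked toy spike train; it is a generic measure, not necessarily the exact statistic used in the study, and the simulated spike times are our own.

```python
import numpy as np

rng = np.random.default_rng(6)

def vector_strength(spike_times, period):
    """Vector strength: 1 = perfect phase locking to the period, 0 = none."""
    phases = 2 * np.pi * (np.asarray(spike_times) % period) / period
    return np.hypot(np.cos(phases).sum(), np.sin(phases).sum()) / len(phases)

# Toy spike trains in response to a 12.5 Hz click train (period 80 ms).
period = 0.080
clicks = np.arange(0, 1.0, period)

# Phase-locked unit: a few spikes shortly after each click, with small jitter.
locked = np.concatenate([c + 0.004 + 0.002 * rng.standard_normal(3) for c in clicks])
# Non-synchronized unit: the same number of spikes at random times.
unlocked = rng.uniform(0, 1.0, locked.size)

print("synchronized response:     VS = %.2f" % vector_strength(locked, period))
print("non-synchronized response: VS = %.2f" % vector_strength(unlocked, period))
```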
35
Timme N, Alford W, Flecker B, Beggs JM. Synergy, redundancy, and multivariate information measures: an experimentalist's perspective. J Comput Neurosci 2013; 36:119-40. DOI: 10.1007/s10827-013-0458-4.

36
Conserved mechanisms of vocalization coding in mammalian and songbird auditory midbrain. Hear Res 2013; 305:45-56. PMID: 23726970; DOI: 10.1016/j.heares.2013.05.005.
Abstract
The ubiquity of social vocalizations among animals provides the opportunity to identify conserved mechanisms of auditory processing that subserve communication. Identifying auditory coding properties that are shared across vocal communicators will provide insight into how human auditory processing leads to speech perception. Here, we compare auditory response properties and neural coding of social vocalizations in auditory midbrain neurons of mammalian and avian vocal communicators. The auditory midbrain is a nexus of auditory processing because it receives and integrates information from multiple parallel pathways and provides the ascending auditory input to the thalamus. The auditory midbrain is also the first region in the ascending auditory system where neurons show complex tuning properties that are correlated with the acoustics of social vocalizations. Single unit studies in mice, bats and zebra finches reveal shared principles of auditory coding including tonotopy, excitatory and inhibitory interactions that shape responses to vocal signals, nonlinear response properties that are important for auditory coding of social vocalizations and modulation tuning. Additionally, single neuron responses in the mouse and songbird midbrain are reliable, selective for specific syllables, and rely on spike timing for neural discrimination of distinct vocalizations. We propose that future research on auditory coding of vocalizations in mouse and songbird midbrain neurons adopt similar experimental and analytical approaches so that conserved principles of vocalization coding may be distinguished from those that are specialized for each species. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives".
37
Amin N, Gastpar M, Theunissen FE. Selective and efficient neural coding of communication signals depends on early acoustic and social environment. PLoS One 2013; 8:e61417. PMID: 23630587; PMCID: PMC3632581; DOI: 10.1371/journal.pone.0061417.
Abstract
Previous research has shown that postnatal exposure to simple, synthetic sounds can affect the sound representation in the auditory cortex as reflected by changes in the tonotopic map or other relatively simple tuning properties, such as AM tuning. However, the functional implications of these changes for neural processing in the generation of ethologically-based perception remain unexplored. Here we examined the effects of noise-rearing and social isolation on the neural processing of communication sounds, such as species-specific song, in the primary auditory cortex analog of adult zebra finches. Our electrophysiological recordings reveal that neural tuning to simple frequency-based synthetic sounds is initially established in all the laminae independent of patterned acoustic experience; however, we provide the first evidence that early exposure to patterned sound statistics, such as those found in native sounds, is required for the subsequent emergence of neural selectivity for complex vocalizations, for shaping neural spiking precision in superficial and deep cortical laminae, and for creating efficient neural representations of song and a less redundant ensemble code in all the laminae. Our study also provides the first causal evidence for ‘sparse coding’: when the statistics of the stimuli were changed during rearing, as in noise-rearing, the sparse or optimal representation of species-specific vocalizations disappeared. Taken together, these results imply that a layer-specific differential development of the auditory cortex requires patterned acoustic input, and a specialized and robust sensory representation of complex communication sounds in the auditory cortex requires a rich acoustic and social environment.
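'Sparse coding' claims like the one above are usually backed by a sparseness index computed over a neuron's responses to a stimulus set. The sketch below uses the widely cited Treves-Rolls/Vinje-Gallant form of that index on made-up responses; the paper's own efficiency and redundancy measures are richer, so this is only an illustration of the concept.

```python
import numpy as np

rng = np.random.default_rng(7)

def sparseness(rates):
    """Treves-Rolls / Vinje-Gallant sparseness of a set of responses.

    0 = the neuron responds equally to every stimulus (dense code);
    1 = it responds to a single stimulus only (maximally sparse).
    """
    r = np.asarray(rates, dtype=float)
    n = r.size
    return (1 - (r.mean() ** 2) / np.mean(r ** 2)) / (1 - 1 / n)

# Responses (spikes/s) of two hypothetical neurons to 20 song segments.
selective = np.zeros(20); selective[3] = 30.0   # fires to one segment only
broad = rng.uniform(8, 12, 20)                  # similar rate to every segment

print("selective neuron sparseness:     %.2f" % sparseness(selective))
print("broadly tuned neuron sparseness: %.2f" % sparseness(broad))
```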
Affiliation(s)
- Noopur Amin
- Helen Wills Neuroscience Institute, University of California, Berkeley, California, United States of America
- Michael Gastpar
- Department of Electrical Engineering and Computer Science, University of California, Berkeley, California, United States of America
- Frédéric E. Theunissen
- Helen Wills Neuroscience Institute, University of California, Berkeley, California, United States of America
- Psychology Department, University of California, Berkeley, California, United States of America

38
Gaucher Q, Huetz C, Gourévitch B, Laudanski J, Occelli F, Edeline JM. How do auditory cortex neurons represent communication sounds? Hear Res 2013; 305:102-12. PMID: 23603138; DOI: 10.1016/j.heares.2013.03.011.
Abstract
A major goal in auditory neuroscience is to characterize how communication sounds are represented at the cortical level. The present review aims at investigating the role of auditory cortex in the processing of speech, bird songs and other vocalizations, which all are spectrally and temporally highly structured sounds. Whereas earlier studies have simply looked for neurons exhibiting higher firing rates to particular conspecific vocalizations over their modified, artificially synthesized versions, more recent studies determined the coding capacity of temporal spike patterns, which are prominent in primary and non-primary areas (and also in non-auditory cortical areas). In several cases, this information seems to be correlated with the behavioral performance of human or animal subjects, suggesting that spike-timing based coding strategies might set the foundations of our perceptive abilities. Also, it is now clear that the responses of auditory cortex neurons are highly nonlinear and that their responses to natural stimuli cannot be predicted from their responses to artificial stimuli such as moving ripples and broadband noises. Since auditory cortex neurons cannot follow rapid fluctuations of the vocalizations envelope, they only respond at specific time points during communication sounds, which can serve as temporal markers for integrating the temporal and spectral processing taking place at subcortical relays. Thus, the temporal sparse code of auditory cortex neurons can be considered as a first step for generating high level representations of communication sounds independent of the acoustic characteristic of these sounds. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives".
Affiliation(s)
- Quentin Gaucher
- Centre de Neurosciences Paris-Sud (CNPS), CNRS UMR 8195, Université Paris-Sud, Bâtiment 446, 91405 Orsay cedex, France

39
Grimsley JMS, Shanbhag SJ, Palmer AR, Wallace MN. Processing of communication calls in Guinea pig auditory cortex. PLoS One 2012; 7:e51646. PMID: 23251604; PMCID: PMC3520958; DOI: 10.1371/journal.pone.0051646.
Abstract
Vocal communication is an important aspect of guinea pig behaviour and a large contributor to their acoustic environment. We postulated that some cortical areas have distinctive roles in processing conspecific calls. In order to test this hypothesis we presented exemplars from all ten of their main adult vocalizations to urethane anesthetised animals while recording from each of the eight areas of the auditory cortex. We demonstrate that the primary area (AI) and three adjacent auditory belt areas contain many units that give isomorphic responses to vocalizations. These are the ventrorostral belt (VRB), the transitional belt area (T) that is ventral to AI and the small area (area S) that is rostral to AI. Area VRB has a denser representation of cells that are better at discriminating among calls by using either a rate code or a temporal code than any other area. Furthermore, 10% of VRB cells responded to communication calls but did not respond to stimuli such as clicks, broadband noise or pure tones. Area S has a sparse distribution of call responsive cells that showed excellent temporal locking, 31% of which selectively responded to a single call. AI responded well to all vocalizations and was much more responsive to vocalizations than the adjacent dorsocaudal core area. Areas VRB, AI and S contained units with the highest levels of mutual information about call stimuli. Area T also responded well to some calls but seems to be specialized for low sound levels. The two dorsal belt areas are comparatively unresponsive to vocalizations and contain little information about the calls. AI projects to areas S, VRB and T, so there may be both rostral and ventral pathways for processing vocalizations in the guinea pig.
Affiliation(s)
- Jasmine M. S. Grimsley
- Institute of Hearing Research, Medical Research Council, Nottingham, United Kingdom
- Department of Anatomy and Neurobiology, Northeast Ohio Medical University, Rootstown, Ohio, United States of America
- Sharad J. Shanbhag
- Department of Anatomy and Neurobiology, Northeast Ohio Medical University, Rootstown, Ohio, United States of America
- Alan R. Palmer
- Institute of Hearing Research, Medical Research Council, Nottingham, United Kingdom
- Mark N. Wallace
- Institute of Hearing Research, Medical Research Council, Nottingham, United Kingdom

40
Woolley SMN. Early experience shapes vocal neural coding and perception in songbirds. Dev Psychobiol 2012; 54:612-31. PMID: 22711657; PMCID: PMC3404257; DOI: 10.1002/dev.21014.
Abstract
Songbirds, like humans, are highly accomplished vocal learners. The many parallels between speech and birdsong and conserved features of mammalian and avian auditory systems have led to the emergence of the songbird as a model system for studying the perceptual mechanisms of vocal communication. Laboratory research on songbirds allows the careful control of early life experience and high-resolution analysis of brain function during vocal learning, production, and perception. Here, I review what songbird studies have revealed about the role of early experience in the development of vocal behavior, auditory perception, and the processing of learned vocalizations by auditory neurons. The findings of these studies suggest general principles for how exposure to vocalizations during development and into adulthood influences the perception of learned vocal signals.
Affiliation(s)
- Sarah M N Woolley
- Department of Psychology, Columbia University, 406 Schermerhorn Hall, 1190 Amsterdam Ave., New York, NY 10027, USA

41
Pfeiffer M, Hartbauer M, Lang AB, Maass W, Römer H. Probing real sensory worlds of receivers with unsupervised clustering. PLoS One 2012; 7:e37354. PMID: 22701566; PMCID: PMC3368931; DOI: 10.1371/journal.pone.0037354.
Abstract
The task of an organism to extract information about the external environment from sensory signals is based entirely on the analysis of ongoing afferent spike activity provided by the sense organs. We investigate the processing of auditory stimuli by an acoustic interneuron of insects. In contrast to most previous work we do this by using stimuli and neurophysiological recordings directly in the nocturnal tropical rainforest, where the insect communicates. Different from typical recordings in sound proof laboratories, strong environmental noise from multiple sound sources interferes with the perception of acoustic signals in these realistic scenarios. We apply a recently developed unsupervised machine learning algorithm based on probabilistic inference to find frequently occurring firing patterns in the response of the acoustic interneuron. We can thus ask how much information the central nervous system of the receiver can extract from bursts without ever being told which type and which variants of bursts are characteristic for particular stimuli. Our results show that the reliability of burst coding in the time domain is so high that identical stimuli lead to extremely similar spike pattern responses, even for different preparations on different dates, and even if one of the preparations is recorded outdoors and the other one in the sound proof lab. Simultaneous recordings in two preparations exposed to the same acoustic environment reveal that characteristics of burst patterns are largely preserved among individuals of the same species. Our study shows that burst coding can provide a reliable mechanism for acoustic insects to classify and discriminate signals under very noisy real-world conditions. This gives new insights into the neural mechanisms potentially used by bushcrickets to discriminate conspecific songs from sounds of predators in similar carrier frequency bands.
Affiliation(s)
- Michael Pfeiffer
- Institute for Theoretical Computer Science, TU Graz, Graz, Austria

42
Maddox RK, Billimoria CP, Perrone BP, Shinn-Cunningham BG, Sen K. Competing sound sources reveal spatial effects in cortical processing. PLoS Biol 2012; 10:e1001319. PMID: 22563301; PMCID: PMC3341327; DOI: 10.1371/journal.pbio.1001319.
Abstract
Why is spatial tuning in auditory cortex weak, even though location is important to object recognition in natural settings? This question continues to vex neuroscientists focused on linking physiological results to auditory perception. Here we show that the spatial locations of simultaneous, competing sound sources dramatically influence how well neural spike trains recorded from the zebra finch field L (an analog of mammalian primary auditory cortex) encode source identity. We find that the location of a birdsong played in quiet has little effect on the fidelity of the neural encoding of the song. However, when the song is presented along with a masker, spatial effects are pronounced. For each spatial configuration, a subset of neurons encodes song identity more robustly than others. As a result, competing sources from different locations dominate responses of different neural subpopulations, helping to separate neural responses into independent representations. These results help elucidate how cortical processing exploits spatial information to provide a substrate for selective spatial auditory attention.
Affiliation(s)
- Ross K. Maddox
- Hearing Research Center, Department of Biomedical Engineering, Boston University, Boston, Massachusetts, United States of America
- Center for Biodynamics, Boston University, Boston, Massachusetts, United States of America
- Cyrus P. Billimoria
- Hearing Research Center, Department of Biomedical Engineering, Boston University, Boston, Massachusetts, United States of America
- Center for Biodynamics, Boston University, Boston, Massachusetts, United States of America
- Ben P. Perrone
- Hearing Research Center, Department of Biomedical Engineering, Boston University, Boston, Massachusetts, United States of America
- Center for Biodynamics, Boston University, Boston, Massachusetts, United States of America
- Barbara G. Shinn-Cunningham
- Hearing Research Center, Department of Biomedical Engineering, Boston University, Boston, Massachusetts, United States of America
- Center for Computational Neuroscience and Neural Technology, Boston University, Boston, Massachusetts, United States of America
- Kamal Sen
- Hearing Research Center, Department of Biomedical Engineering, Boston University, Boston, Massachusetts, United States of America
- Center for Biodynamics, Boston University, Boston, Massachusetts, United States of America

43
Johnson JS, Yin P, O'Connor KN, Sutter ML. Ability of primary auditory cortical neurons to detect amplitude modulation with rate and temporal codes: neurometric analysis. J Neurophysiol 2012; 107:3325-41. PMID: 22422997; DOI: 10.1152/jn.00812.2011.
Abstract
Amplitude modulation (AM) is a common feature of natural sounds, and its detection is biologically important. Even though most sounds are not fully modulated, the majority of physiological studies have focused on fully modulated (100% modulation depth) sounds. We presented AM noise at a range of modulation depths to awake macaque monkeys while recording from neurons in primary auditory cortex (A1). The ability of neurons to detect partial AM with rate and temporal codes was assessed with signal detection methods. On average, single-cell synchrony was as or more sensitive than spike count in modulation detection. Cells are less sensitive to modulation depth if tested away from their best modulation frequency, particularly for temporal measures. Mean neural modulation detection thresholds in A1 are not as sensitive as behavioral thresholds, but with phase locking the most sensitive neurons are more sensitive, suggesting that for temporal measures the lower-envelope principle cannot account for thresholds. Three methods of preanalysis pooling of spike trains (multiunit, similar to convergence from a cortical column; within cell, similar to convergence of cells with matched response properties; across cell, similar to indiscriminate convergence of cells) all result in an increase in neural sensitivity to modulation depth for both temporal and rate codes. For the across-cell method, pooling of a few dozen cells can result in detection thresholds that approximate those of the behaving animal. With synchrony measures, indiscriminate pooling results in sensitive detection of modulation frequencies between 20 and 60 Hz, suggesting that differences in AM response phase are minor in A1.
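Neurometric detection of modulation depth, as described above, is typically computed by comparing spike-based responses at each depth against the unmodulated case with an ROC analysis and reading off the depth at which performance crosses a criterion. The toy sketch below does this for a rate code only, with a made-up Poisson neuron and a commonly used 0.76 criterion; the study also evaluates synchrony-based (temporal) measures and pooled responses, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(8)

def auroc(signal, noise):
    """Probability that a draw from `signal` exceeds a draw from `noise`."""
    s, n = np.asarray(signal, float), np.asarray(noise, float)
    return np.mean(s[:, None] > n[None, :]) + 0.5 * np.mean(s[:, None] == n[None, :])

def simulate_counts(depth, n_trials=60, base_rate=20.0, gain=30.0):
    """Toy neuron whose mean spike count grows with modulation depth."""
    return rng.poisson(base_rate + gain * depth, n_trials)

depths = np.array([0.0, 0.06, 0.12, 0.25, 0.5, 1.0])
unmodulated = simulate_counts(0.0)

criterion = 0.76   # a commonly used neurometric criterion (roughly d' = 1)
threshold = None
for depth in depths[1:]:
    perf = auroc(simulate_counts(depth), unmodulated)
    print(f"depth {depth:4.2f}: rate-based detection AUROC = {perf:.2f}")
    if threshold is None and perf >= criterion:
        threshold = depth
print("rate-code neurometric threshold (modulation depth):", threshold)
```

Pooling spike counts across cells, as in the paper's across-cell analysis, simply means summing or averaging several simulated neurons before running the same ROC comparison.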
Affiliation(s)
- Jeffrey S Johnson
- Center for Neuroscience, Univ. of California at Davis, Davis, CA 95618, USA

44
Shetake JA, Wolf JT, Cheung RJ, Engineer CT, Ram SK, Kilgard MP. Cortical activity patterns predict robust speech discrimination ability in noise. Eur J Neurosci 2011; 34:1823-38. PMID: 22098331; DOI: 10.1111/j.1460-9568.2011.07887.x.
Abstract
The neural mechanisms that support speech discrimination in noisy conditions are poorly understood. In quiet conditions, spike timing information appears to be used in the discrimination of speech sounds. In this study, we evaluated the hypothesis that spike timing is also used to distinguish between speech sounds in noisy conditions that significantly degrade neural responses to speech sounds. We tested speech sound discrimination in rats and recorded primary auditory cortex (A1) responses to speech sounds in background noise of different intensities and spectral compositions. Our behavioral results indicate that rats, like humans, are able to accurately discriminate consonant sounds even in the presence of background noise that is as loud as the speech signal. Our neural recordings confirm that speech sounds evoke degraded but detectable responses in noise. Finally, we developed a novel neural classifier that mimics behavioral discrimination. The classifier discriminates between speech sounds by comparing the A1 spatiotemporal activity patterns evoked on single trials with the average spatiotemporal patterns evoked by known sounds. Unlike classifiers in most previous studies, this classifier is not provided with the stimulus onset time. Neural activity analyzed with the use of relative spike timing was well correlated with behavioral speech discrimination in quiet and in noise. Spike timing information integrated over longer intervals was required to accurately predict rat behavioral speech discrimination in noisy conditions. The similarity of neural and behavioral discrimination of speech in noise suggests that humans and rats may employ similar brain mechanisms to solve this problem.
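The classifier described above compares single-trial activity patterns with average response templates and, notably, is not told when the stimulus started. A simple way to capture that property is to score each template by its maximum normalized cross-correlation with the trial over all time shifts. The sketch below is our own minimal, single-channel illustration of that idea, not the authors' spatiotemporal classifier; the prototypes and noise levels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)

def best_similarity(trial, template):
    """Peak normalized cross-correlation over all shifts, so no onset is needed."""
    trial = trial - trial.mean()
    template = template - template.mean()
    norm = np.linalg.norm(trial) * np.linalg.norm(template)
    return np.correlate(trial, template, mode="full").max() / norm if norm else 0.0

def classify(trial, templates):
    """Assign the trial to the template with the highest alignment-free similarity."""
    return int(np.argmax([best_similarity(trial, tpl) for tpl in templates]))

# Two toy "speech sound" response prototypes (50 bins each).
proto = [np.r_[np.zeros(20), [10.0, 16.0, 10.0, 4.0], np.zeros(26)],
         np.r_[np.zeros(20), [4.0, 4.0, 16.0, 16.0], np.zeros(26)]]

n_bins = 300
def make_trial(k):
    """Single trial: background activity plus the sound response at a random onset."""
    onset = rng.integers(0, n_bins - 50)            # unknown to the decoder
    r = rng.poisson(1.0, n_bins).astype(float)      # background spikes
    r[onset:onset + 50] += rng.poisson(proto[k])
    return r

# Templates stand in for trial-averaged responses recorded with a known onset.
templates = [p.copy() for p in proto]
n_test = 50
correct = sum(classify(make_trial(k), templates) == k
              for k in (0, 1) for _ in range(n_test))
print("accuracy without onset information: %.2f" % (correct / (2 * n_test)))
```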
Affiliation(s)
- Jai A Shetake
- The University of Texas at Dallas, School of Behavioral Brain Sciences, 800 West Campbell Road, GR41 Richardson, TX 75080-3021, USA

45
Larson E, Maddox RK, Perrone BP, Sen K, Billimoria CP. Neuron-specific stimulus masking reveals interference in spike timing at the cortical level. J Assoc Res Otolaryngol 2011; 13:81-9. PMID: 21964794; DOI: 10.1007/s10162-011-0292-1.
Abstract
The auditory system is capable of robust recognition of sounds in the presence of competing maskers (e.g., other voices or background music). This capability arises despite the fact that masking stimuli can disrupt neural responses at the cortical level. Since the origins of such interference effects remain unknown, in this study, we work to identify and quantify neural interference effects that originate due to masking occurring within and outside receptive fields of neurons. We record from single and multi-unit auditory sites from field L, the auditory cortex homologue in zebra finches. We use a novel method called spike timing-based stimulus filtering that uses the measured response of each neuron to create an individualized stimulus set. In contrast to previous adaptive experimental approaches, which have typically focused on the average firing rate, this method uses the complete pattern of neural responses, including spike timing information, in the calculation of the receptive field. When we generate and present novel stimuli for each neuron that mask the regions within the receptive field, we find that the time-varying information in the neural responses is disrupted, degrading neural discrimination performance and decreasing spike timing reliability and sparseness. We also find that, while removing stimulus energy from frequency regions outside the receptive field does not significantly affect neural responses for many sites, adding a masker in these frequency regions can nonetheless have a significant impact on neural responses and discriminability without a significant change in the average firing rate. These findings suggest that maskers can interfere with neural responses by disrupting stimulus timing information with power either within or outside the receptive fields of neurons.
Affiliation(s)
- Eric Larson
- Department of Biomedical Engineering, Hearing Research Center, Boston University, Boston, MA 02215, USA

46
Naud R, Gerhard F, Mensi S, Gerstner W. Improved similarity measures for small sets of spike trains. Neural Comput 2011; 23:3016-69. PMID: 21919785; DOI: 10.1162/neco_a_00208.
Abstract
Multiple measures have been developed to quantify the similarity between two spike trains. These measures have been used for the quantification of the mismatch between neuron models and experiments as well as for the classification of neuronal responses in neuroprosthetic devices and electrophysiological experiments. Frequently only a few spike trains are available in each class. We derive analytical expressions for the small-sample bias present when comparing estimators of the time-dependent firing intensity. We then exploit analogies between the comparison of firing intensities and previously used spike train metrics and show that improved spike train measures can be successfully used for fitting neuron models to experimental data, for comparisons of spike trains, and classification of spike train data. In classification tasks, the improved similarity measures can increase the recovered information. We demonstrate that when similarity measures are used for fitting mathematical models, all previous methods systematically underestimate the noise. Finally, we show a striking implication of this deterministic bias by reevaluating the results of the single-neuron prediction challenge.
Affiliation(s)
- Richard Naud
- Brain Mind Institute and School of Computer and Communication Sciences, Ecole Polytechnique Fédérale de Lausanne, 1015 Lausanne EPFL, Switzerland

47
Schumacher JW, Schneider DM, Woolley SMN. Anesthetic state modulates excitability but not spectral tuning or neural discrimination in single auditory midbrain neurons. J Neurophysiol 2011; 106:500-14. PMID: 21543752; PMCID: PMC3154814; DOI: 10.1152/jn.01072.2010.
Abstract
The majority of sensory physiology experiments have used anesthesia to facilitate the recording of neural activity. Current techniques allow researchers to study sensory function in the context of varying behavioral states. To reconcile results across multiple behavioral and anesthetic states, it is important to consider how and to what extent anesthesia plays a role in shaping neural response properties. The role of anesthesia has been the subject of much debate, but the extent to which sensory coding properties are altered by anesthesia has yet to be fully defined. In this study we asked how urethane, an anesthetic commonly used for avian and mammalian sensory physiology, affects the coding of complex communication vocalizations (songs) and simple artificial stimuli in the songbird auditory midbrain. We measured spontaneous and song-driven spike rates, spectrotemporal receptive fields, and neural discriminability from responses to songs in single auditory midbrain neurons. In the same neurons, we recorded responses to pure tone stimuli ranging in frequency and intensity. Finally, we assessed the effect of urethane on population-level representations of birdsong. Results showed that intrinsic neural excitability is significantly depressed by urethane but that spectral tuning, single neuron discriminability, and population representations of song do not differ significantly between unanesthetized and anesthetized animals.
Affiliation(s)
- Joseph W Schumacher
- Doctoral Program in Neurobiology and Behavior, Columbia University, New York, NY 10027, USA

48
Li Z, Ouyang G, Li D, Li X. Characterization of the causality between spike trains with permutation conditional mutual information. Phys Rev E Stat Nonlin Soft Matter Phys 2011; 84:021929. PMID: 21929040; DOI: 10.1103/physreve.84.021929.
Abstract
Uncovering the causal relationship between spike train recordings from different neurons is a key issue for understanding neural coding. This paper presents a method, called permutation conditional mutual information (PCMI), for characterizing the causality between a pair of neurons. The performance of this method is demonstrated on spike trains generated by a Poisson point process model and by the Izhikevich neuronal model, including estimation of the directionality index and detection of the temporal dynamics of the causal link. Simulations show that the PCMI method is superior to the transfer entropy and causal entropy methods at identifying the coupling direction between spike trains. The advantages of PCMI are twofold: it can estimate the directionality index under weak coupling, and it is robust to missing and extra spikes.
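The sketch below illustrates the general recipe behind an ordinal-pattern (permutation) conditional mutual information estimate from two binned spike-count series; the symbolization order, the lag, and the plug-in entropy estimator are assumptions made here for illustration and are not taken from the paper.

```python
import numpy as np
from collections import Counter

def ordinal_symbols(x, order=3):
    """Encode each length-`order` window of a 1-D series as an ordinal pattern."""
    x = np.asarray(x)
    n = len(x) - order + 1
    weights = order ** np.arange(order)
    symbols = np.empty(n, dtype=int)
    for i in range(n):
        ranks = np.argsort(np.argsort(x[i:i + order], kind="stable"), kind="stable")
        symbols[i] = int(ranks @ weights)
    return symbols

def _entropy(*seqs):
    """Plug-in joint entropy (bits) of one or more aligned symbol sequences."""
    counts = np.array(list(Counter(zip(*seqs)).values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def conditional_mutual_information(x, y, z):
    """I(X; Y | Z) = H(X,Z) + H(Y,Z) - H(X,Y,Z) - H(Z)."""
    return _entropy(x, z) + _entropy(y, z) - _entropy(x, y, z) - _entropy(z)

def pcmi_directionality(counts_x, counts_y, lag=1, order=3):
    """Directionality index from two binned spike-count series: positive
    values suggest X drives Y, negative values the reverse."""
    sx, sy = ordinal_symbols(counts_x, order), ordinal_symbols(counts_y, order)
    n = min(len(sx), len(sy)) - lag
    x_now, y_now = sx[:n], sy[:n]
    x_fut, y_fut = sx[lag:lag + n], sy[lag:lag + n]
    xy = conditional_mutual_information(x_now, y_fut, y_now)   # X -> Y
    yx = conditional_mutual_information(y_now, x_fut, x_now)   # Y -> X
    return (xy - yx) / (xy + yx) if (xy + yx) > 0 else 0.0
```

The index is positive when more conditional information flows from X to Y than from Y to X, which is how the coupling direction is read out.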
Affiliation(s)
- Zhaohui Li
- Institute of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, People's Republic of China
49
Sarro EC, Rosen MJ, Sanes DH. Taking advantage of behavioral changes during development and training to assess sensory coding mechanisms. Ann N Y Acad Sci 2011; 1225:142-54. [PMID: 21535001 DOI: 10.1111/j.1749-6632.2011.06023.x] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Abstract
The relationship between behavioral and neural performance has been explored in adult animals, but rarely during the developmental period when perceptual abilities emerge. We used these naturally occurring changes in auditory perception to evaluate underlying encoding mechanisms. Performance of juvenile and adult gerbils on an amplitude modulation (AM) detection task was compared with response properties from auditory cortex of age-matched animals. When tested with an identical behavioral procedure, juveniles display poorer AM detection thresholds than adults. Two neurometric analyses indicate that the most sensitive juvenile and adult neurons have equivalent AM thresholds. However, a pooling neurometric revealed that adult cortex encodes smaller AM depths. By each measure, neural sensitivity was superior to psychometric thresholds. However, juvenile training improved adult behavioral thresholds, such that they verged on the best sensitivity of adult neurons. Thus, periods of training may allow an animal to use the encoded information already present in cortex.
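A neurometric threshold of the sort compared with behavior here can be obtained by asking at which modulation depth an ideal observer can separate responses to modulated and unmodulated trials. The sketch below uses a simple d'-based criterion on trial firing rates; the criterion value of 1.0 and the use of d' (rather than the paper's specific neurometric analyses) are assumptions for illustration.

```python
import numpy as np

def d_prime(signal_rates, noise_rates):
    """Separation (in pooled standard deviations) between response-rate
    distributions for modulated (signal) and unmodulated (noise) trials."""
    signal_rates = np.asarray(signal_rates, dtype=float)
    noise_rates = np.asarray(noise_rates, dtype=float)
    pooled_sd = np.sqrt(0.5 * (signal_rates.var(ddof=1) + noise_rates.var(ddof=1)))
    diff = signal_rates.mean() - noise_rates.mean()
    return diff / pooled_sd if pooled_sd > 0 else np.inf

def neurometric_threshold(rates_by_depth, unmodulated_rates, criterion=1.0):
    """Smallest modulation depth whose d' against the unmodulated condition
    reaches the criterion; returns None if no depth does."""
    for depth in sorted(rates_by_depth):
        if d_prime(rates_by_depth[depth], unmodulated_rates) >= criterion:
            return depth
    return None

# Synthetic example: Poisson spike counts that grow with modulation depth.
rng = np.random.default_rng(1)
unmod = rng.poisson(20, size=50)
by_depth = {d: rng.poisson(20 + 40 * d, size=50) for d in (0.05, 0.1, 0.2, 0.4)}
print(neurometric_threshold(by_depth, unmod))
```

Applying the same criterion to recordings from juvenile and adult animals would yield the neuron-by-neuron threshold comparison described above.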
Affiliation(s)
- Emma C Sarro
- Center for Neural Science, New York University, New York, New York, USA.
50
Woolley SMN, Moore JM. Coevolution in communication senders and receivers: vocal behavior and auditory processing in multiple songbird species. Ann N Y Acad Sci 2011; 1225:155-65. [PMID: 21535002 DOI: 10.1111/j.1749-6632.2011.05989.x] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
Communication is a strong selective pressure on brain evolution because the exchange of information between individuals is crucial for fitness-related behaviors, such as mating. Given the importance of communication, the brains of signal senders and receivers are likely to be functionally coordinated. We study vocal behavior and auditory processing in multiple species of estrildid finches with the goal of understanding how species identity and early experience interact to shape the neural systems that subserve communication. Male finches learn to produce species-specific songs, and both sexes learn to recognize songs. Our studies indicate that closely related species exhibit different auditory coding properties in the midbrain and forebrain and that early life experience of vocalizations contributes to these differences. Moreover, birds that naturally sing tonal songs can learn broadband songs from heterospecific tutors, providing an opportunity to examine the interplay between species identity and early experience in the development of vocal behavior and auditory tuning.
Affiliation(s)
- Sarah M N Woolley
- Department of Psychology, Columbia University, New York, New York, USA.