1. Carlyon RP, Deeks JM, Delgutte B, Chung Y, Vollmer M, Ohl FW, Kral A, Tillein J, Litovsky RY, Schnupp J, Rosskothen-Kuhl N, Goldsworthy RL. Limitations on Temporal Processing by Cochlear Implant Users: A Compilation of Viewpoints. Trends Hear 2025; 29:23312165251317006. [PMID: 40095543] [DOI: 10.1177/23312165251317006]
Abstract
Cochlear implant (CI) users are usually poor at using timing information to detect changes in either pitch or sound location. This deficit occurs even for listeners with good speech perception and even when the speech processor is bypassed to present simple, idealized stimuli to one or more electrodes. The present article presents seven expert opinion pieces on the likely neural bases for these limitations, the extent to which they are modifiable by sensory experience and training, and the most promising ways to overcome them in future. The article combines insights from physiology and psychophysics in cochlear-implanted humans and animals, highlights areas of agreement and controversy, and proposes new experiments that could resolve areas of disagreement.
Affiliation(s)
- Robert P Carlyon
- Cambridge Hearing Group, MRC Cognition & Brain Sciences Unit, University of Cambridge, Cambridge, UK
- John M Deeks
- Cambridge Hearing Group, MRC Cognition & Brain Sciences Unit, University of Cambridge, Cambridge, UK
- Bertrand Delgutte
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA, USA
- Yoojin Chung
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA, USA
- Maike Vollmer
- Department of Experimental Audiology, University Clinic of Otolaryngology, Head and Neck Surgery, Otto von Guericke University Magdeburg, Magdeburg, Germany
- Frank W Ohl
- Leibniz Institute for Neurobiology (LIN), Magdeburg, Germany
- Andrej Kral
- Institute of Audio-Neuro-Technology & Department of Experimental Otology, Clinics of Otolaryngology, Head and Neck Surgery, Hannover Medical School, Hannover, Germany
- Jochen Tillein
- Clinics of Otolaryngology, Head and Neck Surgery, J.W. Goethe University, Frankfurt, Germany; MedEl Company, Hannover, Germany
- Ruth Y Litovsky
- Waisman Center, University of Wisconsin-Madison, Madison, WI, USA
- Jan Schnupp
- Gerald Choa Neuroscience Institute and Department of Otolaryngology, Chinese University of Hong Kong, Hong Kong SAR, China
- Nicole Rosskothen-Kuhl
- Neurobiological Research Laboratory, Section for Experimental and Clinical Otology, Department of Oto-Rhino-Laryngology, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany; Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Freiburg, Germany
- Raymond L Goldsworthy
- Auditory Research Center, Caruso Department of Otolaryngology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
2. Ignatious E, Azam S, Jonkman M, De Boer F. Binaural masking level difference for pure tone signals. J Otol 2023; 18:160-167. [PMID: 37497326] [PMCID: PMC10366637] [DOI: 10.1016/j.joto.2023.06.001]
Abstract
The binaural masking level difference (BMLD) is a psychoacoustic measure of binaural interaction and central auditory processing. It is defined as the difference between hearing thresholds in the homophasic and antiphasic conditions, and it can be affected by the duration, phase, and frequency of the stimuli. The main aim of this study was to evaluate the BMLD for stimuli of different durations and frequencies that could also be used in future electrophysiological studies. To this end we developed a GUI to present signals of different frequencies and variable duration and to determine the BMLD. Three durations and five frequencies were explored. The results confirm that the hearing threshold for the antiphasic condition is lower than that for the homophasic condition, and that the differences are significant for signals of 18 ms and 48 ms duration. Future objective binaural processing studies will be based on 18 ms and 48 ms stimuli with the same frequencies as used in the current study.
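The homophasic (S0N0) and antiphasic (SπN0) conditions described in this abstract can be illustrated with a short sketch. The sampling rate, tone frequency, and unit noise level below are illustrative choices, not the study's parameters:

```python
import numpy as np

def tone_in_noise(freq_hz, dur_s, fs, antiphasic=False, seed=0):
    """Diotic noise masker (N0) plus a pure-tone signal at both ears.

    Homophasic (S0N0): the tone has the same phase at the two ears.
    Antiphasic (SpiN0): the tone is phase-inverted at one ear.
    Levels and parameters are illustrative, not those of the study.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur_s * fs)) / fs
    noise = rng.standard_normal(t.size)   # identical noise at both ears (N0)
    tone = np.sin(2 * np.pi * freq_hz * t)
    left = noise + tone
    right = noise + (-tone if antiphasic else tone)
    return left, right

fs = 44100
l_homo, r_homo = tone_in_noise(500, 0.048, fs, antiphasic=False)  # 48 ms signal
l_anti, r_anti = tone_in_noise(500, 0.048, fs, antiphasic=True)

# The interaural difference is zero in the homophasic condition but carries
# the tone in the antiphasic condition -- the binaural cue that lowers the
# detection threshold and produces the BMLD.
assert np.allclose(l_homo - r_homo, 0.0)
assert np.max(np.abs(l_anti - r_anti)) > 1.0
```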
3. Demany L, Monteiro G, Semal C, Shamma S, Carlyon RP. The perception of octave pitch affinity and harmonic fusion have a common origin. Hear Res 2021; 404:108213. [PMID: 33662686] [PMCID: PMC7614450] [DOI: 10.1016/j.heares.2021.108213]
Abstract
Musicians say that the pitches of tones with a frequency ratio of 2:1 (one octave) have a distinctive affinity, even if the tones do not have common spectral components. It has been suggested, however, that this affinity judgment has no biological basis and originates instead from an acculturation process ‒ the learning of musical rules unrelated to auditory physiology. We measured, in young amateur musicians, the perceptual detectability of octave mistunings for tones presented alternately (melodic condition) or simultaneously (harmonic condition). In the melodic condition, mistuning was detectable only by means of explicit pitch comparisons. In the harmonic condition, listeners could use a different and more efficient perceptual cue: in the absence of mistuning, the tones fused into a single sound percept; mistunings decreased fusion. Performance was globally better in the harmonic condition, in line with the hypothesis that listeners used a fusion cue in this condition; this hypothesis was also supported by results showing that an illusory simultaneity of the tones was much less advantageous than a real simultaneity. In the two conditions, mistuning detection was generally better for octave compressions than for octave stretchings. This asymmetry varied across listeners, but crucially the listener-specific asymmetries observed in the two conditions were highly correlated. Thus, the perception of the melodic octave appeared to be closely linked to the phenomenon of harmonic fusion. As harmonic fusion is thought to be determined by biological factors rather than factors related to musical culture or training, we argue that octave pitch affinity also has, at least in part, a biological basis.
Affiliation(s)
- Laurent Demany
- Institut de Neurosciences Cognitives et Intégratives d'Aquitaine, CNRS, EPHE, and Université de Bordeaux, Bordeaux, France
- Guilherme Monteiro
- Institut de Neurosciences Cognitives et Intégratives d'Aquitaine, CNRS, EPHE, and Université de Bordeaux, Bordeaux, France
- Catherine Semal
- Institut de Neurosciences Cognitives et Intégratives d'Aquitaine, CNRS, EPHE, and Université de Bordeaux, Bordeaux, France; Bordeaux INP, Bordeaux, France
- Shihab Shamma
- Institute for Systems Research, University of Maryland, College Park, MD, United States; Département d'Etudes Cognitives, Ecole Normale Supérieure, Paris, France
- Robert P Carlyon
- Cambridge Hearing Group, MRC Cognition and Brain Sciences Unit, Cambridge, United Kingdom
4. White-Schwoch T, Krizman J, Nicol T, Kraus N. Case studies in neuroscience: cortical contributions to the frequency-following response depend on subcortical synchrony. J Neurophysiol 2020; 125:273-281. [PMID: 33206575] [DOI: 10.1152/jn.00104.2020]
Abstract
Frequency-following responses to musical notes spanning the octave 65-130 Hz were elicited in a person with auditory neuropathy, a disorder of subcortical neural synchrony, and in a control subject. No phase-locked responses were observed in the person with auditory neuropathy. The control subject had robust responses synchronized to the fundamental frequency and its harmonics. Cortical onset responses to each note in the series were present in both subjects. These results support the hypothesis that subcortical neural synchrony is necessary to generate the frequency-following response, including for stimulus frequencies at which a cortical contribution has been noted. Although auditory cortex ensembles may synchronize to fundamental frequency cues in speech and music, subcortical neural synchrony appears to be a necessary antecedent.

NEW & NOTEWORTHY: A listener with auditory neuropathy, an absence of subcortical neural synchrony, did not have electrophysiological frequency-following responses synchronized to an octave of musical notes with fundamental frequencies ranging from 65 to 130 Hz. A control subject had robust responses that phase-locked to each note. Although auditory cortex may contribute to the scalp-recorded frequency-following response in healthy listeners, our results suggest this phenomenon depends on subcortical neural synchrony.
Affiliation(s)
- Travis White-Schwoch
- Auditory Neuroscience Laboratory, Department of Communication Sciences, Northwestern University, Evanston, Illinois
- Jennifer Krizman
- Auditory Neuroscience Laboratory, Department of Communication Sciences, Northwestern University, Evanston, Illinois
- Trent Nicol
- Auditory Neuroscience Laboratory, Department of Communication Sciences, Northwestern University, Evanston, Illinois
- Nina Kraus
- Auditory Neuroscience Laboratory, Department of Communication Sciences, Northwestern University, Evanston, Illinois; Departments of Neurobiology and Otolaryngology, Northwestern University, Evanston, Illinois
5. Rahman M, Willmore BDB, King AJ, Harper NS. Simple transformations capture auditory input to cortex. Proc Natl Acad Sci U S A 2020; 117:28442-28451. [PMID: 33097665] [PMCID: PMC7668077] [DOI: 10.1073/pnas.1922033117]
Abstract
Sounds are processed by the ear and central auditory pathway. These processing steps are biologically complex, and many aspects of the transformation from sound waveforms to cortical response remain unclear. To understand this transformation, we combined models of the auditory periphery with various encoding models to predict auditory cortical responses to natural sounds. The cochlear models ranged from detailed biophysical simulations of the cochlea and auditory nerve to simple spectrogram-like approximations of the information processing in these structures. For three different stimulus sets, we tested the capacity of these models to predict the time course of single-unit neural responses recorded in ferret primary auditory cortex. We found that simple models based on a log-spaced spectrogram with approximately logarithmic compression performed similarly to the best-performing biophysically detailed models of the auditory periphery, and more consistently across diverse natural and synthetic sounds. Furthermore, we demonstrated that including approximations of the three categories of auditory nerve fiber in these simple models can substantially improve prediction, particularly when combined with a network encoding model. Our findings imply that the properties of the auditory periphery and central pathway may together result in a simpler than expected functional transformation from ear to cortex. Thus, much of the detailed biological complexity seen in the auditory periphery does not appear to be important for understanding the cortical representation of sound.
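As a rough illustration of the "simple model" family this abstract describes, the sketch below computes a log-spaced spectrogram with logarithmic amplitude compression. The band count, frequency range, and FFT parameters are assumptions made for illustration, not values taken from the paper:

```python
import numpy as np

def log_spectrogram(wave, fs, n_fft=512, hop=128, n_bands=32,
                    fmin=200.0, fmax=8000.0, eps=1e-3):
    """Cochlear-like front end: log-spaced spectrogram with approximately
    logarithmic amplitude compression. Band count, frequency range, and
    FFT parameters are illustrative assumptions."""
    n_frames = 1 + (len(wave) - n_fft) // hop
    window = np.hanning(n_fft)
    frames = np.stack([wave[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    mags = np.abs(np.fft.rfft(frames, axis=1))      # linear-frequency magnitudes
    fft_freqs = np.fft.rfftfreq(n_fft, 1.0 / fs)
    edges = np.geomspace(fmin, fmax, n_bands + 1)   # log-spaced band edges
    banded = np.stack([mags[:, (fft_freqs >= lo) & (fft_freqs < hi)].sum(axis=1)
                       for lo, hi in zip(edges[:-1], edges[1:])], axis=1)
    return np.log(banded + eps)                     # logarithmic compression

fs = 16000
t = np.arange(fs) / fs
cochleagram = log_spectrogram(np.sin(2 * np.pi * 1000 * t), fs)  # 1 kHz tone
# For a pure tone, the log-spaced band containing 1 kHz carries the most energy.
band = int(np.argmax(cochleagram.mean(axis=0)))
```

Such a front end would then feed an encoding model (e.g., a linear-nonlinear or network model) that maps the compressed spectrogram to predicted spike rates.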
Affiliation(s)
- Monzilur Rahman
- Department of Physiology, Anatomy and Genetics, University of Oxford, OX1 3PT Oxford, United Kingdom
- Ben D B Willmore
- Department of Physiology, Anatomy and Genetics, University of Oxford, OX1 3PT Oxford, United Kingdom
- Andrew J King
- Department of Physiology, Anatomy and Genetics, University of Oxford, OX1 3PT Oxford, United Kingdom
- Nicol S Harper
- Department of Physiology, Anatomy and Genetics, University of Oxford, OX1 3PT Oxford, United Kingdom
6. Coffey EBJ, Nicol T, White-Schwoch T, Chandrasekaran B, Krizman J, Skoe E, Zatorre RJ, Kraus N. Evolving perspectives on the sources of the frequency-following response. Nat Commun 2019; 10:5036. [PMID: 31695046] [PMCID: PMC6834633] [DOI: 10.1038/s41467-019-13003-w]
Abstract
The auditory frequency-following response (FFR) is a non-invasive index of the fidelity of sound encoding in the brain, and is used to study the integrity, plasticity, and behavioral relevance of the neural encoding of sound. In this Perspective, we review recent evidence suggesting that, in humans, the FFR arises from multiple cortical and subcortical sources, not just subcortically as previously believed, and we illustrate how the FFR to complex sounds can enhance the wider field of auditory neuroscience. Far from being of use only to study basic auditory processes, the FFR is an uncommonly multifaceted response yielding a wealth of information, with much yet to be tapped.
Affiliation(s)
- Emily B J Coffey
- Department of Psychology, Concordia University, 1455 Boulevard de Maisonneuve Ouest, Montréal, QC, H3G 1M8, Canada; International Laboratory for Brain, Music, and Sound Research (BRAMS), Montréal, QC, Canada; Centre for Research on Brain, Language and Music (CRBLM), McGill University, 3640 de la Montagne, Montréal, QC, H3G 2A8, Canada
- Trent Nicol
- Auditory Neuroscience Laboratory, Department of Communication Sciences, Northwestern University, 2240 Campus Dr., Evanston, IL, 60208, USA
- Travis White-Schwoch
- Auditory Neuroscience Laboratory, Department of Communication Sciences, Northwestern University, 2240 Campus Dr., Evanston, IL, 60208, USA
- Bharath Chandrasekaran
- Communication Sciences and Disorders, School of Health and Rehabilitation Sciences, University of Pittsburgh, Forbes Tower, 3600 Atwood St, Pittsburgh, PA, 15260, USA
- Jennifer Krizman
- Auditory Neuroscience Laboratory, Department of Communication Sciences, Northwestern University, 2240 Campus Dr., Evanston, IL, 60208, USA
- Erika Skoe
- Department of Speech, Language, and Hearing Sciences, The Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, 2 Alethia Drive, Unit 1085, Storrs, CT, 06269, USA
- Robert J Zatorre
- International Laboratory for Brain, Music, and Sound Research (BRAMS), Montréal, QC, Canada; Centre for Research on Brain, Language and Music (CRBLM), McGill University, 3640 de la Montagne, Montréal, QC, H3G 2A8, Canada; Montreal Neurological Institute, McGill University, 3801 rue Université, Montréal, QC, H3A 2B4, Canada
- Nina Kraus
- Auditory Neuroscience Laboratory, Department of Communication Sciences, Northwestern University, 2240 Campus Dr., Evanston, IL, 60208, USA; Department of Neurobiology, Northwestern University, 2205 Tech Dr., Evanston, IL, 60208, USA; Department of Otolaryngology, Northwestern University, 420 E Superior St., Chicago, IL, 60611, USA
7. Ross B, Tremblay KL, Alain C. Simultaneous EEG and MEG recordings reveal vocal pitch elicited cortical gamma oscillations in young and older adults. Neuroimage 2019; 204:116253. [PMID: 31600592] [DOI: 10.1016/j.neuroimage.2019.116253]
Abstract
The frequency-following response, which originates in the auditory brainstem, represents the pitch contour of the voice and can be recorded from the scalp with electrodes. MEG studies have also revealed a cortical contribution to the high-gamma oscillations at the fundamental frequency (f0) of a vowel stimulus. Studying the cortical component of the frequency-following response could therefore provide insights into how pitch information is encoded at the cortical level, and comparing how aging affects the different responses may help to uncover the neural mechanisms underlying speech-understanding deficits in older age. We simultaneously recorded EEG and MEG responses to the syllable /ba/. MEG beamformer analysis localized sources in bilateral auditory cortices and the midbrain. Time-frequency analysis showed a faithful representation of the pitch contour between 106 Hz and 138 Hz in the cortical activity, and a cross-correlation revealed a latency of 20 ms. Furthermore, stimulus onsets elicited cortical 40-Hz responses. Both the 40-Hz and the f0 response amplitudes increased with age and were larger in the right hemisphere. The effects of aging and the laterality of the f0 response were evident in the MEG only, suggesting that both effects are characteristics of the cortical response. After comparing f0 and N1 responses in EEG and MEG, we estimated that approximately one-third of the scalp-recorded f0 response could be cortical in origin. We attribute the significance of the cortical f0 response to the precise timing of cortical neurons, which serves as a time-sensitive code for pitch.
Affiliation(s)
- Bernhard Ross
- Rotman Research Institute, Baycrest Centre, Toronto, Ontario, Canada; Department of Medical Biophysics, University of Toronto, Ontario, Canada
- Kelly L Tremblay
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
- Claude Alain
- Rotman Research Institute, Baycrest Centre, Toronto, Ontario, Canada; Department of Psychology, University of Toronto, Ontario, Canada
8.
Abstract
Objectives: Diabetes mellitus (DM) is associated with a variety of sensory complications, yet very little attention has been given to auditory neuropathic complications in DM. The aim of this study was to determine whether type 1 DM (T1DM) affects neural coding of the rapid temporal fluctuations of sounds, and how any deficits may impact behavioral performance.
Design: Participants were 30 young normal-hearing T1DM patients and 30 age-, sex-, and audiogram-matched healthy controls. Measurements included electrophysiological measures of auditory nerve and brainstem function using the click-evoked auditory brainstem response; brainstem neural temporal coding using the sustained frequency-following response (FFR); behavioral tests of temporal coding (interaural phase difference discrimination and the frequency difference limen); tests of speech perception in noise; and self-report measures of auditory disability using the Speech, Spatial and Qualities of Hearing Scale.
Results: There were no significant differences between T1DM patients and controls in the auditory brainstem response. However, the T1DM group showed significantly reduced FFRs to both temporal envelope and temporal fine structure. The T1DM group also showed significantly higher interaural phase difference and frequency difference limen thresholds, worse speech-in-noise performance, and lower overall Speech, Spatial and Qualities scores than the control group.
Conclusions: These findings suggest that T1DM is associated with degraded neural temporal coding in the brainstem in the absence of an elevation in audiometric threshold, and that the FFR may provide an early indicator of neural damage in T1DM before any abnormalities can be identified using standard clinical tests. However, the relation between the neural deficits and the behavioral deficits is uncertain.
9. Marangio L, Galatolo S, Fronzoni L, Chillemi S, Di Garbo A. Phase-locking patterns in a resonate and fire neural model with periodic drive. Biosystems 2019; 184:103992. [PMID: 31323255] [DOI: 10.1016/j.biosystems.2019.103992]
Abstract
In this paper we studied a resonate-and-fire relaxation oscillator subject to time-dependent modulation to investigate phase-locking phenomena occurring in neurophysiological systems. The neural model (denoted LFHN) was obtained by linearizing the FitzHugh-Nagumo model near a hyperbolic fixed point and then introducing an integrate-and-fire mechanism for spike generation. By employing tools for the study of circle maps, we showed that this system exhibits several phase-locking patterns in the presence of periodic perturbations, and that both the amplitude and frequency of the modulation strongly affect its phase-locking properties. General conditions for the generation of firing activity were also obtained. For moderate noise levels the phase-locking patterns of the LFHN persist, and in the presence of noise the rotation number changes smoothly as the stimulation current increases. The statistical properties of the firing map were investigated as well. Finally, the results obtained with the forced LFHN suggest that this neural model could be used to fit experimental data on the firing times of neurons.
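A minimal resonate-and-fire sketch in the spirit of the LFHN can convey the idea: linear damped-oscillator subthreshold dynamics (a stable focus, as obtained by linearizing FitzHugh-Nagumo near a fixed point) plus a hard threshold-and-reset spike mechanism, driven by a sinusoidal current. The coefficients, threshold, and drive parameters below are illustrative assumptions, not the paper's fitted values; the rotation number (spikes per drive cycle) is the quantity whose rational plateaus indicate phase locking:

```python
import numpy as np

def lfhn_spikes(i_dc, i_ac, freq, t_max=200.0, dt=1e-3, v_th=1.0):
    """Resonate-and-fire sketch: 2-D linear subthreshold dynamics with a
    stable focus (complex eigenvalues ~ -0.1 +/- 0.71i for the assumed
    coefficients) plus a threshold/reset rule, under sinusoidal drive.
    All parameter values are illustrative, not the paper's."""
    spikes, v, w = [], 0.0, 0.0
    for k in range(int(t_max / dt)):
        t = k * dt
        i_t = i_dc + i_ac * np.sin(2.0 * np.pi * freq * t)
        # Forward-Euler step of v' = -0.1 v - w + I(t), w' = 0.5 v - 0.1 w
        v, w = v + (-0.1 * v - w + i_t) * dt, w + (0.5 * v - 0.1 * w) * dt
        if v >= v_th:            # spike, then reset both variables
            spikes.append(t)
            v, w = 0.0, 0.0
    return np.array(spikes)

# Rotation number ~ spikes per drive cycle; rational plateaus as the drive
# amplitude/frequency vary correspond to phase-locked firing patterns.
spikes = lfhn_spikes(i_dc=1.5, i_ac=0.5, freq=0.5)
rho = spikes.size / (200.0 * 0.5)   # spikes per cycle over 100 drive cycles
```

Sweeping `i_dc` or `freq` and plotting `rho` would trace out the staircase of locking regions (Arnold tongues) that the circle-map analysis in the paper characterizes.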
Affiliation(s)
- Luigi Marangio
- Department of Mathematics, University of Pisa, Italy; Femto-ST Institute, Université de Bourgogne-Franche Comté, France
10. Siveke I, Lingner A, Ammer JJ, Gleiss SA, Grothe B, Felmy F. A Temporal Filter for Binaural Hearing Is Dynamically Adjusted by Sound Pressure Level. Front Neural Circuits 2019; 13:8. [PMID: 30814933] [PMCID: PMC6381077] [DOI: 10.3389/fncir.2019.00008]
Abstract
In natural environments our auditory system is exposed to multiple and diverse signals of fluctuating amplitudes. To detect, localize, and single out individual sounds, the auditory system therefore has to process and filter spectral and temporal information from both ears. The overall sound pressure level is known to affect sensory signal transduction and therefore the temporal response pattern of auditory neurons. We hypothesize that the mammalian binaural system uses a dynamic mechanism to adjust the temporal filters in neuronal circuits to different overall sound pressure levels. Previous studies proposed an inhibitory mechanism generated by the reciprocally coupled dorsal nuclei of the lateral lemniscus (DNLL) as a temporal neuronal-network filter that suppresses rapid binaural fluctuations. Here we investigated the consequence of different sound levels for this filter during binaural processing. Our in vivo and in vitro electrophysiology in Mongolian gerbils shows that the integration of ascending excitation and contralateral inhibition defines the temporal properties of this inhibitory filter. The time course of the filter depends on the synaptic drive, which is modulated by the overall sound pressure level and by N-methyl-D-aspartate receptor (NMDAR) signaling. In psychophysical experiments we tested the temporal perception of human listeners and show that the detection and localization of two successive tones change with sound pressure level, consistent with our physiological results. Together, our data support the hypothesis that mammals dynamically adjust their time window for sound detection and localization within the binaural system in a sound-level-dependent manner.
Affiliation(s)
- Ida Siveke
- Department Biology II, Division of Neurobiology, Ludwig-Maximilians-Universität München, Munich, Germany; Institute of Zoology and Neurobiology, Ruhr-Universität Bochum, Bochum, Germany
- Andrea Lingner
- Department Biology II, Division of Neurobiology, Ludwig-Maximilians-Universität München, Munich, Germany
- Julian J Ammer
- Department Biology II, Division of Neurobiology, Ludwig-Maximilians-Universität München, Munich, Germany; Graduate School for Systemic Neurosciences, Ludwig-Maximilians-Universität München, Munich, Germany
- Sarah A Gleiss
- Department Biology II, Division of Neurobiology, Ludwig-Maximilians-Universität München, Munich, Germany; Graduate School for Systemic Neurosciences, Ludwig-Maximilians-Universität München, Munich, Germany
- Benedikt Grothe
- Department Biology II, Division of Neurobiology, Ludwig-Maximilians-Universität München, Munich, Germany; Graduate School for Systemic Neurosciences, Ludwig-Maximilians-Universität München, Munich, Germany
- Felix Felmy
- Department Biology II, Division of Neurobiology, Ludwig-Maximilians-Universität München, Munich, Germany; Institute of Zoology, University of Veterinary Medicine Hannover, Hannover, Germany
11. Peng F, Innes-Brown H, McKay CM, Fallon JB, Zhou Y, Wang X, Hu N, Hou W. Temporal Coding of Voice Pitch Contours in Mandarin Tones. Front Neural Circuits 2018; 12:55. [PMID: 30087597] [PMCID: PMC6066958] [DOI: 10.3389/fncir.2018.00055]
Abstract
Accurate perception of time-variant pitch is important for speech recognition, particularly for tonal languages such as Mandarin, in which different lexical tones convey different semantic information. Previous studies reported that the auditory nerve and cochlear nucleus can encode different pitches through phase-locked neural activity. However, little is known about how the inferior colliculus (IC) encodes the time-variant periodicity pitch of natural speech. In this study, the Mandarin syllable /ba/ pronounced with four lexical tones (flat, rising, falling then rising, and falling) was used as the stimulus set. Local field potentials (LFPs) and single-neuron activity were simultaneously recorded from 90 sites within the contralateral IC of six urethane-anesthetized and decerebrate guinea pigs in response to the four stimuli. Analysis of the temporal information in the LFPs showed that 93% of the LFPs exhibited robust encoding of periodicity pitch. Pitch strength of the LFPs, derived from the autocorrelogram, was significantly (p < 0.001) stronger for rising tones than for flat and falling tones, and also increased significantly (p < 0.05) with characteristic frequency (CF). On the other hand, only 47% (42 of 90) of single-neuron recordings were significantly synchronized to the fundamental frequency of the stimulus, suggesting that only some single IC neurons robustly encode the time-variant periodicity pitch of speech in their temporal spiking patterns. The difference between the number of LFPs and the number of single neurons that encode the time-variant F0 voice pitch supports the notion of a transition at the level of the IC from direct temporal coding in the spike trains of individual neurons to other forms of neural representation.
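The autocorrelogram-based pitch-strength measure mentioned in this abstract can be approximated by the height of the normalized autocorrelation peak at the candidate period. The function below is a simplified stand-in; the signal parameters and F0 search range are assumptions for illustration, not those of the study:

```python
import numpy as np

def pitch_strength(signal, fs, fmin=80.0, fmax=400.0):
    """Pitch strength from the normalized autocorrelation: the height of the
    largest autocorrelation peak in the candidate F0 range, as a fraction of
    the zero-lag value. Returns (strength, F0 estimate). A simplified
    stand-in for the autocorrelogram measure described in the abstract."""
    x = signal - signal.mean()
    ac = np.correlate(x, x, mode="full")[x.size - 1:]   # non-negative lags
    ac = ac / ac[0]                                     # normalize by zero lag
    lo, hi = int(fs / fmax), int(fs / fmin)             # lag search range
    lag = lo + int(np.argmax(ac[lo:hi]))
    return ac[lag], fs / lag

fs = 16000
t = np.arange(int(0.1 * fs)) / fs
periodic = np.sin(2 * np.pi * 150 * t) + 0.5 * np.sin(2 * np.pi * 300 * t)  # F0 = 150 Hz
rng = np.random.default_rng(1)
noisy = periodic + 2.0 * rng.standard_normal(t.size)

s_clean, f0_clean = pitch_strength(periodic, fs)
s_noisy, f0_noisy = pitch_strength(noisy, fs)
assert s_clean > s_noisy    # a less periodic signal yields weaker pitch strength
```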
Affiliation(s)
- Fei Peng
- Key Laboratory of Biorheological Science and Technology of Ministry of Education, Bioengineering College of Chongqing University, Chongqing, China; Collaborative Innovation Center for Brain Science, Chongqing University, Chongqing, China
- Hamish Innes-Brown
- Bionics Institute, East Melbourne, VIC, Australia; Medical Bionics Department, University of Melbourne, Melbourne, VIC, Australia
- Colette M. McKay
- Bionics Institute, East Melbourne, VIC, Australia; Medical Bionics Department, University of Melbourne, Melbourne, VIC, Australia
- James B. Fallon
- Bionics Institute, East Melbourne, VIC, Australia; Medical Bionics Department, University of Melbourne, Melbourne, VIC, Australia; Department of Otolaryngology, University of Melbourne, Melbourne, VIC, Australia
- Yi Zhou
- Chongqing Key Laboratory of Neurobiology, Department of Neurobiology, Third Military Medical University, Chongqing, China
- Xing Wang
- Key Laboratory of Biorheological Science and Technology of Ministry of Education, Bioengineering College of Chongqing University, Chongqing, China; Chongqing Medical Electronics Engineering Technology Research Center, Chongqing University, Chongqing, China
- Ning Hu
- Key Laboratory of Biorheological Science and Technology of Ministry of Education, Bioengineering College of Chongqing University, Chongqing, China; Collaborative Innovation Center for Brain Science, Chongqing University, Chongqing, China
- Wensheng Hou
- Key Laboratory of Biorheological Science and Technology of Ministry of Education, Bioengineering College of Chongqing University, Chongqing, China; Collaborative Innovation Center for Brain Science, Chongqing University, Chongqing, China; Chongqing Medical Electronics Engineering Technology Research Center, Chongqing University, Chongqing, China
12.
Abstract
How the cerebral cortex encodes auditory features of biologically important sounds, including speech and music, is one of the most important questions in auditory neuroscience. The pursuit to understand related neural coding mechanisms in the mammalian auditory cortex can be traced back several decades to the early exploration of the cerebral cortex. Significant progress in this field has been made in the past two decades with new technical and conceptual advances. This article reviews the progress and challenges in this area of research.
Affiliation(s)
- Xiaoqin Wang
- Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21205, USA
13. Bidelman GM. Subcortical sources dominate the neuroelectric auditory frequency-following response to speech. Neuroimage 2018; 175:56-69. [PMID: 29604459] [DOI: 10.1016/j.neuroimage.2018.03.060]
Abstract
Frequency-following responses (FFRs) are neurophonic potentials that provide a window into the encoding of complex sounds (e.g., speech/music), auditory disorders, and neuroplasticity. While the neural origins of the FFR remain debated, controversy has reemerged after demonstrations that FFRs recorded via magnetoencephalography (MEG) are dominated by cortical rather than brainstem structures, as previously assumed. Here, we recorded high-density (64-channel) FFRs via EEG and applied state-of-the-art source imaging techniques to the multichannel data (discrete dipole modeling, distributed imaging, independent component analysis, computational simulations). Our data confirm a mixture of generators localized to the bilateral auditory nerve (AN), brainstem inferior colliculus (BS), and bilateral primary auditory cortex (PAC). However, frequency-specific scrutiny of the source waveforms showed that the relative contribution of these nuclei to the aggregate FFR varied across stimulus frequencies. Whereas AN and BS sources produced robust FFRs up to ∼700 Hz, PAC showed weak phase-locking with little FFR energy above the speech fundamental (100 Hz). Notably, CLARA imaging further showed that PAC activation was eradicated for FFRs >150 Hz, above which only subcortical sources remained active. Our results show that (i) the site of FFR generation varies critically with stimulus frequency; and (ii) opposite to the pattern observed in MEG, subcortical structures make the largest contribution to electrically recorded FFRs (AN ≥ BS > PAC). We infer that the cortical dominance observed in previous neuromagnetic data is likely due to the bias of MEG toward superficial brain tissue, underestimating the subcortical structures that drive most of the speech-FFR. Cleanly separating subcortical from cortical FFRs can be achieved by ensuring stimulus frequencies are >150-200 Hz, above the phase-locking limit of cortical neurons.
Affiliation(s)
- Gavin M Bidelman
- School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA; Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; University of Tennessee Health Sciences Center, Department of Anatomy and Neurobiology, Memphis, TN, USA.
14
Amplitude modulation rate dependent topographic organization of the auditory steady-state response in human auditory cortex. Hear Res 2017; 354:102-108. [PMID: 28917446 DOI: 10.1016/j.heares.2017.09.003] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/04/2017] [Revised: 08/06/2017] [Accepted: 09/08/2017] [Indexed: 11/22/2022]
Abstract
Periodic modulation of an acoustic feature, such as amplitude, over a certain frequency range leads to phase locking of neural responses to the envelope of the modulation. With electrophysiological methods, this neural activity pattern, also called the auditory steady-state response (aSSR), is visible after frequency transformation of the evoked response as a clear spectral peak at the modulation frequency. Although several studies employing the aSSR show, for example, the strongest responses at ∼40 Hz and an overall right-hemispheric dominance, it has not yet been investigated to what extent different modulation frequencies elicit aSSRs at a homogeneous source within auditory cortex, or whether the localization of the aSSR is topographically organized in a systematic manner. The latter would be suggested by previous neuroimaging work in monkeys and humans showing a periodotopic organization within and across distinct auditory fields. However, the sluggishness of the signals in that neuroimaging work prohibits inferences about the fine temporal features of the neural response. In the present study, we employed amplitude-modulated (AM) sounds over a range between 4 and 85 Hz to elicit aSSRs while recording brain activity via magnetoencephalography (MEG). Using beamforming and a finely spatially resolved grid restricted to auditory cortical processing regions, our study revealed a topographic representation of the aSSR that depends on AM rate, in particular in the medial-lateral (bilateral) and posterior-anterior (right auditory cortex) directions. In summary, our findings confirm previous studies showing that different AM rates elicit maximal responses in distinct neural populations. However, they extend these findings by showing that these respective neural ensembles in auditory cortex actually phase-lock their activity over a wide modulation-frequency range.
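The "clear spectral peak at the modulation frequency" described in this abstract can be illustrated with a minimal simulation; this is not the study's MEG/beamforming pipeline, and the 40 Hz rate, trial count, and noise level are arbitrary illustrative choices:

```python
import numpy as np

fs = 1000.0            # sampling rate (Hz); illustrative value
f_mod = 40.0           # modulation (stimulation) rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(0)

# Simulate 50 trials of a response phase-locked to the 40 Hz envelope,
# buried in additive noise.
trials = np.sin(2 * np.pi * f_mod * t) + rng.normal(0, 3.0, (50, t.size))

# Averaging across trials suppresses the non-phase-locked noise.
evoked = trials.mean(axis=0)

# Frequency transformation of the evoked response: the aSSR appears
# as a spectral peak at the modulation frequency.
spectrum = np.abs(np.fft.rfft(evoked))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
peak_freq = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
print(peak_freq)  # ≈ 40 Hz
```

The same logic underlies real aSSR analyses: whatever the electrode or source waveform, phase-locked activity survives trial averaging and concentrates in the modulation-frequency bin.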
15
Yamagishi S, Otsuka S, Furukawa S, Kashino M. Subcortical correlates of auditory perceptual organization in humans. Hear Res 2016; 339:104-11. [PMID: 27371867 DOI: 10.1016/j.heares.2016.06.016] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/16/2016] [Revised: 06/22/2016] [Accepted: 06/27/2016] [Indexed: 11/25/2022]
Abstract
To make sense of complex auditory scenes, the auditory system sequentially organizes auditory components into perceptual objects or streams. In the conventional view of this process, the cortex plays a major role in perceptual organization, and subcortical mechanisms merely provide the cortex with acoustical features. Here, we show that the neural activities of the brainstem are linked to perceptual organization, which alternates spontaneously for human listeners without any stimulus change. The stimulus used in the experiment was an unchanging sequence of repeated triplet tones, which can be interpreted as either one or two streams. Listeners were instructed to report the perceptual states whenever they experienced perceptual switching between one and two streams throughout the stimulus presentation. Simultaneously, we recorded event related potentials with scalp electrodes. We measured the frequency-following response (FFR), which is considered to originate from the brainstem. We also assessed thalamo-cortical activity through the middle-latency response (MLR). The results demonstrate that the FFR and MLR varied with the state of auditory stream perception. In addition, we found that the MLR change precedes the FFR change with perceptual switching from a one-stream to a two-stream percept. This suggests that there are top-down influences on brainstem activity from the thalamo-cortical pathway. These findings are consistent with the idea of a distributed, hierarchical neural network for perceptual organization and suggest that the network extends to the brainstem level.
Affiliation(s)
- Shimpei Yamagishi
- Interdisciplinary Graduate School of Science and Engineering, Tokyo Institute of Technology, Yokohama, Kanagawa, 226-8503, Japan.
- Sho Otsuka
- NTT Communication Science Laboratories, NTT Corporation, 3-1 Morinosato Wakamiya, Atsugi, Kanagawa, 243-0198, Japan.
- Shigeto Furukawa
- NTT Communication Science Laboratories, NTT Corporation, 3-1 Morinosato Wakamiya, Atsugi, Kanagawa, 243-0198, Japan.
- Makio Kashino
- Interdisciplinary Graduate School of Science and Engineering, Tokyo Institute of Technology, Yokohama, Kanagawa, 226-8503, Japan; NTT Communication Science Laboratories, NTT Corporation, 3-1 Morinosato Wakamiya, Atsugi, Kanagawa, 243-0198, Japan.
16
Marsh JE, Campbell TA. Processing Complex Sounds Passing through the Rostral Brainstem: The New Early Filter Model. Front Neurosci 2016; 10:136. [PMID: 27242396 PMCID: PMC4861936 DOI: 10.3389/fnins.2016.00136] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2015] [Accepted: 03/17/2016] [Indexed: 11/13/2022] Open
Abstract
The rostral brainstem receives both “bottom-up” input from the ascending auditory system and “top-down” descending corticofugal connections. Speech information passing through the inferior colliculus of elderly listeners reflects the periodicity envelope of a speech syllable. This information arguably also reflects a composite of temporal-fine-structure (TFS) information from the higher frequency vowel harmonics of that repeated syllable. The amplitude of those higher frequency harmonics, bearing even higher frequency TFS information, correlates positively with the word recognition ability of elderly listeners under reverberatory conditions. Also relevant is that working memory capacity (WMC), which is subject to age-related decline, constrains the processing of sounds at the level of the brainstem. Turning to the effects of a visually presented sensory or memory load on auditory processes, there is a load-dependent reduction of that processing, as manifest in the auditory brainstem responses (ABR) evoked by to-be-ignored clicks. Wave V decreases in amplitude with increases in the visually presented memory load. A visually presented sensory load also produces a load-dependent reduction of a slightly different sort: The sensory load of visually presented information limits the disruptive effects of background sound upon working memory performance. A new early filter model is thus advanced whereby systems within the frontal lobe (affected by sensory or memory load) cholinergically influence top-down corticofugal connections. Those corticofugal connections constrain the processing of complex sounds such as speech at the level of the brainstem. Selective attention thereby limits the distracting effects of background sound entering the higher auditory system via the inferior colliculus. Processing TFS in the brainstem relates to perception of speech under adverse conditions. 
Attentional selectivity is crucial when the signal heard is degraded or masked: e.g., speech in noise, speech in reverberatory environments. The assumptions of a new early filter model are consistent with these findings: A subcortical early filter, with a predictive selectivity based on acoustical (linguistic) context and foreknowledge, is under cholinergic top-down control. A prefrontal capacity limitation constrains this top-down control, which is guided by the cholinergic processing of contextual information in working memory.
Collapse
Affiliation(s)
- John E Marsh
- School of Psychology, University of Central Lancashire, Preston, UK; Department of Building, Energy and Environmental Engineering, University of Gävle, Gävle, Sweden
- Tom A Campbell
- Neuroscience Center, University of Helsinki, Helsinki, Finland
17
Cai R, Caspary DM. GABAergic inhibition shapes SAM responses in rat auditory thalamus. Neuroscience 2015; 299:146-55. [PMID: 25943479 PMCID: PMC4457678 DOI: 10.1016/j.neuroscience.2015.04.062] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2014] [Revised: 04/27/2015] [Accepted: 04/27/2015] [Indexed: 01/03/2023]
Abstract
Auditory thalamus (medial geniculate body [MGB]) receives ascending inhibitory GABAergic inputs from inferior colliculus (IC) and descending GABAergic projections from the thalamic reticular nucleus (TRN), with both inputs postulated to play a role in shaping temporal responses. Previous studies suggested that enhanced processing of temporally rich stimuli occurs at the level of MGB, with our recent study demonstrating enhanced GABA sensitivity in MGB compared to IC. The present study used sinusoidal amplitude-modulated (SAM) stimuli to generate modulation transfer functions (MTFs), to examine the role of GABAergic inhibition in shaping the response properties of MGB single units in anesthetized rats. Rate MTFs (rMTFs) were parsed into "bandpass (BP)", "mixed (Mixed)", "highpass (HP)" or "atypical" response types, with most units showing the Mixed response type. GABAA receptor blockade with iontophoretic application of the GABAA receptor (GABAAR) antagonist gabazine (GBZ) selectively altered the response properties of most MGB neurons examined. Mixed and HP units showed significant GABAAR-mediated SAM-evoked rate response changes at higher modulation frequencies (fms), which were also altered by blockade of N-methyl-d-aspartic acid (NMDA) receptors with (2R)-amino-5-phosphonopentanoate (AP5). BP units, and the lower arm of Mixed units, responded to GABAAR blockade with increased responses to SAM stimuli at or near the rate best modulation frequency (rBMF). The ability of GABA circuits to shape responses at higher modulation frequencies is an emergent property of MGB units, not observed at lower levels of the auditory pathway, and may reflect activation of MGB NMDA receptors (Rabang and Bartlett, 2011; Rabang et al., 2012). Together, GABAARs exert selective rate control over selected fms, generally without changing the units' response type. These results showed that coding of modulated stimuli at the level of auditory thalamus is, at least in part, strongly controlled by GABA neurotransmission, in delicate balance with glutamatergic neurotransmission.
Collapse
Affiliation(s)
- R Cai
- Southern Illinois University School of Medicine, Department of Pharmacology, Springfield, IL, United States
- D M Caspary
- Southern Illinois University School of Medicine, Department of Pharmacology, Springfield, IL, United States.
18
Eggermont JJ. Animal models of auditory temporal processing. Int J Psychophysiol 2015; 95:202-15. [DOI: 10.1016/j.ijpsycho.2014.03.011] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2013] [Revised: 03/27/2014] [Accepted: 03/27/2014] [Indexed: 10/25/2022]
19
Suta D, Popelář J, Burianová J, Syka J. Cortical representation of species-specific vocalizations in Guinea pig. PLoS One 2013; 8:e65432. [PMID: 23785425 PMCID: PMC3681779 DOI: 10.1371/journal.pone.0065432] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2012] [Accepted: 04/30/2013] [Indexed: 11/18/2022] Open
Abstract
We investigated the representation of four typical guinea pig vocalizations in the auditory cortex (AI) of anesthetized guinea pigs, with the aim of comparing cortical data to the data already published for identical calls in subcortical structures - the inferior colliculus (IC) and medial geniculate body (MGB). Like subcortical neurons, cortical neurons typically responded to many calls with a time-locked response to one or more temporal elements of the calls. The neuronal response patterns in the AI correlated well with the sound temporal envelope of chirp (an isolated short phrase), but correlated less well in the case of chutter and whistle (longer calls) or purr (a call with a fast repetition rate of phrases). Neuronal rate vs. characteristic frequency profiles provided only a coarse representation of the calls' frequency spectra. A comparison between the activity in the AI and those of subcortical structures showed a different transformation of the neuronal response patterns from the IC to the AI for individual calls: i) while the temporal representation of chirp remained unchanged, the representations of whistle and chutter were transformed at the thalamic level and the response to purr at the cortical level; ii) for the wideband calls (whistle, chirp) the rate representation of the call spectra was preserved in the AI and MGB at the level present in the IC, while in the case of low-frequency calls (chutter, purr), the representation was less precise in the AI and MGB than in the IC; iii) the difference in the response strength to natural and time-reversed whistle was found to be smaller in the AI than in the IC or MGB.
Collapse
Affiliation(s)
- Daniel Suta
- Department of Auditory Neuroscience, Institute of Experimental Medicine, Academy of Sciences of the Czech Republic, Prague, Czech Republic.
20
MK-801 disrupts and nicotine augments 40 Hz auditory steady state responses in the auditory cortex of the urethane-anesthetized rat. Neuropharmacology 2013; 73:1-9. [PMID: 23688921 DOI: 10.1016/j.neuropharm.2013.05.006] [Citation(s) in RCA: 34] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2012] [Revised: 03/30/2013] [Accepted: 05/06/2013] [Indexed: 01/29/2023]
Abstract
Patients with schizophrenia show marked deficits in processing sensory inputs, including a reduction in the generation and synchronization of 40 Hz gamma oscillations in response to steady-state auditory stimulation. Such deficits are not readily demonstrable at other input frequencies. Acute administration of NMDA antagonists to healthy human subjects or laboratory animals is known to reproduce many sensory and cognitive deficits seen in schizophrenia patients. In the following study, we tested the hypothesis that the NMDA antagonist MK-801 would selectively disrupt steady-state gamma entrainment in the auditory cortex of the urethane-anesthetized rat. Moreover, we further hypothesized that nicotinic receptor activation would alleviate this disruption. Auditory steady state responses were recorded in response to auditory stimuli delivered over a range of frequencies (10-80 Hz) and averaged over 50 trials. Evoked power was computed under baseline condition and after vehicle or MK-801 (0.03 mg/kg, iv). MK-801 produced a significant attenuation in response to 40 Hz auditory stimuli while entrainment to other frequencies was not affected. Time-frequency analysis revealed deficits in both power and phase-locking to 40 Hz. Nicotine (0.1 mg/kg, iv) administered after MK-801 reversed the attenuation of the 40 Hz response. Administered alone, nicotine augmented 40 Hz steady state power and phase-locking. Nicotine's effects were blocked by simultaneous administration of the α4β2 antagonist DHβE. Thus, we report for the first time a rodent model that mimics a core neurophysiological deficit seen in patients with schizophrenia and a pharmacological approach to alleviate it.
21
Abstract
Pitch, our perception of how high or low a sound is on a musical scale, is a fundamental perceptual attribute of sounds and is important for both music and speech. After more than a century of research, the exact mechanisms used by the auditory system to extract pitch are still being debated. Theoretically, pitch can be computed using either spectral or temporal acoustic features of a sound. We have investigated how cues derived from the temporal envelope and spectrum of an acoustic signal are used for pitch extraction in the common marmoset (Callithrix jacchus), a vocal primate species, by measuring pitch discrimination behaviorally and examining pitch-selective neuronal responses in auditory cortex. We find that pitch is extracted by marmosets using temporal envelope cues for lower pitch sounds composed of higher-order harmonics, whereas spectral cues are used for higher pitch sounds with lower-order harmonics. Our data support dual-pitch processing mechanisms, originally proposed by psychophysicists based on human studies, whereby pitch is extracted using a combination of temporal envelope and spectral cues.
22
Schneider AD, Cullen KE, Chacron MJ. In vivo conditions induce faithful encoding of stimuli by reducing nonlinear synchronization in vestibular sensory neurons. PLoS Comput Biol 2011; 7:e1002120. [PMID: 21814508 PMCID: PMC3140969 DOI: 10.1371/journal.pcbi.1002120] [Citation(s) in RCA: 13] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2010] [Accepted: 05/26/2011] [Indexed: 12/04/2022] Open
Abstract
Previous studies have shown that neurons within the vestibular nuclei (VN) can faithfully encode the time course of sensory input through changes in firing rate in vivo. However, studies performed in vitro have shown that these same VN neurons often display nonlinear synchronization (i.e. phase locking) in their spiking activity to the local maxima of sensory input, thereby severely limiting their capacity for faithful encoding of said input through changes in firing rate. We investigated this apparent discrepancy by studying the effects of in vivo conditions on VN neuron activity in vitro using a simple, physiologically based, model of cellular dynamics. We found that membrane potential oscillations were evoked both in response to step and zap current injection for a wide range of channel conductance values. These oscillations gave rise to a resonance in the spiking activity that causes synchronization to sinusoidal current injection at frequencies below 25 Hz. We hypothesized that the apparent discrepancy between VN response dynamics measured in in vitro conditions (i.e., consistent with our modeling results) and the dynamics measured in vivo conditions could be explained by an increase in trial-to-trial variability under in vivo vs. in vitro conditions. Accordingly, we mimicked more physiologically realistic conditions in our model by introducing a noise current to match the levels of resting discharge variability seen in vivo as quantified by the coefficient of variation (CV). While low noise intensities corresponding to CV values in the range 0.04-0.24 only eliminated synchronization for low (<8 Hz) frequency stimulation but not high (>12 Hz) frequency stimulation, higher noise intensities corresponding to CV values in the range 0.5-0.7 almost completely eliminated synchronization for all frequencies. Our results thus predict that, under natural (i.e. in vivo) conditions, the vestibular system uses increased variability to promote fidelity of encoding by single neurons. This prediction can be tested experimentally in vitro.
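The coefficient of variation (CV) used here to quantify resting-discharge variability is the standard deviation of the interspike intervals divided by their mean. A minimal sketch with synthetic spike trains (not the model's output; interval values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def cv(isis):
    """Coefficient of variation of interspike intervals: SD / mean."""
    isis = np.asarray(isis, dtype=float)
    return isis.std() / isis.mean()

# A perfectly regular spike train has CV = 0; a Poisson spike train
# (exponentially distributed ISIs) has CV ≈ 1.  The ranges quoted in
# the abstract (0.04-0.24 vs. 0.5-0.7) sit between these extremes.
regular = np.full(1000, 10.0)           # 10 ms ISIs, no jitter
poisson = rng.exponential(10.0, 1000)   # exponential ISIs, mean 10 ms

print(cv(regular))   # 0.0
print(cv(poisson))   # close to 1
```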
Affiliation(s)
- Maurice J. Chacron
- Department of Physics, McGill University, Montreal, Quebec, Canada
- Department of Physiology, McGill University, Montreal, Quebec, Canada
23
Siveke I, Leibold C, Kaiser K, Grothe B, Wiegrebe L. Level-dependent latency shifts quantified through binaural processing. J Neurophysiol 2010; 104:2224-35. [PMID: 20702738 DOI: 10.1152/jn.00392.2010] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The mammalian binaural system compares the timing of monaural inputs with microsecond precision. This temporal precision is required for localizing sounds in azimuth. However, temporal features of the monaural inputs, in particular their latencies, highly depend on the overall sound level. In a combined psychophysical, electrophysiological, and modeling approach, we investigate how level-dependent latency shifts of the monaural responses are reflected in the perception and neural representation of interaural time differences. We exploit the sensitivity of the binaural system to the timing of high-frequency stimuli with binaurally incongruent envelopes. Using these novel stimuli, both the perceptually adjusted interaural time differences and the time differences extracted from electrophysiological recordings systematically depend on overall sound pressure level. The perceptual and electrophysiological time differences of the envelopes can be explained in an existing model of temporal integration only if a level-dependent firing threshold is added. Such an adjustment of firing threshold provides a temporally accurate neural code of the temporal structure of a stimulus and its binaural disparities independent of overall sound level.
Affiliation(s)
- Ida Siveke
- Division of Neurobiology, Department Biologie II, Ludwig-Maximilians-Universität München, Germany
24
Wallace MN, Coomber B, Sumner CJ, Grimsley JMS, Shackleton TM, Palmer AR. Location of cells giving phase-locked responses to pure tones in the primary auditory cortex. Hear Res 2010; 274:142-51. [PMID: 20630479 DOI: 10.1016/j.heares.2010.05.012] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/26/2010] [Revised: 05/23/2010] [Accepted: 05/24/2010] [Indexed: 11/30/2022]
Abstract
Phase-locked responses to pure tones have previously been described in the primary auditory cortex (AI) of the guinea pig. They are interesting because they show that some cells may use a temporal code for representing sounds of 60-300 Hz rather than the rate or place mechanisms used over most of AI. Our previous study had shown that the phase-locked responses were grouped together, but it was not clear whether they were in separate minicolumns or a larger macrocolumn. We now show that the phase-locked cells are arranged in a macrocolumn within AI that forms a subdivision of the isofrequency bands. Phase-locked responses were recorded from 158 multiunits using silicon-based multiprobes with four shanks. The phase-locked units responded most strongly in layers III/IV but were also recorded in layers II, V and VI. The column included cells with characteristic frequencies of 80 Hz-1.3 kHz (0.5-0.8 mm long) and was about 0.5 mm wide. It was located at a constant position at the intersection of the coronal plane 1 mm caudal to bregma and the suture that forms the lateral edge of the parietal bone.
Affiliation(s)
- M N Wallace
- MRC Institute of Hearing Research, University Park, Nottingham NG7 2RD, UK.
25
Bendor D, Wang X. Neural coding of periodicity in marmoset auditory cortex. J Neurophysiol 2010; 103:1809-22. [PMID: 20147419 DOI: 10.1152/jn.00281.2009] [Citation(s) in RCA: 54] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Pitch, our perception of how high or low a sound is on a musical scale, crucially depends on a sound's periodicity. If an acoustic signal is temporally jittered so that it becomes aperiodic, the pitch will no longer be perceivable even though other acoustical features that normally covary with pitch are unchanged. Previous electrophysiological studies investigating pitch have typically used only periodic acoustic stimuli, and as such these studies cannot distinguish between a neural representation of pitch and an acoustical feature that only correlates with pitch. In this report, we examine in the auditory cortex of awake marmoset monkeys (Callithrix jacchus) the neural coding of a periodic signal's repetition rate, an acoustic feature that covaries with pitch. We first examine whether individual neurons show similar repetition rate tuning for different periodic acoustic signals. We next measure how sensitive these neural representations are to the temporal regularity of the acoustic signal. We find that neurons throughout auditory cortex covary their firing rate with the repetition rate of an acoustic signal. However, similar repetition rate tuning across acoustic stimuli and sensitivity to temporal regularity were generally only observed in a small group of neurons found near the anterolateral border of primary auditory cortex, the location of a previously identified putative pitch processing center. These results suggest that although the encoding of repetition rate is a general component of auditory cortical processing, the neural correlate of periodicity is confined to a special class of pitch-selective neurons within the putative pitch processing center of auditory cortex.
Affiliation(s)
- Daniel Bendor
- Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Bldg. 46, Rm. 5233, 43 Vassar St., Cambridge, MA, USA.
26
Chandrasekaran B, Kraus N. The scalp-recorded brainstem response to speech: neural origins and plasticity. Psychophysiology 2009; 47:236-46. [PMID: 19824950 DOI: 10.1111/j.1469-8986.2009.00928.x] [Citation(s) in RCA: 319] [Impact Index Per Article: 19.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
Considerable progress has been made in our understanding of the remarkable fidelity with which the human auditory brainstem represents key acoustic features of the speech signal. The brainstem response to speech can be assessed noninvasively by examining scalp-recorded evoked potentials. Morphologically, two main components of the scalp-recorded brainstem response can be differentiated, a transient onset response and a sustained frequency-following response (FFR). Together, these two components are capable of conveying important segmental and suprasegmental information inherent in the typical speech syllable. Here we examine the putative neural sources of the scalp-recorded brainstem response and review recent evidence that demonstrates that the brainstem response to speech is dynamic in nature and malleable by experience. Finally, we propose a putative mechanism for experience-dependent plasticity at the level of the brainstem.
27
Abstract
How the brain processes temporal information embedded in sounds is a core question in auditory research. This article synthesizes recent studies from our laboratory regarding neural representations of time-varying signals in auditory cortex and thalamus in awake marmoset monkeys. Findings from these studies show that 1) the primary auditory cortex (A1) uses a temporal representation to encode slowly varying acoustic signals and a firing rate-based representation to encode rapidly changing acoustic signals, 2) the dual temporal-rate representations in A1 represent a progressive transformation from the auditory thalamus, 3) firing rate-based representations in the form of a monotonic rate code are also found to encode slow temporal repetitions in the range of acoustic flutter in A1 and more prevalently in the cortical fields rostral to A1 in the core region of the marmoset auditory cortex, suggesting further temporal-to-rate transformations in higher cortical areas. These findings indicate that the auditory cortex forms internal representations of the temporal characteristics of sounds that are no longer faithful replicas of their acoustic structures. We suggest that such transformations are necessary for the auditory cortex to perform a wide range of functions including sound segmentation, object processing and multi-sensory integration.
28
Wang X, Lu T, Bendor D, Bartlett E. Neural coding of temporal information in auditory thalamus and cortex. Neuroscience 2008; 154:294-303. [PMID: 18555164 DOI: 10.1016/j.neuroscience.2008.03.065] [Citation(s) in RCA: 61] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2007] [Revised: 03/24/2008] [Accepted: 03/24/2008] [Indexed: 10/22/2022]
Abstract
How the brain processes temporal information embedded in sounds is a core question in auditory research. This article synthesizes recent studies from our laboratory regarding neural representations of time-varying signals in auditory cortex and thalamus in awake marmoset monkeys. Findings from these studies show that 1) the primary auditory cortex (A1) uses a temporal representation to encode slowly varying acoustic signals and a firing rate-based representation to encode rapidly changing acoustic signals, 2) the dual temporal-rate representations in A1 represent a progressive transformation from the auditory thalamus, 3) firing rate-based representations in the form of a monotonic rate code are also found to encode slow temporal repetitions in the range of acoustic flutter in A1 and more prevalently in the cortical fields rostral to A1 in the core region of marmoset auditory cortex, suggesting further temporal-to-rate transformations in higher cortical areas. These findings indicate that the auditory cortex forms internal representations of temporal characteristics of sounds that are no longer faithful replicas of their acoustic structures. We suggest that such transformations are necessary for the auditory cortex to perform a wide range of functions including sound segmentation, object processing and multi-sensory integration.
Affiliation(s)
- X Wang
- Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University School of Medicine, 720 Rutland Avenue, Traylor 410, Baltimore, MD 21205, USA.
29
Middlebrooks JC. Auditory cortex phase locking to amplitude-modulated cochlear implant pulse trains. J Neurophysiol 2008; 100:76-91. [PMID: 18367697 DOI: 10.1152/jn.01109.2007] [Citation(s) in RCA: 48] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Cochlear implant speech processors transmit temporal features of sound as amplitude modulation of constant-rate electrical pulse trains. This study evaluated the central representation of amplitude modulation in the form of phase-locked firing of neurons in the auditory cortex. Anesthetized pigmented guinea pigs were implanted with cochlear electrode arrays. Stimuli were 254 pulse/s (pps) trains of biphasic electrical pulses, sinusoidally modulated with frequencies of 10-64 Hz and modulation depths of -40 to -5 dB re 100% (i.e., 1-56.2% modulation). Single- and multiunit activity was recorded from multi-site silicon-substrate probes. The maximum frequency for significant phase locking (limiting modulation frequency) was ≥60 Hz for 42% of recording sites, whereas phase locking to pulses of unmodulated pulse trains rarely exceeded 30 pps. The strength of phase locking to frequencies ≥40 Hz often varied nonmonotonically with modulation depth, commonly peaking at modulation depths around -15 to -10 dB. Cortical phase locking coded modulation frequency reliably, whereas a putative rate code for frequency was confounded by rate changes with modulation depth. Group delay computed from the slope of mean phase versus modulation frequency tended to increase with decreasing limiting modulation frequency. Neurons in cortical extragranular layers had lower limiting modulation frequencies than did neurons in thalamic afferent layers. Those observations suggest that the low-pass characteristic of cortical phase locking results from intracortical filtering mechanisms. The results show that cortical neurons can phase lock to modulated electrical pulse trains across the range of modulation frequencies and depths presented by cochlear implant speech processors.
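The abstract does not name its phase-locking statistic, but the conventional measure in this literature is the Goldberg-Brown vector strength (1 = every spike at the same stimulus phase, 0 = spikes uniformly spread over the cycle). A minimal sketch with synthetic spike times (the 40 Hz rate and jitter values are illustrative, not the study's parameters):

```python
import numpy as np

def vector_strength(spike_times, freq):
    """Goldberg-Brown vector strength of spike times (s) at freq (Hz)."""
    phases = 2 * np.pi * freq * np.asarray(spike_times)
    # Resultant length of unit vectors at each spike phase, normalized.
    return np.hypot(np.cos(phases).sum(), np.sin(phases).sum()) / len(spike_times)

f_mod = 40.0              # modulation frequency (Hz)
period = 1.0 / f_mod
rng = np.random.default_rng(2)

# Spikes clustered at a fixed phase of each cycle -> VS near 1.
locked = np.arange(100) * period + rng.normal(0, 0.2e-3, 100)
# Spikes at random times over the same duration -> VS near 0.
random_spikes = rng.uniform(0, 100 * period, 100)

print(vector_strength(locked, f_mod))         # close to 1
print(vector_strength(random_spikes, f_mod))  # close to 0
```

Significance of phase locking is then typically assessed with the Rayleigh statistic, 2·n·VS², which tests the spike-phase distribution against uniformity.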
Collapse
Affiliation(s)
- John C Middlebrooks
- Kresge Hearing Research Institute, Department of Otolaryngology Head and Neck Surgery, University of Michigan, Ann Arbor, Michigan, USA.
Collapse
|
30
|
Wallace MN, Anderson LA, Palmer AR. Phase-locked responses to pure tones in the auditory thalamus. J Neurophysiol 2007; 98:1941-52. [PMID: 17699690 DOI: 10.1152/jn.00697.2007] [Citation(s) in RCA: 30] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Accurate temporal coding of low-frequency tones by spikes that are locked to a particular phase of the sine wave (phase-locking) occurs among certain groups of neurons at various processing levels in the brain. Phase-locked responses have previously been studied in the inferior colliculus and neocortex of the guinea pig, and we now describe the responses in the auditory thalamus. Recordings were made from 241 single units, 32 (13%) of which showed phase-locked responses. Units with phase-locked responses were mainly (82%) located in the ventral division of the medial geniculate body (MGB), with the remainder (18%) in the medial division; none were found in the dorsal or shell divisions. The upper limiting frequency of phase-locking varied greatly between units (60-1,100 Hz) and between anatomical divisions. The upper limit in the ventral division was 520 Hz and in the medial division was 1,100 Hz. The range of steady-state delays calculated from phase plots also varied: ventral division, 8.6-14 ms (mean 11.1 ms; SD 1.56); medial division, 7.5-11 ms (mean 9.3 ms; SD 1.5). Taken together, these measurements are consistent with the medial division receiving a phase-locked input directly from the brain stem, without an obligatory relay in the inferior colliculus. Cells in both the ventral and medial divisions of the MGB showed a response that phase-locked to the fundamental frequency of a guinea pig purr and may be involved in analyzing communication calls.
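The steady-state delays quoted above come from phase plots: response phase grows linearly with stimulus frequency, and the slope of that line divided by 2π gives the delay. A small sketch of this calculation on synthetic data (the 11 ms delay is an assumed value in the ventral-division range, not the paper's recordings):

```python
import numpy as np

def delay_from_phase_plot(freqs_hz, phase_rad):
    """Steady-state delay (s) from the slope of unwrapped phase vs frequency.

    A pure delay d produces a phase lag of 2*pi*f*d, so a straight-line fit
    of phase against frequency has slope 2*pi*d.
    """
    slope, _intercept = np.polyfit(freqs_hz, np.unwrap(phase_rad), 1)
    return slope / (2 * np.pi)

# Synthetic unit with an 11 ms delay
freqs = np.arange(100, 525, 25, dtype=float)  # steps fine enough to unwrap safely
true_delay = 0.011
measured = np.mod(2 * np.pi * freqs * true_delay, 2 * np.pi)  # phase as measured (wrapped)

est = delay_from_phase_plot(freqs, measured)
```

The frequency step must be small enough that the phase advances by less than π between points, otherwise `np.unwrap` cannot recover the true slope.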
Collapse
Affiliation(s)
- Mark N Wallace
- Medical Research Council, Institute of Hearing Research, University Park, Nottingham, UK.
Collapse
|
31
|
Palmer AR, Hall DA, Sumner C, Barrett DJK, Jones S, Nakamoto K, Moore DR. Some investigations into non-passive listening. Hear Res 2007; 229:148-57. [PMID: 17275232 DOI: 10.1016/j.heares.2006.12.007] [Citation(s) in RCA: 27] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/07/2006] [Revised: 12/07/2006] [Accepted: 12/07/2006] [Indexed: 10/23/2022]
Abstract
Our knowledge of the function of the auditory nervous system is based upon a wealth of data obtained, for the most part, in anaesthetised animals. More recently, it has been generally acknowledged that factors such as attention profoundly modulate the activity of sensory systems, and that this can take place at many levels of processing. Imaging studies, in particular, have revealed greater activation of auditory areas, and of areas outside sensory processing regions, when attending to a stimulus. We present here a brief review of the consequences of such non-passive listening and go on to describe some of the experiments we are conducting to investigate them. In imaging studies using fMRI, we can demonstrate the activation of attention networks that are non-specific to the sensory modality, as well as greater and different activation of the areas of the supratemporal plane that include primary and secondary auditory areas. The profuse descending connections of the auditory system seem likely to be part of the mechanisms subserving attention to sound. These are generally thought to be largely inactivated by anaesthesia. However, we have been able to demonstrate that, even in an anaesthetised preparation, removing the descending control from the cortex leads to quite profound changes in the temporal patterns of activation by sounds in the thalamus and inferior colliculus. Some of these effects seem to be specific to the ear of stimulation and affect interaural processing. To bridge these observations, we are developing an awake behaving preparation with freely moving animals, in which it will be possible to investigate the effects of consciousness (by contrasting awake and anaesthetised states) and of passive versus active listening.
Collapse
Affiliation(s)
- A R Palmer
- MRC Institute of Hearing Research, University Park, Nottingham, UK.
Collapse
|
32
|
Christianson GB, Peña JL. Preservation of spectrotemporal tuning between the nucleus laminaris and the inferior colliculus of the barn owl. J Neurophysiol 2007; 97:3544-53. [PMID: 17314241 PMCID: PMC2532515 DOI: 10.1152/jn.01162.2006] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Performing sound recognition is a task that requires an encoding of the time-varying spectral structure of the auditory stimulus. Similarly, computation of the interaural time difference (ITD) requires knowledge of the precise timing of the stimulus. Consistent with this, low-level nuclei of birds and mammals implicated in ITD processing encode the ongoing phase of a stimulus. However, the brain areas that follow the binaural convergence for the computation of ITD show a reduced capacity for phase locking. In addition, we have shown that in the barn owl there is a pooling of ITD-responsive neurons to improve the reliability of ITD coding. Here we demonstrate that despite two stages of convergence and an effective loss of phase information, the auditory system of the anesthetized barn owl displays a graceful transition to an envelope coding that preserves the spectrotemporal information throughout the ITD pathway to the neurons of the core of the central nucleus of the inferior colliculus.
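ITD computation of the kind described here is often modelled as finding the lag that maximizes the cross-correlation of the two ears' signals. A self-contained sketch under assumed signal parameters (not code from the study):

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Interaural time difference (s): the lag of the cross-correlation peak.

    Positive values mean the right-ear signal lags the left-ear signal.
    """
    corr = np.correlate(right, left, mode="full")
    lags = np.arange(-len(left) + 1, len(right)) / fs
    return lags[np.argmax(corr)]

fs = 100_000                      # 100 kHz sample rate
n = 5000                          # 50 ms stimulus
t = np.arange(n) / fs
tone = np.sin(2 * np.pi * 500 * t)
shift = int(100e-6 * fs)          # 100 microsecond ITD = 10 samples
left = tone
right = np.roll(tone, shift)      # right ear delayed (tone is periodic, so roll = shift)

itd = estimate_itd(left, right, fs)
```

For a pure tone the cross-correlation also peaks at lags one period away; the true peak wins here only because the finite-duration overlap is largest at the true lag, which is why narrowband ITD estimates are phase-ambiguous in practice.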
Collapse
|
33
|
Abstract
In contrast to the visual system, the auditory system has longer subcortical pathways and more spiking synapses between the peripheral receptors and the cortex. This unique organization reflects the needs of the auditory system to extract behaviorally relevant information from a complex acoustic environment using strategies different from those used by other sensory systems. The neural representations of acoustic information in auditory cortex can be characterized by three types: (1) isomorphic (faithful) representations of acoustic structures; (2) non-isomorphic transformations of acoustic features; and (3) transformations from acoustical to perceptual dimensions. The challenge facing auditory neurophysiologists is to understand the nature of the latter two transformations. In this article, I review recent studies from our laboratory regarding temporal discharge patterns in auditory cortex of awake marmosets and cortical representations of time-varying signals. Findings from these studies show that (1) firing patterns of neurons in auditory cortex depend on stimulus optimality and context and (2) the auditory cortex forms internal representations of sounds that are no longer faithful replicas of their acoustic structures.
Collapse
Affiliation(s)
- Xiaoqin Wang
- Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA.
Collapse
|
34
|
Schofield BR, Coomes DL, Schofield RM. Cells in auditory cortex that project to the cochlear nucleus in guinea pigs. J Assoc Res Otolaryngol 2006; 7:95-109. [PMID: 16557424 PMCID: PMC2504579 DOI: 10.1007/s10162-005-0025-4] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2004] [Accepted: 12/07/2005] [Indexed: 11/28/2022] Open
Abstract
Fluorescent retrograde tracers were used to identify the cells in auditory cortex that project directly to the cochlear nucleus (CN). Following injection of a tracer into the CN, cells were labeled bilaterally in primary auditory cortex and the dorsocaudal auditory field as well as several surrounding fields. On both sides, the cells were limited to layer V. The size of labeled cell bodies varied considerably, suggesting that different cell types may project to the CN. Cells ranging from small to medium in size were present bilaterally, whereas the largest cells were labeled only ipsilaterally. In optimal cases, the extent of dendritic labeling was sufficient to identify the morphologic class. Many cells had an apical dendrite that could be traced to a terminal tuft in layer I. Such "tufted" pyramidal cells were identified both ipsilateral and contralateral to the injected CN. The results suggest that the direct pathway from auditory cortex to the cochlear nucleus is substantial and is likely to play a role in modulating the way the cochlear nucleus processes acoustic stimuli.
Collapse
Affiliation(s)
- Brett R Schofield
- Department of Neurobiology, Northeastern Ohio Universities College of Medicine, 4209 St. Rt. 44, P.O. Box 95, Rootstown, OH 44272, USA.
Collapse
|
35
|
Wallace MN, Shackleton TM, Anderson LA, Palmer AR. Representation of the purr call in the guinea pig primary auditory cortex. Hear Res 2006; 204:115-26. [PMID: 15925197 DOI: 10.1016/j.heares.2005.01.007] [Citation(s) in RCA: 32] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/07/2003] [Accepted: 01/18/2005] [Indexed: 11/26/2022]
Abstract
Guinea pigs produce the low-frequency purr or rumble call as an alerting signal. A digitised example of the call was presented to anaesthetised guinea pigs via a closed sound system while recording from the primary auditory cortex. The exemplar used in this study had 9 regular phrases with their centres spaced about 80 ms apart. Low-frequency (<1.1 kHz) units responded best to the call, but within this population there were four separate groups: (1) cells that responded vigorously to many or all of the 9 phrases; (2) cells that gave an onset response; (3) cells that only responded to a click embedded in the call; (4) cells that did not respond. Particular response types were often grouped together: thus, when electrode tracks orthogonal to the cortical surface were used, most units gave a similar response. There was no correlation between the type of response and the cortical depth. A similar range of response types was also found in the thalamus, and there was no evidence of a distinct response in the cortex attributable to intracortical processing. Cells in the cortex were able to represent the temporal structure of the purr with the same fidelity as cells in the thalamus.
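Responses locked to the call's phrases, spaced about 80 ms apart, are typically visualized with a peri-stimulus time histogram (PSTH). A minimal sketch with simulated spikes (the 9-phrase, 80 ms timing is taken from the abstract; the spike data and bin settings are invented for illustration):

```python
import numpy as np

def psth(spike_times, bin_width, duration):
    """Peri-stimulus time histogram: spike counts in fixed-width bins."""
    edges = np.arange(0, duration + bin_width, bin_width)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts, edges[:-1]

# Simulated "phrase follower": bursts at 9 phrase onsets spaced 80 ms apart
rng = np.random.default_rng(1)
phrase_onsets = np.arange(9) * 0.080
spikes = np.concatenate([onset + rng.uniform(0, 0.010, size=20)
                         for onset in phrase_onsets])

counts, bin_starts = psth(spikes, bin_width=0.010, duration=0.72)
peak_bins = bin_starts[counts >= 15]   # bins that captured a burst
```

With 10 ms bins the 9 bursts appear as 9 isolated peaks whose spacing recovers the 80 ms phrase period.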
Collapse
Affiliation(s)
- Mark N Wallace
- MRC Institute of Hearing Research, University Park, Nottingham NG7 2RD, UK.
Collapse
|
36
|
Jones SJ. Two ways of hearing--dissociation between spectral and temporal processes in the auditory cortex. SUPPLEMENTS TO CLINICAL NEUROPHYSIOLOGY 2006; 59:89-95. [PMID: 16893098 DOI: 10.1016/s1567-424x(09)70017-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/11/2023]
Affiliation(s)
- S J Jones
- Department of Clinical Neurophysiology, The National Hospital for Neurology and Neurosurgery, Queen Square, London, UK.
Collapse
|
37
|
Liu LF, Palmer AR, Wallace MN. Phase-locked responses to pure tones in the inferior colliculus. J Neurophysiol 2005; 95:1926-35. [PMID: 16339005 DOI: 10.1152/jn.00497.2005] [Citation(s) in RCA: 98] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
In the auditory system, some ascending pathways preserve the precise timing information present in a temporal code of frequency. This can be measured by studying responses that are phase-locked to the stimulus waveform. At each stage along a pathway, there is a reduction in the upper frequency limit of the phase-locking and an increase in the steady-state latency. In the guinea pig, phase-locked responses to pure tones have been described at various levels from auditory nerve to neocortex but not in the inferior colliculus (IC). Therefore we made recordings from 161 single units in guinea pig IC. Of these single units, 68% (110/161) showed phase-locked responses. Cells that phase-locked were mainly located in the central nucleus but also occurred in the dorsal cortex and external nucleus. The upper limiting frequency of phase-locking varied greatly between units (80-1,034 Hz) and between anatomical divisions. The upper limits in the three divisions were central nucleus, >1,000 Hz; dorsal cortex, 700 Hz; external nucleus, 320 Hz. The mean latencies also varied and were central nucleus, 8.2 ± 2.8 (SD) ms; dorsal cortex, 17.2 ms; external nucleus, 13.3 ms. We conclude that many cells in the central nucleus receive direct inputs from the brain stem, whereas cells in the external and dorsal divisions receive input from other structures that may include the forebrain.
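The "upper limiting frequency" reported here is the highest frequency at which a unit still phase locks significantly. A toy simulation (assumed jitter values; Rayleigh criterion 2·n·VS² > 13.8 for p < 0.001) shows how fixed spike-timing jitter caps that limit:

```python
import numpy as np

def limiting_frequency(jitter_sd, freqs, n_spikes=400, crit=13.8, seed=0):
    """Highest tested frequency (Hz) at which simulated spikes with fixed
    Gaussian timing jitter still phase lock significantly."""
    rng = np.random.default_rng(seed)
    limit = None
    for f in freqs:
        # spikes at a fixed phase of random cycles, blurred by timing jitter
        t = rng.integers(0, 1000, n_spikes) / f + rng.normal(0, jitter_sd, n_spikes)
        vs = np.abs(np.mean(np.exp(2j * np.pi * f * t)))   # vector strength
        if 2 * n_spikes * vs**2 > crit:                    # Rayleigh test
            limit = f
    return limit

freqs = np.arange(100, 1600, 100)
# 0.3 ms jitter: locking survives to around 1 kHz, like central-nucleus units
limit_precise = limiting_frequency(jitter_sd=0.0003, freqs=freqs)
# 1.5 ms jitter: locking collapses at a few hundred Hz
limit_sloppy = limiting_frequency(jitter_sd=0.0015, freqs=freqs)
```

The same timing jitter that barely degrades locking at 100 Hz wipes it out at 1 kHz, which is one simple account of why the limit falls at successive processing stages.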
Collapse
Affiliation(s)
- Liang-Fa Liu
- MRC Institute of Hearing Research, University Park, Nottingham, NG7 2RD UK
Collapse
|
38
|
Hertrich I, Mathiak K, Lutzenberger W, Ackermann H. Transient and phase-locked evoked magnetic fields in response to periodic acoustic signals. Neuroreport 2004; 15:1687-90. [PMID: 15232308 DOI: 10.1097/01.wnr.0000134930.04561.b2] [Citation(s) in RCA: 19] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
Using whole-head MEG, the time course and hemispheric lateralization of phase-locked brain responses to complex periodic acoustic signals (stimulus frequencies of 13, 22, 40, 67, or 111 Hz) were determined using a dipole analysis approach. Apart from systematic rate-induced changes in the amplitude and shape of the transient evoked magnetic fields (M50, M100), phase-locked brain activity emerged that was more pronounced over the right hemisphere than the left. Furthermore, this MEG component showed a consistent phase angle across subjects, indicating active synchronization mechanisms within auditory cortex that operate upon afferent input. Conceivably, these early side-differences in periodicity encoding contribute to, or even snowball into, hemispheric lateralization of higher-order aspects of central auditory processing such as melody perception.
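A "consistent phase angle across subjects" is the kind of claim assessed with circular statistics: the resultant vector length of the per-subject phases measures consistency (0 = uniformly scattered, 1 = identical). A brief sketch with made-up phase values:

```python
import numpy as np

def circular_mean_and_consistency(angles_rad):
    """Circular mean phase and resultant length of a set of phase angles."""
    z = np.mean(np.exp(1j * np.asarray(angles_rad)))
    return np.angle(z), np.abs(z)

# Hypothetical per-subject response phases (radians), tightly clustered
subject_phases = np.array([0.70, 0.81, 0.77, 0.85, 0.74, 0.79, 0.83, 0.72])
mean_phase, consistency = circular_mean_and_consistency(subject_phases)
```

Averaging the unit vectors rather than the raw angles avoids the wrap-around artifact that would occur if subjects' phases straddled ±π.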
Collapse
Affiliation(s)
- Ingo Hertrich
- Department of General Neurology, Hertie-Institute for Clinical Brain Research, University of Tübingen, Hoppe-Seyler-Strasse 3, D-72076 Tübingen, Germany.
Collapse
|
39
|
Abstract
Auditory steady-state responses (ASSR) to amplitude-modulated (AM) tones with carrier frequencies between 250 and 4000 Hz and modulation frequencies near 40 Hz were recorded using a 37-channel neuromagnetometer placed over the auditory cortex contralateral to the stimulated right ear. The ASSR sources were likely in the primary auditory cortex, located more anteriorly and more medially than the N1m sources. The ASSR amplitude decreased with increasing carrier frequency, the amplitude at 250 Hz being three times larger than at 4000 Hz. The amplitude of the ASSR to a test sound decreased in the presence of an interfering second AM sound. This suppression was greater when the interfering stimulus had a higher carrier frequency than the test tone, and greater when the test stimulus itself had a lower carrier frequency. Similar frequency specificity was observed when the interfering sound was a non-modulated pure tone. These results differ from those found for ASSRs elicited by modulation frequencies above 80 Hz, or for transient brainstem and middle-latency responses, and suggest substantial interactions between phase-locked activities at the level of the primary auditory cortex.
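ASSR amplitude at the modulation frequency is commonly read from the FFT bin at that frequency over a steady-state epoch. A self-contained sketch with a synthetic 40 Hz component buried in noise (assumed parameters, not the study's data):

```python
import numpy as np

def assr_amplitude(signal, fs, mod_freq):
    """Single-sided amplitude of the steady-state response at mod_freq (Hz)."""
    n = len(signal)
    spectrum = 2 * np.fft.rfft(signal) / n               # single-sided amplitude scaling
    freqs = np.fft.rfftfreq(n, 1 / fs)
    return np.abs(spectrum[np.argmin(np.abs(freqs - mod_freq))])

fs = 1000
n = 2000                                                 # 2 s epoch
t = np.arange(n) / fs
rng = np.random.default_rng(3)
# 40 Hz steady-state component of amplitude 5 in broadband noise
meg = 5.0 * np.sin(2 * np.pi * 40 * t) + rng.standard_normal(n)

amp = assr_amplitude(meg, fs, 40.0)
```

The epoch length is chosen so 40 Hz falls exactly on an FFT bin (an integer number of cycles), avoiding spectral leakage into neighbouring bins.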
Collapse
Affiliation(s)
- Bernhard Ross
- Institute of Biomagnetism and Biosignalanalysis, University Hospital, Kardinal von Galen Ring 10, 48129 Münster, Germany.
Collapse
|
40
|
Elhilali M, Fritz JB, Klein DJ, Simon JZ, Shamma SA. Dynamics of precise spike timing in primary auditory cortex. J Neurosci 2004; 24:1159-72. [PMID: 14762134 PMCID: PMC6793586 DOI: 10.1523/jneurosci.3825-03.2004] [Citation(s) in RCA: 120] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Although single units in primary auditory cortex (A1) exhibit accurate timing in their phasic response to the onset of sound (precision of a few milliseconds), paradoxically, they are unable to sustain synchronized responses to repeated stimuli at rates much beyond 20 Hz. To explore the relationship between these two aspects of cortical response, we designed a broadband stimulus with a slowly modulated spectrotemporal envelope riding on top of a rapidly modulated waveform (or fine structure). Using this stimulus, we quantified the ability of cortical cells to encode independently and simultaneously the stimulus envelope and fine structure. Specifically, by reverse-correlating unit responses with these two stimulus dimensions, we measured the spectrotemporal response fields (STRFs) associated with the processing of the envelope, the fine structure, and the complete stimulus. A1 cells respond well to the slow spectrotemporal envelopes and produce a wide variety of STRFs. In over 70% of cases, A1 units also track the fine-structure modulations precisely, throughout the stimulus, and for frequencies up to several hundred Hertz. Such a dual response, however, is contingent on the cell being driven by both fast and slow modulations, in that the response to the slowly modulated envelope gates the expression of the fine structure. We also demonstrate that either a simplified model of synaptic depression and facilitation, and/or a cortical network of thalamic excitation and cortical inhibition can account for major trends in the observed findings. Finally, we discuss the potential functional significance and perceptual relevance of these coexistent, complementary dynamic response modes.
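The reverse correlation used here to measure STRFs amounts to a spike-triggered average of the stimulus spectrogram. A minimal sketch with a simulated linear unit (all parameters invented for illustration; not the authors' pipeline):

```python
import numpy as np

def strf_reverse_correlation(stim_spec, spike_counts, n_lags):
    """Spectrotemporal response field as the spike-triggered average of the
    stimulus spectrogram. stim_spec: (n_freq, n_time); spike_counts: (n_time,)."""
    n_freq, n_time = stim_spec.shape
    strf = np.zeros((n_freq, n_lags))
    n_spikes = 0
    for t in range(n_lags, n_time):
        if spike_counts[t] > 0:
            # accumulate the spectrogram window preceding each spike
            strf += spike_counts[t] * stim_spec[:, t - n_lags:t]
            n_spikes += spike_counts[t]
    return strf / max(n_spikes, 1)

# Simulated neuron that fires when frequency channel 3 was energized 2 bins earlier
rng = np.random.default_rng(2)
spec = rng.standard_normal((8, 5000))           # white-noise "spectrogram"
drive = spec[3, :]
spikes = np.zeros(5000)
spikes[2:] = (drive[:-2] > 1.5).astype(float)   # threshold crossing, 2-bin latency

strf = strf_reverse_correlation(spec, spikes, n_lags=4)
peak_channel, peak_lag = np.unravel_index(np.abs(strf).argmax(), strf.shape)
```

The recovered STRF peaks at the simulated unit's preferred channel and latency; with a white-noise stimulus the spike-triggered average is an unbiased estimate of the linear filter.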
Collapse
Affiliation(s)
- Mounya Elhilali
- Institute for Systems Research, Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland 20742, USA
Collapse
|