1.
Sabu A, Irvine D, Grayden DB, Fallon J. Ensemble responses of auditory midbrain neurons in the cat to speech stimuli at different signal-to-noise ratios. Hear Res 2025; 456:109163. PMID: 39657280. DOI: 10.1016/j.heares.2024.109163.
Abstract
Originally reserved for those who are profoundly deaf, cochlear implantation is now common for people with partial hearing loss, particularly when combined with a hearing aid. This combined intervention enhances speech comprehension and sound quality compared to electrical stimulation alone, particularly in noisy environments, but the physiological basis for these benefits is not well understood. Our long-term aim is to elucidate the underlying physiological mechanisms of this improvement. As a first step, we investigated, in normal-hearing cats, the degree to which the patterns of neural activity evoked in the inferior colliculus (IC) by speech sounds in various levels of noise allow discrimination between those sounds. Neuronal responses were recorded simultaneously from 32 sites across the tonotopic axis of the IC in anaesthetised normal-hearing cats (n = 7). Speech sounds were presented at 20, 40 and 60 dB SPL in quiet and with increasing levels of additive noise (signal-to-noise ratios (SNRs) of -20, -15, -10, -5, 0, +5, +10, +15 and +20 dB). Neural discrimination was assessed using a Euclidean measure of distance between neural responses, yielding a function that reflects speech sound differentiation across SNRs. Responses of IC neurons reliably encoded the speech stimuli when presented in quiet, with optimal performance when an analysis bin-width of 5-10 ms was used. Discrimination thresholds did not depend on stimulus level and were best for shorter analysis bin-widths. This study sheds light on how the auditory midbrain represents speech sounds and provides baseline data with which responses to electro-acoustic speech sounds in partially deafened animals can be compared.
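The Euclidean neural-distance analysis described above can be sketched roughly as follows: responses to two stimuli are binned into spike-count vectors at a chosen analysis bin-width, and discriminability is taken as the Euclidean distance between the vectors. The bin-width, spike-time format, and function names here are illustrative assumptions, not the authors' code.

```python
import math

def bin_spikes(spike_times_ms, duration_ms, bin_width_ms):
    """Bin a list of spike times (ms) into spike counts per analysis bin."""
    n_bins = int(math.ceil(duration_ms / bin_width_ms))
    counts = [0] * n_bins
    for t in spike_times_ms:
        if 0 <= t < duration_ms:
            counts[int(t // bin_width_ms)] += 1
    return counts

def euclidean_distance(resp_a, resp_b):
    """Euclidean distance between two binned response vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(resp_a, resp_b)))

# Hypothetical responses to two speech tokens, binned at 5 ms
resp_a = bin_spikes([3.0, 7.5, 12.1, 30.4], duration_ms=40, bin_width_ms=5)
resp_b = bin_spikes([8.2, 14.9, 22.0, 23.5], duration_ms=40, bin_width_ms=5)
d = euclidean_distance(resp_a, resp_b)
```

In this framing, varying `bin_width_ms` trades temporal precision against noise, which is why the abstract reports an optimal bin-width range.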
Affiliation(s)
- Anu Sabu
- Bionics Institute, Fitzroy, Victoria, Australia; Medical Bionics Department, The University of Melbourne, Parkville, Victoria, Australia.
- Dexter Irvine
- Bionics Institute, Fitzroy, Victoria, Australia; School of Psychological Sciences, Monash University, Clayton, Victoria, Australia
- David B Grayden
- Bionics Institute, Fitzroy, Victoria, Australia; Department of Biomedical Engineering and Graeme Clark Institute, The University of Melbourne, Melbourne, Victoria, Australia
- James Fallon
- Bionics Institute, Fitzroy, Victoria, Australia; Medical Bionics Department, The University of Melbourne, Parkville, Victoria, Australia
2.
Schryver B, Javier A, Choueiry J, Labelle A, Knott V, Jaworska N. Speech Mismatch Negativity (MMN) in Schizophrenia with Auditory Verbal Hallucinations. Clin EEG Neurosci 2025; 56:106-115. PMID: 39497433. DOI: 10.1177/15500594241292754.
Abstract
Auditory verbal hallucinations (AVH) are experienced by many individuals with schizophrenia (SZ), a neurodevelopmental disorder that encumbers the quality of life and psychosocial outcomes of those afflicted by it. While many hypotheses attempt to better define the etiology of AVHs in SZ, understanding of their neural profile, and of its modulation by current neuroleptics, remains limited. The mismatch negativity (MMN) is an event-related potential (ERP) measured from electroencephalographic (EEG) activity during the presentation of an auditory deviance-detection paradigm. The neural regions underlying the generation of the MMN include the primary auditory cortex and the prefrontal cortex, regions also found to be activated during the experience of AVHs. Decreased MMN amplitudes have been robustly noted in SZ patients during MMN tasks using auditory tones. However, MMN generation to speech stimuli has not been extensively examined in SZ, nor in relation to AVHs. The primary objective of this study was to examine the MMN to five speech-based deviants in SZ patients and healthy controls (HC). Second, we assessed the relationship between MMN features and AVH characteristics in 19 SZ patients and 21 HC. While AVH features did not correlate with measures of MMN, we found decreased MMN amplitudes to speech-based frequency and vowel-change deviants in SZ patients compared to HC, potentially reflecting deficiencies in basic speech processing mechanisms.
Affiliation(s)
- Aster Javier
- Department of Neuroscience, Carleton University, Ottawa, ON, Canada
- Joëlle Choueiry
- Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON, Canada
- Clinical EEG and Neuroimaging Laboratory, University of Ottawa Institute of Mental Health Research at The Royal, Ottawa, ON, Canada
- Alain Labelle
- Schizophrenia Unit, The Royal Ottawa Mental Health Centre, Ottawa, ON, Canada
- Verner Knott
- School of Psychology, University of Ottawa, Ottawa, ON, Canada
- Department of Neuroscience, Carleton University, Ottawa, ON, Canada
- Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON, Canada
- Clinical EEG and Neuroimaging Laboratory, University of Ottawa Institute of Mental Health Research at The Royal, Ottawa, ON, Canada
- Department of Psychiatry, University of Ottawa, Ottawa, ON, Canada
- Natalia Jaworska
- School of Psychology, University of Ottawa, Ottawa, ON, Canada
- Department of Neuroscience, Carleton University, Ottawa, ON, Canada
- Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON, Canada
- Clinical EEG and Neuroimaging Laboratory, University of Ottawa Institute of Mental Health Research at The Royal, Ottawa, ON, Canada
- Department of Psychiatry, University of Ottawa, Ottawa, ON, Canada
3.
Tamaoki Y, Pasapula V, Danaphongse TT, Reyes AR, Chandler CR, Borland MS, Riley JR, Carroll AM, Engineer CT. Pairing tones with vagus nerve stimulation improves brain stem responses to speech in the valproic acid model of autism. J Neurophysiol 2024; 132:1426-1436. PMID: 39319784. PMCID: PMC11573256. DOI: 10.1152/jn.00325.2024.
Abstract
Receptive language deficits and aberrant auditory processing are often observed in individuals with autism spectrum disorders (ASD). Symptoms associated with ASD are observed in rodents prenatally exposed to valproic acid (VPA), including deficits in speech sound discrimination ability. These perceptual difficulties are accompanied by changes in neural activity patterns: at both cortical and subcortical levels of the auditory pathway, VPA-exposed rats have impaired responses to speech sounds. A method to improve these neural deficits throughout the auditory pathway is therefore needed. The purpose of this study was to investigate the ability of vagus nerve stimulation (VNS) paired with sounds to restore degraded inferior colliculus (IC) responses in VPA-exposed rats. VNS paired with the speech sound "dad" was presented to one group of VPA-exposed rats 300 times per day for 20 days; another group of VPA-exposed rats was presented with VNS paired with multiple tone frequencies for 20 days. IC responses were recorded from female and male rats: 19 saline-exposed controls, 18 VPA-exposed rats with no VNS, 8 VNS-speech-paired VPA-exposed rats, and 7 VNS-tone-paired VPA-exposed rats. Pairing VNS with tones increased the IC response strength to speech sounds by 44% compared to VPA-exposed rats without VNS. In contrast, VNS-speech pairing significantly decreased the IC response to speech by 5% compared with VPA-exposed rats without VNS. The present research indicates that pairing VNS with tones improved sound processing in rats exposed to VPA and suggests that auditory processing can be improved through targeted plasticity. NEW & NOTEWORTHY Pairing vagus nerve stimulation (VNS) with sounds has improved auditory processing in the auditory cortex of normal-hearing rats and rat models of autism. This study tests the ability of VNS-sound pairing to restore auditory processing in the inferior colliculus (IC) of valproic acid (VPA)-exposed rats. Pairing VNS with tones significantly reversed the degraded sound processing in the IC of VPA-exposed rats. The findings provide evidence that auditory processing in rat models of autism can be improved through VNS.
Affiliation(s)
- Yuko Tamaoki
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, Texas, United States
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, Texas, United States
- Varun Pasapula
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, Texas, United States
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, Texas, United States
- Tanya T Danaphongse
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, Texas, United States
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, Texas, United States
- Alfonso R Reyes
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, Texas, United States
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, Texas, United States
- Collin R Chandler
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, Texas, United States
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, Texas, United States
- Michael S Borland
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, Texas, United States
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, Texas, United States
- Jonathan R Riley
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, Texas, United States
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, Texas, United States
- Alan M Carroll
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, Texas, United States
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, Texas, United States
- Crystal T Engineer
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, Texas, United States
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, Texas, United States
4.
Ajay EA, Thompson AC, Azees AA, Wise AK, Grayden DB, Fallon JB, Richardson RT. Combined-electrical optogenetic stimulation but not channelrhodopsin kinetics improves the fidelity of high rate stimulation in the auditory pathway in mice. Sci Rep 2024; 14:21028. PMID: 39251630. PMCID: PMC11385946. DOI: 10.1038/s41598-024-71712-9.
Abstract
Novel stimulation methods are needed to overcome the limitations of contemporary cochlear implants. Optogenetics is a technique that confers light sensitivity to neurons via the genetic introduction of light-sensitive ion channels. By controlling neural activity with light, auditory neurons can be activated with higher spatial precision. Understanding the behaviour of opsins at high stimulation rates is an important step towards their translation. To elucidate this, we compared the temporal characteristics of auditory nerve and inferior colliculus responses to optogenetic, electrical, and combined optogenetic-electrical stimulation in virally transduced mice expressing one of two channelrhodopsins, ChR2-H134R or ChIEF, at stimulation rates up to 400 pulses per second (pps). At 100 pps, optogenetic responses in ChIEF mice demonstrated higher fidelity, less change in latency, and greater response stability compared to responses in ChR2-H134R mice, but not at higher rates. Combined stimulation improved the response characteristics in both cohorts at 400 pps, although there was no consistent facilitation of electrical responses. Despite these results, day-long stimulation (up to 13 h) led to severe and non-recoverable deterioration of the optogenetic responses. The results of this study have significant implications for the translation of optogenetic-only and combined stimulation techniques for hearing loss.
Affiliation(s)
- Elise A Ajay
- Bionics Institute, Melbourne, Australia
- Department of Biomedical Engineering and Graeme Clark Institute, University of Melbourne, Melbourne, Australia
- Alex C Thompson
- Bionics Institute, Melbourne, Australia
- Department of Medical Bionics, University of Melbourne, Melbourne, Australia
- Ajmal A Azees
- Bionics Institute, Melbourne, Australia
- Department of Electrical and Biomedical Engineering, RMIT, Melbourne, Australia
- Andrew K Wise
- Bionics Institute, Melbourne, Australia
- Department of Medical Bionics, University of Melbourne, Melbourne, Australia
- David B Grayden
- Bionics Institute, Melbourne, Australia
- Department of Biomedical Engineering and Graeme Clark Institute, University of Melbourne, Melbourne, Australia
- James B Fallon
- Bionics Institute, Melbourne, Australia
- Department of Medical Bionics, University of Melbourne, Melbourne, Australia
- Rachael T Richardson
- Bionics Institute, Melbourne, Australia.
- Department of Medical Bionics, University of Melbourne, Melbourne, Australia.
5.
Tamaoki Y, Pasapula V, Chandler C, Borland MS, Olajubutu OI, Tharakan LS, Engineer CT. Degraded inferior colliculus responses to complex sounds in prenatally exposed VPA rats. J Neurodev Disord 2024; 16:2. PMID: 38166599. PMCID: PMC10759431. DOI: 10.1186/s11689-023-09514-9.
Abstract
BACKGROUND Individuals with autism spectrum disorders (ASD) often exhibit altered sensory processing and deficits in language development. Prenatal exposure to valproic acid (VPA) increases the risk for ASD and impairs both receptive and expressive language. Like individuals with ASD, rodents prenatally exposed to VPA exhibit degraded auditory cortical processing and abnormal neural activity to sounds. Disrupted neuronal morphology has been documented in earlier processing areas of the auditory pathway in VPA-exposed rodents, but there are no studies documenting early auditory pathway physiology. Therefore, the objective of this study was to characterize inferior colliculus (IC) responses to different sounds in rats prenatally exposed to VPA compared to saline-exposed rats. METHODS In vivo extracellular multiunit recordings from the inferior colliculus were collected in response to tones, speech sounds, and noise burst trains. RESULTS Our results indicate that the overall response to speech sounds was degraded in VPA-exposed rats compared to saline-exposed controls, but responses to tones and noise burst trains were unaltered. CONCLUSIONS These results are consistent with observations in individuals with autism that neural responses to complex sounds, like speech, are often altered, and lay the foundation for future studies of potential therapeutics to improve auditory processing in the VPA rat model of ASD.
Affiliation(s)
- Yuko Tamaoki
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX, 75080, USA.
- The University of Texas at Dallas, Texas Biomedical Device Center, 800 West Campbell Road BSB11, Richardson, TX, 75080, USA.
- Varun Pasapula
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX, 75080, USA
- The University of Texas at Dallas, Texas Biomedical Device Center, 800 West Campbell Road BSB11, Richardson, TX, 75080, USA
- Collin Chandler
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX, 75080, USA
- The University of Texas at Dallas, Texas Biomedical Device Center, 800 West Campbell Road BSB11, Richardson, TX, 75080, USA
- Michael S Borland
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX, 75080, USA
- The University of Texas at Dallas, Texas Biomedical Device Center, 800 West Campbell Road BSB11, Richardson, TX, 75080, USA
- Olayinka I Olajubutu
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX, 75080, USA
- The University of Texas at Dallas, Texas Biomedical Device Center, 800 West Campbell Road BSB11, Richardson, TX, 75080, USA
- Liza S Tharakan
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX, 75080, USA
- The University of Texas at Dallas, Texas Biomedical Device Center, 800 West Campbell Road BSB11, Richardson, TX, 75080, USA
- Crystal T Engineer
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX, 75080, USA
- The University of Texas at Dallas, Texas Biomedical Device Center, 800 West Campbell Road BSB11, Richardson, TX, 75080, USA
6.
Borland MS, Buell EP, Riley JR, Carroll AM, Moreno NA, Sharma P, Grasse KM, Buell JM, Kilgard MP, Engineer CT. Precise sound characteristics drive plasticity in the primary auditory cortex with VNS-sound pairing. Front Neurosci 2023; 17:1248936. PMID: 37732302. PMCID: PMC10508341. DOI: 10.3389/fnins.2023.1248936.
Abstract
Introduction Repeatedly pairing a tone with vagus nerve stimulation (VNS) alters frequency tuning across the auditory pathway. Pairing VNS with speech sounds selectively enhances the primary auditory cortex response to the paired sounds. It is not yet known how altering the speech sounds paired with VNS alters responses. In this study, we test the hypothesis that the sounds that are presented and paired with VNS will influence the neural plasticity observed following VNS-sound pairing. Methods To explore the relationship between acoustic experience and neural plasticity, responses were recorded from primary auditory cortex (A1) after VNS was repeatedly paired with the speech sounds 'rad' and 'lad' or paired with only the speech sound 'rad' while 'lad' was an unpaired background sound. Results Pairing both sounds with VNS increased the response strength and neural discriminability of the paired sounds in the primary auditory cortex. Surprisingly, pairing only 'rad' with VNS did not alter A1 responses. Discussion These results suggest that the specific acoustic contrasts associated with VNS can powerfully shape neural activity in the auditory pathway. Methods to promote plasticity in the central auditory system represent a new therapeutic avenue to treat auditory processing disorders. Understanding how different sound contrasts and neural activity patterns shape plasticity could have important clinical implications.
Affiliation(s)
- Michael S. Borland
- Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
- Elizabeth P. Buell
- Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
- Jonathan R. Riley
- Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
- Alan M. Carroll
- Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
- Nicole A. Moreno
- Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
- Pryanka Sharma
- Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
- Katelyn M. Grasse
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
- Erik Jonsson School of Engineering and Computer Science, The University of Texas at Dallas, Richardson, TX, United States
- John M. Buell
- Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
- Michael P. Kilgard
- Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
- Crystal T. Engineer
- Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
7.
Tamaoki Y, Pasapula V, Chandler C, Borland MS, Olajubutu OI, Tharakan LS, Engineer CT. Degraded inferior colliculus responses to complex sounds in prenatally exposed VPA rats. Research Square 2023:rs.3.rs-3168097. Preprint. PMID: 37577524. PMCID: PMC10418539. DOI: 10.21203/rs.3.rs-3168097/v1.
Abstract
Background Individuals with autism spectrum disorders (ASD) often exhibit altered sensory processing and deficits in language development. Prenatal exposure to valproic acid (VPA) increases the risk for ASD and impairs both receptive and expressive language. Like individuals with ASD, rodents prenatally exposed to VPA exhibit degraded auditory cortical processing and abnormal neural activity to sounds. Disrupted neuronal morphology has been documented in earlier processing areas of the auditory pathway in VPA-exposed rodents, but there are no studies documenting early auditory pathway physiology. Therefore, the objective of this study was to characterize inferior colliculus (IC) responses to different sounds in rats prenatally exposed to VPA compared to saline-exposed rats. Methods Neural recordings from the inferior colliculus were collected in response to tones, speech sounds, and noise burst trains. Results Our results indicate that the overall response to speech sounds was degraded in VPA-exposed rats compared with saline-exposed controls, but responses to tones and noise burst trains were unaltered. Conclusions These results are consistent with observations in individuals with autism that neural responses to complex sounds, like speech, are often altered, and lay the foundation for future studies of potential therapeutics to improve auditory processing in the VPA rat model of ASD.
Affiliation(s)
- Yuko Tamaoki
- The University of Texas at Dallas School of Behavioral and Brain Sciences
- Varun Pasapula
- The University of Texas at Dallas School of Behavioral and Brain Sciences
- Collin Chandler
- The University of Texas at Dallas School of Behavioral and Brain Sciences
- Michael S Borland
- The University of Texas at Dallas School of Behavioral and Brain Sciences
- Liza S Tharakan
- The University of Texas at Dallas School of Behavioral and Brain Sciences
- Crystal T Engineer
- The University of Texas at Dallas School of Behavioral and Brain Sciences
8.
Ham J, Yoo HJ, Kim J, Lee B. Vowel speech recognition from rat electroencephalography using long short-term memory neural network. PLoS One 2022; 17:e0270405. PMID: 35737731. PMCID: PMC9223328. DOI: 10.1371/journal.pone.0270405.
Abstract
Over the years, considerable research has been conducted to investigate the mechanisms of speech perception and recognition. Electroencephalography (EEG) is a powerful tool for identifying brain activity; therefore, it has been widely used to determine the neural basis of speech recognition. In particular, for the classification of speech recognition, deep learning-based approaches are in the spotlight because they can automatically learn and extract representative features through end-to-end learning. This study aimed to identify particular components that are potentially related to phoneme representation in the rat brain and to discriminate brain activity for each vowel stimulus on a single-trial basis using a bidirectional long short-term memory (BiLSTM) network and classical machine learning methods. Nineteen male Sprague-Dawley rats underwent microelectrode implantation surgery for recording EEG signals from the bilateral anterior auditory fields. Five vowel speech stimuli with distinct formant frequencies were chosen: /a/, /e/, /i/, /o/, and /u/. EEG recorded under randomly presented vowel stimuli was minimally preprocessed and normalized by a z-score transformation for use as input to the classifiers. The BiLSTM network showed the best performance among the classifiers, achieving an overall accuracy of 75.18%, an F1-score of 0.75, and a Cohen's κ of 0.68 under 10-fold cross-validation. These results indicate that LSTM layers can effectively model sequential data, such as EEG; hence, informative features can be derived through a BiLSTM trained with end-to-end learning without any additional hand-crafted feature extraction.
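The z-score normalization step mentioned in the abstract can be sketched as below. Applying it per epoch, and the variable names, are illustrative assumptions rather than the authors' exact preprocessing pipeline.

```python
import math

def z_score(signal):
    """Normalize a 1-D EEG signal to zero mean and unit variance."""
    n = len(signal)
    mean = sum(signal) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in signal) / n)
    return [(x - mean) / std for x in signal]

# Hypothetical single-channel EEG epoch (arbitrary units)
epoch = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
normalized = z_score(epoch)
```

Normalizing each input this way puts trials with different amplitude scales on a common footing before they are fed to a classifier.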
Affiliation(s)
- Jinsil Ham
- Department of Biomedical Science and Engineering (BMSE), Gwangju Institute of Science and Technology (GIST), Gwangju, South Korea
- Hyun-Joon Yoo
- Department of Physical Medicine and Rehabilitation, Korea University Anam Hospital, Korea University College of Medicine, Seoul, South Korea
- Jongin Kim
- Deepmedi Research Institute of Technology, Deepmedi Inc., Seoul, South Korea
- Boreom Lee
- Department of Biomedical Science and Engineering (BMSE), Gwangju Institute of Science and Technology (GIST), Gwangju, South Korea
9.
Centanni TM, Beach SD, Ozernov-Palchik O, May S, Pantazis D, Gabrieli JDE. Categorical perception and influence of attention on neural consistency in response to speech sounds in adults with dyslexia. Ann Dyslexia 2022; 72:56-78. PMID: 34495457. PMCID: PMC8901776. DOI: 10.1007/s11881-021-00241-1.
Abstract
Developmental dyslexia is a common neurodevelopmental disorder that is associated with alterations in the behavioral and neural processing of speech sounds, but the scope and nature of that association is uncertain. It has been proposed that more variable auditory processing could underlie some of the core deficits in this disorder. In the current study, magnetoencephalography (MEG) data were acquired from adults with and without dyslexia while they passively listened to or actively categorized tokens from a /ba/-/da/ consonant continuum. We observed no significant group difference in active categorical perception of this continuum in either of our two behavioral assessments. During passive listening, adults with dyslexia exhibited neural responses that were as consistent as those of typically reading adults in six cortical regions associated with auditory perception, language, and reading. However, they exhibited significantly less consistency in the left supramarginal gyrus, where greater inconsistency correlated significantly with worse decoding skills in the group with dyslexia. The group difference in the left supramarginal gyrus was evident only when neural data were binned with a high temporal resolution and was only significant during the passive condition. Interestingly, consistency significantly improved in both groups during active categorization versus passive listening. These findings suggest that adults with dyslexia exhibit typical levels of neural consistency in response to speech sounds with the exception of the left supramarginal gyrus and that this consistency increases during active versus passive perception of speech sounds similarly in the two groups.
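One common way to quantify the trial-to-trial neural consistency discussed in this abstract is the mean pairwise Pearson correlation across single-trial responses; this particular metric and the function names are illustrative assumptions, since the abstract does not specify the exact computation used.

```python
import math
from itertools import combinations

def pearson(x, y):
    """Pearson correlation between two equal-length response time courses."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def trial_consistency(trials):
    """Mean pairwise correlation over all trial pairs: higher = more consistent."""
    pairs = list(combinations(trials, 2))
    return sum(pearson(a, b) for a, b in pairs) / len(pairs)

# Hypothetical single-trial responses (3 trials x 5 time bins)
trials = [
    [0.1, 0.9, 0.4, 0.2, 0.1],
    [0.0, 1.0, 0.5, 0.2, 0.0],
    [0.2, 0.8, 0.3, 0.1, 0.2],
]
c = trial_consistency(trials)
```

Under a metric like this, "less consistency" in a region corresponds to lower mean pairwise correlation among that region's single-trial responses.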
Affiliation(s)
- T M Centanni
- McGovern Institute for Brain Research and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA.
- Department of Psychology, Texas Christian University, Fort Worth, TX, USA.
- S D Beach
- McGovern Institute for Brain Research and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA, USA
- O Ozernov-Palchik
- McGovern Institute for Brain Research and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- S May
- McGovern Institute for Brain Research and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Boston College, Boston, MA, USA
- D Pantazis
- McGovern Institute for Brain Research and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- J D E Gabrieli
- McGovern Institute for Brain Research and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
10.
Riley JR, Borland MS, Tamaoki Y, Skipton SK, Engineer CT. Auditory Brainstem Responses Predict Behavioral Deficits in Rats with Varying Levels of Noise-Induced Hearing Loss. Neuroscience 2021; 477:63-75. PMID: 34634426. DOI: 10.1016/j.neuroscience.2021.10.003.
Abstract
Intense noise exposure is a leading cause of hearing loss, which results in degraded speech sound discrimination ability, particularly in noisy environments. The development of an animal model of speech discrimination deficits due to noise-induced hearing loss (NIHL) would enable testing of potential therapies to improve speech sound processing. Rats can accurately detect and discriminate human speech sounds both in quiet and in background noise. Further, it is known that profound hearing loss results in functional deafness in rats. In this study, we generated rats with a range of impairments that model the wide range of hearing impairments observed in patients with NIHL. One month after noise exposure, we stratified rats into three distinct deficit groups based on their auditory brainstem response (ABR) thresholds. These groups exhibited markedly different behavioral outcomes across a range of tasks. Rats with moderate hearing loss (30 dB shifts in ABR threshold) were not impaired in speech sound detection or discrimination. Rats with severe hearing loss (55 dB shifts) were impaired at discriminating speech sounds in the presence of background noise. Rats with profound hearing loss (70 dB shifts) were unable to detect or discriminate speech sounds above chance-level performance. Across groups, ABR threshold accurately predicted behavioral performance on all tasks. This model of long-term impaired speech discrimination in noise, demonstrated by the severe group, mimics the most common clinical presentation of NIHL and represents a useful tool for developing and improving interventions that target restoration of hearing.
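The stratification described above maps ABR threshold shift onto a deficit group. A minimal sketch of that mapping, with cut-offs chosen to bracket the ~30/55/70 dB group shifts reported in the abstract (the paper's exact stratification criteria may differ):

```python
def classify_hearing_loss(threshold_shift_db):
    """Stratify a noise-exposed rat by its ABR threshold shift in dB.

    Cut-offs are illustrative assumptions chosen to separate the 30, 55,
    and 70 dB group means from the abstract; they are not taken from the paper.
    """
    if threshold_shift_db < 15:
        return "normal"
    if threshold_shift_db < 45:
        return "moderate"   # speech detection/discrimination intact
    if threshold_shift_db < 65:
        return "severe"     # discrimination impaired in background noise
    return "profound"       # at-chance detection and discrimination
```

The behavioral prediction in the abstract amounts to saying this single scalar (threshold shift) is sufficient to predict which deficit profile a rat will show.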
Affiliation(s)
- Jonathan R Riley, Michael S Borland, Yuko Tamaoki, Samantha K Skipton, Crystal T Engineer: The University of Texas at Dallas, Texas Biomedical Device Center, 800 West Campbell Road BSB11, Richardson, TX 75080, USA; The University of Texas at Dallas, School of Behavioral and Brain Sciences, 800 West Campbell Road BSB11, Richardson, TX 75080, USA

11
Mi L, Wang L, Li X, She S, Li H, Huang H, Zhang J, Liu Y, Zhao J, Ning Y, Zheng Y. Reduction of phonetic mismatch negativity may depict illness course and predict functional outcomes in schizophrenia. J Psychiatr Res 2021; 137:290-297. [PMID: 33735719 DOI: 10.1016/j.jpsychires.2021.02.065] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/17/2020] [Revised: 01/05/2021] [Accepted: 02/26/2021] [Indexed: 01/16/2023]
Abstract
Schizophrenia (SZ) is characterized by a series of cognitive impairments, including impaired automatic processing of basic auditory information, indexed by mismatch negativity (MMN). Existing studies mainly focus on MMN induced by deviants of single acoustic features, and relatively few studies have focused on complex acoustic stimuli, especially speech-induced MMN. Many cognitive impairments in SZ are related to speech function. Thus, the present study aimed to examine the reduction of phonetic MMN in SZ as a potential biomarker and its relationship with illness course and functional outcomes. Electroencephalogram (EEG) signals were recorded from 32 SZ patients and 32 healthy controls (HC) in a double oddball paradigm, with /da/ as the standard stimulus and /ba/ and /du/ as the deviant stimuli. MMN was computed for vowel and consonant deviants separately. Clinical symptoms were assessed using the Positive and Negative Syndrome Scale (PANSS). Illness duration and illness relapse were obtained by combining clinical interviews and electronic medical records. Functional outcomes were assessed using the Global Assessment of Functioning scale (GAF). Compared with HC, SZ patients showed lower amplitudes of phonetic MMN, especially for vowel deviants. In addition, the MMN amplitude for the vowel deviant was significantly correlated with illness duration, illness relapse, and functional outcomes among patients with SZ. These findings indicate that pre-attentive automatic phonetic processing in SZ is impaired for both consonants and vowels, while the vowel processing deficit may be the key speech processing deficit in SZ, one that could depict the illness course and predict functional outcomes.
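MMN is conventionally quantified from the deviant-minus-standard difference wave in a latency window. A minimal sketch of that computation (the 100-250 ms window and all names here are assumptions for illustration; studies pick the window from the grand-average difference wave):

```python
import numpy as np

def mismatch_negativity(standard_erp, deviant_erp, times, window=(0.1, 0.25)):
    """MMN amplitude: mean of the deviant-minus-standard difference wave
    within a latency window (seconds). Illustrative, not the paper's pipeline.

    standard_erp, deviant_erp: 1-D averaged ERPs, same length as times.
    times: sample times in seconds relative to stimulus onset.
    """
    diff = np.asarray(deviant_erp) - np.asarray(standard_erp)
    mask = (times >= window[0]) & (times <= window[1])
    return float(diff[mask].mean())
```

A reduced (less negative) value of this quantity for vowel deviants is what "lower amplitudes of phonetic MMN" refers to.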
Affiliation(s)
- Lin Mi, Le Wang, Xuanzi Li, Shenglin She, Haijing Li, Huiyan Huang, Yi Liu, Yingjun Zheng: The Affiliated Brain Hospital of Guangzhou Medical University, Guangzhou, 510370, China
- Jinfang Zhang: School of Psychology, South China Normal University, Guangzhou, 510631, China
- Jingping Zhao: The Affiliated Brain Hospital of Guangzhou Medical University, Guangzhou, 510370, China; Mental Health Institute of the Second Xiangya Hospital, Central South University, Chinese National Clinical Research Center on Mental Disorders, Chinese National Technology Institute on Mental Disorders, Hunan Key Laboratory of Psychiatry and Mental Health, Changsha, Hunan, 410011, China
- Yuping Ning: The Affiliated Brain Hospital of Guangzhou Medical University, Guangzhou, 510370, China; The First School of Clinical Medicine, Southern Medical University, Guangzhou, Guangdong, 510515, China

12
Toro JM, Crespo-Bojorque P. Arc-shaped pitch contours facilitate item recognition in non-human animals. Cognition 2021; 213:104614. [PMID: 33558018 DOI: 10.1016/j.cognition.2021.104614] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2020] [Revised: 01/11/2021] [Accepted: 01/26/2021] [Indexed: 10/22/2022]
Abstract
Acoustic changes linked to natural prosody are a key source of information about the organization of language. Both human infants and adults readily take advantage of such changes to discover and memorize linguistic patterns. Do they do so because our brain is efficiently wired to specifically process linguistic stimuli? Or are we co-opting, for language acquisition, more general principles that might be inherited from our animal ancestors? Here, we address this question by exploring whether other species profit from prosody to better process acoustic sequences. More specifically, we test whether the arc-shaped pitch contours that define natural prosody might facilitate item recognition and memorization in rats. In two experiments, we presented the rats with nonsense words carrying flat, natural, inverted and random prosodic contours. We observed that the animals correctly recognized the familiarization words only when arc-shaped pitch contours were implemented over them. Our results suggest that other species might also benefit from prosody for the memorization of items in a sequence. Such a capacity seems to be rooted in general principles of how biological sounds are produced and processed.
Affiliation(s)
- Juan M Toro: Institució Catalana de Recerca i Estudis Avançats (ICREA), Pg. Lluis Companys, 23, 08019 Barcelona, Spain; Universitat Pompeu Fabra, C. Ramon Trias Fargas, 25-27, 08005 Barcelona, Spain

13
Egorova MA, Akimov AG, Khorunzhii GD, Ehret G. Frequency response areas of neurons in the mouse inferior colliculus. III. Time-domain responses: Constancy, dynamics, and precision in relation to spectral resolution, and perception in the time domain. PLoS One 2020; 15:e0240853. [PMID: 33104718 PMCID: PMC7588072 DOI: 10.1371/journal.pone.0240853] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2020] [Accepted: 10/04/2020] [Indexed: 11/23/2022] Open
Abstract
The auditory midbrain (central nucleus of the inferior colliculus, ICC) receives multiple brainstem projections and recodes auditory information for perception in higher centers. Many neural response characteristics are represented in gradients (maps) in the three-dimensional ICC space. Map overlap suggests that neurons, depending on their ICC location, encode information in several domains simultaneously by different aspects of their responses. Thus, interdependence of coding, e.g. in the spectral and temporal domains, seems to be a general ICC principle. Studies on the covariation of response properties and its possible impact on sound perception are, however, rare. Here, we evaluated tone-evoked single-neuron activity from the mouse ICC and compared shapes of excitatory frequency-response areas (including strength and shape of inhibition within and around the excitatory area; classes I, II, III) with types of temporal response patterns and first-spike response latencies. Analyses showed covariation of sharpness of frequency tuning with constancy and precision of responding to tone onsets. The highest precision (first-spike latency jitter < 1 ms) and stable phasic responses throughout frequency-response areas were mainly a property of class III neurons with broad frequency tuning, least influenced by inhibition. Class II neurons with narrow frequency tuning and dominating inhibitory influence were unsuitable for high-precision time-domain coding. The ICC center seems specialized for high spectral resolution (class II presence), the lateral parts for consistently precise responding to sound onsets (class III presence). Further, the variation of tone-response latencies in the frequency-response areas of individual neurons with phasic, tonic, phasic-tonic, or pauser responses gave rise to the definition of a core area, which represented a time window of about 20 ms from tone onset for tone-onset responding of the whole ICC. This time window corresponds to the roughly 20 ms shortest time interval that has been found to be critical in several auditory perceptual tasks in humans and mice.
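The precision measure above, first-spike latency jitter, is the across-trial standard deviation of the latency of the first spike after tone onset. A minimal sketch (an assumed reading of the measure, not the paper's analysis code):

```python
import numpy as np

def first_spike_jitter(spike_trains, tone_onset=0.0):
    """First-spike latency jitter: std across trials of the first spike
    time after tone onset (seconds). Trials with no post-onset spike are
    skipped. Illustrative implementation only.

    spike_trains: list of per-trial arrays/lists of spike times (seconds).
    """
    latencies = []
    for spikes in spike_trains:
        after = [t for t in spikes if t >= tone_onset]
        if after:
            latencies.append(min(after) - tone_onset)
    return float(np.std(latencies))
```

Under this definition, the "< 1 ms jitter" criterion for highly precise onset responders is simply `first_spike_jitter(trials) < 0.001`.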
Affiliation(s)
- Marina A. Egorova, Alexander G. Akimov, Gleb D. Khorunzhii: Sechenov Institute of Evolutionary Physiology and Biochemistry, Russian Academy of Sciences, St. Petersburg, Russia
- Günter Ehret: Institute of Neurobiology, University of Ulm, Ulm, Germany

14
The role of linguistic experience in the development of the consonant bias. Anim Cogn 2020; 24:419-431. [PMID: 33052544 DOI: 10.1007/s10071-020-01436-6] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2020] [Revised: 09/18/2020] [Accepted: 09/26/2020] [Indexed: 10/23/2022]
Abstract
Consonants and vowels play different roles in speech perception: listeners rely more heavily on consonant information than on vowel information when distinguishing between words. This reliance on consonants for word identification is the consonant bias (Nespor et al., Ling 2:203-230, 2003). Several factors modulate infants' development of the consonant bias, including fine-grained temporal processing ability and native language exposure [for review, see Nazzi et al. (Curr Direct Psychol Sci 25:291-296, 2016)]. A rat model demonstrated that mature fine-grained temporal processing alone cannot account for consonant bias emergence; linguistic exposure is also necessary (Bouchon and Toro, An Cog 22:839-850, 2019). This study tested domestic dogs, which have similarly fine-grained temporal processing but more language exposure than rats, to assess whether a minimal lexicon and a small degree of regular linguistic exposure can allow for consonant bias development. Dogs demonstrated a vowel bias rather than a consonant bias, preferring their own name over a vowel-mispronounced version of their name, but not over a consonant-mispronounced version. This is the pattern seen in young infants (Bouchon et al., Dev Sci 18:587-598, 2015) and rats (Bouchon and Toro, An Cog 22:839-850, 2019). In a follow-up study, dogs treated a consonant-mispronounced version of their name similarly to their actual name, further suggesting that dogs do not treat consonant differences as meaningful for word identity. These results support the findings of Bouchon and Toro (An Cog 22:839-850, 2019), suggesting that there may be a default preference for vowel information over consonant information when identifying word forms, and that the consonant bias may be a human-exclusive tool for language learning.
15
Adcock KS, Chandler C, Buell EP, Solorzano BR, Loerwald KW, Borland MS, Engineer CT. Vagus nerve stimulation paired with tones restores auditory processing in a rat model of Rett syndrome. Brain Stimul 2020; 13:1494-1503. [PMID: 32800964 DOI: 10.1016/j.brs.2020.08.006] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2020] [Revised: 07/26/2020] [Accepted: 08/07/2020] [Indexed: 11/16/2022] Open
Abstract
BACKGROUND: Rett syndrome is a rare neurological disorder associated with a mutation in the X-linked gene MECP2. This disorder mainly affects females, who typically have seemingly normal early development followed by a regression of acquired skills. The rodent Mecp2 model exhibits many of the classic neural abnormalities and behavioral deficits observed in individuals with Rett syndrome. Similar to individuals with Rett syndrome, both auditory discrimination ability and auditory cortical responses are impaired in heterozygous Mecp2 rats. The development of therapies that can enhance plasticity in auditory networks and improve auditory processing has the potential to impact the lives of individuals with Rett syndrome. Evidence suggests that precisely timed vagus nerve stimulation (VNS) paired with sound presentation can drive robust neuroplasticity in auditory networks and enhance the benefits of auditory therapy.
OBJECTIVE: The aim of this study was to investigate the ability of VNS paired with tones to restore auditory processing in Mecp2 transgenic rats.
METHODS: Seventeen female heterozygous Mecp2 rats and 8 female wild-type (WT) littermates were used in this study. The rats were exposed to multiple tone frequencies paired with VNS 300 times per day for 20 days. Auditory cortex responses were then examined following VNS-tone pairing therapy or no therapy.
RESULTS: Our results indicate that Mecp2 mutation alters auditory cortex responses to sounds compared to WT controls. VNS-tone pairing in Mecp2 rats improves the cortical response strength to both tones and speech sounds compared to untreated Mecp2 rats. Additionally, VNS-tone pairing increased the information contained in the neural response that can be used to discriminate between different consonant sounds.
CONCLUSION: These results demonstrate that VNS-sound pairing may represent a strategy to enhance auditory function in individuals with Rett syndrome.
Affiliation(s)
- Katherine S Adcock, Elizabeth P Buell, Michael S Borland: The University of Texas at Dallas, Texas Biomedical Device Center, 800 West Campbell Road BSB11, Richardson, TX, 75080, USA; The University of Texas at Dallas, School of Behavioral and Brain Sciences, 800 West Campbell Road BSB11, Richardson, TX, 75080, USA
- Collin Chandler: The University of Texas at Dallas, Texas Biomedical Device Center, 800 West Campbell Road BSB11, Richardson, TX, 75080, USA; The University of Texas at Dallas, Erik Jonsson School of Engineering and Computer Science, 800 West Campbell Road BSB11, Richardson, TX, 75080, USA
- Bleyda R Solorzano, Kristofer W Loerwald: The University of Texas at Dallas, Texas Biomedical Device Center, 800 West Campbell Road BSB11, Richardson, TX, 75080, USA
- Crystal T Engineer: The University of Texas at Dallas, Texas Biomedical Device Center, 800 West Campbell Road BSB11, Richardson, TX, 75080, USA; The University of Texas at Dallas, School of Behavioral and Brain Sciences, 800 West Campbell Road BSB11, Richardson, TX, 75080, USA; The University of Texas at Dallas, Erik Jonsson School of Engineering and Computer Science, 800 West Campbell Road BSB11, Richardson, TX, 75080, USA

16
Tang K, DeMille MMC, Frijters JC, Gruen JR. DCDC2 READ1 regulatory element: how temporal processing differences may shape language. Proc Biol Sci 2020; 287:20192712. [PMID: 32486976 PMCID: PMC7341942 DOI: 10.1098/rspb.2019.2712] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022] Open
Abstract
Classic linguistic theory ascribes language change and diversity to population migrations, conquests, and geographical isolation, with the assumption that human populations have equivalent language processing abilities. We hypothesize that spectral and temporal characteristics make some consonant manners vulnerable to differences in temporal precision associated with specific population allele frequencies. To test this hypothesis, we modelled the association between RU1-1 alleles of DCDC2 and manner of articulation in 51 populations spanning five continents, adjusting for geographical proximity and for genetic and linguistic relatedness. RU1-1 alleles, acting through increased expression of DCDC2, appear to increase auditory processing precision in a way that enhances stop-consonant discrimination, favouring retention in some populations and loss in others. These findings enhance classical linguistic theories by adding a genetic dimension, which, until recently, has not been considered a significant catalyst for language change.
Affiliation(s)
- Kevin Tang: Department of Linguistics, University of Florida, Gainesville, FL 32611-5454, USA
- Mellissa M C DeMille: Department of Pediatrics, Yale University School of Medicine, New Haven, CT 06520, USA
- Jan C Frijters: Child and Youth Studies, Brock University, St. Catharines, Ontario, Canada L2S 3A1
- Jeffrey R Gruen: Department of Pediatrics, Yale University School of Medicine, New Haven, CT 06520, USA; Department of Genetics, Yale University School of Medicine, New Haven, CT 06520, USA

17
O’Sullivan C, Weible AP, Wehr M. Disruption of Early or Late Epochs of Auditory Cortical Activity Impairs Speech Discrimination in Mice. Front Neurosci 2020; 13:1394. [PMID: 31998064 PMCID: PMC6965026 DOI: 10.3389/fnins.2019.01394] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2019] [Accepted: 12/10/2019] [Indexed: 11/22/2022] Open
Abstract
Speech evokes robust activity in auditory cortex, which contains information over a wide range of spatial and temporal scales. It remains unclear which components of these neural representations are causally involved in the perception and processing of speech sounds. Here we compared the relative importance of early and late speech-evoked activity for consonant discrimination. We trained mice to discriminate the initial consonants in spoken words, and then tested the effect of optogenetically suppressing different temporal windows of speech-evoked activity in auditory cortex. We found that both early and late suppression disrupted performance equivalently. These results suggest that mice are impaired at recognizing either type of disrupted representation because it differs from those learned in training.
Affiliation(s)
- Conor O’Sullivan: Institute of Neuroscience, University of Oregon, Eugene, OR, United States; Department of Biology, University of Oregon, Eugene, OR, United States
- Aldis P. Weible: Institute of Neuroscience, University of Oregon, Eugene, OR, United States
- Michael Wehr: Institute of Neuroscience, University of Oregon, Eugene, OR, United States; Department of Psychology, University of Oregon, Eugene, OR, United States

18
Burghard A, Voigt MB, Kral A, Hubka P. Categorical processing of fast temporal sequences in the guinea pig auditory brainstem. Commun Biol 2019; 2:265. [PMID: 31341964 PMCID: PMC6642126 DOI: 10.1038/s42003-019-0472-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2018] [Accepted: 05/23/2019] [Indexed: 11/21/2022] Open
Abstract
Discrimination of temporal sequences is crucial for auditory object recognition, phoneme categorization and speech understanding. The present study shows that, in guinea pigs, auditory brainstem responses (ABR) to pairs of noise bursts separated by a short gap can be classified into two distinct groups based on the ratio of gap duration to initial noise burst duration. If this ratio was smaller than 0.5, the ABR to the trailing noise burst was strongly suppressed. On the other hand, if the initial noise burst duration was short compared to the gap duration (a ratio greater than 0.5), a release from suppression and/or an enhancement of the trailing ABR was observed. Consequently, initial noise bursts of shorter duration caused a faster transition between response classes than initial noise bursts of longer duration. We propose that these findings represent a neural correlate of subcortical categorical preprocessing of temporal sequences in the auditory system.
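The categorical rule the abstract describes is a single threshold on the gap/initial-burst duration ratio. A minimal sketch of that decision rule (the handling of a ratio exactly at 0.5 is our assumption; the abstract only describes the two sides of the boundary):

```python
def trailing_response_class(burst_dur, gap_dur, ratio_criterion=0.5):
    """Predict the ABR class for the trailing noise burst from the ratio of
    gap duration to initial burst duration, per the abstract's grouping.

    ratio < criterion  -> trailing ABR strongly suppressed
    ratio >= criterion -> release from suppression and/or enhancement
    (boundary handling at exactly 0.5 is an illustrative assumption).
    Durations may be in any consistent unit (e.g. seconds).
    """
    ratio = gap_dur / burst_dur
    return "suppressed" if ratio < ratio_criterion else "released"
```

Because the criterion is a ratio, a shorter initial burst reaches the "released" side at a shorter absolute gap, which is the "faster transition" noted above.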
Affiliation(s)
- Alice Burghard: Institute of Audioneurotechnology & Department of Experimental Otology, ENT Clinics, Hannover Medical School, Hannover, D-30625 Germany; Department of Neuroscience, University of Connecticut Health Center, Farmington, CT 06030 USA
- Mathias Benjamin Voigt, Andrej Kral, Peter Hubka: Institute of Audioneurotechnology & Department of Experimental Otology, ENT Clinics, Hannover Medical School, Hannover, D-30625 Germany

19
Borland MS, Vrana WA, Moreno NA, Fogarty EA, Buell EP, Vanneste S, Kilgard MP, Engineer CT. Pairing vagus nerve stimulation with tones drives plasticity across the auditory pathway. J Neurophysiol 2019; 122:659-671. [PMID: 31215351 DOI: 10.1152/jn.00832.2018] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023] Open
Abstract
Previous studies have demonstrated that pairing vagus nerve stimulation (VNS) with sounds can enhance the primary auditory cortex (A1) response to the paired sound. The neural response to sounds following VNS-sound pairing in other subcortical and cortical auditory fields has not been documented. We predicted that VNS-tone pairing would increase neural responses to the paired tone frequency across the auditory pathway. In this study, we paired VNS with the presentation of a 9-kHz tone 300 times a day for 20 days. We recorded neural responses to tones from 2,950 sites in the inferior colliculus (IC), A1, anterior auditory field (AAF), and posterior auditory field (PAF) 24 h after the last pairing session in anesthetized rats. We found that VNS-tone pairing increased the percentage of IC, A1, AAF, and PAF that responds to the paired tone frequency. Across all tested auditory fields, the response strength to tones was strengthened in VNS-tone paired rats compared with control rats. VNS-tone pairing reduced spontaneous activity, frequency selectivity, and response threshold across the auditory pathway. This is the first study to document both cortical and subcortical plasticity following VNS-sound pairing. Our findings suggest that VNS paired with sound presentation is an effective method to enhance auditory processing.NEW & NOTEWORTHY Previous studies have reported primary auditory cortex plasticity following vagus nerve stimulation (VNS) paired with a sound. This study extends previous findings by documenting that fields across the auditory pathway are altered by VNS-tone pairing. VNS-tone pairing increases the percentage of each field that responds to the paired tone frequency. This is the first study to document both cortical and subcortical plasticity following VNS-sound pairing.
Affiliation(s)
- Michael S Borland, Nicole A Moreno, Elizabeth P Buell, Sven Vanneste, Michael P Kilgard, Crystal T Engineer: The University of Texas at Dallas, Texas Biomedical Device Center, Richardson, Texas; The University of Texas at Dallas, School of Behavioral and Brain Sciences, Richardson, Texas
- Will A Vrana, Elizabeth A Fogarty: The University of Texas at Dallas, School of Behavioral and Brain Sciences, Richardson, Texas

20
Is the consonant bias specifically human? Long-Evans rats encode vowels better than consonants in words. Anim Cogn 2019; 22:839-850. [PMID: 31222546 DOI: 10.1007/s10071-019-01280-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2018] [Revised: 05/21/2019] [Accepted: 06/11/2019] [Indexed: 10/26/2022]
Abstract
In natural languages, vowels tend to convey structures (syntax, prosody) while consonants are more important lexically. The consonant bias, which is the tendency to rely more on consonants than on vowels to process words, is well attested in human adults and infants after the first year of life. Is the consonant bias based on evolutionarily ancient mechanisms, potentially present in other species? The current study investigated this issue in a species phylogenetically distant from humans: Long-Evans rats. During training, the animals were presented with four natural word-forms (e.g., mano, "hand"). We then compared their responses to novel words carrying either a consonant (pano) or a vowel change (meno). Results show that the animals were less disrupted by consonantal alterations than by vocalic alterations of words. That is, word recognition was more affected by the alteration of a vowel than a consonant. Together with previous findings in very young human infants, this reliance on vocalic information we observe in rats suggests that the emergence of the consonant bias may require a combination of vocal, cognitive and auditory skills that rodents do not seem to possess.
21
Steadman MA, Sumner CJ. Changes in Neuronal Representations of Consonants in the Ascending Auditory System and Their Role in Speech Recognition. Front Neurosci 2018; 12:671. [PMID: 30369863 PMCID: PMC6194309 DOI: 10.3389/fnins.2018.00671] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2018] [Accepted: 09/06/2018] [Indexed: 11/25/2022] Open
Abstract
A fundamental task of the ascending auditory system is to produce representations that facilitate the recognition of complex sounds. This is particularly challenging in the context of acoustic variability, such as that between different talkers producing the same phoneme. These representations are transformed as information is propagated throughout the ascending auditory system from the inner ear to the auditory cortex (AI). Investigating these transformations and their role in speech recognition is key to understanding hearing impairment and the development of future clinical interventions. Here, we obtained neural responses to an extensive set of natural vowel-consonant-vowel phoneme sequences, each produced by multiple talkers, in three stages of the auditory processing pathway. Auditory nerve (AN) representations were simulated using a model of the peripheral auditory system and extracellular neuronal activity was recorded in the inferior colliculus (IC) and primary auditory cortex (AI) of anaesthetized guinea pigs. A classifier was developed to examine the efficacy of these representations for recognizing the speech sounds. Individual neurons convey progressively less information from AN to AI. Nonetheless, at the population level, representations are sufficiently rich to facilitate recognition of consonants with a high degree of accuracy at all stages indicating a progression from a dense, redundant representation to a sparse, distributed one. We examined the timescale of the neural code for consonant recognition and found that optimal timescales increase throughout the ascending auditory system from a few milliseconds in the periphery to several tens of milliseconds in the cortex. Despite these longer timescales, we found little evidence to suggest that representations up to the level of AI become increasingly invariant to across-talker differences. 
Instead, our results support the idea that the role of the subcortical auditory system is one of dimensionality expansion, which could provide a basis for flexible classification of arbitrary speech sounds.
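The template-based read-out described in this abstract can be illustrated with a toy sketch: classify a spike train by Euclidean distance to per-stimulus template responses, with the bin width setting the analysis timescale. This is an illustrative simplification, not the study's classifier; the function names and the toy spike times are invented for the example.

```python
import math

def bin_spikes(spike_times, duration, bin_width):
    """Bin spike times (seconds) into a rate histogram; the bin width
    sets the analysis timescale of the neural code."""
    n_bins = int(math.ceil(duration / bin_width))
    counts = [0] * n_bins
    for t in spike_times:
        if 0 <= t < duration:
            counts[int(t // bin_width)] += 1
    return counts

def euclidean(a, b):
    """Euclidean distance between two equal-length histograms."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(response, templates, duration, bin_width):
    """Assign a response spike train to the stimulus whose template
    response is nearest in Euclidean distance."""
    binned = bin_spikes(response, duration, bin_width)
    best_label, best_dist = None, float("inf")
    for label, template_times in templates.items():
        d = euclidean(binned, bin_spikes(template_times, duration, bin_width))
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label
```

Sweeping `bin_width` over a range of values is one way to probe which timescale yields the best discrimination, in the spirit of the timescale analysis above.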
Affiliation(s)
- Mark A. Steadman
- MRC Institute of Hearing Research, School of Medicine, The University of Nottingham, Nottingham, United Kingdom
- Department of Bioengineering, Imperial College London, London, United Kingdom
- Christian J. Sumner
- MRC Institute of Hearing Research, School of Medicine, The University of Nottingham, Nottingham, United Kingdom
22
Peng F, Innes-Brown H, McKay CM, Fallon JB, Zhou Y, Wang X, Hu N, Hou W. Temporal Coding of Voice Pitch Contours in Mandarin Tones. Front Neural Circuits 2018; 12:55. [PMID: 30087597] [PMCID: PMC6066958] [DOI: 10.3389/fncir.2018.00055]
Abstract
Accurate perception of time-variant pitch is important for speech recognition, particularly for tonal languages such as Mandarin, in which different lexical tones convey different semantic information. Previous studies reported that the auditory nerve and cochlear nucleus can encode different pitches through phase-locked neural activities. However, little is known about how the inferior colliculus (IC) encodes the time-variant periodicity pitch of natural speech. In this study, the Mandarin syllable /ba/ pronounced with each of the four lexical tones (flat, rising, falling-then-rising, and falling) was used as the stimulus set. Local field potentials (LFPs) and single neuron activity were simultaneously recorded from 90 sites within the contralateral IC of six urethane-anesthetized and decerebrate guinea pigs in response to the four stimuli. Analysis of the temporal information of the LFPs showed that 93% of the LFPs exhibited robust encoding of periodicity pitch. Pitch strength of LFPs derived from the autocorrelogram was significantly (p < 0.001) stronger for rising tones than for flat and falling tones. Pitch strength also increased significantly (p < 0.05) with characteristic frequency (CF). On the other hand, only 47% (42 of 90) of single neuron activities were significantly synchronized to the fundamental frequency of the stimulus, suggesting that the temporal spiking patterns of single IC neurons cannot robustly encode the time-variant periodicity pitch of speech. The difference between the number of LFPs and single neurons that encode the time-variant F0 voice pitch supports the notion of a transition at the level of the IC from direct temporal coding in the spike trains of individual neurons to other forms of neural representation.
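The autocorrelogram-derived pitch-strength measure lends itself to a compact illustration. The sketch below is a deliberate simplification of that idea (not the study's analysis pipeline): it scores periodicity as the energy-normalized autocorrelation of a waveform at the lag of a candidate fundamental frequency.

```python
import math

def autocorr_at_lag(x, lag):
    """Autocorrelation at one lag, normalized by the zero-lag energy
    so a perfectly periodic signal scores close to 1."""
    energy = sum(v * v for v in x)
    if energy == 0 or lag >= len(x):
        return 0.0
    return sum(x[i] * x[i + lag] for i in range(len(x) - lag)) / energy

def pitch_strength(x, fs, f0):
    """Pitch strength at a candidate F0 (Hz): the normalized
    autocorrelation at the lag of one F0 period."""
    return autocorr_at_lag(x, round(fs / f0))
```

A 100 Hz tone sampled at 8 kHz scores high at a 100 Hz candidate F0 and much lower at an unrelated candidate, which is the sense in which the measure quantifies periodicity-pitch encoding strength.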
Affiliation(s)
- Fei Peng
- Key Laboratory of Biorheological Science and Technology of Ministry of Education, Bioengineering College of Chongqing University, Chongqing, China
- Collaborative Innovation Center for Brain Science, Chongqing University, Chongqing, China
- Hamish Innes-Brown
- Bionics Institute, East Melbourne, VIC, Australia
- Department of Medical Bionics, University of Melbourne, Melbourne, VIC, Australia
- Colette M. McKay
- Bionics Institute, East Melbourne, VIC, Australia
- Department of Medical Bionics, University of Melbourne, Melbourne, VIC, Australia
- James B. Fallon
- Bionics Institute, East Melbourne, VIC, Australia
- Department of Medical Bionics, University of Melbourne, Melbourne, VIC, Australia
- Department of Otolaryngology, University of Melbourne, Melbourne, VIC, Australia
- Yi Zhou
- Chongqing Key Laboratory of Neurobiology, Department of Neurobiology, Third Military Medical University, Chongqing, China
- Xing Wang
- Key Laboratory of Biorheological Science and Technology of Ministry of Education, Bioengineering College of Chongqing University, Chongqing, China
- Chongqing Medical Electronics Engineering Technology Research Center, Chongqing University, Chongqing, China
- Ning Hu
- Key Laboratory of Biorheological Science and Technology of Ministry of Education, Bioengineering College of Chongqing University, Chongqing, China
- Collaborative Innovation Center for Brain Science, Chongqing University, Chongqing, China
- Wensheng Hou
- Key Laboratory of Biorheological Science and Technology of Ministry of Education, Bioengineering College of Chongqing University, Chongqing, China
- Collaborative Innovation Center for Brain Science, Chongqing University, Chongqing, China
- Chongqing Medical Electronics Engineering Technology Research Center, Chongqing University, Chongqing, China
23
White-Schwoch T, Nicol T, Warrier CM, Abrams DA, Kraus N. Individual Differences in Human Auditory Processing: Insights From Single-Trial Auditory Midbrain Activity in an Animal Model. Cereb Cortex 2018; 27:5095-5115. [PMID: 28334187] [DOI: 10.1093/cercor/bhw293]
Abstract
Auditory-evoked potentials are classically defined as the summation of synchronous firing along the auditory neuraxis. Converging evidence supports a model whereby timing jitter in neural coding compromises listening and causes variable scalp-recorded potentials. Yet the intrinsic noise of human scalp recordings precludes a full understanding of the biological origins of individual differences in listening skills. To delineate the mechanisms contributing to these phenomena, in vivo extracellular activity was recorded from the inferior colliculus of guinea pigs in response to speech in quiet and in noise. Here we show that trial-by-trial timing jitter is a mechanism contributing to auditory response variability. Identical variability patterns were observed in scalp recordings in human children, implicating jittered timing as a factor underlying reduced coding of dynamic speech features and speech in noise. Moreover, intertrial variability in human listeners is tied to language development. Together, these findings suggest that variable timing in the inferior colliculus blurs the neural coding of speech in noise, and they point to a consequence of this timing jitter for human behavior. These results hint both at the mechanisms underlying speech processing in general and at what may go awry in individuals with listening difficulties.
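The jitter mechanism in this abstract is easy to demonstrate numerically: averaging identical single-trial responses whose latencies vary from trial to trial shrinks and smears the evoked average. The sketch below uses a synthetic sinusoidal "response" with illustrative parameter values, not the study's recordings.

```python
import math
import random

def averaged_response(n_trials, jitter_sd_s, fs=1000, f=40, n=100, seed=1):
    """Average n_trials copies of a sinusoidal evoked response whose
    latency is jittered by a zero-mean Gaussian (sd in seconds)."""
    rng = random.Random(seed)
    avg = [0.0] * n
    for _ in range(n_trials):
        shift = rng.gauss(0.0, jitter_sd_s)
        for i in range(n):
            avg[i] += math.sin(2 * math.pi * f * (i / fs - shift))
    return [v / n_trials for v in avg]

def peak_amplitude(waveform):
    """Peak absolute amplitude of the averaged waveform."""
    return max(abs(v) for v in waveform)
```

With zero jitter the trials line up and the average keeps its full amplitude; 10 ms of latency jitter at a 40 Hz response frequency scatters the trial phases and collapses the averaged response, which is the sense in which timing jitter "blurs" an evoked potential.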
Affiliation(s)
- Travis White-Schwoch
- Auditory Neuroscience Laboratory (www.brainvolts.northwestern.edu) & Department of Communication Sciences, Northwestern University, Evanston, IL, 60208, USA
- Trent Nicol
- Auditory Neuroscience Laboratory (www.brainvolts.northwestern.edu) & Department of Communication Sciences, Northwestern University, Evanston, IL, 60208, USA
- Catherine M Warrier
- Auditory Neuroscience Laboratory (www.brainvolts.northwestern.edu) & Department of Communication Sciences, Northwestern University, Evanston, IL, 60208, USA
- Daniel A Abrams
- Auditory Neuroscience Laboratory (www.brainvolts.northwestern.edu) & Department of Communication Sciences, Northwestern University, Evanston, IL, 60208, USA
- Stanford Cognitive & Systems Neuroscience Laboratory, Stanford University, Palo Alto, CA, 94304, USA
- Nina Kraus
- Auditory Neuroscience Laboratory (www.brainvolts.northwestern.edu) & Department of Communication Sciences, Northwestern University, Evanston, IL, 60208, USA
- Department of Neurobiology & Physiology, Northwestern University, Evanston, IL, 60208, USA
- Department of Otolaryngology, Northwestern University, Chicago, IL, 60611, USA
24
Brainstem-cortical functional connectivity for speech is differentially challenged by noise and reverberation. Hear Res 2018; 367:149-160. [PMID: 29871826] [DOI: 10.1016/j.heares.2018.05.018]
Abstract
Everyday speech perception is challenged by external acoustic interferences that hinder verbal communication. Here, we directly compared how different levels of the auditory system (brainstem vs. cortex) code speech and how their neural representations are affected by two acoustic stressors: noise and reverberation. We recorded multichannel (64 ch) brainstem frequency-following responses (FFRs) and cortical event-related potentials (ERPs) simultaneously in normal hearing individuals to speech sounds presented in mild and moderate levels of noise and reverb. We matched signal-to-noise and direct-to-reverberant ratios to equate the severity between classes of interference. Electrode recordings were parsed into source waveforms to assess the relative contribution of region-specific brain areas [i.e., brainstem (BS), primary auditory cortex (A1), inferior frontal gyrus (IFG)]. Results showed that reverberation was less detrimental to (and in some cases facilitated) the neural encoding of speech compared to additive noise. Inter-regional correlations revealed associations between BS and A1 responses, suggesting subcortical speech representations influence higher auditory-cortical areas. Functional connectivity analyses further showed that directed signaling toward A1 in both feedforward cortico-collicular (BS→A1) and feedback cortico-cortical (IFG→A1) pathways were strong predictors of degraded speech perception and differentiated "good" vs. "poor" perceivers. Our findings demonstrate a functional interplay within the brain's speech network that depends on the form and severity of acoustic interference. We infer that in addition to the quality of neural representations within individual brain regions, listeners' success at the "cocktail party" is modulated based on how information is transferred among subcortical and cortical hubs of the auditory-linguistic network.
25
Worldwide distribution of the DCDC2 READ1 regulatory element and its relationship with phoneme variation across languages. Proc Natl Acad Sci U S A 2018; 115:4951-4956. [PMID: 29666269] [PMCID: PMC5948951] [DOI: 10.1073/pnas.1710472115]
Abstract
Languages evolve rapidly due to an interaction between sociocultural factors and underlying phonological processes that are influenced by genetic factors. DCDC2 has been strongly associated with core components of the phonological processing system in animal models and in multiple independent studies of populations and languages. To characterize subtle language differences arising from genetic variants associated with phonological processes, we examined the relationship between READ1, a regulatory element in DCDC2, and phonemes in the languages of 43 populations across five continents. Variation in READ1 was significantly correlated with the number of consonants. Our results suggest that subtle cognitive biases conferred by different READ1 alleles are amplified through cultural transmission, shaping consonant use by populations over time.

DCDC2 is a gene strongly associated with components of the phonological processing system in animal models and in multiple independent studies of populations and languages. We propose that it may also influence population-level variation in language component usage. To test this hypothesis, we investigated the evolution and worldwide distribution of the READ1 regulatory element within DCDC2, and compared its distribution with variation in different language properties. The mutational history of READ1 was estimated by examining primate and archaic hominin sequences. This identified duplication and expansion events, which created a large number of polymorphic alleles based on internal repeat units (RU1 and RU2). Association of READ1 alleles was studied with respect to the numbers of consonants and vowels for languages in 43 human populations distributed across five continents. Using population-based approaches with multivariate ANCOVA and linear mixed-effects analyses, we found that the RU1-1 allele group of READ1 is significantly associated with the number of consonants within languages, independent of genetic relatedness, geographic proximity, and language family. We propose that allelic variation in READ1 helped create a subtle cognitive bias that was amplified by cultural transmission, and ultimately shaped consonant use by different populations over time.
26
Abstract
Categorical effects are found across speech sound categories, with the degree of these effects ranging from extremely strong categorical perception in consonants to nearly continuous perception in vowels. We show that both strong and weak categorical effects can be captured by a unified model. We treat speech perception as a statistical inference problem, assuming that listeners use their knowledge of categories as well as the acoustics of the signal to infer the intended productions of the speaker. Simulations show that the model provides close fits to empirical data, unifying past findings of categorical effects in consonants and vowels and capturing differences in the degree of categorical effects through a single parameter.
27
Engineer CT, Rahebi KC, Borland MS, Buell EP, Im KW, Wilson LG, Sharma P, Vanneste S, Harony-Nicolas H, Buxbaum JD, Kilgard MP. Shank3-deficient rats exhibit degraded cortical responses to sound. Autism Res 2017; 11:59-68. [PMID: 29052348] [DOI: 10.1002/aur.1883]
Abstract
Individuals with SHANK3 mutations have severely impaired receptive and expressive language abilities. While brain responses are known to be abnormal in these individuals, the auditory cortex response to sound has remained largely understudied. In this study, we document the auditory cortex response to speech and non-speech sounds in the novel Shank3-deficient rat model. We predicted that the auditory cortex response to sounds would be impaired in Shank3-deficient rats. We found that auditory cortex responses were weaker in Shank3 heterozygous rats compared to wild-type rats. Additionally, Shank3 heterozygous rats had less spontaneous auditory cortex firing, and their cortical responses could not follow rapid trains of noise bursts. This rat model of the auditory impairments in SHANK3 mutation could be used to test potential rehabilitation or drug therapies to improve the communication impairments observed in individuals with Phelan-McDermid syndrome. LAY SUMMARY: Individuals with SHANK3 mutations have severely impaired language abilities, yet the auditory cortex response to sound has remained largely understudied. In this study, we found that auditory cortex responses were weaker and could not follow rapid sounds in Shank3-deficient rats compared to control rats. This rat model of the auditory impairments in SHANK3 mutation could be used to test potential rehabilitation or drug therapies to improve the communication impairments observed in individuals with Phelan-McDermid syndrome.
Affiliation(s)
- Crystal T Engineer
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX, 75080
- Texas Biomedical Device Center, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX, 75080
- Kimiya C Rahebi
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX, 75080
- Texas Biomedical Device Center, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX, 75080
- Michael S Borland
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX, 75080
- Texas Biomedical Device Center, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX, 75080
- Elizabeth P Buell
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX, 75080
- Texas Biomedical Device Center, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX, 75080
- Kwok W Im
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX, 75080
- Linda G Wilson
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX, 75080
- Pryanka Sharma
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX, 75080
- Sven Vanneste
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX, 75080
- Hala Harony-Nicolas
- Seaver Autism Center for Research and Treatment, Icahn School of Medicine at Mount Sinai, New York, NY
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY
- Joseph D Buxbaum
- Seaver Autism Center for Research and Treatment, Icahn School of Medicine at Mount Sinai, New York, NY
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY
- Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY
- Fishberg Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY
- Department of Genetics and Genomic Sciences, Icahn School of Medicine at Mount Sinai, New York, NY
- The Mindich Child Health and Development Institute, Icahn School of Medicine at Mount Sinai, New York, NY
- Michael P Kilgard
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX, 75080
- Texas Biomedical Device Center, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX, 75080
28
Knockdown of Dyslexia-Gene Dcdc2 Interferes with Speech Sound Discrimination in Continuous Streams. J Neurosci 2017; 36:4895-906. [PMID: 27122044] [DOI: 10.1523/jneurosci.4202-15.2016]
Abstract
Dyslexia is the most common developmental language disorder and is marked by deficits in reading and phonological awareness. One theory of dyslexia suggests that the phonological awareness deficit is due to abnormal auditory processing of speech sounds. Variants in DCDC2 and several other neural migration genes are associated with dyslexia and may contribute to auditory processing deficits. In the current study, we tested the hypothesis that RNAi suppression of Dcdc2 in rats causes abnormal cortical responses to sound and impaired speech sound discrimination. Rats were subjected in utero to RNA interference targeting the gene Dcdc2 or a scrambled sequence. Primary auditory cortex (A1) responses were acquired from 11 rats (5 with Dcdc2 RNAi; DC-) before any behavioral training. A separate group of 8 rats (3 DC-) were trained on a variety of speech sound discrimination tasks, and auditory cortex responses were acquired following training. Dcdc2 RNAi nearly eliminated the ability of rats to identify specific speech sounds from a continuous train of speech sounds but did not impair performance during discrimination of isolated speech sounds. The neural responses to speech sounds in A1 were not degraded as a function of presentation rate before training. These results suggest that A1 is not directly involved in the impaired speech discrimination caused by Dcdc2 RNAi. This result contrasts with earlier results using Kiaa0319 RNAi and suggests that different dyslexia genes may cause different deficits in the speech processing circuitry, which may explain differential responses to therapy. SIGNIFICANCE STATEMENT: Although dyslexia is diagnosed through reading difficulty, there is a great deal of variation in the phenotypes of these individuals. The underlying neural and genetic mechanisms causing these differences are still widely debated. In the current study, we demonstrate that suppression of a candidate dyslexia gene causes deficits on tasks of rapid stimulus processing. These animals also exhibited abnormal neural plasticity after training, which may be a mechanism for why some children with dyslexia do not respond to intervention. These results are in stark contrast to our previous work with a different candidate gene, which caused a different set of deficits. Our results shed some light on possible neural and genetic mechanisms causing heterogeneity in the dyslexic population.
29
Henry KS, Abrams KS, Forst J, Mender MJ, Neilans EG, Idrobo F, Carney LH. Midbrain Synchrony to Envelope Structure Supports Behavioral Sensitivity to Single-Formant Vowel-Like Sounds in Noise. J Assoc Res Otolaryngol 2017; 18:165-181. [PMID: 27766433] [PMCID: PMC5243265] [DOI: 10.1007/s10162-016-0594-4]
Abstract
Vowels make a strong contribution to speech perception under natural conditions. Vowels are encoded in the auditory nerve primarily through neural synchrony to temporal fine structure and to envelope fluctuations rather than through average discharge rate. Neural synchrony is thought to contribute less to vowel coding in central auditory nuclei, consistent with more limited synchronization to fine structure and the emergence of average-rate coding of envelope fluctuations. However, this hypothesis is largely unexplored, especially in background noise. The present study examined coding mechanisms at the level of the midbrain that support behavioral sensitivity to simple vowel-like sounds using neurophysiological recordings and matched behavioral experiments in the budgerigar. Stimuli were harmonic tone complexes with energy concentrated at one spectral peak, or formant frequency, presented in quiet and in noise. Behavioral thresholds for formant-frequency discrimination decreased with increasing amplitude of stimulus envelope fluctuations, increased in noise, and were similar between budgerigars and humans. Multiunit recordings in awake birds showed that the midbrain encodes vowel-like sounds both through response synchrony to envelope structure and through average rate. Whereas neural discrimination thresholds based on either coding scheme were sufficient to support behavioral thresholds in quiet, only synchrony-based neural thresholds could account for behavioral thresholds in background noise. These results reveal an incomplete transformation to average-rate coding of vowel-like sounds in the midbrain. Model simulations suggest that this transformation emerges due to modulation tuning, which is shared between birds and mammals. Furthermore, the results underscore the behavioral relevance of envelope synchrony in the midbrain for detection of small differences in vowel formant frequency under real-world listening conditions.
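Response synchrony of the kind discussed here is conventionally quantified with vector strength. The sketch below is a minimal, generic implementation applied to toy spike trains, not the study's recordings or its specific analysis.

```python
import math

def vector_strength(spike_times, freq):
    """Vector strength of spike times relative to a modulation
    frequency (Hz): 1.0 = perfect phase locking, ~0 = no synchrony.

    Each spike is mapped to a unit vector at its stimulus phase; the
    length of the mean vector measures phase concentration."""
    if not spike_times:
        return 0.0
    phases = [2 * math.pi * freq * t for t in spike_times]
    c = sum(math.cos(p) for p in phases) / len(phases)
    s = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(c, s)
```

A spike train locked to one phase of a 100 Hz envelope scores near 1, while spikes that drift across the modulation cycle score near 0; comparing such synchrony-based metrics against average-rate metrics is the kind of contrast the abstract draws.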
Affiliation(s)
- Kenneth S. Henry
- Department of Biomedical Engineering, University of Rochester, Rochester, NY 14642 USA
- Kristina S. Abrams
- Department of Neuroscience, University of Rochester, Rochester, NY 14642 USA
- Johanna Forst
- Department of Biomedical Engineering, University of Rochester, Rochester, NY 14642 USA
- Matthew J. Mender
- Department of Biomedical Engineering, University of Rochester, Rochester, NY 14642 USA
- Fabio Idrobo
- Department of Psychological and Brain Sciences, Boston University, Boston, MA 02215 USA
- Universidad de Los Andes, Bogotá, Colombia
- Laurel H. Carney
- Department of Biomedical Engineering, University of Rochester, Rochester, NY 14642 USA
- Department of Neuroscience, University of Rochester, Rochester, NY 14642 USA
30
Engineer CT, Shetake JA, Engineer ND, Vrana WA, Wolf JT, Kilgard MP. Temporal plasticity in auditory cortex improves neural discrimination of speech sounds. Brain Stimul 2017; 10:543-552. [PMID: 28131520] [DOI: 10.1016/j.brs.2017.01.007]
Abstract
BACKGROUND: Many individuals with language learning impairments exhibit temporal processing deficits and degraded neural responses to speech sounds. Auditory training can improve both the neural and behavioral deficits, though significant deficits remain. Recent evidence suggests that vagus nerve stimulation (VNS) paired with rehabilitative therapies enhances both cortical plasticity and recovery of normal function.
OBJECTIVE/HYPOTHESIS: We predicted that pairing VNS with rapid tone trains would enhance the primary auditory cortex (A1) response to unpaired novel speech sounds.
METHODS: VNS was paired with tone trains 300 times per day for 20 days in adult rats. Responses to isolated speech sounds, compressed speech sounds, word sequences, and compressed word sequences were recorded in A1 following the completion of VNS-tone train pairing.
RESULTS: Pairing VNS with rapid tone trains resulted in stronger, faster, and more discriminable A1 responses to speech sounds presented at conversational rates.
CONCLUSION: This study extends previous findings by documenting that VNS paired with rapid tone trains altered the neural response to novel unpaired speech sounds. Future studies are necessary to determine whether pairing VNS with appropriate auditory stimuli could potentially be used to improve both neural responses to speech sounds and speech perception in individuals with receptive language disorders.
Affiliation(s)
- Crystal T Engineer
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX 75080, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX 75080, United States
- Jai A Shetake
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX 75080, United States
- Navzer D Engineer
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX 75080, United States
- MicroTransponder Inc., 2802 Flintrock Trace Suite 225, Austin, TX 78738, United States
- Will A Vrana
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX 75080, United States
- Jordan T Wolf
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX 75080, United States
- Michael P Kilgard
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX 75080, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, 800 West Campbell Road BSB11, Richardson, TX 75080, United States
31
Lee B, Cho KH. Brain-inspired speech segmentation for automatic speech recognition using the speech envelope as a temporal reference. Sci Rep 2016; 6:37647. [PMID: 27876875] [PMCID: PMC5120313] [DOI: 10.1038/srep37647]
Abstract
Speech segmentation is a crucial step in automatic speech recognition because additional speech analyses are performed for each framed speech segment. Conventional segmentation techniques primarily segment speech using a fixed frame size for computational simplicity. However, this approach is insufficient for capturing the quasi-regular structure of speech, which causes substantial recognition failure in noisy environments. How does the brain handle quasi-regular structured speech and maintain high recognition performance under any circumstance? Recent neurophysiological studies have suggested that the phase of neuronal oscillations in the auditory cortex contributes to accurate speech recognition by guiding speech segmentation into smaller units at different timescales. A phase-locked relationship between neuronal oscillation and the speech envelope has recently been obtained, which suggests that the speech envelope provides a foundation for multi-timescale speech segmental information. In this study, we quantitatively investigated the role of the speech envelope as a potential temporal reference to segment speech using its instantaneous phase information. We evaluated the proposed approach by the achieved information gain and recognition performance in various noisy environments. The results indicate that the proposed segmentation scheme not only extracts more information from speech but also provides greater robustness in a recognition test.
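The envelope-as-temporal-reference idea can be illustrated with a deliberately simplified sketch: a rectify-and-smooth envelope (standing in for the Hilbert-based instantaneous envelope/phase used in work like this) yields variable-length segment boundaries at envelope dips rather than fixed-size frames. All function names, window sizes, and thresholds below are illustrative assumptions, not the paper's algorithm.

```python
def envelope(signal, win):
    """Amplitude envelope via full-wave rectification and a moving
    average; a crude stand-in for the Hilbert-transform envelope."""
    rect = [abs(v) for v in signal]
    half = win // 2
    out = []
    for i in range(len(rect)):
        lo, hi = max(0, i - half), min(len(rect), i + half + 1)
        out.append(sum(rect[lo:hi]) / (hi - lo))
    return out

def segment_boundaries(env, threshold):
    """Place segment boundaries where the envelope dips below the
    threshold, producing variable-length segments that track the
    quasi-regular structure of speech instead of fixed frames."""
    bounds, below = [], True
    for i, v in enumerate(env):
        if v < threshold and not below:
            bounds.append(i)
            below = True
        elif v >= threshold:
            below = False
    return bounds
```

Run on a signal with two high-energy bursts separated by a low-energy gap, this places a single boundary at the gap, which is the basic behavior fixed-size framing cannot provide.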
Affiliation(s)
- Byeongwook Lee
- Laboratory for Systems Biology and Bio-inspired Engineering, Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, 34141, Republic of Korea
- Kwang-Hyun Cho
- Laboratory for Systems Biology and Bio-inspired Engineering, Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, 34141, Republic of Korea
32
Toro JM, Hoeschele M. Generalizing prosodic patterns by a non-vocal learning mammal. Anim Cogn 2016; 20:179-185. [PMID: 27658675] [PMCID: PMC5306188] [DOI: 10.1007/s10071-016-1036-8]
Abstract
Prosody, a salient aspect of speech that includes rhythm and intonation, has been shown to help infants acquire some aspects of syntax. Recent studies have shown that birds of two vocal learning species are able to categorize human speech stimuli based on prosody. In the current study, we found that the rat, a non-vocal learning mammal, could also discriminate human speech stimuli based on prosody. Moreover, the rats generalized to novel stimuli they had not been trained with, which suggests that they had not simply memorized the properties of individual stimuli but had learned a prosodic rule. When tested with stimuli with either one or three of the four prosodic cues removed, the rats did poorly, suggesting that all cues were necessary for the rats to solve the task. This result contrasts with results from humans and budgerigars, both of which had previously been studied using the same paradigm: humans and budgerigars both learned the task and generalized to novel items, but were also able to solve the task with some of the cues removed. In conclusion, rats appear to have some of the perceptual abilities necessary to generalize prosodic patterns, in a similar though not identical way to the vocal learning species that have been studied.
Affiliation(s)
- Juan M Toro
- ICREA, Pg. Lluis Companys 23, 08019, Barcelona, Spain; Center for Brain and Cognition, Universitat Pompeu Fabra, Roc Boronat, 138, 08018, Barcelona, Spain
- Marisa Hoeschele
- Department of Cognitive Biology, University of Vienna, Althanstrasse 14, 1090, Vienna, Austria
33
Manca AD, Grimaldi M. Vowels and Consonants in the Brain: Evidence from Magnetoencephalographic Studies on the N1m in Normal-Hearing Listeners. Front Psychol 2016; 7:1413. [PMID: 27713712 PMCID: PMC5031792 DOI: 10.3389/fpsyg.2016.01413] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2016] [Accepted: 09/05/2016] [Indexed: 01/07/2023] Open
Abstract
Speech sound perception is one of the most fascinating tasks performed by the human brain. It involves mapping continuous acoustic waveforms onto the discrete phonological units used to store words in the mental lexicon. In this article, we review the magnetoencephalographic studies that have explored the timing and morphology of the N1m component to investigate how vowels and consonants are computed and represented within the auditory cortex. The neurons underlying the N1m construct a sensory memory of the stimulus through spatially and temporally distributed activation patterns within the auditory cortex. Indeed, localization of auditory field maps in animals and humans suggests two levels of sound coding: a tonotopic dimension for the spectral properties of sounds and a tonochronic dimension for their temporal properties. When the stimulus is a complex speech sound, tonotopy and tonochrony data may help assess whether speech sound parsing and decoding reflect purely bottom-up processing of acoustic differences or are additionally shaped by top-down processes related to phonological categories. Evidence supporting pure bottom-up processing coexists with evidence supporting top-down abstract phoneme representations; the available N1m data (amplitude, latency, source generators, and hemispheric distribution) are limited and do not resolve the issue. We discuss the nature of these limitations, taking into consideration neurophysiological studies in animals and neuroimaging studies in humans. We also compare the N1m findings with investigations of the magnetic mismatch negativity (MMNm) component and with the analogous electrical components, the N1 and the MMN. We conclude that the N1 seems more sensitive than the N1m in capturing lateralization and hierarchical processes, although the data are very preliminary.
Finally, we suggest that MEG data should be integrated with EEG data in light of the neural oscillations framework, and we raise some issues that future investigations should address if language research is to be closely aligned with the core functional mechanisms of the brain.
Affiliation(s)
- Anna Dora Manca
- Dipartimento di Studi Umanistici, Centro di Ricerca Interdisciplinare sul Linguaggio, University of Salento, Lecce, Italy; Laboratorio Diffuso di Ricerca Interdisciplinare Applicata alla Medicina, Lecce, Italy
- Mirko Grimaldi
- Dipartimento di Studi Umanistici, Centro di Ricerca Interdisciplinare sul Linguaggio, University of Salento, Lecce, Italy; Laboratorio Diffuso di Ricerca Interdisciplinare Applicata alla Medicina, Lecce, Italy
34
Pannese A, Grandjean D, Frühholz S. Subcortical processing in auditory communication. Hear Res 2015; 328:67-77. [DOI: 10.1016/j.heares.2015.07.003] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/03/2015] [Revised: 06/23/2015] [Accepted: 07/01/2015] [Indexed: 12/21/2022]
35
Engineer CT, Rahebi KC, Borland MS, Buell EP, Centanni TM, Fink MK, Im KW, Wilson LG, Kilgard MP. Degraded neural and behavioral processing of speech sounds in a rat model of Rett syndrome. Neurobiol Dis 2015; 83:26-34. [PMID: 26321676 DOI: 10.1016/j.nbd.2015.08.019] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2015] [Revised: 07/31/2015] [Accepted: 08/19/2015] [Indexed: 10/23/2022] Open
Abstract
Individuals with Rett syndrome have greatly impaired speech and language abilities. Auditory brainstem responses to sounds are normal, but cortical responses are highly abnormal. In this study, we used the novel rat Mecp2 knockout model of Rett syndrome to document the neural and behavioral processing of speech sounds. We hypothesized that both speech discrimination ability and the neural response to speech sounds would be impaired in Mecp2 rats. We expected that extensive speech training would improve speech discrimination ability and the cortical response to speech sounds. Our results reveal that speech-evoked responses across all four auditory cortex fields of Mecp2 rats were hyperexcitable, slower, and less able to follow rapidly presented sounds. While Mecp2 rats could accurately perform consonant and vowel discrimination tasks in quiet, they were significantly impaired at speech sound discrimination in background noise. Extensive speech training improved discrimination ability. Training shifted cortical responses in both Mecp2 and control rats to favor the onset of speech sounds. While training increased the response to low frequency sounds in control rats, the opposite occurred in Mecp2 rats. Although neural coding and plasticity are abnormal in the rat model of Rett syndrome, extensive therapy appears to be effective. These findings may help to explain some aspects of communication deficits in Rett syndrome and suggest that extensive rehabilitation therapy might prove beneficial.
Affiliation(s)
- Crystal T Engineer
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road GR41, Richardson, TX 75080, United States
- Kimiya C Rahebi
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road GR41, Richardson, TX 75080, United States
- Michael S Borland
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road GR41, Richardson, TX 75080, United States
- Elizabeth P Buell
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road GR41, Richardson, TX 75080, United States
- Tracy M Centanni
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road GR41, Richardson, TX 75080, United States
- Melyssa K Fink
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road GR41, Richardson, TX 75080, United States
- Kwok W Im
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road GR41, Richardson, TX 75080, United States
- Linda G Wilson
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road GR41, Richardson, TX 75080, United States
- Michael P Kilgard
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road GR41, Richardson, TX 75080, United States
36
Behavioral and neural discrimination of speech sounds after moderate or intense noise exposure in rats. Ear Hear 2015; 35:e248-61. [PMID: 25072238 DOI: 10.1097/aud.0000000000000062] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES Hearing loss is a common disability in a variety of populations, including veterans and the elderly, and can often cause significant impairment in the ability to understand spoken language. In this study, we tested the hypothesis that neural and behavioral responses to speech would be differentially impaired in an animal model after two forms of hearing loss. DESIGN Sixteen female Sprague-Dawley rats were exposed to either moderate or intense broadband noise. In nine of these rats, auditory cortex recordings were taken 4 weeks after noise exposure (NE). The other seven were pretrained on a speech sound discrimination task prior to NE and were then tested on the same task after hearing loss. RESULTS Following intense NE, rats had few neural responses to speech stimuli. These rats were able to detect speech sounds but were no longer able to discriminate between them. Following moderate NE, rats had reorganized cortical maps and altered neural responses to speech stimuli but were still able to accurately discriminate between similar speech sounds during behavioral testing. CONCLUSIONS These results suggest that rats are able to adjust to the neural changes after moderate NE and discriminate speech sounds, but they are not able to recover behavioral abilities after intense NE. Animal models could help clarify the adaptive and pathological neural changes that contribute to speech processing in hearing-impaired populations and could be used to test potential behavioral and pharmacological therapies.
37
Occelli F, Suied C, Pressnitzer D, Edeline JM, Gourévitch B. A Neural Substrate for Rapid Timbre Recognition? Neural and Behavioral Discrimination of Very Brief Acoustic Vowels. Cereb Cortex 2015; 26:2483-2496. [PMID: 25947234 DOI: 10.1093/cercor/bhv071] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
The timbre of a sound plays an important role in our ability to discriminate between behaviorally relevant auditory categories, such as different vowels in speech. Here, we investigated, in the primary auditory cortex (A1) of anesthetized guinea pigs, the neural representation of vowels with impoverished timbre cues. Five different vowels were presented with durations ranging from 2 to 128 ms. A psychophysical experiment involving human listeners showed that identification performance was near ceiling for the longer durations and degraded close to chance level for the shortest durations. This was likely due to spectral splatter, which reduced the contrast between the spectral profiles of the vowels at short durations. Effects of vowel duration on cortical responses were well predicted by the linear frequency responses of A1 neurons. Using mutual information, we found that auditory cortical neurons in the guinea pig could be used to reliably identify several vowels for all durations. Information carried by each cortical site was low on average, but the population code was accurate even for durations where human behavioral performance was poor. These results suggest that a place population code is available at the level of A1 to encode spectral profile cues for even very short sounds.
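The mutual information analysis described in this abstract, which quantifies how well discretized neural responses identify the presented vowel, can be illustrated with a standard plug-in estimator. This is a minimal sketch of the general technique, not the authors' actual analysis pipeline, and the function name is illustrative:

```python
from collections import Counter
from math import log2

def mutual_information(stimuli, responses):
    """Plug-in estimate of I(S;R) in bits from paired observations of
    stimulus labels and discretized neural responses (e.g. spike counts)."""
    n = len(stimuli)
    count_s = Counter(stimuli)            # marginal stimulus counts
    count_r = Counter(responses)          # marginal response counts
    count_sr = Counter(zip(stimuli, responses))  # joint counts
    mi = 0.0
    for (s, r), c in count_sr.items():
        p_joint = c / n
        # p_joint / (p_s * p_r) simplifies to c * n / (count_s * count_r)
        mi += p_joint * log2(c * n / (count_s[s] * count_r[r]))
    return mi
```

With small trial counts this estimator is biased upward, which is why such studies typically apply bias corrections before comparing information values across conditions.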
Affiliation(s)
- F Occelli
- UMR CNRS 9197, Institut de NeuroScience Paris-Saclay (NeuroPSI), Université Paris-Sud, 91405 Orsay Cedex, France
- C Suied
- Département Action et Cognition en Situation Opérationnelle, Institut de Recherche Biomédicale des Armées, 91223 Brétigny sur Orge, France
- D Pressnitzer
- UMR CNRS 8248, LSP, DEC, Ecole Normale Supérieure, 29 rue d'Ulm, 75005 Paris, France
- J-M Edeline
- UMR CNRS 9197, Institut de NeuroScience Paris-Saclay (NeuroPSI), Université Paris-Sud, 91405 Orsay Cedex, France
- B Gourévitch
- UMR CNRS 9197, Institut de NeuroScience Paris-Saclay (NeuroPSI), Université Paris-Sud, 91405 Orsay Cedex, France
38
Engineer CT, Rahebi KC, Buell EP, Fink MK, Kilgard MP. Speech training alters consonant and vowel responses in multiple auditory cortex fields. Behav Brain Res 2015; 287:256-64. [PMID: 25827927 DOI: 10.1016/j.bbr.2015.03.044] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2014] [Revised: 03/19/2015] [Accepted: 03/22/2015] [Indexed: 10/23/2022]
Abstract
Speech sounds evoke unique neural activity patterns in primary auditory cortex (A1). Extensive speech sound discrimination training alters A1 responses. While the neighboring auditory cortical fields each contain information about speech sound identity, each field processes speech sounds differently. We hypothesized that while all fields would exhibit training-induced plasticity following speech training, there would be unique differences in how each field changes. In this study, rats were trained to discriminate speech sounds by consonant or vowel in quiet and in varying levels of background speech-shaped noise. Local field potential and multiunit responses were recorded from four auditory cortex fields in rats that had received 10 weeks of speech discrimination training. Our results reveal that training alters speech evoked responses in each of the auditory fields tested. The neural response to consonants was significantly stronger in anterior auditory field (AAF) and A1 following speech training. The neural response to vowels following speech training was significantly weaker in ventral auditory field (VAF) and posterior auditory field (PAF). This differential plasticity of consonant and vowel sound responses may result from the greater paired pulse depression, expanded low frequency tuning, reduced frequency selectivity, and lower tone thresholds, which occurred across the four auditory fields. These findings suggest that alterations in the distributed processing of behaviorally relevant sounds may contribute to robust speech discrimination.
Affiliation(s)
- Crystal T Engineer
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road, GR41, Richardson, TX 75080, United States
- Kimiya C Rahebi
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road, GR41, Richardson, TX 75080, United States
- Elizabeth P Buell
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road, GR41, Richardson, TX 75080, United States
- Melyssa K Fink
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road, GR41, Richardson, TX 75080, United States
- Michael P Kilgard
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road, GR41, Richardson, TX 75080, United States
39
Bidelman GM, Alain C. Hierarchical neurocomputations underlying concurrent sound segregation: Connecting periphery to percept. Neuropsychologia 2015; 68:38-50. [DOI: 10.1016/j.neuropsychologia.2014.12.020] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2014] [Revised: 12/18/2014] [Accepted: 12/22/2014] [Indexed: 10/24/2022]
40
Engineer CT, Engineer ND, Riley JR, Seale JD, Kilgard MP. Pairing Speech Sounds With Vagus Nerve Stimulation Drives Stimulus-specific Cortical Plasticity. Brain Stimul 2015; 8:637-44. [PMID: 25732785 DOI: 10.1016/j.brs.2015.01.408] [Citation(s) in RCA: 66] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2014] [Revised: 12/17/2014] [Accepted: 01/19/2015] [Indexed: 12/15/2022] Open
Abstract
BACKGROUND Individuals with communication disorders, such as aphasia, exhibit weak auditory cortex responses to speech sounds and language impairments. Previous studies have demonstrated that pairing vagus nerve stimulation (VNS) with tones or tone trains can enhance both the spectral and temporal processing of sounds in auditory cortex, and can be used to reverse pathological primary auditory cortex (A1) plasticity in a rodent model of chronic tinnitus. OBJECTIVE/HYPOTHESIS We predicted that pairing VNS with speech sounds would strengthen the A1 response to the paired speech sounds. METHODS The speech sounds 'rad' and 'lad' were paired with VNS three hundred times per day for twenty days. A1 responses to both paired and novel speech sounds were recorded 24 h after the last VNS pairing session in anesthetized rats. Response strength, latency and neurometric decoding were compared between VNS speech paired and control rats. RESULTS Our results show that VNS paired with speech sounds strengthened the auditory cortex response to the paired sounds, but did not strengthen the amplitude of the response to novel speech sounds. Responses to the paired sounds were faster and less variable in VNS speech paired rats compared to control rats. Neural plasticity that was specific to the frequency, intensity, and temporal characteristics of the paired speech sounds resulted in enhanced neural detection. CONCLUSION VNS speech sound pairing provides a novel method to enhance speech sound processing in the central auditory system. Delivery of VNS during speech therapy could improve outcomes in individuals with receptive language deficits.
Affiliation(s)
- Crystal T Engineer
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road GR41, Richardson, TX 75080, USA
- Navzer D Engineer
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road GR41, Richardson, TX 75080, USA; MicroTransponder Inc., 2802 Flintrock Trace Suite 225, Austin, TX 78738, USA
- Jonathan R Riley
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road GR41, Richardson, TX 75080, USA
- Jonathan D Seale
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road GR41, Richardson, TX 75080, USA
- Michael P Kilgard
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road GR41, Richardson, TX 75080, USA; Texas Biomedical Device Center, The University of Texas at Dallas, 800 West Campbell Road EC39, Richardson, TX 75080, USA
41
Banerjee A, Engineer CT, Sauls BL, Morales AA, Kilgard MP, Ploski JE. Abnormal emotional learning in a rat model of autism exposed to valproic acid in utero. Front Behav Neurosci 2014; 8:387. [PMID: 25429264 PMCID: PMC4228846 DOI: 10.3389/fnbeh.2014.00387] [Citation(s) in RCA: 38] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2014] [Accepted: 10/17/2014] [Indexed: 01/30/2023] Open
Abstract
Autism Spectrum Disorders (ASD) are complex neurodevelopmental disorders characterized by repetitive behavior and impaired social communication and interactions. Apart from these core symptoms, a significant number of ASD individuals display higher levels of anxiety, and some exhibit impaired emotional learning. We therefore sought to further examine anxiety and emotional learning in an environmentally induced animal model of ASD based on administration of the known teratogen valproic acid (VPA) during gestation. Specifically, we exposed dams to one of two doses of VPA (500 or 600 mg/kg) or vehicle on day 12.5 of gestation and examined the resulting progeny. Our data indicate that animals exposed to VPA in utero exhibit enhanced anxiety in the open field test and normal object recognition memory compared to control animals. Animals exposed to 500 mg/kg of VPA displayed normal acquisition of auditory fear conditioning and normal litter survival rates, but reduced extinction of fear memory, compared to control animals. Animals exposed to 600 mg/kg of VPA exhibited significant reductions in fear-conditioning acquisition, social interaction, and litter survival rates compared to control animals. The 600 mg/kg VPA-exposed animals showed shock sensitivity and hearing similar to control animals, indicating that their fear conditioning deficit was likely due to deficits in learning or memory retrieval rather than to sensory deficits. In conclusion, considering that progeny from dams exposed to rather similar doses of VPA exhibit striking differences in emotional learning, the VPA model may serve as a useful tool to explore the molecular and cellular mechanisms that contribute not only to ASD but also to emotional learning.
Affiliation(s)
- Anwesha Banerjee
- School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, TX, USA
- Crystal T Engineer
- School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, TX, USA
- Bethany L Sauls
- School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, TX, USA
- Anna A Morales
- School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, TX, USA
- Michael P Kilgard
- School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, TX, USA
- Jonathan E Ploski
- School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, TX, USA
42
Understanding Neural Population Coding: Information Theoretic Insights from the Auditory System. ACTA ACUST UNITED AC 2014. [DOI: 10.1155/2014/907851] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
In recent years, our research in computational neuroscience has focused on understanding how populations of neurons encode naturalistic stimuli. In particular, we focused on how populations of neurons use the time domain to encode sensory information. In this focused review, we summarize this recent work from our laboratory. We focus in particular on the mathematical methods that we developed for the quantification of how information is encoded by populations of neurons and on how we used these methods to investigate the encoding of complex naturalistic sounds in auditory cortex. We review how these methods revealed a complementary role of low frequency oscillations and millisecond precise spike patterns in encoding complex sounds and in making these representations robust to imprecise knowledge about the timing of the external stimulus. Further, we discuss challenges in extending this work to understand how large populations of neurons encode sensory information. Overall, this previous work provides analytical tools and conceptual understanding necessary to study the principles of how neural populations reflect sensory inputs and achieve a stable representation despite many uncertainties in the environment.
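The timing-based analyses summarized in this review start by discretizing each spike train at a chosen temporal resolution into a binary response "word", whose distribution across trials can then feed information-theoretic measures. A minimal sketch of that first step, with illustrative names and units that are not from the review itself:

```python
def spike_word(spike_times, t_start, t_end, bin_width):
    """Discretize a spike train (times in seconds) into a binary tuple,
    one bit per time bin -- the first step in information-theoretic
    analyses of millisecond-precise spike timing."""
    n_bins = int(round((t_end - t_start) / bin_width))
    word = [0] * n_bins
    for t in spike_times:
        if t_start <= t < t_end:
            word[int((t - t_start) / bin_width)] = 1
    return tuple(word)
```

Varying `bin_width` trades temporal precision against sampling noise; comparing information across bin widths is how such studies identify the timescales at which spike patterns carry stimulus information.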
43
Durante AS, Wieselberg MB, Carvalho S, Costa N, Pucci B, Gudayol N, Almeida KD. Cortical Auditory Evoked Potential: evaluation of speech detection in adult hearing aid users. Codas 2014; 26:367-73. [DOI: 10.1590/2317-1782/20142013085] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2014] [Accepted: 07/04/2014] [Indexed: 11/22/2022] Open
Abstract
Purpose: To analyze the presence of the cortical auditory evoked potential and its correlation with psychoacoustic detection of speech sounds, as well as the latencies of the P1, N1, and P2 components, presented in free field to hearing-impaired adults with and without amplification. Methods: We evaluated 22 adults with moderate to severe symmetrical bilateral sensorineural hearing loss, all regular users of bilateral hearing aids. Speech sounds of low (/m/), medium (/g/), and high (/t/) frequency were presented in free field at decreasing intensities of 75, 65, and 55 dB SPL, with and without hearing aids. The equipment used performs automatic statistical detection of the presence of a response; furthermore, the latencies of the P1, N1, and P2 waves were labeled and the psychoacoustic perception was registered. Results: The presence of the cortical response increased with hearing aids. Agreement between psychoacoustic perception and automatic detection was 91% for the sounds /g/ and /t/ and ranged from 73 to 86% for the sound /m/. Mean P1-N1-P2 latencies decreased both with increasing intensity and with the use of hearing aids for all three sounds; the differences were significant for /g/ and /t/ when comparing aided and unaided conditions. Conclusion: The presence of the cortical auditory evoked potential increased with hearing aids. Automatic detection of the aided cortical response showed 91% agreement with psychoacoustic perception of the speech signal. The latencies of the P1, N1, and P2 components decreased with increasing signal intensity and with amplification for the three speech stimuli /m/, /g/, and /t/.
44
Engineer CT, Centanni TM, Im KW, Kilgard MP. Speech sound discrimination training improves auditory cortex responses in a rat model of autism. Front Syst Neurosci 2014; 8:137. [PMID: 25140133 PMCID: PMC4122159 DOI: 10.3389/fnsys.2014.00137] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2014] [Accepted: 07/14/2014] [Indexed: 11/28/2022] Open
Abstract
Children with autism often have language impairments and degraded cortical responses to speech. Extensive behavioral interventions can improve language outcomes and cortical responses. Prenatal exposure to the antiepileptic drug valproic acid (VPA) increases the risk for autism and language impairment. Prenatal exposure to VPA also causes weaker and delayed auditory cortex responses in rats. In this study, we document speech sound discrimination ability in VPA exposed rats and document the effect of extensive speech training on auditory cortex responses. VPA exposed rats were significantly impaired at consonant, but not vowel, discrimination. Extensive speech training resulted in both stronger and faster anterior auditory field (AAF) responses compared to untrained VPA exposed rats, and restored responses to control levels. This neural response improvement generalized to non-trained sounds. The rodent VPA model of autism may be used to improve the understanding of speech processing in autism and contribute to improving language outcomes.
Affiliation(s)
- Crystal T Engineer
- Cortical Plasticity Laboratory, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, USA
- Tracy M Centanni
- Cortical Plasticity Laboratory, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, USA
- Kwok W Im
- Cortical Plasticity Laboratory, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, USA
- Michael P Kilgard
- Cortical Plasticity Laboratory, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, USA
45
Engineer CT, Perez CA, Carraway RS, Chang KQ, Roland JL, Kilgard MP. Speech training alters tone frequency tuning in rat primary auditory cortex. Behav Brain Res 2014; 258:166-78. [PMID: 24344364 DOI: 10.1016/j.bbr.2013.10.021] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Previous studies in both humans and animals have documented improved performance following discrimination training. This enhanced performance is often associated with cortical response changes. In this study, we tested the hypothesis that long-term speech training on multiple tasks can improve primary auditory cortex (A1) responses compared to rats trained on a single speech discrimination task or experimentally naïve rats. Specifically, we compared the percent of A1 responding to trained sounds, the responses to both trained and untrained sounds, receptive field properties of A1 neurons, and the neural discrimination of pairs of speech sounds in speech trained and naïve rats. Speech training led to accurate discrimination of consonant and vowel sounds, but did not enhance A1 response strength or the neural discrimination of these sounds. Speech training altered tone responses in rats trained on six speech discrimination tasks but not in rats trained on a single speech discrimination task. Extensive speech training resulted in broader frequency tuning, shorter onset latencies, a decreased driven response to tones, and caused a shift in the frequency map to favor tones in the range where speech sounds are the loudest. Both the number of trained tasks and the number of days of training strongly predict the percent of A1 responding to a low frequency tone. Rats trained on a single speech discrimination task performed less accurately than rats trained on multiple tasks and did not exhibit A1 response changes. Our results indicate that extensive speech training can reorganize the A1 frequency map, which may have downstream consequences on speech sound processing.
46
Centanni TM, Chen F, Booker AM, Engineer CT, Sloan AM, Rennaker RL, LoTurco JJ, Kilgard MP. Speech sound processing deficits and training-induced neural plasticity in rats with dyslexia gene knockdown. PLoS One 2014; 9:e98439. [PMID: 24871331 PMCID: PMC4037188 DOI: 10.1371/journal.pone.0098439] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2013] [Accepted: 05/02/2014] [Indexed: 11/18/2022] Open
Abstract
In utero RNAi of the dyslexia-associated gene Kiaa0319 in rats (KIA-) degrades cortical responses to speech sounds and increases trial-by-trial variability in onset latency. We tested the hypothesis that KIA- rats would be impaired at speech sound discrimination. KIA- rats needed twice as much training in quiet conditions to perform at control levels and remained impaired at several speech tasks. Focused training using truncated speech sounds was able to normalize speech discrimination in quiet and in background noise. Training also normalized trial-by-trial neural variability and temporal phase locking. Cortical activity from speech-trained KIA- rats was sufficient to accurately discriminate between similar consonant sounds. These results provide the first direct evidence that reduced expression of the dyslexia-associated gene KIAA0319 can cause phoneme processing impairments similar to those seen in dyslexia, and that intensive behavioral therapy can eliminate these impairments.
Affiliation(s)
- Tracy M. Centanni, School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, Texas, United States of America
- Fuyi Chen, Physiology and Neurobiology, University of Connecticut, Storrs, Connecticut, United States of America
- Anne M. Booker, Physiology and Neurobiology, University of Connecticut, Storrs, Connecticut, United States of America
- Crystal T. Engineer, School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, Texas, United States of America
- Andrew M. Sloan, School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, Texas, United States of America
- Robert L. Rennaker, School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, Texas, United States of America
- Joseph J. LoTurco, Physiology and Neurobiology, University of Connecticut, Storrs, Connecticut, United States of America
- Michael P. Kilgard, School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, Texas, United States of America
47
Degraded speech sound processing in a rat model of fragile X syndrome. Brain Res 2014; 1564:72-84. [PMID: 24713347 DOI: 10.1016/j.brainres.2014.03.049] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2014] [Revised: 03/29/2014] [Accepted: 03/31/2014] [Indexed: 12/29/2022]
Abstract
Fragile X syndrome is the most common inherited form of intellectual disability and the leading genetic cause of autism. Impaired phonological processing in fragile X syndrome interferes with the development of language skills. Although auditory cortex responses are known to be abnormal in fragile X syndrome, it is not clear how these differences impact speech sound processing. This study provides the first evidence that the cortical representation of speech sounds is impaired in Fmr1 knockout rats, despite normal speech discrimination behavior. Evoked potentials and spiking activity in response to speech sounds, noise burst trains, and tones were significantly degraded in primary auditory cortex, anterior auditory field and the ventral auditory field. Neurometric analysis of speech evoked activity using a pattern classifier confirmed that activity in these fields contains significantly less information about speech sound identity in Fmr1 knockout rats compared to control rats. Responses were normal in the posterior auditory field, which is associated with sound localization. The greatest impairment was observed in the ventral auditory field, which is related to emotional regulation. Dysfunction in the ventral auditory field may contribute to poor emotional regulation in fragile X syndrome and may help explain the observation that later auditory evoked responses are more disturbed in fragile X syndrome compared to earlier responses. Rodent models of fragile X syndrome are likely to prove useful for understanding the biological basis of fragile X syndrome and for testing candidate therapies.
48
Engineer CT, Centanni TM, Im KW, Borland MS, Moreno NA, Carraway RS, Wilson LG, Kilgard MP. Degraded auditory processing in a rat model of autism limits the speech representation in non-primary auditory cortex. Dev Neurobiol 2014; 74:972-86. [PMID: 24639033 DOI: 10.1002/dneu.22175] [Citation(s) in RCA: 35] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2013] [Revised: 02/17/2014] [Accepted: 03/07/2014] [Indexed: 01/22/2023]
Abstract
Although individuals with autism are known to have significant communication problems, the cellular mechanisms responsible for impaired communication are poorly understood. Valproic acid (VPA) is an anticonvulsant that is a known risk factor for autism in prenatally exposed children. Prenatal VPA exposure in rats causes numerous neural and behavioral abnormalities that mimic autism. We predicted that VPA exposure may lead to auditory processing impairments which may contribute to the deficits in communication observed in individuals with autism. In this study, we document auditory cortex responses in rats prenatally exposed to VPA. We recorded local field potentials and multiunit responses to speech sounds in primary auditory cortex, anterior auditory field, ventral auditory field, and posterior auditory field in VPA-exposed and control rats. Prenatal VPA exposure severely degrades the precise spatiotemporal patterns evoked by speech sounds in secondary, but not primary, auditory cortex. This result parallels findings in humans and suggests that secondary auditory fields may be more sensitive to environmental disturbances and may provide insight into possible mechanisms related to auditory deficits in individuals with autism.
Affiliation(s)
- C T Engineer, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, Texas, 75080
49
Centanni TM, Sloan AM, Reed AC, Engineer CT, Rennaker RL, Kilgard MP. Detection and identification of speech sounds using cortical activity patterns. Neuroscience 2014; 258:292-306. [PMID: 24286757 PMCID: PMC3898816 DOI: 10.1016/j.neuroscience.2013.11.030] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2013] [Revised: 11/14/2013] [Accepted: 11/15/2013] [Indexed: 10/26/2022]
Abstract
We have developed a classifier capable of locating and identifying speech sounds using activity from rat auditory cortex with an accuracy equivalent to behavioral performance and without the need to specify the onset time of the speech sounds. This classifier can identify speech sounds from a large speech set within 40 ms of stimulus presentation. To compare the temporal limits of the classifier to behavior, we developed a novel task that requires rats to identify individual consonant sounds from a stream of distracter consonants. The classifier successfully predicted the ability of rats to accurately identify speech sounds for syllable presentation rates up to 10 syllables per second (up to 17.9 ± 1.5 bits/s), which is comparable to human performance. Our results demonstrate that the spatiotemporal patterns generated in primary auditory cortex can be used to quickly and accurately identify consonant sounds from a continuous speech stream without prior knowledge of the stimulus onset times. Improved understanding of the neural mechanisms that support robust speech processing in difficult listening conditions could improve the identification and treatment of a variety of speech-processing disorders.
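The abstract above describes a classifier that identifies consonants from a continuous stream of cortical activity without knowing stimulus onset times. A minimal sketch of that idea is a sliding-window nearest-template decoder over spatiotemporal activity patterns (all names, sizes, and the simulated Poisson "responses" here are hypothetical illustrations, not the study's actual classifier or data):

```python
import numpy as np

rng = np.random.default_rng(0)

N_SITES, N_BINS = 8, 20          # recording sites x analysis time bins
SOUNDS = ["dad", "sad", "tad"]   # hypothetical consonant stimuli

# Hypothetical mean evoked patterns: one spatiotemporal template per sound.
templates = {s: rng.poisson(2.0, size=(N_SITES, N_BINS)).astype(float)
             for s in SOUNDS}

def classify_stream(stream, templates):
    """Slide a window across a continuous response stream and label each
    position with the Euclidean-nearest template, with no knowledge of
    stimulus onset times."""
    n_bins = next(iter(templates.values())).shape[1]
    labels = []
    for t in range(stream.shape[1] - n_bins + 1):
        window = stream[:, t:t + n_bins]
        best = min(templates, key=lambda s: np.linalg.norm(window - templates[s]))
        labels.append(best)
    return labels

# Embed a noisy copy of the "sad" template at offset 5 in a longer stream
# of background activity, then decode every window position.
stream = rng.poisson(2.0, size=(N_SITES, 40)).astype(float)
stream[:, 5:5 + N_BINS] = templates["sad"] + rng.normal(0, 0.3, (N_SITES, N_BINS))
labels = classify_stream(stream, templates)
print(labels[5])  # label for the window aligned with the embedded sound
```

Because the decoder scores every window position, detection and identification happen jointly, which is the property the abstract emphasizes; the real study additionally matched classifier timing constraints to behavioral syllable rates.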
Affiliation(s)
- A M Sloan, University of Texas at Dallas, United States
- A C Reed, University of Texas at Dallas, United States
- M P Kilgard, University of Texas at Dallas, United States
50
Abstract
The encoding of sensory information by populations of cortical neurons forms the basis for perception but remains poorly understood. To understand the constraints of cortical population coding we analyzed neural responses to natural sounds recorded in auditory cortex of primates (Macaca mulatta). We estimated stimulus information while varying the composition and size of the considered population. Consistent with previous reports, we found that when choosing subpopulations randomly from the recorded ensemble, the average population information increases steadily with population size. This scaling was explained by a model assuming that each neuron carried equal amounts of information, and that any overlap between the information carried by each neuron arises purely from random sampling within the stimulus space. However, when studying subpopulations selected to optimize information for each given population size, the scaling of information was strikingly different: a small fraction of temporally precise cells carried the vast majority of information. This scaling could be explained by an extended model, assuming that the amount of information carried by individual neurons was highly nonuniform, with few neurons carrying large amounts of information. Importantly, these optimal populations can be determined by a single biophysical marker, the neuron's encoding time scale, allowing their detection and readout within biologically realistic circuits. These results show that extrapolations of population information based on random ensembles may overestimate the population size required for stimulus encoding, and that sensory cortical circuits may process information using small but highly informative ensembles.
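The contrast the abstract draws, random subpopulations scaling gradually versus optimized subpopulations dominated by a few informative neurons, can be illustrated with a toy simulation. Everything below is a hypothetical sketch: decoding accuracy of a nearest-class-mean readout stands in for the study's information estimates, and the heavy-tailed "gain" distribution is an assumed way of making a few neurons far more informative than the rest:

```python
import numpy as np

rng = np.random.default_rng(1)
N_NEURONS, N_STIM, N_TRIALS = 30, 4, 200

# Hypothetical heterogeneous population: heavy-tailed gains make a few
# neurons strongly stimulus-driven while most are nearly uninformative.
gains = rng.pareto(2.0, N_NEURONS) + 0.05
tuning = rng.normal(0, 1, (N_NEURONS, N_STIM))    # per-neuron stimulus means
stims = rng.integers(0, N_STIM, N_TRIALS)
resp = gains[:, None] * tuning[:, stims] + rng.normal(0, 1, (N_NEURONS, N_TRIALS))

def accuracy(subset):
    """Nearest-class-mean decoding accuracy using only the given neurons."""
    sub = resp[subset]
    means = np.stack([sub[:, stims == s].mean(axis=1) for s in range(N_STIM)])
    d = ((sub.T[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    return (d.argmin(axis=1) == stims).mean()

# Rank neurons by single-neuron decoding accuracy, then compare how
# accuracy grows for best-first vs randomly ordered subpopulations.
single = np.array([accuracy([i]) for i in range(N_NEURONS)])
best_order = np.argsort(-single)
rand_order = rng.permutation(N_NEURONS)

for k in (1, 5, 15):
    print(k, accuracy(list(best_order[:k])), accuracy(list(rand_order[:k])))
```

Under these assumptions the best-first curve typically saturates after a handful of neurons while the random curve climbs slowly, mirroring the paper's point that random-ensemble extrapolations can overestimate the population size needed for encoding.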