1
Ealer C, Niemczak CE, Nicol T, Magohe A, Bonacina S, Zhang Z, Rieke C, Leigh S, Kobrina A, Lichtenstein J, Massawe ER, Kraus N, Buckey JC. Auditory neural processing in children living with HIV uncovers underlying central nervous system dysfunction. AIDS 2024; 38:289-298. [PMID: 37905994; PMCID: PMC10841987; DOI: 10.1097/qad.0000000000003771]
Abstract
OBJECTIVE Central nervous system (CNS) damage from HIV infection or treatment can lead to developmental delays and poor educational outcomes in children living with HIV (CLWH). Early markers of CNS dysfunction are needed to target interventions and prevent life-long disability. The frequency following response (FFR) is an auditory electrophysiology test that can reflect the health of the CNS. In this study, we explore whether the FFR reveals auditory CNS dysfunction in CLWH. STUDY DESIGN Cross-sectional analysis of an ongoing cohort study. Data were from the child's first visit in the study. SETTING The infectious disease center in Dar es Salaam, Tanzania. METHODS We collected the FFR from 151 CLWH and 151 HIV-negative children. To evoke the FFR, three speech syllables (/da/, /ba/, /ga/) were played monaurally to the child's right ear. Response measures included neural timing (peak latencies), strength of frequency encoding (fundamental frequency and first formant amplitude), encoding consistency (inter-response consistency), and encoding precision (stimulus-to-response correlation). RESULTS CLWH showed smaller first formant amplitudes (P < 0.0001), weaker inter-response consistencies (P < 0.0001), and smaller stimulus-to-response correlations (P < 0.0001) than HIV-negative children. These findings generalized across the three speech stimuli with moderately strong effect sizes (partial η² ranged from 0.061 to 0.094). CONCLUSION The FFR shows auditory CNS dysfunction in CLWH. Neural encoding of auditory stimuli was less robust, more variable, and less accurate. As the FFR is a passive and objective test, it may offer an effective way to assess and detect CNS function in CLWH.
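For readers unfamiliar with these FFR metrics, the two encoding measures above reduce to simple correlation computations. A minimal numpy sketch, illustrative only (hypothetical signal arrays, not the authors' analysis pipeline):

```python
import numpy as np

def stimulus_to_response_correlation(stimulus, response):
    """Pearson correlation between a stimulus waveform and the averaged FFR.

    Both arrays are assumed pre-aligned and equal length -- a simplification;
    in practice the response is lagged to account for neural delay.
    """
    s = (stimulus - stimulus.mean()) / stimulus.std()
    r = (response - response.mean()) / response.std()
    return float(np.mean(s * r))

def inter_response_consistency(trials, rng=None):
    """Correlation between sub-averages of two random halves of trials.

    `trials` is an (n_trials, n_samples) array; a higher value means the
    response is encoded more consistently from trial to trial.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    idx = rng.permutation(len(trials))
    half = len(trials) // 2
    a = trials[idx[:half]].mean(axis=0)  # sub-average of first half
    b = trials[idx[half:]].mean(axis=0)  # sub-average of second half
    return float(np.corrcoef(a, b)[0, 1])
```

Inter-response consistency is often averaged over many random splits; a single split is shown here for brevity.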
Affiliation(s)
- Christin Ealer
  - Space Medicine Innovations Laboratory, Geisel School of Medicine at Dartmouth, Hanover, New Hampshire
- Christopher E. Niemczak
  - Space Medicine Innovations Laboratory, Geisel School of Medicine at Dartmouth, Hanover, New Hampshire
  - Department of Medicine, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire
- Trent Nicol
  - Auditory Neuroscience Laboratory, Department of Communication Sciences, Northwestern University, Evanston, Illinois
- Albert Magohe
  - Muhimbili University of Health and Allied Sciences, Dar es Salaam, Tanzania
- Silvia Bonacina
  - Auditory Neuroscience Laboratory, Department of Communication Sciences, Northwestern University, Evanston, Illinois
- Ziyin Zhang
  - Space Medicine Innovations Laboratory, Geisel School of Medicine at Dartmouth, Hanover, New Hampshire
- Catherine Rieke, AuD
  - Space Medicine Innovations Laboratory, Geisel School of Medicine at Dartmouth, Hanover, New Hampshire
- Samantha Leigh
  - Space Medicine Innovations Laboratory, Geisel School of Medicine at Dartmouth, Hanover, New Hampshire
- Anastasiya Kobrina
  - Space Medicine Innovations Laboratory, Geisel School of Medicine at Dartmouth, Hanover, New Hampshire
- Jonathan Lichtenstein
  - Department of Psychiatry, Geisel School of Medicine at Dartmouth, Hanover, New Hampshire
  - The Dartmouth Institute for Health Policy and Clinical Practice, Geisel School of Medicine at Dartmouth, Hanover, New Hampshire
- Enica R. Massawe
  - Muhimbili University of Health and Allied Sciences, Dar es Salaam, Tanzania
- Nina Kraus
  - Auditory Neuroscience Laboratory, Department of Communication Sciences, Northwestern University, Evanston, Illinois
  - Departments of Neurobiology and Otolaryngology, Northwestern University, Evanston, Illinois
- Jay C. Buckey
  - Space Medicine Innovations Laboratory, Geisel School of Medicine at Dartmouth, Hanover, New Hampshire
  - Department of Medicine, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire
2
Easwar V, Peng ZE, Mak V, Mikiel-Hunter J. Differences between children and adults in the neural encoding of voice fundamental frequency in the presence of noise and reverberation. Eur J Neurosci 2023. [PMID: 37203275; DOI: 10.1111/ejn.16049]
Abstract
Environmental noise and reverberation challenge speech understanding more significantly in children than in adults, but the neural/sensory basis for this difference is poorly understood. We evaluated the impact of noise and reverberation on the neural processing of the fundamental frequency of voice (f0), an important cue for tagging or recognizing a speaker. In a group of 39 children aged 6 to 15 years and 26 adults with normal hearing, envelope following responses (EFRs) were elicited by a male-spoken /i/ in quiet, noise, reverberation, and both noise and reverberation. Because harmonics are more resolvable at the lower than at the higher vowel formants, which may affect susceptibility to noise and/or reverberation, the /i/ was modified to elicit two EFRs: one initiated by the low-frequency first formant (F1) and the other initiated by the mid-to-high-frequency second and higher formants (F2+), with predominantly resolved and unresolved harmonics, respectively. F1 EFRs were more susceptible to noise, whereas F2+ EFRs were more susceptible to reverberation. Reverberation resulted in greater attenuation of F1 EFRs in adults than in children, and greater attenuation of F2+ EFRs in older than in younger children. Reduced modulation depth caused by reverberation and noise explained changes in F2+ EFRs but was not the primary determinant for F1 EFRs. Experimental data paralleled modelled EFRs, especially for F1. Together, the data suggest that the influence of noise or reverberation on the robustness of f0 encoding depends on the resolvability of vowel harmonics, and that maturation of the processing of temporal/envelope information of voice is delayed in reverberation, particularly for low-frequency stimuli.
Affiliation(s)
- Vijayalakshmi Easwar
  - Department of Communication Sciences and Disorders, Waisman Center, University of Wisconsin-Madison, USA
  - National Acoustic Laboratories, Sydney, Australia
  - Macquarie University, Sydney, Australia
- Z Ellen Peng
  - Waisman Center, University of Wisconsin-Madison, USA
  - Boys Town National Research Hospital, Omaha, USA
- Veronika Mak
  - Waisman Center, University of Wisconsin-Madison, USA
3
Omidvar S, Duquette-Laplante F, Bursch C, Jutras B, Koravand A. Assessing Auditory Processing in Children with Listening Difficulties: A Pilot Study. J Clin Med 2023; 12:897. [PMID: 36769544; PMCID: PMC9917704; DOI: 10.3390/jcm12030897]
Abstract
BACKGROUND Auditory processing disorder (APD) may be one of the problems experienced by children with listening difficulties (LiD). Combining auditory behavioural and electrophysiological tests could provide a better understanding of the abilities and disabilities of children with LiD. The current study aimed to quantify auditory processing abilities and function in children with LiD. METHODS Twenty children participated in this study: ten with LiD (mean age = 8.46 years; SD = 1.39) and ten typically developing (TD) (mean age = 9.45 years; SD = 1.57). All children were evaluated with auditory processing tests as well as with attention and phonemic synthesis tasks. Electrophysiological measures were also conducted with click and speech auditory brainstem responses (ABR). RESULTS Children with LiD performed significantly worse than TD children on most behavioural tasks, indicating shortcomings in functional auditory processing. Moreover, the click-ABR wave I amplitude was smaller, and the speech-ABR waves D and E latencies were longer, in the LiD children compared with the TD children. No other significant between-group differences were found in the neural measures. CONCLUSIONS Combining behavioural testing with click-ABR and speech-ABR can highlight functional and neurophysiological deficiencies in children with learning and listening issues, especially at the brainstem level.
Affiliation(s)
- Shaghayegh Omidvar
  - Audiology and Speech Pathology Program, School of Rehabilitation Sciences, Faculty of Health Sciences, University of Ottawa, Ottawa, ON K1H 8L, Canada
- Fauve Duquette-Laplante
  - Audiology and Speech Pathology Program, School of Rehabilitation Sciences, Faculty of Health Sciences, University of Ottawa, Ottawa, ON K1H 8L, Canada
  - School of Speech-Language Pathology and Audiology, Université de Montréal, Montreal, QC H3C 3J7, Canada
- Benoît Jutras
  - School of Speech-Language Pathology and Audiology, Université de Montréal, Montreal, QC H3C 3J7, Canada
  - Research Centre, CHU Sainte-Justine, Montreal, QC H3T 1C5, Canada
- Amineh Koravand
  - Audiology and Speech Pathology Program, School of Rehabilitation Sciences, Faculty of Health Sciences, University of Ottawa, Ottawa, ON K1H 8L, Canada
4
Ananthakrishnan S, McElree C, Martin L. Physiological and perceptual auditory consequences of hunting-related recreational firearm noise exposure in young adults with normal hearing sensitivity. Noise Health 2023; 25:8-35. [PMID: 37006114; DOI: 10.4103/nah.nah_53_22]
Abstract
PURPOSE The objective of the current study was to describe outcomes on physiological and perceptual measures of auditory function in human listeners with and without a history of recreational firearm noise exposure related to hunting. DESIGN This study assessed the effects of hunting-related recreational firearm noise exposure on audiometric thresholds, otoacoustic emissions (OAEs), brainstem neural representation of the fundamental frequency (F0) in frequency following responses (FFRs), tonal middle-ear muscle reflex (MEMR) thresholds, and behavioral tests of auditory processing in 20 young adults with normal hearing sensitivity. RESULTS Performance on both physiological (FFR, MEMR) and perceptual (behavioral auditory processing) measures of auditory function was largely similar across participants, regardless of hunting-related recreational noise exposure. On both behavioral and neural measures with multiple listening conditions, performance degraded as the difficulty of the listening condition increased for both nonhunter and hunter participants. A right-ear advantage was observed in tests of dichotic listening for both groups. CONCLUSION The null results in the current study could reflect an absence of cochlear synaptopathy in the participating cohort, variability related to participant characteristics and/or test protocols, or an insensitivity of the selected physiological and behavioral auditory measures to noise-induced synaptopathy.
5
Easwar V, Purcell D, Wright T. Predicting Hearing Aid Benefit Using Speech-Evoked Envelope Following Responses in Children With Hearing Loss. Trends Hear 2023; 27:23312165231151468. [PMID: 36946195; PMCID: PMC10034298; DOI: 10.1177/23312165231151468]
Abstract
Electroencephalography could serve as an objective tool to evaluate hearing aid benefit in infants who are developmentally unable to participate in hearing tests. We investigated whether speech-evoked envelope following responses (EFRs), a type of electroencephalography-based measure, could predict improved audibility with the use of a hearing aid in children with mild-to-severe permanent, mainly sensorineural, hearing loss. In 18 children, EFRs were elicited by six male-spoken band-limited phonemic stimuli (the first formants of /u/ and /i/, the second and higher formants of /u/ and /i/, and the fricatives /s/ and /∫/) presented together as /su∫i/. EFRs were recorded between the vertex and nape while /su∫i/ was presented at 55, 65, and 75 dB SPL, using insert earphones in unaided conditions and individually fit hearing aids in aided conditions. EFR amplitude and detectability improved with the use of a hearing aid, and the degree of improvement in EFR amplitude depended on the extent of change in behavioral thresholds between unaided and aided conditions. EFR detectability was primarily influenced by audibility; higher sensation level stimuli had an increased probability of detection. Overall EFR sensitivity in predicting audibility was significantly higher in aided (82.1%) than unaided (66.5%) conditions and did not vary as a function of stimulus or frequency. EFR specificity in ascertaining inaudibility was 90.8%. Aided improvement in EFR detectability was a significant predictor of hearing aid-facilitated change in speech discrimination accuracy. Results suggest that speech-evoked EFRs could be a useful objective tool for predicting hearing aid benefit in children with hearing loss.
Affiliation(s)
- Vijayalakshmi Easwar
  - Department of Communication Sciences and Disorders & Waisman Center, University of Wisconsin-Madison, Madison, USA
  - National Acoustic Laboratories, Macquarie University, Sydney, New South Wales, Australia
- David Purcell
  - School of Communication Sciences and Disorders, Western University, London, Canada
  - National Centre for Audiology, Western University, London, Canada
- Trevor Wright
  - Department of Communication Sciences and Disorders & Waisman Center, University of Wisconsin-Madison, Madison, USA
6
Richardson ML, Guérit F, Gransier R, Wouters J, Carlyon RP, Middlebrooks JC. Temporal Pitch Sensitivity in an Animal Model: Psychophysics and Scalp Recordings. J Assoc Res Otolaryngol 2022; 23:491-512. [PMID: 35668206; PMCID: PMC9437162; DOI: 10.1007/s10162-022-00849-z]
Abstract
Cochlear implant (CI) users show limited sensitivity to the temporal pitch conveyed by electric stimulation, contributing to impaired perception of music and of speech in noise. Neurophysiological studies in cats suggest that this limitation is due, in part, to poor transmission of the temporal fine structure (TFS) by the brainstem pathways that are activated by electrical cochlear stimulation. It remains unknown, however, how that neural limit might influence perception in the same animal model. For that reason, we developed non-invasive psychophysical and electrophysiological measures of temporal (i.e., non-spectral) pitch processing in the cat. Normal-hearing (NH) cats were presented with acoustic pulse trains consisting of band-limited harmonic complexes that simulated CI stimulation of the basal cochlea while removing cochlear place-of-excitation cues. In the psychophysical procedure, trained cats detected changes from a base pulse rate to a higher pulse rate. In the scalp-recording procedure, the cortical-evoked acoustic change complex (ACC) and brainstem-generated frequency following response (FFR) were recorded simultaneously in sedated cats for pulse trains that alternated between the base and higher rates. The range of perceptual sensitivity to temporal pitch broadly resembled that of humans but was shifted to somewhat higher rates. The ACC largely paralleled these perceptual patterns, validating its use as an objective measure of temporal pitch sensitivity. The phase-locked FFR, in contrast, showed strong brainstem encoding for all tested pulse rates. These measures demonstrate the cat's perceptual sensitivity to pitch in the absence of cochlear-place cues and may be valuable for evaluating neural mechanisms of temporal pitch perception in the feline animal model of stimulation by a CI or novel auditory prostheses.
Affiliation(s)
- Matthew L Richardson
  - Department of Otolaryngology, Center for Hearing Research, University of California at Irvine, Irvine, CA, USA
- François Guérit
  - Cambridge Hearing Group, MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
- Robin Gransier
  - Department of Neurosciences, ExpORL, KU Leuven, Leuven, Belgium
- Jan Wouters
  - Department of Neurosciences, ExpORL, KU Leuven, Leuven, Belgium
- Robert P Carlyon
  - Cambridge Hearing Group, MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
- John C Middlebrooks
  - Department of Otolaryngology, Center for Hearing Research, University of California at Irvine, Irvine, CA, USA
  - Departments of Neurobiology & Behavior, Biomedical Engineering, and Cognitive Sciences, University of California at Irvine, Irvine, CA, USA
7
Easwar V, Chung L. The influence of phoneme contexts on adaptation in vowel-evoked envelope following responses. Eur J Neurosci 2022; 56:4572-4582. [PMID: 35804282; PMCID: PMC9543495; DOI: 10.1111/ejn.15768]
Abstract
Repeated stimulus presentation leads to neural adaptation and a consequent amplitude reduction in vowel-evoked envelope following responses (EFRs), a response that reflects neural activity phase-locked to envelope periodicity. EFRs are elicited by vowels presented in isolation or in the context of other phonemes, such as in syllables. While context phonemes could exert some forward influence on vowel-evoked EFRs, they may reduce the degree of adaptation. Here, we evaluated whether the properties of context phonemes between consecutive vowel stimuli influence adaptation. EFRs were elicited by the low-frequency first formant (resolved harmonics) and the mid-to-high-frequency second and higher formants (unresolved harmonics) of a male-spoken /i/ while the presence, number, and predictability of context phonemes (/s/, /a/, /∫/, /u/) between vowel repetitions varied. Monitored over four iterations of /i/, adaptation was evident only for EFRs elicited by the unresolved harmonics. These EFRs decreased in amplitude by ~16-20 nV (10-17%) after the first presentation of /i/ and remained stable thereafter. EFR adaptation was reduced by the presence of a context phoneme, but the reduction did not change with the number or predictability of the context phonemes. The presence of a context phoneme, however, attenuated EFRs by a degree similar to that caused by adaptation (~21-23 nV). This trade-off between the short- and long-term influences of context phonemes suggests that the benefit of interleaving EFR-eliciting vowels with other context phonemes depends on whether the use of consonant-vowel syllables is critical to improving the validity of EFR applications.
Affiliation(s)
- Vijayalakshmi Easwar
  - Department of Communication Sciences & Disorders, University of Wisconsin-Madison, Madison, USA
  - Waisman Center, University of Wisconsin-Madison, Madison, USA
- Lauren Chung
  - Department of Communication Sciences & Disorders, University of Wisconsin-Madison, Madison, USA
  - Waisman Center, University of Wisconsin-Madison, Madison, USA
8
Abstract
OBJECTIVES To evaluate sensation level (SL)-dependent characteristics of envelope following responses (EFRs) elicited by band-limited speech dominant in low, mid, and high frequencies. DESIGN In 21 young normal-hearing adults, EFRs were elicited by 8 male-spoken speech stimuli: the first formant, and the second and higher formants, of /u/, /a/, and /i/, and the modulated fricatives /∫/ and /s/. Stimulus SL was computed from behaviorally measured thresholds. RESULTS At 30 dB SL, the amplitude and phase coherence of fricative-elicited EFRs were ~1.5 to 2 times higher than those of all vowel-elicited EFRs, whereas fewer and smaller differences were found among vowel-elicited EFRs. For all stimuli, EFR amplitude and phase coherence increased by roughly 50% for every 10 dB increase in SL between ~0 and 50 dB. CONCLUSIONS Stimulus and frequency dependency in EFRs exists despite accounting for differences in the audibility of speech sounds. The growth rate of EFR characteristics with SL is independent of stimulus and its frequency.
9
Abstract
OBJECTIVES The present study aimed to (1) evaluate the accuracy of envelope following responses (EFRs) in predicting speech audibility as a function of the statistical indicator used for objective response detection, stimulus phoneme, frequency, and level, and (2) quantify the minimum sensation level (SL; stimulus level above behavioral threshold) needed for detecting EFRs. DESIGN In 21 participants with normal hearing, EFRs were elicited by 8 band-limited phonemes in the male-spoken token /susa∫i/ (2.05 sec) presented between 20 and 65 dB SPL in 15 dB increments. Vowels in /susa∫i/ were modified to elicit two EFRs simultaneously by selectively lowering the fundamental frequency (f0) in the first formant (F1) region. The modified vowels elicited one EFR from the low-frequency F1 and another from the mid-frequency second and higher formants (F2+). Fricatives were amplitude-modulated at the average f0. EFRs were extracted from single-channel EEG recorded between the vertex (Cz) and the nape of the neck while /susa∫i/ was presented monaurally for 450 sweeps. The performance of three statistical indicators, the F-test, Hotelling's T², and phase coherence, was compared against behaviorally determined audibility (estimated SL; SL ≥0 dB = audible) using the area under the receiver operating characteristics (AUROC) curve, sensitivity (the proportion of audible speech with a detectable EFR [true positive rate]), and specificity (the proportion of inaudible speech with an undetectable EFR [true negative rate]). The influence of stimulus phoneme, frequency, and level on the accuracy of EFRs in predicting speech audibility was assessed by comparing sensitivity, specificity, positive predictive value (PPV; the proportion of detected EFRs elicited by audible stimuli), and negative predictive value (NPV; the proportion of undetected EFRs elicited by inaudible stimuli). The minimum SL needed for detection was evaluated using a linear mixed-effects model with stimulus and EFR detection p value as predictor variables. RESULTS The AUROCs of the three statistical indicators were similar; however, at a type I error rate of 5%, the sensitivities of Hotelling's T² (68.4%) and phase coherence (68.8%) were significantly higher than that of the F-test (59.5%). In contrast, the specificity of the F-test (97.3%) was significantly higher than that of Hotelling's T² (88.4%). When analyzed using Hotelling's T² as a function of stimulus, fricatives offered higher sensitivity (88.6 to 90.6%) and NPV (57.9 to 76.0%) compared with most vowel stimuli (51.9 to 71.4% and 11.6 to 51.3%, respectively). When analyzed as a function of frequency band (F1, F2+, and fricatives aggregated as low-, mid-, and high-frequencies, respectively), high-frequency stimuli offered the highest sensitivity (96.9%) and NPV (88.9%). When analyzed as a function of test level, sensitivity improved with increases in stimulus level (99.4% at 65 dB SPL). The minimum SL for EFR detection ranged between 13.4 and 21.7 dB for F1 stimuli, 7.8 to 12.2 dB for F2+ stimuli, and 2.3 to 3.9 dB for fricative stimuli. CONCLUSIONS EFR-based inference of speech audibility requires consideration of the statistical indicator used, the phoneme, the stimulus frequency, and the stimulus level.
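The sensitivity, specificity, PPV, and NPV reported here reduce to counts over (EFR detected, stimulus audible) outcome pairs. A minimal sketch of those standard definitions (illustrative only; the boolean inputs are hypothetical, not the study's data):

```python
import numpy as np

def detection_accuracy(detected, audible):
    """Sensitivity, specificity, PPV, and NPV from boolean arrays.

    detected[i] -- an EFR was detected for stimulus presentation i
    audible[i]  -- presentation i was behaviorally audible (SL >= 0 dB)
    """
    detected = np.asarray(detected, dtype=bool)
    audible = np.asarray(audible, dtype=bool)
    tp = np.sum(detected & audible)    # audible, EFR detected
    tn = np.sum(~detected & ~audible)  # inaudible, no EFR
    fp = np.sum(detected & ~audible)   # inaudible, but EFR "detected"
    fn = np.sum(~detected & audible)   # audible, but EFR missed
    return {
        "sensitivity": tp / (tp + fn),  # detectable EFRs among audible stimuli
        "specificity": tn / (tn + fp),  # undetectable EFRs among inaudible stimuli
        "ppv": tp / (tp + fp),          # detected EFRs elicited by audible stimuli
        "npv": tn / (tn + fn),          # undetected EFRs elicited by inaudible stimuli
    }
```

With one example of each outcome, e.g. `detection_accuracy([True, True, False, False], [True, False, True, False])`, all four metrics are 0.5.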
Affiliation(s)
- Vijayalakshmi Easwar
  - Department of Communication Sciences and Disorders & Waisman Center, University of Wisconsin-Madison, USA
  - National Centre for Audiology, Western University, Canada
- Jen Birstler
  - Department of Biostatistics and Medical Informatics, University of Wisconsin-Madison, USA
- Adrienne Harrison
  - Health and Rehabilitation Sciences, Western University, Canada
  - School of Communication Sciences and Disorders, Western University, Canada
- Susan Scollie
  - National Centre for Audiology, Western University, Canada
  - School of Communication Sciences and Disorders, Western University, Canada
- David Purcell
  - National Centre for Audiology, Western University, Canada
  - School of Communication Sciences and Disorders, Western University, Canada
10
Abstract
OBJECTIVE The medial olivocochlear (MOC) reflex provides efferent feedback from the brainstem to cochlear outer hair cells. Physiologic studies have demonstrated that the MOC reflex is involved in "unmasking" of signals-in-noise at the level of the auditory nerve; however, its functional importance in human hearing remains unclear. DESIGN This study examined relationships between pre-neural measurements of MOC reflex strength (click-evoked otoacoustic emission [CEOAE] inhibition) and neural measurements of speech-in-noise encoding (speech frequency following response; sFFR) in four conditions (Quiet, Contralateral Noise, Ipsilateral Noise, and Ipsilateral + Contralateral Noise). Three measures of CEOAE inhibition (amplitude reduction, effective attenuation, and input-output slope inhibition) were used to quantify pre-neural MOC reflex strength. Correlations between pre-neural MOC reflex strength and sFFR "unmasking" (i.e., response recovery from masking effects with activation of the MOC reflex, in the time and frequency domains) were assessed. STUDY SAMPLE 18 young adults with normal hearing. RESULTS sFFR unmasking effects were not significant, and there were no correlations between pre-neural MOC reflex strength and sFFR unmasking in the time or frequency domain. CONCLUSION Our results do not support the hypothesis that the MOC reflex is involved in the neural encoding of speech in noise, at least for features represented in the sFFR at the SNR tested.
Affiliation(s)
- S B Smith
  - Department of Communication Sciences and Disorders, University of Texas at Austin, Austin, TX, USA
- B Cone
  - Department of Speech, Language, and Hearing Sciences, University of Arizona, Tucson, AZ, USA
11
Easwar V, Scollie S, Lasarev M, Urichuk M, Aiken SJ, Purcell DW. Characteristics of Speech-Evoked Envelope Following Responses in Infancy. Trends Hear 2021; 25:23312165211004331. [PMID: 34251887; PMCID: PMC8278440; DOI: 10.1177/23312165211004331]
Abstract
Envelope following responses (EFRs) may be a useful tool for evaluating the audibility of speech sounds in infants. The present study aimed to evaluate the characteristics of speech-evoked EFRs in infants with normal hearing, relative to adults, and identify age-dependent changes in EFR characteristics during infancy. In 42 infants and 21 young adults, EFRs were elicited by the first (F1) and the second and higher formants (F2+) of the vowels /u/, /a/, and /i/, dominant in low and mid frequencies, respectively, and by amplitude-modulated fricatives /s/ and /∫/, dominant in high frequencies. In a subset of 20 infants, the in-ear stimulus level was adjusted to match that of an average adult ear (65 dB sound pressure level [SPL]). We found that (a) adult-infant differences in EFR amplitude, signal-to-noise ratio, and intertrial phase coherence were larger and spread across the frequency range when in-ear stimulus level was adjusted in infants, (b) adult-infant differences in EFR characteristics were the largest for low-frequency stimuli, (c) infants demonstrated adult-like phase coherence when they received a higher (i.e., unadjusted) stimulus level, and (d) EFR phase coherence and signal-to-noise ratio changed with age in the first year of life for a few F2+ vowel stimuli in a level-specific manner. Together, our findings reveal that development-related changes in EFRs during infancy likely vary by stimulus frequency, with low-frequency stimuli demonstrating the largest adult-infant differences. Consistent with previous research, our findings emphasize the significant role of stimulus level calibration methods while investigating developmental trends in EFRs.
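Inter-trial phase coherence of the kind reported here is commonly computed as the magnitude of the mean unit phase vector across trials at the response frequency, with values near 1 indicating tightly clustered (consistent) phase-locking. A sketch under that common definition, not necessarily the authors' exact implementation:

```python
import numpy as np

def intertrial_phase_coherence(trials, fs, freq):
    """ITPC at `freq` Hz for an (n_trials, n_samples) array sampled at `fs` Hz.

    FFT each trial, take the complex value at the nearest frequency bin,
    normalize to unit magnitude (phase only), and measure how tightly the
    phases cluster across trials (0 = random phases, 1 = identical phases).
    """
    n = trials.shape[1]
    bin_idx = int(round(freq * n / fs))          # nearest FFT bin to `freq`
    spectra = np.fft.rfft(trials, axis=1)[:, bin_idx]
    unit_phasors = spectra / np.abs(spectra)     # discard amplitude, keep phase
    return float(np.abs(unit_phasors.mean()))
```

Identical trials give an ITPC near 1; trials with random phases give a value near 1/sqrt(n_trials).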
Affiliation(s)
- Vijayalakshmi Easwar
  - Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, United States
  - Waisman Center, University of Wisconsin-Madison, Madison, United States
  - National Centre for Audiology, Western University, London, Ontario, Canada
- Susan Scollie
  - National Centre for Audiology, Western University, London, Ontario, Canada
  - School of Communication Sciences and Disorders, Western University, London, Ontario, Canada
- Michael Lasarev
  - Department of Biostatistics and Medical Informatics, University of Wisconsin-Madison, United States
- Matthew Urichuk
  - School of Communication Sciences and Disorders, Western University, London, Ontario, Canada
  - Health and Rehabilitation Sciences, Western University, London, Ontario, Canada
- Steven J Aiken
  - School of Communication Sciences and Disorders, Dalhousie University, Halifax, Nova Scotia, Canada
- David W Purcell
  - National Centre for Audiology, Western University, London, Ontario, Canada
  - School of Communication Sciences and Disorders, Western University, London, Ontario, Canada
12
Parida S, Heinz MG. Noninvasive Measures of Distorted Tonotopic Speech Coding Following Noise-Induced Hearing Loss. J Assoc Res Otolaryngol 2020; 22:51-66. [PMID: 33188506; DOI: 10.1007/s10162-020-00755-2]
Abstract
Animal models of noise-induced hearing loss (NIHL) show a dramatic mismatch between cochlear characteristic frequency (CF, based on place of innervation) and the dominant response frequency in single auditory-nerve-fiber responses to broadband sounds (i.e., distorted tonotopy, DT). This noise trauma effect is associated with decreased frequency-tuning-curve (FTC) tip-to-tail ratio, which results from decreased tip sensitivity and enhanced tail sensitivity. Notably, DT is more severe for noise trauma than for metabolic (e.g., age-related) losses of comparable degree, suggesting that individual differences in DT may contribute to speech intelligibility differences in patients with similar audiograms. Although DT has implications for many neural-coding theories for real-world sounds, it has primarily been explored in single-neuron studies that are not viable with humans. Thus, there are no noninvasive measures to detect DT. Here, frequency following responses (FFRs) to a conversational speech sentence were recorded in anesthetized male chinchillas with either normal hearing or NIHL. Tonotopic sources of FFR envelope and temporal fine structure (TFS) were evaluated in normal-hearing chinchillas. Results suggest that FFR envelope primarily reflects activity from high-frequency neurons, whereas FFR-TFS receives broad tonotopic contributions. Representation of low- and high-frequency speech power in FFRs was also assessed. FFRs in hearing-impaired animals were dominated by low-frequency stimulus power, consistent with oversensitivity of high-frequency neurons to low-frequency power. These results suggest that DT can be diagnosed noninvasively. A normalized DT metric computed from speech FFRs provides a potential diagnostic tool to test for DT in humans. A sensitive noninvasive DT metric could be used to evaluate perceptual consequences of DT and to optimize hearing-aid amplification strategies to improve tonotopic coding for hearing-impaired listeners.
Affiliation(s)
- Satyabrata Parida
- Weldon School of Biomedical Engineering, Purdue University, 206 South Martin Jischke Drive, West Lafayette, IN, 47907, USA
- Michael G Heinz
- Weldon School of Biomedical Engineering, Purdue University, 206 South Martin Jischke Drive, West Lafayette, IN, 47907, USA
- Department of Speech, Language, and Hearing Sciences, Purdue University, 715 Clinic Drive, West Lafayette, IN, 47907, USA
13
Abstract
In this study, we sought to evaluate the efficiencies of multiple machine learning algorithms in detecting neonates' Frequency Following Responses (FFRs). We recorded continuous brainwaves from 43 American neonates in response to a pre-recorded monosyllable /i/ with a rising frequency contour. Recordings were classified into response and no-response categories. Six response features were extracted from each recording and served as predictors in FFR identification. Twenty-three supervised machine learning algorithms were evaluated, yielding mean efficiency values of 86.0%, 94.4%, 97.2%, and 97.5% when 1, 10, 100, and 1000 random iterations were implemented, respectively. These high efficiency values obtained from the neonatal FFRs demonstrate that machine learning algorithms can help assess pitch processing in neonates and can be applied to auditory screening and intervention services for neonates at risk for disorders associated with decreased pitch processing.
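The classification setup described above, features extracted per recording and fed to a supervised classifier, can be sketched with a toy example. The abstract does not list the six features or the 23 algorithms, so the nearest-centroid classifier and all variable names below are purely hypothetical stand-ins:

```python
import numpy as np

def train_centroids(X, y):
    # X: (n_recordings, n_features) matrix of response features
    # (e.g., spectral amplitude, phase coherence); y: 1 = FFR present,
    # 0 = no response. Store one mean feature vector per class.
    return {int(label): X[y == label].mean(axis=0) for label in np.unique(y)}

def classify(centroids, x):
    # Label a new recording by its nearest class centroid (Euclidean).
    return min(centroids, key=lambda lab: np.linalg.norm(x - centroids[lab]))
```

In practice each recording's feature vector would come from the EEG response measures, and any of the supervised algorithms evaluated in the study could replace this simple rule.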
Affiliation(s)
- Breanna N Hart
- Communication Sciences and Disorders, Ohio University, Athens, United States
- Fuh-Cherng Jeng
- Communication Sciences and Disorders, Ohio University, Athens, United States
- Department of Audiology and Speech Pathology, Asia University, Taichung
14
D'Onofrio KL, Caldwell M, Limb C, Smith S, Kessler DM, Gifford RH. Musical Emotion Perception in Bimodal Patients: Relative Weighting of Musical Mode and Tempo Cues. Front Neurosci 2020; 14:114. [PMID: 32174809 PMCID: PMC7054459 DOI: 10.3389/fnins.2020.00114] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2019] [Accepted: 01/29/2020] [Indexed: 11/13/2022] Open
Abstract
Several cues are used to convey musical emotion, the two primary being musical mode and musical tempo. Specifically, major and minor modes tend to be associated with positive and negative valence, respectively, and songs at fast tempi have been associated with more positive valence compared to songs at slow tempi (Balkwill and Thompson, 1999; Webster and Weir, 2005). In Experiment I, we examined the relative weighting of musical tempo and musical mode among adult cochlear implant (CI) users combining electric and contralateral acoustic stimulation, or "bimodal" hearing. Our primary hypothesis was that bimodal listeners would utilize both tempo and mode cues in their musical emotion judgments in a manner similar to normal-hearing listeners. Our secondary hypothesis was that low-frequency (LF) spectral resolution in the non-implanted ear, as quantified via psychophysical tuning curves (PTCs) at 262 and 440 Hz, would be significantly correlated with degree of bimodal benefit for musical emotion perception. In Experiment II, we investigated across-channel spectral resolution using a spectral modulation detection (SMD) task and neural representation of temporal fine structure via the frequency following response (FFR) for a 170-ms /da/ stimulus. Results indicate that CI-alone performance was driven almost exclusively by tempo cues, whereas bimodal listening demonstrated use of both tempo and mode. Additionally, bimodal benefit for musical emotion perception may be correlated with spectral resolution in the non-implanted ear via SMD, as well as neural representation of F0 amplitude via FFR - though further study with a larger sample size is warranted. Thus, contralateral acoustic hearing can offer significant benefit for musical emotion perception, and the degree of benefit may be dependent upon spectral resolution of the non-implanted ear.
Affiliation(s)
- Kristen L D'Onofrio
- Cochlear Implant Research Laboratory, Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, United States
- Charles Limb
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, San Francisco, CA, United States
- Spencer Smith
- Department of Communication Sciences and Disorders, The University of Texas at Austin, Austin, TX, United States
- David M Kessler
- Cochlear Implant Research Laboratory, Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, United States
- René H Gifford
- Cochlear Implant Research Laboratory, Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, United States
15
Hao W, Wang Q, Li L, Qiao Y, Gao Z, Ni D, Shang Y. Effects of Phase-Locking Deficits on Speech Recognition in Older Adults With Presbycusis. Front Aging Neurosci 2018; 10:397. [PMID: 30574084 PMCID: PMC6291518 DOI: 10.3389/fnagi.2018.00397] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2018] [Accepted: 11/19/2018] [Indexed: 12/05/2022] Open
Abstract
Objective: People with presbycusis (PC) often report difficulties in speech recognition, especially under noisy listening conditions. Investigating the PC-related changes in central representations of envelope signals and temporal fine structure (TFS) signals of speech sounds is critical for understanding the mechanism underlying the PC-related deficit in speech recognition. Frequency-following responses (FFRs) to speech stimulation can be used to examine the subcortical encoding of both envelope and TFS speech signals. This study compared FFRs to speech signals between listeners with PC and those with clinically normal hearing (NH) under either quiet or noise-masking conditions. Methods: FFRs to a 170-ms speech syllable /da/ were recorded under either a quiet or noise-masking (with a signal-to-noise ratio (SNR) of 8 dB) condition in 14 older adults with PC and 13 age-matched adults with NH. The envelope (FFRENV) and TFS (FFRTFS) components of the FFRs were extracted by adding and subtracting the responses to alternating stimulus polarities, respectively. Speech recognition in noise was evaluated in each participant. Results: In the quiet condition, compared with the NH group, the PC group exhibited smaller F0 and H3 amplitudes and decreased stimulus-response (S-R) correlation for FFRENV but not for FFRTFS. Both the H2 and H3 amplitudes and the S-R correlation of FFRENV significantly decreased in the noise condition compared with the quiet condition in the NH group but not in the PC group. Moreover, the degree of hearing loss was correlated with noise-induced changes in FFRTFS morphology. Furthermore, the speech-in-noise (SIN) threshold was negatively correlated with the noise-induced change in H2 (for FFRENV) and with the S-R correlation for FFRENV in the quiet condition. Conclusion: Audibility affects the subcortical encoding of both envelope and TFS in PC patients. The impaired ability to adjust the balance between the envelope and TFS in the noise condition may be part of the mechanism underlying PC-related deficits in speech recognition in noise. FFRs can predict SIN perception performance.
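The polarity add/subtract decomposition used in this study (and in several of the other studies listed here) can be sketched as follows. This is a minimal illustration of the standard technique, not the authors' processing pipeline; function and variable names are assumed:

```python
import numpy as np

def split_ffr(resp_pos, resp_neg):
    """Separate FFR components by stimulus polarity.

    resp_pos, resp_neg: averaged responses (same length, same sampling)
    to the two stimulus polarities. Adding the polarities cancels
    components that invert with the stimulus (temporal fine structure)
    and retains the envelope-following component; subtracting does the
    reverse.
    """
    resp_pos = np.asarray(resp_pos, dtype=float)
    resp_neg = np.asarray(resp_neg, dtype=float)
    ffr_env = (resp_pos + resp_neg) / 2.0  # envelope component (FFR_ENV)
    ffr_tfs = (resp_pos - resp_neg) / 2.0  # fine-structure component (FFR_TFS)
    return ffr_env, ffr_tfs
```

The division by two keeps the recovered components on the same amplitude scale as the single-polarity averages; some labs report the raw sum and difference instead.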
Affiliation(s)
- Wenyang Hao
- Department of Otorhinolaryngology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Qian Wang
- Epilepsy Center, Department of Clinical Psychology, Sanbo Brain Hospital, Capital Medical University, Beijing, China
- Liang Li
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing, China
- Yufei Qiao
- Department of Otorhinolaryngology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Zhiqiang Gao
- Department of Otorhinolaryngology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Daofeng Ni
- Department of Otorhinolaryngology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yingying Shang
- Department of Otorhinolaryngology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
16
Abstract
Older adults often exhibit speech perception deficits in difficult listening environments. At present, hearing aids or cochlear implants are the main options for therapeutic remediation; however, they only address audibility and do not compensate for central processing changes that may accompany aging and hearing loss or declines in cognitive function. It is unknown whether long-term hearing aid or cochlear implant use can restore changes in central encoding of temporal and spectral components of speech or improve cognitive function. Therefore, consideration should be given to auditory/cognitive training that targets auditory processing and cognitive declines, taking advantage of the plastic nature of the central auditory system. The demonstration of treatment efficacy is an important component of any training strategy. Electrophysiologic measures can be used to assess training-related benefits. This article will review the evidence for neuroplasticity in the auditory system and the use of evoked potentials to document treatment efficacy.
Affiliation(s)
- Samira Anderson
- Department of Hearing and Speech Sciences, University of Maryland; Program in Neuroscience and Cognitive Science, University of Maryland, College Park, Maryland
- Kimberly Jenkins
- Department of Hearing and Speech Sciences, University of Maryland
17
Schoof T, Rosen S. The Role of Age-Related Declines in Subcortical Auditory Processing in Speech Perception in Noise. J Assoc Res Otolaryngol 2016; 17:441-60. [PMID: 27216166 DOI: 10.1007/s10162-016-0564-x] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2015] [Accepted: 03/17/2016] [Indexed: 10/29/2022] Open
Abstract
Older adults, even those without hearing impairment, often experience increased difficulties understanding speech in the presence of background noise. This study examined the role of age-related declines in subcortical auditory processing in the perception of speech in different types of background noise. Participants included normal-hearing young (19-29 years) and older (60-72 years) adults. Normal hearing was defined as pure-tone thresholds of 25 dB HL or better at octave frequencies from 0.25 to 4 kHz in both ears and at 6 kHz in at least one ear. Speech reception thresholds (SRTs) to sentences were measured in steady-state (SS) and 10-Hz amplitude-modulated (AM) speech-shaped noise, as well as two-talker babble. In addition, click-evoked auditory brainstem responses (ABRs) and envelope following responses (EFRs) in response to the vowel /ɑ/ in quiet, SS, and AM noise were measured. Of primary interest was the relationship between the SRTs and EFRs. SRTs were significantly higher (i.e., worse) by about 1.5 dB for older adults in two-talker babble but not in AM and SS noise. In addition, the EFRs of the older adults were less robust compared to the younger participants in quiet, AM, and SS noise. Both young and older adults showed a "neural masking release," indicated by a more robust EFR at the trough compared to the peak of the AM masker. The amount of neural masking release did not differ between the two age groups. Variability in SRTs was best accounted for by audiometric thresholds (pure-tone average across 0.5-4 kHz) and not by the EFR in quiet or noise. Aging is thus associated with a degradation of the EFR, both in quiet and noise. However, these declines in subcortical neural speech encoding are not necessarily associated with impaired perception of speech in noise, as measured by the SRT, in normal-hearing older adults.
18
King A, Hopkins K, Plack CJ. Differential Group Delay of the Frequency Following Response Measured Vertically and Horizontally. J Assoc Res Otolaryngol 2016; 17:133-43. [PMID: 26920344 PMCID: PMC4791418 DOI: 10.1007/s10162-016-0556-x] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2015] [Accepted: 02/04/2016] [Indexed: 11/24/2022] Open
Abstract
The frequency following response (FFR) arises from the sustained neural activity of a population of neurons that are phase locked to periodic acoustic stimuli. Determining the source of the FFR noninvasively may be useful for understanding the function of phase locking in the auditory pathway to the temporal envelope and fine structure of sounds. The current study compared the FFR recorded with a horizontally aligned (mastoid-to-mastoid) electrode montage and a vertically aligned (forehead-to-neck) electrode montage. Unlike previous studies, envelope and fine structure latencies were derived simultaneously from the same narrowband stimuli to minimize differences in cochlear delay. Stimuli were five amplitude-modulated tones centered at 576 Hz, each with a different modulation rate, resulting in different side-band frequencies across stimulus conditions. Changes in response phase across modulation frequency and side-band frequency (group delay) were used to determine the latency of the FFR reflecting phase locking to the envelope and temporal fine structure, respectively. For the FFR reflecting phase locking to the temporal fine structure, the horizontal montage had a shorter group delay than the vertical montage, suggesting an earlier generation source within the auditory pathway. For the FFR reflecting phase locking to the envelope, group delay was longer than that for the fine structure FFR, and no significant difference in group delay was found between montages. However, it is possible that multiple sources of FFR (including the cochlear microphonic) were recorded by each montage, complicating interpretations of the group delay.
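The phase-slope (group delay) computation this study relies on can be illustrated with a small sketch: for a source with fixed latency, response phase falls linearly with frequency, and the latency is the negative slope of phase versus angular frequency. This is a toy example under that assumption, not the authors' analysis code:

```python
import numpy as np

def group_delay(freqs_hz, phases_rad):
    """Estimate FFR group delay (latency, in seconds) from response phase.

    Fits a line to unwrapped phase vs. frequency; group delay is
    -dphi/domega = -(1 / 2*pi) * dphi/df. Assumes a single dominant
    source whose phase decreases linearly with frequency.
    """
    phases = np.unwrap(np.asarray(phases_rad, dtype=float))
    slope, _ = np.polyfit(np.asarray(freqs_hz, dtype=float), phases, 1)
    return -slope / (2.0 * np.pi)
```

As the abstract cautions, multiple simultaneous sources (or cochlear microphonic contamination) would violate the single-latency assumption and bias such an estimate.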
Affiliation(s)
- Andrew King
- School of Psychological Sciences, University of Manchester, Manchester Academic Health Science Centre, Manchester, Greater Manchester M13 9PL UK
- Kathryn Hopkins
- School of Psychological Sciences, University of Manchester, Manchester Academic Health Science Centre, Manchester, Greater Manchester M13 9PL UK
- Christopher J. Plack
- School of Psychological Sciences, University of Manchester, Manchester Academic Health Science Centre, Manchester, Greater Manchester M13 9PL UK
19
Kumar K, Bhat JS, D'Costa PE, Srivastava M, Kalaiah MK. Effect of Stimulus Polarity on Speech Evoked Auditory Brainstem Response. Audiol Res 2014; 3:e8. [PMID: 26557347 PMCID: PMC4627129 DOI: 10.4081/audiores.2013.e8] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2013] [Revised: 11/20/2013] [Accepted: 12/11/2013] [Indexed: 11/22/2022] Open
Abstract
The aim of the present study was to investigate the effect of stimulus polarity on the speech-evoked auditory brainstem response (ABR). Speech-evoked ABRs were recorded with various stimulus polarities from 17 normally hearing adults. The results show a differential effect of stimulus polarity on the components of the speech-evoked ABR. Peak latencies of the onset, sustained, and offset responses did not differ significantly across stimulus polarities. In contrast, the amplitudes of the first formant and high-frequency components were significantly reduced for alternating polarity compared with single polarity, while the amplitude of the fundamental frequency response was unaffected by stimulus polarity. Thus, the speech-evoked ABR may be recorded using a single polarity rather than alternating polarities.
Affiliation(s)
- Kaushlendra Kumar
- Department of Audiology and Speech Language Pathology, Kasturba Medical College (Manipal University), Mangalore, Karnataka, India
- Jayashree S Bhat
- Department of Audiology and Speech Language Pathology, Kasturba Medical College (Manipal University), Mangalore, Karnataka, India
- Pearl Edna D'Costa
- Department of Audiology and Speech Language Pathology, Kasturba Medical College (Manipal University), Mangalore, Karnataka, India
- Manav Srivastava
- Department of Audiology and Speech Language Pathology, Kasturba Medical College (Manipal University), Mangalore, Karnataka, India
- Mohan Kumar Kalaiah
- Department of Audiology and Speech Language Pathology, Kasturba Medical College (Manipal University), Mangalore, Karnataka, India
20
Jafari Z, Malayeri S. Effects of congenital blindness on the subcortical representation of speech cues. Neuroscience 2013; 258:401-9. [PMID: 24291729 DOI: 10.1016/j.neuroscience.2013.11.027] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2013] [Revised: 10/28/2013] [Accepted: 11/14/2013] [Indexed: 11/18/2022]
Abstract
Human modalities play a vital role in the way the brain produces mental representations of the world around us. Although congenital blindness limits the understanding of the environment in some aspects, blind individuals may develop superior capabilities through long-term experience and neural plasticity. This study investigated the effects of congenital blindness on temporal and spectral neural encoding of speech at the subcortical level. The study included 26 congenitally blind individuals and 24 normal-sighted individuals, all with normal hearing. Auditory brainstem responses (ABRs) were recorded with both click stimuli and a synthetic 40-ms /da/ speech stimulus. No significant difference was observed between the two groups in wave latencies or amplitudes of the click ABR. Latencies of the speech ABR D (p=0.012) and O (p=0.014) waves were significantly shorter in blind individuals than in normal-sighted individuals. Amplitudes of the A (p<0.001) and E (p=0.001) speech ABR (sABR) waves were also significantly higher in blind subjects. Blind individuals had significantly better results for the duration (p<0.001), amplitude (p=0.015), and slope (p=0.004) of the V-A complex, the signal-to-noise ratio (p<0.001), and the amplitudes of the stimulus fundamental frequency (F0) (p=0.009), first formant (F1) (p<0.001), and higher-frequency region (HF) (p<0.001) ranges. Results indicate that congenitally blind subjects have improved hearing function in response to the /da/ syllable in both the source and filter classes of the sABR. It is possible that these subjects have enhanced neural representation of vocal cord vibrations and improved neural synchronization in temporal encoding of the onset and offset portions of speech stimuli at the brainstem level. This may result from compensatory neural reorganization in blind subjects, influenced by top-down corticofugal connections with the auditory cortex.
Affiliation(s)
- Z Jafari
- Rehabilitation Research Center (RRC), Iran University of Medical Sciences (IUMS), Tehran, Iran; Department of Basic Sciences in Rehabilitation, School of Rehabilitation Sciences, Iran University of Medical Sciences (IUMS), Tehran, Iran.
- S Malayeri
- Department of Audiology, University of Social Welfare and Rehabilitation Sciences (USWR), Tehran, Iran; NEWSHA Hearing Institute, Tehran, Iran