1
Jeddi Z, Doosti A, Hajimohammadi A, Asadollahi A. Association among Extended High-Frequency Hearing, Speech Perception in Noise, and Auditory Temporal-Spectral Processing in Adults with Normal Hearing: A Cross-Sectional Study. Indian J Otolaryngol Head Neck Surg 2025; 77:2318-2325. [PMID: 40420887 PMCID: PMC12103398 DOI: 10.1007/s12070-025-05496-3]
Abstract
Speech comprehension relies on temporal and spectral processing, which can be impaired by background noise. Traditional pure-tone audiometry at standard frequencies has limitations in predicting speech understanding under such conditions. Extended High Frequencies (EHFs) contribute to audibility and speech recognition. This study aimed to investigate the correlation between EHF hearing, auditory temporal-spectral processing, and speech comprehension in background noise, and to analyze the factors influencing speech-perception-in-noise ability in adults with normal hearing, emphasizing the importance of EHF evaluation in clinical settings. This cross-sectional study included 44 participants with normal auditory function. The participants underwent high-frequency audiometry, the Quick Speech-in-Noise (SIN) test to evaluate SIN comprehension, the Gaps-In-Noise (GIN) test to assess temporal resolution, and the Spectral-Temporally Modulated Ripple Test (SMRT) to examine spectral resolution. The study found a significant correlation between mean EHF thresholds and SNR loss for both the common (P-value = 0.019) and high-frequency (P-value = 0.033) word lists. However, no significant correlations were observed between mean EHF thresholds and mean GIN thresholds, the GIN percentage of correct answers, or SMRT results. Regression analysis revealed that EHF thresholds contribute significantly to predicting SNR loss. Our findings demonstrate a correlation between EHF thresholds and SIN performance but not with spectro-temporal resolution. However, significant associations between spectro-temporal resolution and SIN performance were observed. Combining these assessments may enhance the prediction of SIN difficulties, facilitating targeted rehabilitation strategies.
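The SNR-loss metric reported above comes from QuickSIN-style scoring, which conventionally subtracts the total number of key words repeated correctly from 25.5. The sketch below assumes the standard six-sentence, five-key-word list format; the function name and example scores are illustrative, not data from this study.

```python
# Illustrative QuickSIN-style scoring sketch (not the authors' code).
# A standard list has 6 sentences with 5 key words each, presented at SNRs
# from 25 down to 0 dB in 5-dB steps; SNR loss = 25.5 - total key words correct.

def quicksin_snr_loss(correct_per_sentence):
    """correct_per_sentence: six counts of key words repeated correctly (0-5 each)."""
    assert len(correct_per_sentence) == 6
    assert all(0 <= c <= 5 for c in correct_per_sentence)
    return 25.5 - sum(correct_per_sentence)

# A hypothetical listener missing more key words as the SNR drops:
print(quicksin_snr_loss([5, 5, 5, 4, 3, 1]))  # 2.5 dB SNR loss
```

Normal-hearing adults typically score near 0 dB SNR loss; higher values indicate greater difficulty understanding speech in noise.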
Affiliation(s)
- Zahra Jeddi
- Department of Audiology, School of Rehabilitation Sciences, Shiraz University of Medical Sciences, Shiraz, Iran
- Orthopedic & Rehabilitation Research Center, Shiraz University of Medical Sciences, Shiraz, Iran
- Afsaneh Doosti
- Department of Audiology, School of Rehabilitation Sciences, Shiraz University of Medical Sciences, Shiraz, Iran
- Orthopedic & Rehabilitation Research Center, Shiraz University of Medical Sciences, Shiraz, Iran
- Ali Hajimohammadi
- Department of Audiology, School of Rehabilitation Sciences, Shiraz University of Medical Sciences, Shiraz, Iran
- Orthopedic & Rehabilitation Research Center, Shiraz University of Medical Sciences, Shiraz, Iran
- Abdolrahim Asadollahi
- Department of Gerontology, School of Health, Shiraz University of Medical Sciences, Shiraz, Iran
2
Chou TL, Gall MD. I can't hear you: effects of noise on auditory processing in mixed-species flocks. J Exp Biol 2025; 228:jeb250033. [PMID: 40223503 DOI: 10.1242/jeb.250033]
Abstract
Animals have evolved complex auditory systems to extract acoustic information from natural environmental noise, yet they are challenged by rising levels of novel anthropogenic noise. Songbirds adjust their vocal production in response to increasing noise, but auditory processing of signals in noise remains understudied. Auditory processing characteristics, including auditory filter bandwidth, filter efficiency and critical ratios (level-independent signal-to-noise ratios at threshold), likely influence auditory and behavioral responses to noise. Here, we investigated the effects of noise on auditory processing in three songbird species (black-capped chickadees, tufted titmice and white-breasted nuthatches) that live in mixed-species flocks and rely on heterospecific communication to coordinate mobbing behaviors. We determined masked thresholds and critical ratios from 1 to 4 kHz using auditory evoked potentials. We predicted that nuthatches would have the lowest critical ratios, given that they have the narrowest filters, followed by titmice and then chickadees. We found that nuthatches had the greatest sensitivity in quiet conditions but the highest critical ratios, suggesting their auditory sensitivity is highly susceptible to noise. Titmice had the lowest critical ratios, suggesting relatively minor impacts of noise on their auditory processing. This is not consistent with predictions based on auditory filter bandwidth, but is consistent with both recent behavioral findings and predictions made by auditory filter efficiency measures. Detrimental effects of noise were most prevalent in the 2-4 kHz range, the frequencies produced in these species' vocalizations. Our results using the critical ratio as a measure of processing in noise suggest that low levels of anthropogenic noise may influence these three species differently.
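As a concrete illustration of the critical-ratio measure described above: it is the masked threshold of a tone minus the spectrum level of the masking noise, which makes it independent of overall noise level for a flat masker. The sketch below uses made-up numbers, not data from this study.

```python
import math

def noise_spectrum_level(overall_level_db, bandwidth_hz):
    """Spectrum level (dB/Hz) of a flat-spectrum noise: overall level minus 10*log10(bandwidth)."""
    return overall_level_db - 10 * math.log10(bandwidth_hz)

def critical_ratio(masked_threshold_db, spectrum_level_db):
    """Critical ratio (dB): tone threshold in noise relative to the noise spectrum level."""
    return masked_threshold_db - spectrum_level_db

# Illustrative example: 70 dB SPL noise spanning 10 kHz -> 30 dB/Hz spectrum level;
# a tone detected at 55 dB SPL in that noise yields a 25-dB critical ratio.
n0 = noise_spectrum_level(70.0, 10_000.0)
print(critical_ratio(55.0, n0))  # 25.0
```

A higher critical ratio means a tone must stand further above the noise floor to be detected, which is why the nuthatches' high critical ratios indicate susceptibility to noise.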
Affiliation(s)
- Trina L Chou
- Neuroscience and Behavior Program, Vassar College, Poughkeepsie, NY 12604, USA
- Megan D Gall
- Neuroscience and Behavior Program, Vassar College, Poughkeepsie, NY 12604, USA
- Biology Department, Vassar College, Poughkeepsie, NY 12604, USA
3
Kang S, Woo J, Lee KM, Seol HY, Hong SH, Moon IJ. Feasibility of an Objective Approach Using Acoustic Change Complex for Evaluating Spectral Resolution in Individuals with Normal Hearing and Hearing Loss. J Integr Neurosci 2025; 24:25911. [PMID: 40152568 DOI: 10.31083/jin25911]
Abstract
BACKGROUND Identifying the temporal and spectral information in sound is important for understanding speech; indeed, a person who has good spectral resolution usually shows good speech recognition performance. The spectral ripple discrimination (SRD) test is often used to behaviorally determine spectral resolution capacity. However, although the SRD test is useful, it is difficult to apply to populations who cannot execute the behavioral task, such as young children and people with disabilities. In this study, an objective approach using spectral ripple (SR) stimuli to evoke the acoustic change complex (ACC) response was investigated to determine whether it could objectively evaluate the spectral resolution ability of subjects with normal hearing (NH) and those with hearing loss (HL). METHODS Ten subjects with NH and eight with HL were enrolled in this study. All subjects completed the behavioral SRD test and the objective SR-ACC test. Additionally, the HL subjects completed speech perception performance tests while wearing hearing aids. RESULTS In the SRD test, the average thresholds were 6.48 and 1.52 ripples per octave (RPO) for the NH and HL groups, respectively, while in the SR-ACC test, they were 4.90 and 1.35 RPO, respectively. There was a significant difference in the average thresholds between the two groups for both the SRD (p < 0.001) and the SR-ACC (p < 0.001) tests. A significant positive correlation was observed between the SRD and SR-ACC tests (ρ = 0.829, p < 0.001). In the HL group, there was a statistically significant relationship between speech recognition performance in noisy conditions and the SR-ACC threshold (ρ = 0.911, p < 0.001, for the sentence score of the Korean Speech Audiometry (KSA)). CONCLUSIONS The results support the feasibility of the SR-ACC test for objectively evaluating auditory spectral resolution in individuals with HL. This test has potential for use in individuals with HL who are unable to complete the behavioral task associated with the SRD test; therefore, it is proposed as a more inclusive alternative to the SRD test.
Affiliation(s)
- Soojin Kang
- Center for Digital Humanities and Computational Social Sciences, Korea Advanced Institute of Science and Technology, 34141 Daejeon, Republic of Korea
- Jihwan Woo
- Department of Biomedical Engineering, School of Electrical Engineering, University of Ulsan, 44610 Ulsan, Republic of Korea
- Kyung Myun Lee
- Center for Digital Humanities and Computational Social Sciences, Korea Advanced Institute of Science and Technology, 34141 Daejeon, Republic of Korea
- School of Digital Humanities and Social Sciences, Korea Advanced Institute of Science and Technology, 34141 Daejeon, Republic of Korea
- Graduate School of Culture Technology, Korea Advanced Institute of Science and Technology, 34141 Daejeon, Republic of Korea
- Hye Yoon Seol
- Department of Communication Disorders, Ewha Womans University, 03760 Seoul, Republic of Korea
- Sung Hwa Hong
- Hearing Research Laboratory, Samsung Medical Center, 06351 Seoul, Republic of Korea
- Department of Otolaryngology-Head and Neck Surgery, Soree Ear Clinic, 02143 Seoul, Republic of Korea
- Il Joon Moon
- Hearing Research Laboratory, Samsung Medical Center, 06351 Seoul, Republic of Korea
- Department of Otorhinolaryngology-Head and Neck Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, 06351 Seoul, Republic of Korea
4
Jeon EK, Driscoll V, Mussoi BS, Scheperle R, Guthe E, Gfeller K, Abbas PJ, Brown CJ. Evaluating Changes in Adult Cochlear Implant Users' Brain and Behavior Following Auditory Training. Ear Hear 2025; 46:150-162. [PMID: 39044323 PMCID: PMC11649490 DOI: 10.1097/aud.0000000000001569]
Abstract
OBJECTIVES To describe the effects of two types of auditory training on both behavioral and physiological measures of auditory function in cochlear implant (CI) users, and to examine whether a relationship exists between the behavioral and objective outcome measures. DESIGN This study involved two experiments, both of which used a within-subject design. Outcome measures included behavioral and cortical electrophysiological measures of auditory processing. In Experiment I, 8 CI users participated in a music-based auditory training. The training program included both short training sessions completed in the laboratory as well as a set of 12 training sessions that participants completed at home over the course of a month. As part of the training program, study participants listened to a range of different musical stimuli and were asked to discriminate stimuli that differed in pitch or timbre and to identify melodic changes. Performance was assessed before training and at three intervals during and after training was completed. In Experiment II, 20 CI users participated in a more focused auditory training task: the detection of spectral ripple modulation depth. Training consisted of a single 40-minute session that took place in the laboratory under the supervision of the investigators. Behavioral and physiologic measures of spectral ripple modulation depth detection were obtained immediately pre- and post-training. Data from both experiments were analyzed using mixed linear regressions, paired t tests, correlations, and descriptive statistics. RESULTS In Experiment I, there was a significant improvement in behavioral measures of pitch discrimination after the study participants completed the laboratory and home-based training sessions. There was no significant effect of training on electrophysiologic measures of the auditory N1-P2 onset response and acoustic change complex (ACC). 
There were no significant relationships between electrophysiologic measures and behavioral outcomes after the month-long training. In Experiment II, there was no significant effect of training on the ACC, although there was a small but significant improvement in behavioral spectral ripple modulation depth thresholds after the short-term training. CONCLUSIONS This study demonstrates that auditory training can improve spectral cue perception in CI users: significant perceptual gains were observed even though cortical electrophysiological responses such as the ACC did not reliably predict training benefit in either the short- or long-term intervention. Future research should further explore individual factors that may lead to greater benefit from auditory training, optimize training protocols and outcome measures, and demonstrate the generalizability of these findings.
Affiliation(s)
- Eun Kyung Jeon
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, USA
- Bruna S. Mussoi
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, Tennessee, USA
- Rachel Scheperle
- Department of Otolaryngology, University of Iowa, Iowa City, IA, USA
- Emily Guthe
- Department of Music Therapy, Cleveland State University, Cleveland, Ohio
- Kate Gfeller
- Department of Otolaryngology, University of Iowa, Iowa City, IA, USA
- Paul J. Abbas
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, USA
- Department of Otolaryngology, University of Iowa, Iowa City, IA, USA
- Carolyn J. Brown
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, USA
- Department of Otolaryngology, University of Iowa, Iowa City, IA, USA
5
Oberfeld D, Staab K, Kattner F, Ellermeier W. Is Recognition of Speech in Noise Related to Memory Disruption Caused by Irrelevant Sound? Trends Hear 2024; 28:23312165241262517. [PMID: 39051688 PMCID: PMC11273587 DOI: 10.1177/23312165241262517]
Abstract
Listeners with normal audiometric thresholds show substantial variability in their ability to understand speech in noise (SiN). These individual differences have been reported to be associated with a range of auditory and cognitive abilities. The present study addresses the association between SiN processing and the individual susceptibility of short-term memory to auditory distraction (i.e., the irrelevant sound effect [ISE]). In a sample of 67 young adult participants with normal audiometric thresholds, we measured speech recognition performance in a spatial listening task with two interfering talkers (speech-in-speech identification), audiometric thresholds, binaural sensitivity to the temporal fine structure (interaural phase differences [IPD]), serial memory with and without interfering talkers, and self-reported noise sensitivity. Speech-in-speech processing was not significantly associated with the ISE. The most important predictors of high speech-in-speech recognition performance were a large short-term memory span, low IPD thresholds, bilaterally symmetrical audiometric thresholds, and low individual noise sensitivity. Surprisingly, the susceptibility of short-term memory to irrelevant sound accounted for a substantially smaller amount of variance in speech-in-speech processing than the nondisrupted short-term memory capacity. The data confirm the role of binaural sensitivity to the temporal fine structure, although its association with SiN recognition was weaker than in some previous studies. The inverse association between self-reported noise sensitivity and SiN processing deserves further investigation.
Affiliation(s)
- Daniel Oberfeld
- Institute of Psychology, Section Experimental Psychology, Johannes Gutenberg-Universität Mainz, Germany
- Katharina Staab
- Department of Marketing and Human Resource Management, Technische Universität Darmstadt, Darmstadt, Germany
- Florian Kattner
- Institut für Psychologie, Technische Universität Darmstadt, Darmstadt, Germany
- Wolfgang Ellermeier
- Institut für Psychologie, Technische Universität Darmstadt, Darmstadt, Germany
6
Súsonnudóttir B, Kowalewski B, Stiefenhofer G, Neher T. Individual Differences Underlying Preference for Processing Delay in Open-Fit Hearing Aids. Trends Hear 2024; 28:23312165241298613. [PMID: 39668739 PMCID: PMC11638989 DOI: 10.1177/23312165241298613]
Abstract
In open-fit digital hearing aids (HAs), the processing delay influences comb-filter effects that arise from the interaction of the processed HA sound with the unprocessed direct sound. The current study investigated potential relations between preferred processing delay, spectral and temporal processing abilities, and self-reported listening habits. Ten listeners with normal hearing and 20 listeners with mild-to-moderate sensorineural hearing impairments participated. Using a HA simulator, delay preference was assessed with a paired-comparison task, three types of stimuli, and five processing delays (0, 0.5, 2, 5, and 10 ms). Spectral processing was assessed with a spectral ripple discrimination (SRD) task. Temporal processing was assessed with a gap detection task. Self-reported listening habits were assessed using a shortened version of the 'sound preference and hearing habits' questionnaire. A linear mixed-effects model showed a strong effect of processing delay on preference scores (p < .001, η2 = 0.30). Post-hoc comparisons revealed no differences between either the two shortest delays or the three longer delays (all p > .05) but a clear difference between the two sets of delays (p < .001). A multiple linear regression analysis showed SRD to be a significant predictor of delay preference (p < .01, η2 = 0.29), with good spectral processing abilities being associated with a preference for short processing delays. Overall, these results indicate that assessing spectral processing abilities can guide the choice of processing delay when prescribing open-fit HAs.
Affiliation(s)
- Borgný Súsonnudóttir
- Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- Technical Audiology Section, Research Unit for ORL – Head & Neck Surgery and Audiology, Odense University Hospital & University of Southern Denmark, Odense, Denmark
- WS Audiology A/S, Lynge, Denmark
- Tobias Neher
- Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- Technical Audiology Section, Research Unit for ORL – Head & Neck Surgery and Audiology, Odense University Hospital & University of Southern Denmark, Odense, Denmark
7
López-Ramos D, Marrufo-Pérez MI, Eustaquio-Martín A, López-Bascuas LE, Lopez-Poveda EA. Adaptation to Noise in Spectrotemporal Modulation Detection and Word Recognition. Trends Hear 2024; 28:23312165241266322. [PMID: 39267369 PMCID: PMC11401146 DOI: 10.1177/23312165241266322]
Abstract
Noise adaptation is the improvement in auditory function that occurs as the signal of interest is delayed within the noise. Here, we investigated whether noise adaptation occurs in spectral, temporal, and spectrotemporal modulation detection, as well as in speech recognition. Eighteen normal-hearing adults participated in the experiments. In the modulation detection tasks, the signal was a 200-ms spectrally and/or temporally modulated ripple noise. The spectral modulation rate was two cycles per octave, the temporal modulation rate was 10 Hz, and the spectrotemporal modulations combined these two modulations, which resulted in a downward-moving ripple. A control experiment was performed to determine if the results generalized to upward-moving ripples. In the speech recognition task, the signal consisted of disyllabic words, either unprocessed or vocoded to maintain only envelope cues. Modulation detection thresholds at 0 dB signal-to-noise ratio and speech reception thresholds were measured in quiet and in white noise (at 60 dB SPL) for noise-signal onset delays of 50 ms (early condition) and 800 ms (late condition). Adaptation was calculated as the threshold difference between the early and late conditions. Adaptation in word recognition was statistically significant for vocoded words (2.1 dB) but not for natural words (0.6 dB). Adaptation was statistically significant in spectral (2.1 dB) and temporal (2.2 dB) modulation detection but not in spectrotemporal modulation detection (downward ripple: 0.0 dB, upward ripple: -0.4 dB). Findings suggest that noise adaptation in speech recognition is unrelated to improvements in the encoding of spectrotemporal modulation cues.
Affiliation(s)
- David López-Ramos
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain
- Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca, Spain
- Miriam I. Marrufo-Pérez
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain
- Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca, Spain
- Almudena Eustaquio-Martín
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain
- Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca, Spain
- Luis E. López-Bascuas
- Departamento de Psicología Experimental, Procesos Cognitivos y Logopedia, Universidad Complutense de Madrid, Madrid, Spain
- Enrique A. Lopez-Poveda
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain
- Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca, Spain
- Departamento de Cirugía, Facultad de Medicina, Universidad de Salamanca, Salamanca, Spain
8
Choi I, Gander PE, Berger JI, Woo J, Choy MH, Hong J, Colby S, McMurray B, Griffiths TD. Spectral Grouping of Electrically Encoded Sound Predicts Speech-in-Noise Performance in Cochlear Implantees. J Assoc Res Otolaryngol 2023; 24:607-617. [PMID: 38062284 PMCID: PMC10752853 DOI: 10.1007/s10162-023-00918-x]
Abstract
OBJECTIVES Cochlear implant (CI) users exhibit large variability in understanding speech in noise. Past work in CI users found that spectral and temporal resolution correlate with speech-in-noise ability, but a large portion of variance remains unexplained. Recent work on normal-hearing listeners showed that the ability to group temporally and spectrally coherent tones in a complex auditory scene predicts speech-in-noise ability independently of the audiogram, highlighting a central mechanism for auditory scene analysis that contributes to speech-in-noise understanding. The current study examined whether this auditory grouping ability also contributes to speech-in-noise understanding in CI users. DESIGN Forty-seven post-lingually deafened CI users were tested with psychophysical measures of spectral and temporal resolution, a stochastic figure-ground task that depends on the detection of a figure by grouping multiple fixed-frequency elements against a random background, and a sentence-in-noise measure. Multiple linear regression was used to predict sentence-in-noise performance from the other tasks. RESULTS No collinearity was found among the predictor variables. All three predictors (spectral and temporal resolution plus the figure-ground task) made significant contributions in the multiple linear regression model, indicating that auditory grouping ability in a complex auditory scene explains a further proportion of variance in CI users' speech-in-noise performance that is not explained by spectral and temporal resolution. CONCLUSION Measures of cross-frequency grouping reflect an auditory cognitive mechanism that determines speech-in-noise understanding independently of cochlear function. Such measures are easily implemented clinically as predictors of CI success and suggest potential strategies for rehabilitation based on training with non-speech stimuli.
Affiliation(s)
- Inyong Choi
- Department of Communication Sciences and Disorders, University of Iowa, 250 Hawkins Dr., Iowa City, IA, 52242, USA.
- Department of Otolaryngology-Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA.
- Phillip E Gander
- Department of Otolaryngology-Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
- Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
- Department of Radiology, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
- Joel I Berger
- Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
- Jihwan Woo
- Department of Biomedical Engineering, University of Ulsan, Ulsan, Republic of Korea
- Matthew H Choy
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE1 7RU, UK
- Jean Hong
- Department of Otolaryngology-Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
- Sarah Colby
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA, 52242, USA
- Bob McMurray
- Department of Communication Sciences and Disorders, University of Iowa, 250 Hawkins Dr., Iowa City, IA, 52242, USA
- Department of Otolaryngology-Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA, 52242, USA
- Timothy D Griffiths
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE1 7RU, UK
9
Jorgensen E, Wu YH. Effects of entropy in real-world noise on speech perception in listeners with normal hearing and hearing loss. J Acoust Soc Am 2023; 154:3627-3643. [PMID: 38051522 PMCID: PMC10699887 DOI: 10.1121/10.0022577]
Abstract
Hearing aids show more benefit in traditional laboratory speech-in-noise tests than in real-world noisy environments. Real-world noise comprises a large range of acoustic properties that vary randomly and rapidly between and within environments, making quantifying real-world noise and using it in experiments and clinical tests challenging. One approach is to use acoustic features and statistics to quantify acoustic properties of real-world noise and control for them or measure their relationship to listening performance. In this study, the complexity of real-world noise from different environments was quantified using entropy in both the time and frequency domains. A distribution of noise segments ranging from low to high entropy was extracted. Using a trial-by-trial design, listeners with normal hearing and hearing loss (in aided and unaided conditions) repeated back sentences embedded in these noise segments. Entropy significantly affected speech perception, with a larger effect of entropy in the time domain than the frequency domain, a larger effect for listeners with normal hearing than for listeners with hearing loss, and a larger effect for listeners with hearing loss in the aided than unaided condition. Speech perception also differed between most environment types. Combining entropy with the environment type improved predictions of speech perception above the environment type alone.
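One generic way to compute the frequency-domain entropy feature described above is the Shannon entropy of the normalized power spectrum: a flat, noise-like spectrum yields high entropy, while a peaked, tonal spectrum yields low entropy. This is an illustrative sketch, not the authors' exact feature pipeline.

```python
import numpy as np

def shannon_entropy_bits(p, eps=1e-12):
    """Shannon entropy (bits) of a nonnegative vector, normalized to sum to 1."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    return float(-np.sum(p * np.log2(p + eps)))

def spectral_entropy(x):
    """Entropy of the signal's normalized power spectrum (frequency-domain complexity)."""
    power = np.abs(np.fft.rfft(x)) ** 2
    return shannon_entropy_bits(power)

rng = np.random.default_rng(0)
white = rng.standard_normal(4096)                          # broadband: near-flat spectrum
tone = np.sin(2 * np.pi * 440 * np.arange(4096) / 16000)   # tonal: concentrated spectrum
print(spectral_entropy(white) > spectral_entropy(tone))    # True
```

A time-domain analogue can be computed the same way from the normalized short-term energy envelope rather than the spectrum.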
Affiliation(s)
- Erik Jorgensen
- Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, Wisconsin 53706, USA
- Yu-Hsiang Wu
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, Iowa 52242, USA
10
Benoit C, Carlson RJ, King MC, Horn DL, Rubinstein JT. Behavioral characterization of the cochlear amplifier lesion due to loss of function of stereocilin (STRC) in human subjects. Hear Res 2023; 439:108898. [PMID: 37890241 PMCID: PMC10756798 DOI: 10.1016/j.heares.2023.108898]
Abstract
Loss of function of stereocilin (STRC) is the second most common cause of inherited hearing loss. The loss of the stereocilin protein, encoded by the STRC gene, induces the loss of connection between outer hair cells and the tectorial membrane. This affects only outer hair cell (OHC) function, involving deficits in the cochlea's active frequency selectivity and amplifier functions despite preservation of normal inner hair cells. Better understanding of cochlear features associated with mutation of STRC will improve our knowledge of normal cochlear function and the pathophysiology of hearing impairment, and may enhance hearing aid and cochlear implant signal processing. Nine subjects with homozygous or compound heterozygous loss-of-function mutations in STRC were included, aged 7-24 years. Temporal and spectral modulation perception were measured, characterized by spectral and temporal modulation transfer functions. Speech-in-noise perception was studied with spondee identification in adaptive steady-state noise and with AzBio sentences in multitalker babble at 0 and -5 dB SNR. Results were compared with those of normal-hearing (NH) and cochlear implant (CI) listeners to place STRC-/- listeners' hearing capacity in context. Spectral ripple discrimination thresholds in the STRC-/- subjects were poorer than in NH listeners (p < 0.0001) but remained better than for CI listeners (p < 0.0001). Frequency resolution appeared impaired in the STRC-/- group compared to NH listeners but did not reach statistical significance (p = 0.06). Compared to NH listeners, amplitude modulation detection thresholds in the STRC-/- group did not differ significantly (p = 0.06) but were better than in CI subjects (p < 0.0001). Temporal resolution in STRC-/- subjects was similar to NH (p = 0.98) but better than in CI listeners (p = 0.04). The spondee reception threshold in the STRC-/- group was worse than in NH listeners (p = 0.0008) but better than in CI listeners (p = 0.0001).
For AzBio sentences, performance at 0 dB SNR was similar between the STRC-/- group and the NH group, 88 % and 97 % respectively. For -5 dB SNR, the STRC-/- performance was significantly poorer than NH, 40 % and 85 % respectively, yet much better than with CI who performed at 54 % at +5 dB SNR in children and 53 % at + 10 dB SNR in adults. To our knowledge, this is the first study of the psychoacoustic performance of human subjects lacking cochlear amplification but with normal inner hair cell function. Our data demonstrate preservation of temporal resolution and a trend to impaired frequency resolution in this group without reaching statistical significance. Speech-in-noise perception compared to NH listeners was impaired as well. All measures were better than those in CI listeners. It remains to be seen if hearing aid modifications, customized for the spectral deficits in STRC-/- listeners can improve speech understanding in noise. Since cochlear implants are also limited by deficient spectral selectivity, STRC-/- hearing may provide an upper bound on what could be obtained with better temporal coding in electrical stimulation.
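Several entries in this list quantify spectral resolution with ripple stimuli. As a rough illustration of the stimulus class only (not the SMRT and not the authors' generator), the sketch below imposes a sinusoidal dB modulation along the log-frequency axis of broadband noise; the function name, parameters, and default values are all assumptions.

```python
import numpy as np

def spectral_ripple_noise(fs=44100, dur=0.5, density=2.0, depth_db=20.0,
                          phase=0.0, f_lo=100.0, f_hi=8000.0, seed=0):
    """Illustrative static spectral-ripple noise: band-limited noise whose
    log-spectrum is sinusoidally modulated along the log-frequency axis
    with `density` ripples per octave. All parameters are assumptions."""
    rng = np.random.default_rng(seed)
    n = int(fs * dur)
    spec = np.fft.rfft(rng.standard_normal(n))   # spectrum of white noise
    f = np.fft.rfftfreq(n, 1.0 / fs)
    band = (f >= f_lo) & (f <= f_hi)
    octaves = np.zeros_like(f)
    octaves[band] = np.log2(f[band] / f_lo)      # position in octaves
    gain_db = (depth_db / 2.0) * np.sin(2.0 * np.pi * density * octaves + phase)
    spec = spec * np.where(band, 10.0 ** (gain_db / 20.0), 0.0)
    return np.fft.irfft(spec, n)
```

Inverting `phase` flips ripple peaks and troughs, which is the kind of spectral contrast ripple-discrimination tasks ask listeners to detect.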
Affiliation(s)
- Charlotte Benoit
- Virginia Merrill Bloedel Hearing Research Center, Department of Otolaryngology-Head and Neck Surgery, University of Washington, Seattle, WA, USA.
- Ryan J Carlson
- Departments of Genome Sciences and Medicine, University of Washington, Seattle, WA, USA
- Mary-Claire King
- Departments of Genome Sciences and Medicine, University of Washington, Seattle, WA, USA
- David L Horn
- Virginia Merrill Bloedel Hearing Research Center, Department of Otolaryngology-Head and Neck Surgery, University of Washington, Seattle, WA, USA; Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA; Division of Pediatric Otolaryngology, Department of Surgery, Seattle Children's Hospital, Seattle, WA, USA
- Jay T Rubinstein
- Virginia Merrill Bloedel Hearing Research Center, Department of Otolaryngology-Head and Neck Surgery, University of Washington, Seattle, WA, USA; Department of Bioengineering, University of Washington, Seattle, WA, USA
11
Seçen Yazıcı M, Serdengeçti N, Dikmen M, Koyuncu Z, Sandıkçı B, Arslan B, Acar M, Kara E, Tarakçıoğlu MC, Kadak MT. Evaluation of p300 and spectral resolution in children with attention deficit hyperactivity disorder and specific learning disorder. Psychiatry Res Neuroimaging 2023; 334:111688. [PMID: 37517295 DOI: 10.1016/j.pscychresns.2023.111688]
Abstract
This study examines auditory processing, P300 values, and functional impairment among children with Attention Deficit Hyperactivity Disorder (ADHD), Specific Learning Disorder (SLD), ADHD+SLD, and healthy controls. Children with ADHD (n = 17), SLD (n = 15), ADHD+SLD (n = 15), and healthy controls (n = 15) between the ages of 7 and 12 were evaluated with the K-SADS, the Weiss Functional Impairment Rating Scale, the Turgay DSM-IV Disruptive Behavior Disorders Rating Scale, the Mathematics, Reading, Writing Assessment Scale, and the Children's Auditory Performance Scale (CHAPS). Auditory P300 event-related potentials and the Spectral-Temporally Modulated Ripple Test (SMRT) were administered. According to the CHAPS, all three patient groups were at greater risk than healthy controls. There was no significant difference between the groups on the SMRT. In post-hoc analyses of P300 parietal amplitudes, the ADHD, SLD, and ADHD+SLD groups were significantly lower than the control group, with the ADHD+SLD group by far the lowest. Auditory performance skills and P300 amplitudes were lower in children diagnosed with ADHD or SLD alone than in controls, with the lowest values observed in ADHD+SLD. These findings suggest that difficulties with attention and cognitive functions in the ADHD+SLD group are more severe than in ADHD or SLD without comorbidity.
Affiliation(s)
- Meryem Seçen Yazıcı
- Department of Child and Adolescent Psychiatry, Cerrahpasa Faculty of Medicine, Istanbul University-Cerrahpasa, Istanbul, Turkey.
- Nihal Serdengeçti
- Department of Child and Adolescent Psychiatry, Cerrahpasa Faculty of Medicine, Istanbul University-Cerrahpasa, Istanbul, Turkey
- Merve Dikmen
- Research Institute for Health Sciences and Technologies (SABITA), Regenerative and Restorative Medicine Research Center (REMER), Clinical Electrophysiology, Neuroimaging and Neuromodulation Lab, Istanbul Medipol University, Istanbul, Turkey; Vocational School of Health Services, Program of Electroneurophysiology, Istanbul Medipol University, Istanbul, Turkey
- Zehra Koyuncu
- Department of Child and Adolescent Psychiatry, Cerrahpasa Faculty of Medicine, Istanbul University-Cerrahpasa, Istanbul, Turkey
- Beyza Sandıkçı
- Department of Child and Adolescent Psychiatry, Cerrahpasa Faculty of Medicine, Istanbul University-Cerrahpasa, Istanbul, Turkey
- Büşra Arslan
- Department of Child and Adolescent Psychiatry, Cerrahpasa Faculty of Medicine, Istanbul University-Cerrahpasa, Istanbul, Turkey
- Melda Acar
- Department of Audiology, Faculty of Health Science, Istanbul University-Cerrahpasa, Istanbul, Turkey
- Eyyup Kara
- Department of Audiology, Faculty of Health Science, Istanbul University-Cerrahpasa, Istanbul, Turkey
- Mahmut Cem Tarakçıoğlu
- Department of Child and Adolescent Psychiatry, Cerrahpasa Faculty of Medicine, Istanbul University-Cerrahpasa, Istanbul, Turkey
- Muhammed Tayyib Kadak
- Department of Child and Adolescent Psychiatry, Cerrahpasa Faculty of Medicine, Istanbul University-Cerrahpasa, Istanbul, Turkey
12
Margolis RH, Rao A, Wilson RH, Saly GL. Non-linguistic auditory speech processing. Int J Audiol 2023; 62:217-226. [PMID: 35369837 DOI: 10.1080/14992027.2022.2055654]
Abstract
OBJECTIVES A method for testing auditory processing of non-linguistic speech-like stimuli was developed and evaluated. DESIGN Monosyllabic words were temporally reversed and distorted. Stimuli were matched for spectrum and level. Listeners discriminated between distorted and undistorted stimuli. STUDY SAMPLE Three groups were tested. The Normal group comprised 12 normal-hearing participants. The Senior group comprised 12 seniors. The Hearing Loss group comprised 12 participants with thresholds of at least 35 dB HL at one or more frequencies. RESULTS The Senior group scored lower than the Normal group, and the Hearing Loss group scored lower than the Senior group. Scores for forward compressed speech were slightly higher than for backward compressed speech, but the difference was not statistically significant. Retest scores were slightly higher than first-test scores, but the difference was not statistically significant. CONCLUSIONS Large differences in discrimination of distorted speech were observed among the three groups. Age and hearing loss separately affected performance. The depressed performance of the Senior group may reflect "hidden hearing loss" attributed to cochlear synaptopathy. The backward-distorted speech task may be a useful non-linguistic, language-independent test of speech processing.
Affiliation(s)
- Robert H Margolis
- Speech and Hearing Science, College of Health Solutions, Arizona State University, Tempe, Arizona; Audiology Incorporated, Arden Hills, Minnesota, USA
- Aparna Rao
- Speech and Hearing Science, College of Health Solutions, Arizona State University, Tempe, Arizona
- Richard H Wilson
- Speech and Hearing Science, College of Health Solutions, Arizona State University, Tempe, Arizona
13
Brungart DS, Sherlock LP, Kuchinsky SE, Perry TT, Bieber RE, Grant KW, Bernstein JGW. Assessment methods for determining small changes in hearing performance over time. J Acoust Soc Am 2022; 151:3866. [PMID: 35778214 DOI: 10.1121/10.0011509]
Abstract
Although the behavioral pure-tone threshold audiogram is considered the gold standard for quantifying hearing loss, assessment of speech understanding, especially in noise, is more relevant to quality of life but is only partly related to the audiogram. Metrics of speech understanding in noise are therefore an attractive target for assessing hearing over time. However, speech-in-noise assessments have more potential sources of variability than pure-tone threshold measures, making it a challenge to obtain results reliable enough to detect small changes in performance. This review examines the benefits and limitations of speech-understanding metrics and their application to longitudinal hearing assessment, and identifies potential sources of variability, including learning effects, differences in item difficulty, and between- and within-individual variations in effort and motivation. We conclude by recommending the integration of non-speech auditory tests, which provide information about aspects of auditory health that have reduced variability and fewer central influences than speech tests, in parallel with the traditional audiogram and speech-based assessments.
Affiliation(s)
- Douglas S Brungart
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
- LaGuinn P Sherlock
- Hearing Conservation and Readiness Branch, U.S. Army Public Health Center, E1570 8977 Sibert Road, Aberdeen Proving Ground, Maryland 21010, USA
- Stefanie E Kuchinsky
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
- Trevor T Perry
- Hearing Conservation and Readiness Branch, U.S. Army Public Health Center, E1570 8977 Sibert Road, Aberdeen Proving Ground, Maryland 21010, USA
- Rebecca E Bieber
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
- Ken W Grant
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
- Joshua G W Bernstein
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
14
Brennan MA, McCreery RW, Massey J. Influence of Audibility and Distortion on Recognition of Reverberant Speech for Children and Adults with Hearing Aid Amplification. J Am Acad Audiol 2022; 33:170-180. [PMID: 34695870 PMCID: PMC9112843 DOI: 10.1055/a-1678-3381]
Abstract
BACKGROUND Adults and children with sensorineural hearing loss (SNHL) have trouble understanding speech in reverberant rooms when using hearing aid amplification. While the use of amplitude compression signal processing in hearing aids may contribute to this difficulty, the evidence on the effects of amplitude compression settings on speech recognition is conflicting. Less clear is the effect of a fast release time for adults and children with SNHL when using compression ratios derived from a prescriptive procedure. PURPOSE The aims of the study are to determine whether release time affects speech recognition in reverberation for children and adults with SNHL, and whether these effects of release time and reverberation can be predicted using indices of audibility or of temporal and spectral distortion. RESEARCH DESIGN This is a quasi-experimental cohort study. Participants used a hearing aid simulator set to the Desired Sensation Level algorithm (DSL m[i/o]) for three different amplitude compression release times. Reverberation was simulated using three different reverberation times. PARTICIPANTS Participants were 20 children and 16 adults with SNHL. DATA COLLECTION AND ANALYSES Participants were seated in a sound-attenuating booth, and nonsense syllable recognition was measured. Predictions of speech recognition were made using indices of audibility, temporal distortion, and spectral distortion, and the effects of release time and reverberation were analyzed using linear mixed models. RESULTS While nonsense syllable recognition decreased in reverberation, release time did not significantly affect it. Participants with lower audibility were more susceptible to the negative effect of reverberation on nonsense syllable recognition. CONCLUSION We have extended previous work on the effects of reverberation on aided speech recognition to children with SNHL. Variations in release time did not affect speech understanding. An index of audibility best predicted nonsense syllable recognition in reverberation; clinically, these results suggest that patients with less audibility are more susceptible to the negative effect of reverberation on nonsense syllable recognition.
15
Veugen LCE, van Opstal AJ, van Wanrooij MM. Reaction Time Sensitivity to Spectrotemporal Modulations of Sound. Trends Hear 2022; 26:23312165221127589. [PMID: 36172759 PMCID: PMC9523861 DOI: 10.1177/23312165221127589]
Abstract
We tested whether sensitivity to acoustic spectrotemporal modulations can be observed from reaction times under normal-hearing and impaired-hearing conditions. In a manual reaction-time task, normal-hearing listeners had to detect the onset of a ripple (with density between 0 and 8 cycles/octave and a fixed modulation depth of 50%) that moved up or down the log-frequency axis at constant velocity (between 0 and 64 Hz) in otherwise unmodulated broadband white noise. Spectral and temporal modulations elicited band-pass filtered sensitivity characteristics, with the fastest detection rates around 1 cycle/octave and 32 Hz under normal-hearing conditions. These results closely resemble data from other studies that typically used the modulation-depth threshold as the sensitivity criterion. To simulate hearing impairment, stimuli were processed with a 6-channel cochlear-implant vocoder and with a hearing-aid simulation that introduced separate spectral smearing and low-pass filtering. Reaction times were always much slower than for normal hearing, especially at the highest spectral densities. Binaural performance was predicted well by the benchmark race model of binaural independence, which models statistical facilitation of independent monaural channels. For the impaired-hearing simulations, this implied a "best-of-both-worlds" principle in which listeners relied on the hearing-aid ear to detect spectral modulations and on the cochlear-implant ear for temporal-modulation detection. Although singular-value decomposition indicated that the joint spectrotemporal sensitivity matrix could be largely reconstructed from independent temporal and spectral sensitivity functions, in line with time-spectrum separability, a substantial inseparable spectral-temporal interaction was present in all hearing conditions. These results suggest that the reaction-time task yields a valid and effective objective measure of acoustic spectrotemporal-modulation sensitivity.
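The race model of statistical facilitation invoked in this abstract can be illustrated numerically: under independence, the binaural reaction time is the faster of two monaural channels, so its cumulative distribution follows probability summation. The simulation below is a toy sketch; the ex-Gaussian reaction-time shape and all parameter values are assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Illustrative monaural reaction-time samples (ms), ex-Gaussian-like:
# Gaussian motor/decision component plus an exponential tail.
rt_left = rng.normal(300.0, 40.0, n) + rng.exponential(60.0, n)
rt_right = rng.normal(320.0, 40.0, n) + rng.exponential(60.0, n)

# Race model: on each binaural trial, whichever independent monaural
# channel finishes first triggers the response.
rt_binaural = np.minimum(rt_left, rt_right)

# Under independence, the binaural CDF is predicted by probability
# summation: F_b(t) = F_L(t) + F_R(t) - F_L(t) * F_R(t).
t = 350.0
f_l = np.mean(rt_left <= t)
f_r = np.mean(rt_right <= t)
predicted = f_l + f_r - f_l * f_r
observed = np.mean(rt_binaural <= t)
```

Because the minimum of two channels is stochastically faster than either alone, the model reproduces the binaural speed-up ("statistical facilitation") without assuming any neural summation.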
Affiliation(s)
- Lidwien C. E. Veugen
- Department of Biophysics, Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, Netherlands
- A. John van Opstal
- Department of Biophysics, Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, Netherlands
- Marc M. van Wanrooij
- Department of Biophysics, Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, Netherlands
16
Moberly AC, Lewis JH, Vasil KJ, Ray C, Tamati TN. Bottom-Up Signal Quality Impacts the Role of Top-Down Cognitive-Linguistic Processing During Speech Recognition by Adults with Cochlear Implants. Otol Neurotol 2021; 42:S33-S41. [PMID: 34766942 PMCID: PMC8597903 DOI: 10.1097/mao.0000000000003377]
Abstract
HYPOTHESES Significant variability persists in speech recognition outcomes in adults with cochlear implants (CIs). Sensory ("bottom-up") and cognitive-linguistic ("top-down") processes help explain this variability. However, the interactions of these bottom-up and top-down factors remain unclear. One hypothesis was tested: top-down processes would contribute differentially to speech recognition, depending on the fidelity of bottom-up input. BACKGROUND Bottom-up spectro-temporal processing, assessed using a Spectral-Temporally Modulated Ripple Test (SMRT), is associated with CI speech recognition outcomes. Similarly, top-down cognitive-linguistic skills relate to outcomes, including working memory capacity, inhibition-concentration, speed of lexical access, and nonverbal reasoning. METHODS Fifty-one adult CI users were tested for word and sentence recognition, along with performance on the SMRT and a battery of cognitive-linguistic tests. The group was divided into "low-," "intermediate-," and "high-SMRT" groups, based on SMRT scores. Separate correlation analyses were performed for each subgroup between a composite score of cognitive-linguistic processing and speech recognition. RESULTS Associations of top-down composite scores with speech recognition were not significant for the low-SMRT group. In contrast, these associations were significant and of medium effect size (Spearman's rho = 0.44-0.46) for two sentence types for the intermediate-SMRT group. For the high-SMRT group, top-down scores were associated with both word and sentence recognition, with medium to large effect sizes (Spearman's rho = 0.45-0.58). CONCLUSIONS Top-down processes contribute differentially to speech recognition in CI users based on the quality of bottom-up input. Findings have clinical implications for individualized treatment approaches relying on bottom-up device programming or top-down rehabilitation approaches.
Affiliation(s)
- Aaron C Moberly
- Department of Otolaryngology - Head & Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA
- Jessica H Lewis
- Department of Otolaryngology - Head & Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA
- Kara J Vasil
- Department of Otolaryngology - Head & Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA
- Christin Ray
- Department of Otolaryngology - Head & Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA
- Terrin N Tamati
- Department of Otolaryngology - Head & Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA; Department of Otorhinolaryngology - Head and Neck Surgery, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
17
Horn D, Walter M, Rubinstein J, Lau BK. Electrophysiological responses to spectral ripple envelope phase inversion in typical hearing 2- to 4-month-olds. Proc Meet Acoust 2021; 45:050003. [PMID: 35891886 PMCID: PMC9311477 DOI: 10.1121/2.0001558]
Affiliation(s)
- David Horn
- University of Washington, Department of Otolaryngology-Head & Neck Surgery
- Max Walter
- University of Washington, Department of Otolaryngology-Head & Neck Surgery
- Jay Rubinstein
- University of Washington, Department of Otolaryngology-Head & Neck Surgery
- Bonnie K. Lau
- University of Washington, Department of Otolaryngology-Head & Neck Surgery
18
Brennan MA, McCreery RW. Audibility and Spectral-Ripple Discrimination Thresholds as Predictors of Word Recognition with Nonlinear Frequency Compression. J Am Acad Audiol 2021; 32:596-605. [PMID: 35176803 PMCID: PMC9112840 DOI: 10.1055/s-0041-1732333]
Abstract
BACKGROUND Nonlinear frequency compression (NFC) lowers high-frequency sounds to a lower frequency and is used to improve high-frequency audibility. However, the efficacy of NFC varies widely: while some individuals benefit from NFC, many do not. Spectral resolution is one factor that might explain individual benefit from NFC. Because individuals with better spectral resolution understand more speech than those with poorer spectral resolution, it was hypothesized that individual benefit from NFC could be predicted from the change in spectral resolution measured with NFC relative to a condition without NFC. PURPOSE This study aimed to determine the impact of NFC on access to spectral information and whether these changes predict individual benefit from NFC for adults with sensorineural hearing loss (SNHL). RESEARCH DESIGN This is a quasi-experimental cohort study. Participants used a pair of hearing aids set to the Desired Sensation Level algorithm (DSL m[i/o]). STUDY SAMPLE Participants were 19 adults with SNHL, recruited from the Boys Town National Research Hospital Participant Registry. DATA COLLECTION AND ANALYSIS Participants were seated in a sound-attenuating booth, and percent-correct word recognition and spectral-ripple discrimination thresholds were measured in two conditions: with and without NFC. Because audibility is known to influence spectral-ripple thresholds and benefit from NFC, audibility was quantified using the aided Speech Intelligibility Index (SII). Linear mixed models were generated to predict word recognition from the aided SII and spectral-ripple discrimination thresholds. RESULTS While NFC did not influence percent-correct word recognition, participants with a higher (better) aided SII and better spectral-ripple discrimination thresholds understood more words than those with either a lower aided SII or a poorer spectral-ripple discrimination threshold. Benefit from NFC was not predictable from a participant's aided SII or spectral-ripple discrimination threshold. CONCLUSION We have extended previous work on the effect of audibility on benefit from NFC to include a measure of spectral resolution, the spectral-ripple discrimination threshold. Clinically, these results suggest that patients with better audibility and spectral resolution will understand speech better than those with poorer audibility or spectral resolution; however, they are inconsistent with the notion that individual benefit from NFC is predictable from aided audibility or spectral resolution.
19
Relationship between objective measures of hearing discrimination elicited by non-linguistic stimuli and speech perception in adults. Sci Rep 2021; 11:19554. [PMID: 34599244 PMCID: PMC8486784 DOI: 10.1038/s41598-021-98950-5]
Abstract
Some people using hearing aids have difficulty discriminating between sounds even though the sounds are audible. For them, cochlear implants may provide greater benefits for speech perception. One method to identify people with auditory discrimination deficits is to measure discrimination thresholds using spectral ripple noise (SRN). Previous studies have shown that behavioral discrimination of SRN was associated with speech perception and was also related to cortical responses to acoustic change, known as acoustic change complexes (ACCs). We hypothesized that cortical ACCs could be directly related to speech perception. In this study, we investigated the relationship between subjective speech perception and objective ACC responses measured using SRNs. We tested 13 normal-hearing adults and 10 hearing-impaired adults who used hearing aids. Our results showed that behavioral SRN discrimination was correlated with speech perception in quiet and in noise. Furthermore, cortical ACC responses to phase changes in the SRN were significantly correlated with speech perception. Audibility was a major predictor of discrimination and speech perception, but direct measures of auditory discrimination could contribute information about a listener's sensitivity to the acoustic cues that underpin speech perception. The findings lend support to the potential application of ACC responses to SRNs for identifying people who may benefit from cochlear implants.
20
Nittrouer S, Lowenstein JH, Sinex DG. The contribution of spectral processing to the acquisition of phonological sensitivity by adolescent cochlear implant users and normal-hearing controls. J Acoust Soc Am 2021; 150:2116. [PMID: 34598601 PMCID: PMC8463097 DOI: 10.1121/10.0006416]
Abstract
This study tested the hypotheses that (1) adolescents with cochlear implants (CIs) experience impaired spectral processing abilities, and (2) those impaired spectral processing abilities constrain acquisition of skills based on sensitivity to phonological structure but not those based on lexical or syntactic (lexicosyntactic) knowledge. To test these hypotheses, spectral modulation detection (SMD) thresholds were measured for 14-year-olds with normal hearing (NH) or CIs. Three measures each of phonological and lexicosyntactic skills were obtained and used to generate latent scores of each kind of skill. Relationships between SMD thresholds and both latent scores were assessed. Mean SMD threshold was poorer for adolescents with CIs than for adolescents with NH. Both latent lexicosyntactic and phonological scores were poorer for the adolescents with CIs, but the latent phonological score was disproportionately so. SMD thresholds were significantly associated with phonological but not lexicosyntactic skill for both groups. The only audiologic factor that also correlated with phonological latent scores for adolescents with CIs was the aided threshold, but it did not explain the observed relationship between SMD thresholds and phonological latent scores. Continued research is required to find ways of enhancing spectral processing for children with CIs to support their acquisition of phonological sensitivity.
Affiliation(s)
- Susan Nittrouer
- Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville, Florida 32610, USA
- Joanna H Lowenstein
- Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville, Florida 32610, USA
- Donal G Sinex
- Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville, Florida 32610, USA
21
Nishimura T, Akasaka S, Morimoto C, Okayasu T, Kitahara T, Hosoi H. Speech recognition scores in bilateral and unilateral atretic ears. Int J Audiol 2021; 61:663-669. [PMID: 34370598 DOI: 10.1080/14992027.2021.1961169]
Abstract
OBJECTIVE Congenital aural atresia causes severe conductive hearing loss, disturbing auditory development. Differences in speech recognition were investigated between bilateral and unilateral aural atresia. DESIGN Maximum speech recognition scores (SRSs) were compared between patients with bilateral and unilateral aural atresia. In patients with unilateral aural atresia, maximum SRSs were compared between the atretic and unaffected ears. Furthermore, the correct-response rates for the test-material monosyllables were compared with those previously obtained from patients with sensorineural hearing loss (SNHL). STUDY SAMPLE Twenty-four patients with aural atresia (8 bilateral and 16 unilateral) participated. RESULTS The maximum SRS in unilateral atretic ears (median: 72%) was significantly lower than in unaffected ears (median: 89%) (p < 0.05) and in bilateral atretic ears (median: 91%) (p < 0.05). Patients with aural atresia had relatively high correct-response rates for monosyllables that yielded low correct-response rates in patients with SNHL. Conversely, incorrect responses were obtained for several words for which patients with SNHL attained high correct-response rates. CONCLUSIONS Poor auditory development of the unilateral atretic ear may result in low speech recognition, and the mechanisms underlying this reduction differ from those in SNHL.
Affiliation(s)
- Tadashi Nishimura
- Department of Otolaryngology-Head and Neck Surgery, Nara Medical University, Kashihara, Japan
- Sakie Akasaka
- Department of Otolaryngology-Head and Neck Surgery, Nara Medical University, Kashihara, Japan
- Chihiro Morimoto
- Department of Otolaryngology-Head and Neck Surgery, Nara Medical University, Kashihara, Japan
- Tadao Okayasu
- Department of Otolaryngology-Head and Neck Surgery, Nara Medical University, Kashihara, Japan
- Tadashi Kitahara
- Department of Otolaryngology-Head and Neck Surgery, Nara Medical University, Kashihara, Japan
- Hiroshi Hosoi
- MBT (Medicine-Based Town) Institute, Nara Medical University, Kashihara, Japan
22
Souza MRFD, Iorio MCM. Speech Intelligibility Index and the Ling 6(HL) test: correlations in pediatric hearing aid users. Codas 2021; 33:e20200094. [PMID: 34378761 DOI: 10.1590/2317-1782/20202020094]
Abstract
PURPOSE To evaluate speech audibility in schoolchildren who use hearing aids and to correlate the Speech Intelligibility Index (SII) with phoneme detection. METHODS Twenty-two children and adolescents who use hearing aids underwent audiological evaluation and in-situ verification (thereby obtaining the SII with and without hearing aids), and phoneme detection thresholds were measured with the Ling-6(HL) test. RESULTS The mean SII was 25.1 without hearing aids and 68.9 with amplification (p < 0.001). The free-field phoneme detection thresholds, in dB HL, were, without amplification, /m/ = 29.9, /u/ = 29.5, /a/ = 35.5, /i/ = 30.8, /∫/ = 44.2, and /s/ = 44.9; with amplification they were /m/ = 13.0, /u/ = 11.5, /a/ = 14.3, /i/ = 15.4, /∫/ = 20.4, and /s/ = 23.1 (p < 0.001). There was a negative correlation between the SII and the thresholds of all phonemes without hearing aids (p ≤ 0.001) and between the SII and the /s/ threshold with hearing aids (p = 0.036). CONCLUSION Detection thresholds for all phonemes were lower with hearing aids than without. There is a negative correlation between the SII and the thresholds of all phonemes without hearing aids and between the SII and the /s/ detection threshold with hearing aids.
23
Yoon YS, Mills I, Toliver B, Park C, Whitaker G, Drew C. Comparisons in Frequency Difference Limens Between Sequential and Simultaneous Listening Conditions in Normal-Hearing Listeners. Am J Audiol 2021; 30:266-274. [PMID: 33769845] [DOI: 10.1044/2021_aja-20-00134]
Abstract
Purpose We compared frequency difference limens (FDLs) in normal-hearing listeners under two listening conditions: sequential and simultaneous. Method Eighteen adult listeners participated in three experiments. FDLs were measured using a method of limits on the comparison frequency. In the sequential listening condition, the tones were presented with a half-second interval between them; in the simultaneous listening condition, the tones were presented at the same time. For the first experiment, one of four reference tones (125, 250, 500, or 750 Hz), presented to the left ear, was paired with one of four starting comparison tones (250, 500, 750, or 1000 Hz), presented to the right ear. The second and third experiments used the same testing conditions as the first, except that the comparison stimuli were two- and three-tone complexes. The subjects were asked whether the tones sounded the same or different. When a subject chose "different," the comparison frequency decreased by 10% of the frequency difference between the reference and comparison tones. FDLs were determined when the subjects chose "same" 3 times in a row. Results FDLs were significantly broader (worse) with simultaneous listening than with sequential listening for the two- and three-tone complex conditions but not for the single-tone condition. The FDLs were narrowest (best) with the three-tone complex under both listening conditions. FDLs broadened as the testing frequencies increased for the single tone and the two-tone complex. The FDLs did not broaden at frequencies > 250 Hz for the three-tone complex. Conclusion The results suggest that sequential and simultaneous frequency discriminations are mediated by different processes at different stages in the auditory pathway for complex tones, but not for pure tones.
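The adaptive rule described in this abstract (after each "different" response, step the comparison tone toward the reference by 10% of the remaining frequency difference; stop after three consecutive "same" responses) can be sketched as follows. The `responses` sequence is a hypothetical stand-in for a listener's judgments, not the study's software:

```python
def fdl_method_of_limits(reference_hz, start_comparison_hz, responses):
    """Descending method of limits for a frequency difference limen (FDL).

    On each 'different' response the comparison frequency moves 10% of
    the current reference/comparison difference toward the reference;
    the track ends after three consecutive 'same' responses. Returns the
    final frequency difference in Hz as the FDL estimate.
    """
    comparison = float(start_comparison_hz)
    same_run = 0
    for resp in responses:
        if resp == "different":
            # shrink the gap by 10% of its current size
            comparison -= 0.10 * (comparison - reference_hz)
            same_run = 0
        else:
            same_run += 1
            if same_run == 3:   # stopping rule from the abstract
                break
    return comparison - reference_hz
```

For example, starting from a 250 Hz reference and 500 Hz comparison, two "different" responses followed by three "same" responses yield an FDL of 202.5 Hz (500 → 475 → 452.5, minus the 250 Hz reference).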
Affiliation(s)
- Yang-Soo Yoon
- Department of Communication Sciences and Disorders, Baylor University, Waco, TX
- Ivy Mills
- Department of Communication Sciences and Disorders, Baylor University, Waco, TX
- BaileyAnn Toliver
- Department of Communication Sciences and Disorders, Baylor University, Waco, TX
- Christine Park
- Department of Communication Sciences and Disorders, Baylor University, Waco, TX
- George Whitaker
- Division of Otolaryngology, Baylor Scott & White Medical Center, Temple, TX
- Carrie Drew
- Department of Communication Sciences and Disorders, Baylor University, Waco, TX
24
Aronoff JM, Duitsman L, Matusik DK, Hussain S, Lippmann E. Examining the Relationship Between Speech Recognition and a Spectral-Temporal Test With a Mixed Group of Hearing Aid and Cochlear Implant Users. J Speech Lang Hear Res 2021; 64:1073-1080. [PMID: 33719538] [DOI: 10.1044/2020_jslhr-20-00352]
Abstract
Purpose Audiology clinics need a nonlinguistic test for assessing speech recognition in patients using hearing aids or cochlear implants. One such test, the Spectral-Temporally Modulated Ripple Test Lite for computeRless Measurement (SLRM), has been developed for clinical use, but it, like the related Spectral-Temporally Modulated Ripple Test, has primarily been assessed with cochlear implant users. The main goal of this study was to examine the relationship between SLRM and the Arizona Biomedical Institute Sentence Test (AzBio) for a mixed group of hearing aid and cochlear implant users. Method Adult hearing aid users and cochlear implant users were tested with SLRM, AzBio in quiet, and AzBio in multitalker babble at a +8 dB signal-to-noise ratio. Results SLRM scores correlated with AzBio recognition scores both in quiet and in noise. Conclusions The results indicate a significant relationship between SLRM and AzBio scores when testing a mixed group of cochlear implant and hearing aid users, suggesting that SLRM may be a useful nonlinguistic test for individuals with a variety of hearing devices.
Affiliation(s)
- Justin M Aronoff
- Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign
- Department of Otolaryngology, College of Medicine, University of Illinois at Chicago
- Leah Duitsman
- Department of Otolaryngology, College of Medicine, University of Illinois at Chicago
- Deanna K Matusik
- Department of Otolaryngology, College of Medicine, University of Illinois at Chicago
- Senad Hussain
- Department of Medicine, College of Medicine, University of Illinois at Chicago
- Elise Lippmann
- Department of Otolaryngology, College of Medicine, University of Illinois at Chicago
- Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Massachusetts Eye and Ear, Boston
25
Cucis PA, Berger-Vachon C, Thaï-Van H, Hermann R, Gallego S, Truy E. Word Recognition and Frequency Selectivity in Cochlear Implant Simulation: Effect of Channel Interaction. J Clin Med 2021; 10:679. [PMID: 33578696] [PMCID: PMC7916371] [DOI: 10.3390/jcm10040679]
Abstract
In cochlear implants (CI), spread of neural excitation may produce channel interaction. Channel interaction disturbs spectral resolution and, among other factors, seems to impair speech recognition, especially in noise. In this study, two tests were performed with 20 adult normal-hearing (NH) subjects under different vocoded simulations. First, word recognition in noise was measured while varying the number of selected channels (4, 8, 12 or 16 maxima out of 20) and the degree of simulated channel interaction (“Low”, “Medium” and “High”). Then, spectral resolution was evaluated as a function of the degree of simulated channel interaction, reflected by the sharpness (Q10dB) of psychophysical tuning curves (PTCs). The results showed a significant effect of the simulated channel interaction on word recognition but no effect of the number of selected channels. Intelligibility decreased significantly for the highest degree of channel interaction. Similarly, the highest simulated channel interaction significantly impaired the Q10dB. Additionally, a strong intra-individual correlation between frequency selectivity and word recognition in noise was observed. Lastly, individual changes in frequency selectivity were positively correlated with changes in word recognition when the degree of interaction went from “Low” to “High”. To conclude, the degradation seen for the highest degree of channel interaction suggests a threshold effect on frequency selectivity and word recognition. The correlation between frequency selectivity and intelligibility in noise supports the hypothesis that PTC Q10dB can account for word recognition in certain conditions. Moreover, the individual variation in performance observed among subjects suggests that channel interaction does not have the same effect on each individual. Finally, these results highlight the importance of taking subjects’ individuality into account and of evaluating channel interaction through the speech processor.
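The channel-selection manipulation in this study (4, 8, 12 or 16 maxima out of 20) follows the n-of-m principle used in CI "maxima" coding strategies. A minimal sketch, assuming per-frame channel envelope magnitudes as input and ignoring all other vocoder stages:

```python
import numpy as np

def select_maxima(channel_envelopes, n_maxima):
    """n-of-m channel selection: in each analysis frame, keep only the
    n channels with the largest envelope amplitude and zero the rest.

    channel_envelopes: array of shape (n_channels, n_frames).
    Returns an array of the same shape with non-maxima zeroed.
    """
    env = np.asarray(channel_envelopes, dtype=float)
    out = np.zeros_like(env)
    for t in range(env.shape[1]):
        # indices of the n largest envelope values in this frame
        keep = np.argsort(env[:, t])[-n_maxima:]
        out[keep, t] = env[keep, t]
    return out
```

This is an illustrative sketch of the selection step only; the study's actual vocoder (filterbank, carrier synthesis, and the simulated current-spread/interaction stage) is not reproduced here.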
Affiliation(s)
- Pierre-Antoine Cucis
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, CRNL Inserm U1028, CNRS UMR5292, 69675 Bron, France
- Claude Bernard Lyon 1 University, 69100 Villeurbanne, France
- ENT and Cervico-Facial Surgery Department, Edouard Herriot Hospital, Hospices Civils de Lyon, 69003 Lyon, France
- Correspondence: ; Tel.: +33-472-110-0518
- Christian Berger-Vachon
- Claude Bernard Lyon 1 University, 69100 Villeurbanne, France
- Brain Dynamics and Cognition Team (DYCOG), Lyon Neuroscience Research Center, CRNL Inserm U1028, CNRS UMR5292, 69675 Bron, France
- Biomechanics and Impact Mechanics Laboratory (LBMC), French Institute of Science and Technology for Transport, Development and Networks (IFSTTAR), Gustave Eiffel University, 69675 Bron, France
- Hung Thaï-Van
- Claude Bernard Lyon 1 University, 69100 Villeurbanne, France
- Paris Hearing Institute, Institut Pasteur, Inserm U1120, 75015 Paris, France
- Department of Audiology and Otoneurological Evaluation, Edouard Herriot Hospital, Hospices Civils de Lyon, 69003 Lyon, France
- Ruben Hermann
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, CRNL Inserm U1028, CNRS UMR5292, 69675 Bron, France
- Claude Bernard Lyon 1 University, 69100 Villeurbanne, France
- ENT and Cervico-Facial Surgery Department, Edouard Herriot Hospital, Hospices Civils de Lyon, 69003 Lyon, France
- Stéphane Gallego
- Claude Bernard Lyon 1 University, 69100 Villeurbanne, France
- Neuronal Dynamics and Audition Team (DNA), Laboratory of Cognitive Neuroscience (LNSC), CNRS UMR 7291, Aix-Marseille University, CEDEX 3, 13331 Marseille, France
- Eric Truy
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, CRNL Inserm U1028, CNRS UMR5292, 69675 Bron, France
- Claude Bernard Lyon 1 University, 69100 Villeurbanne, France
- ENT and Cervico-Facial Surgery Department, Edouard Herriot Hospital, Hospices Civils de Lyon, 69003 Lyon, France
26
Abstract
Sequences of phonologically similar words are more difficult to remember than phonologically distinct sequences. This study investigated whether this difficulty arises in the acoustic similarity of auditory stimuli or in the corresponding phonological labels in memory. Participants reconstructed sequences of words which were degraded with a vocoder. We manipulated the phonological similarity of response options across two groups. One group was trained to map stimulus words onto phonologically similar response labels which matched the recorded word; the other group was trained to map words onto a set of plausible responses which were mismatched from the original recordings but were selected to have less phonological overlap. Participants trained on the matched responses were able to learn responses with less training and recall sequences more accurately than participants trained on the mismatched responses, even though the mismatched responses were more phonologically distinct from one another and participants were unaware of the mismatch. The relative difficulty of recalling items in the correct position was the same across both sets of response labels. Mismatched responses impaired recall accuracy across all positions except the final item in each list. These results are consistent with the idea that increased difficulty of mapping acoustic stimuli onto phonological forms impairs serial recall. Increased mapping difficulty could impair retention of memoranda and impede consolidation into phonological forms, which would impair recall in adverse listening conditions.
Affiliation(s)
- Adam K Bosen
- Hearing and Speech Perception, Boys Town National Research Hospital, Omaha, NE, USA
- Elizabeth Monzingo
- Hearing and Speech Perception, Boys Town National Research Hospital, Omaha, NE, USA
- Angela M AuBuchon
- Hearing and Speech Perception, Boys Town National Research Hospital, Omaha, NE, USA
27
Cabrera L, Halliday LF. Relationship between sensitivity to temporal fine structure and spoken language abilities in children with mild-to-moderate sensorineural hearing loss. J Acoust Soc Am 2020; 148:3334. [PMID: 33261401] [PMCID: PMC7613189] [DOI: 10.1121/10.0002669]
Abstract
Children with sensorineural hearing loss show considerable variability in spoken language outcomes. The present study tested whether specific deficits in supra-threshold auditory perception might contribute to this variability. In a previous study by Halliday, Rosen, Tuomainen, and Calcus [(2019). J. Acoust. Soc. Am. 146, 4299], children with mild-to-moderate sensorineural hearing loss (MMHL) were shown to perform more poorly than those with normal hearing (NH) on measures designed to assess sensitivity to the temporal fine structure (TFS; the rapid oscillations in the amplitude of narrowband signals over short time intervals). However, they performed within normal limits on measures assessing sensitivity to the envelope (E; the slow fluctuations in the overall amplitude). Here, individual differences in unaided sensitivity to the TFS accounted for significant variance in the spoken language abilities of children with MMHL after controlling for nonverbal intelligence quotient, family history of language difficulties, and hearing loss severity. Aided sensitivity to the TFS and E cues was equally important for children with MMHL, whereas for children with NH, E cues were more important. These findings suggest that deficits in TFS perception may contribute to the variability in spoken language outcomes in children with sensorineural hearing loss.
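The envelope (E) versus temporal fine structure (TFS) distinction drawn in this abstract is commonly operationalized with the Hilbert analytic signal. A minimal sketch of that decomposition (a standard approach; the study's own stimulus processing may differ):

```python
import numpy as np
from scipy.signal import hilbert

def envelope_and_tfs(x):
    """Split a narrowband signal into its envelope (E; the slow
    fluctuations in overall amplitude) and temporal fine structure
    (TFS; the rapid oscillations within the band) via the Hilbert
    analytic signal.
    """
    analytic = hilbert(x)
    envelope = np.abs(analytic)          # slow amplitude contour
    tfs = np.cos(np.angle(analytic))     # unit-amplitude carrier
    return envelope, tfs
```

Multiplying the two components recovers an approximation of the original narrowband signal, which is why they are treated as complementary cue classes.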
Affiliation(s)
- Laurianne Cabrera
- Integrative Neuroscience and Cognition Center, CNRS-Université de Paris, Paris, 75006, France
- Lorna F. Halliday
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, CB2 7EF, United Kingdom
28
Parker MA. Identifying three otopathologies in humans. Hear Res 2020; 398:108079. [PMID: 33011456] [DOI: 10.1016/j.heares.2020.108079]
Abstract
OBJECTIVES Hearing-in-noise (HIN) difficulty is a primary complaint of both the hearing impaired and the hearing aid user. Both auditory nerve (AN) function and outer hair cell (OHC) function are thought to contribute to HIN performance, but their relative contributions are still being elucidated. OHCs play a critical role in HIN by fine-tuning the motion of the basilar membrane. Further, animal studies suggest that cochlear (auditory) synaptopathy, the loss of synaptic contact between hair cells and the AN, may be another cause of HIN difficulty. While there is evidence that cochlear synaptopathy occurs in animal models, there is debate as to whether it is clinically significant in humans, which may be due to disparate methods of measuring noise exposure in humans and high inter-individual variability in susceptibility to noise damage. Rather than use self-reported noise exposure to define synaptopathic groups, this paper assumes that the general population exhibits a range of noise exposures and resulting otopathologies and defines cochlear synaptopathy "operationally" as low CAP amplitude accompanied by normal DPOAE levels in persons with low pure-tone averages. The first question is whether the standard audiogram detects AN dysfunction and OHC dysfunction. The second question is whether HIN performance depends primarily on AN function, OHC function, or both. DESIGN Adult subjects were recruited to participate in an ongoing study, and variables such as age, self-reported gender, pure-tone audiometry (0.25-20 kHz), subjective perception of HIN difficulty, the Quick Speech-in-Noise (QuickSIN) test, 45% time-compressed word recognition (WR) in 10% reverberation, and WR in the presence of ipsilateral speech-weighted noise were collected. These variables were correlated with OHC function, measured by distortion-product otoacoustic emission (DPOAE) signal-to-noise ratio (SNR), and AN function, measured by compound action potential (CAP) peak amplitude and its ratio to the summating potential using electrocochleography. RESULTS Synaptopathy, by this operational definition, may be present in as many as 30% of individuals with normal hearing. Persons with hearing within normal limits may exhibit HIN difficulties, and may exhibit two distinct types of otopathologies undetected by the standard audiogram (a.k.a. hidden hearing loss), namely operational cochlear synaptopathy and OHC dysfunction. AN untuning secondary to OHC dysfunction is a third otopathology that occurs in subjects with mild-to-moderate sensorineural hearing loss (SNHL). Clinical norms for each of these otopathologies are presented. Finally, the data show that operational cochlear synaptopathy does not correlate with HIN dysfunction. Rather, HIN performance is primarily governed by OHC function, while AN untuning also plays a lesser but statistically significant role. CONCLUSIONS The results of this study suggest the following: (1) persons hearing within normal limits may exhibit HIN difficulties; (2) persons hearing within normal limits may exhibit undetected otopathologies, namely AN dysfunction and OHC dysfunction; (3) AN untuning secondary to OHC dysfunction occurs in subjects with mild-to-moderate SNHL; (4) HIN performance is primarily governed by OHC function rather than AN function.
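The operational definition above amounts to a simple classification rule. A sketch with an illustrative 25 dB HL normal-PTA cutoff and boolean inputs standing in for the paper's actual clinical norms (which it presents but are not reproduced here):

```python
def classify_otopathology(pta_db, cap_amplitude_low, dpoae_normal):
    """Sketch of the paper's operational definitions: cochlear
    synaptopathy = low CAP amplitude with normal DPOAEs in a listener
    with a low (normal) pure-tone average; abnormal DPOAEs flag OHC
    dysfunction. Threshold and labels are illustrative only.
    """
    normal_pta = pta_db <= 25  # illustrative normal-hearing cutoff
    if normal_pta and cap_amplitude_low and dpoae_normal:
        return "operational cochlear synaptopathy"
    if not dpoae_normal:
        return "OHC dysfunction"
    return "no otopathology flagged"
```

The point of the rule is that both flagged conditions can coexist with a normal standard audiogram, which is why the paper calls them hidden hearing loss.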
Affiliation(s)
- Mark A Parker
- Department of Otolaryngology-Head and Neck Surgery, Steward St. Elizabeth's Medical Center, 736 Cambridge St., SMC-8, Brighton, MA 02135, United States; Tufts University School of Medicine, Boston, MA, United States.
29
Lelo de Larrea-Mancera ES, Stavropoulos T, Hoover EC, Eddins DA, Gallun FJ, Seitz AR. Portable Automated Rapid Testing (PART) for auditory assessment: Validation in a young adult normal-hearing population. J Acoust Soc Am 2020; 148:1831. [PMID: 33138479] [PMCID: PMC7541091] [DOI: 10.1121/10.0002108]
Abstract
This study aims to determine the degree to which Portable Automated Rapid Testing (PART), a freely available program running on a tablet computer, is capable of reproducing standard laboratory results. Undergraduate students were assigned to one of three within-subject conditions that examined repeatability of performance on a battery of psychoacoustical tests of temporal fine structure processing, spectro-temporal amplitude modulation, and targets in competition. The repeatability condition examined test/retest with the same system, the headphones condition examined the effects of varying headphones (passive and active noise-attenuating), and the noise condition examined repeatability in the presence of recorded cafeteria noise. In general, performance on the test battery showed high repeatability, even across manipulated conditions, and was similar to that reported in the literature. These data serve as validation that suprathreshold psychoacoustical tests can be made accessible to run on consumer-grade hardware and perform in less controlled settings. This dataset also provides a distribution of thresholds that can be used as a normative baseline against which auditory dysfunction can be identified in future work.
Affiliation(s)
- Trevor Stavropoulos
- Brain Game Center, University of California Riverside, 1201 University Avenue, Riverside, California 92521, USA
- Eric C Hoover
- University of Maryland, College Park, Maryland 20742, USA
- Aaron R Seitz
- Psychology Department, University of California, Riverside, 900 University Avenue, Riverside, California 92521, USA
30
Assessing the Quality of Low-Frequency Acoustic Hearing: Implications for Combined Electroacoustic Stimulation With Cochlear Implants. Ear Hear 2020; 42:475-486. [PMID: 32976249] [DOI: 10.1097/aud.0000000000000949]
Abstract
OBJECTIVES There are many potential advantages to combined electric and acoustic stimulation (EAS) with a cochlear implant (CI), including benefits for hearing in noise, localization, frequency selectivity, and music enjoyment. However, performance on these outcome measures is variable, and the residual acoustic hearing may not be beneficial for all patients. As such, we propose a measure of spectral resolution that might be more predictive of the usefulness of the residual hearing than the audiogram alone. In the following experiments, we measured performance on spectral resolution and speech perception tasks in individuals with normal hearing (NH) using low-pass filters to simulate steeply sloping audiograms of typical EAS candidates and compared it with performance on these tasks for individuals with sensorineural hearing loss with similar audiometric configurations. Because listeners with NH had similar levels of audibility and bandwidth to listeners with hearing loss, differences between the groups could be attributed to distortions due to hearing loss. DESIGN Listeners with NH (n = 12) and those with hearing loss (n = 23) with steeply sloping audiograms participated in this study. The group with hearing loss consisted of 7 EAS users, 14 hearing aid users, and 3 who did not use amplification in the test ear. Spectral resolution was measured with the spectral-temporal modulated ripple test (SMRT), and speech perception was measured with AzBio sentences in quiet and noise. Listeners with NH listened to stimuli through low-pass filters and at two levels (40 and 60 dBA) to simulate low and high audibility. Listeners with hearing loss listened to SMRT stimuli unaided at their most comfortable listening level and speech stimuli at 60 dBA. RESULTS Results suggest that performance with SMRT is significantly worse for listeners with hearing loss than for listeners with NH and is not related to audibility. 
Performance on the speech perception task declined with decreasing frequency information for both listeners with NH and hearing loss. Significant correlations were observed between speech perception, SMRT scores, and mid-frequency audiometric thresholds for listeners with hearing loss. CONCLUSIONS NH simulations describe a "best case scenario" for hearing loss where audibility is the only deficit. For listeners with hearing loss, the likely broadening of auditory filters, loss of cochlear nonlinearities, and possible cochlear dead regions may have contributed to distorted spectral resolution and thus deviations from the NH simulations. Measures of spectral resolution may capture an aspect of hearing loss not evident from the audiogram and be a useful tool for assessing the contributions of residual hearing post-cochlear implantation.
31
Jorgensen EJ, McCreery RW, Kirby BJ, Brennan M. Effect of level on spectral-ripple detection threshold for listeners with normal hearing and hearing loss. J Acoust Soc Am 2020; 148:908. [PMID: 32873021] [PMCID: PMC7443170] [DOI: 10.1121/10.0001706]
Abstract
This study investigated the effect of presentation level on spectral-ripple detection for listeners with and without sensorineural hearing loss (SNHL). Participants were 25 listeners with normal hearing and 25 listeners with SNHL. Spectral-ripple detection thresholds (SRDTs) were estimated at three spectral densities (0.5, 2, and 4 ripples per octave, RPO) and three to four sensation levels (SLs) (10, 20, 40, and, when possible, 60 dB SL). Each participant was also tested at 90 dB sound pressure level (SPL). Results indicate that level affected SRDTs; however, the effect of level depended on ripple density and hearing status. For all listeners and all RPO conditions, SRDTs improved from 10 to 40 dB SL. In the 2- and 4-RPO conditions, SRDTs became poorer from the 40 dB SL to the 90 dB SPL condition. The results suggest that audibility likely controls spectral-ripple detection at low SLs for all ripple densities, whereas spectral resolution likely controls spectral-ripple detection at high SLs and ripple densities. For optimal ripple detection across all listeners, clinicians and researchers should use a sensation level of 40 dB. To avoid absolute-level confounds, a presentation level of 80 dB SPL can also be used.
Affiliation(s)
- Erik J Jorgensen
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, Iowa 52242, USA
- Ryan W McCreery
- Boys Town National Research Hospital, Omaha, Nebraska 68124, USA
- Benjamin J Kirby
- Department of Audiology and Speech-Language Pathology, University of North Texas, Denton, Texas 76203, USA
- Marc Brennan
- Department of Special Education and Communication Disorders, University of Nebraska-Lincoln, Lincoln, Nebraska 68588, USA
32
Tejani VD, Brown CJ. Speech masking release in Hybrid cochlear implant users: Roles of spectral and temporal cues in electric-acoustic hearing. J Acoust Soc Am 2020; 147:3667. [PMID: 32486815] [PMCID: PMC7255813] [DOI: 10.1121/10.0001304]
Abstract
When compared with cochlear implant (CI) users utilizing electric-only (E-Only) stimulation, CI users utilizing electric-acoustic stimulation (EAS) in the implanted ear show improved speech recognition in modulated noise relative to steady-state noise (i.e., speech masking release). It has been hypothesized, but not shown, that masking release is attributed to spectral resolution and temporal fine structure (TFS) provided by acoustic hearing. To address this question, speech masking release, spectral ripple density discrimination thresholds, and fundamental frequency difference limens (f0DLs) were evaluated in the acoustic-only (A-Only), E-Only, and EAS listening modes in EAS CI users. The spectral ripple and f0DL tasks are thought to reflect access to spectral and TFS cues, which could impact speech masking release. Performance in all three measures was poorest when EAS CI users were tested using the E-Only listening mode, with significant improvements in A-Only and EAS listening modes. f0DLs, but not spectral ripple density discrimination thresholds, significantly correlated with speech masking release when assessed in the EAS listening mode. Additionally, speech masking release correlated with AzBio sentence recognition in noise. The correlation between speech masking release and f0DLs likely indicates that TFS cues provided by residual hearing were used to obtain speech masking release, which aided sentence recognition in noise.
Affiliation(s)
- Viral D Tejani
- Otolaryngology-Head and Neck Surgery, University of Iowa Hospitals and Clinics, 200 Hawkins Drive, 21003 Pomerantz Family Pavilion, Iowa City, Iowa 52242-1078, USA
- Carolyn J Brown
- Communication Sciences and Disorders, Wendell Johnson Speech and Hearing Center-127B, University of Iowa, 250 Hawkins Drive, Iowa City, Iowa 52242, USA
33
Souza P, Arehart K, Schoof T, Anderson M, Strori D, Balmert L. Understanding Variability in Individual Response to Hearing Aid Signal Processing in Wearable Hearing Aids. Ear Hear 2020; 40:1280-1292. [PMID: 30998547] [PMCID: PMC6786927] [DOI: 10.1097/aud.0000000000000717]
Abstract
OBJECTIVES Previous work has suggested that individual characteristics, including amount of hearing loss, age, and working memory ability, may affect response to hearing aid signal processing. The present study aims to extend work using metrics to quantify cumulative signal modifications under simulated conditions to real hearing aids worn in everyday listening environments. Specifically, the goal was to determine whether individual factors such as working memory, age, and degree of hearing loss play a role in explaining how listeners respond to signal modifications caused by signal processing in real hearing aids, worn in the listener's everyday environment, over a period of time. DESIGN Participants were older adults (age range 54-90 years) with symmetrical mild-to-moderate sensorineural hearing loss. We contrasted two distinct hearing aid fittings: one designated as mild signal processing and one as strong signal processing. Forty-nine older adults were enrolled in the study and 35 participants had valid outcome data for both hearing aid fittings. The difference between the two settings related to the wide dynamic range compression and frequency compression features. Order of fittings was randomly assigned for each participant. Each fitting was worn in the listener's everyday environments for approximately 5 weeks before outcome measurements. The trial was double blind, with neither the participant nor the tester aware of the specific fitting at the time of the outcome testing. Baseline measures included a full audiometric evaluation as well as working memory and spectral and temporal resolution. The outcome was aided speech recognition in noise. RESULTS The two hearing aid fittings resulted in different amounts of signal modification, with significantly less modification for the mild signal processing fitting. The effect of signal processing on speech intelligibility depended on an individual's age, working memory capacity, and degree of hearing loss. 
Speech recognition with the strong signal processing decreased with increasing age. Working memory interacted with signal processing, with individuals with lower working memory demonstrating low speech intelligibility in noise with both processing conditions, and individuals with higher working memory demonstrating better speech intelligibility in noise with the mild signal processing fitting. Amount of hearing loss interacted with signal processing, but the effects were small. Individual spectral and temporal resolution did not contribute significantly to the variance in the speech intelligibility score. CONCLUSIONS When the consequences of a specific set of hearing aid signal processing characteristics were quantified in terms of overall signal modification, there was a relationship between participant characteristics and recognition of speech at different levels of signal modification. Because the hearing aid fittings used were constrained to specific fitting parameters that represent the extremes of the signal modification that might occur in clinical fittings, future work should focus on similar relationships with more diverse types of signal processing parameters.
Affiliation(s)
- Pamela Souza: Department of Communication Sciences and Disorders and Knowles Hearing Center, Northwestern University, Evanston, Illinois, USA
- Kathryn Arehart: Department of Speech Language Hearing Sciences, University of Colorado at Boulder
- Tim Schoof: Department of Speech, Hearing and Phonetic Sciences, Division of Psychology and Language Sciences, University College London
- Melinda Anderson: Department of Otolaryngology, University of Colorado School of Medicine
- Dorina Strori: Department of Communication Sciences and Disorders, Northwestern University, Evanston, Illinois, USA; Department of Linguistics, Northwestern University, Evanston, Illinois, USA
- Lauren Balmert: Department of Preventive Medicine, Biostatistics Collaboration Center, Feinberg School of Medicine, Northwestern University
34
Resnick JM, Horn DL, Noble AR, Rubinstein JT. Spectral aliasing in an acoustic spectral ripple discrimination task. J Acoust Soc Am 2020; 147:1054. [PMID: 32113324 PMCID: PMC7112708 DOI: 10.1121/10.0000608] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
Spectral ripple discrimination tasks are commonly used to probe spectral resolution in cochlear implant (CI), normal-hearing (NH), and hearing-impaired individuals. These tasks have also been used to examine the development of spectral resolution in NH and CI children. In this work, stimulus sine-wave carrier density was identified as a critical variable in an example spectral ripple-based task, the Spectro-Temporally Modulated Ripple (SMR) Test, and it was demonstrated that previous applications of the test to NH listeners sometimes used carrier densities insufficient to represent the relevant ripple densities. Insufficient carrier densities produced spectral under-sampling that both eliminated ripple cues at high ripple densities and introduced unintended structured interference between the carriers and intended ripples at particular ripple densities. This effect produced non-monotonic psychometric functions for NH listeners that would cause systematic underestimation of thresholds with adaptive techniques. Studies of spectral ripple detection in CI users probe a density regime below where this source of aliasing occurs, as CI signal processing limits dense ripple representation. While these analyses and experiments focused on the SMR Test, any task in which discrete pure-tone carriers spanning frequency space are modulated to approximate a desired pattern must be designed with consideration of the described spectral aliasing effect.
Affiliation(s)
- Jesse M Resnick: Department of Otolaryngology-Head and Neck Surgery, University of Washington, Box 357923, Seattle, Washington 98195-7923, USA
- David L Horn: Department of Otolaryngology-Head and Neck Surgery, University of Washington, Box 357923, Seattle, Washington 98195-7923, USA
- Anisha R Noble: Department of Otolaryngology-Head and Neck Surgery, University of Washington, Box 357923, Seattle, Washington 98195-7923, USA
- Jay T Rubinstein: Department of Otolaryngology-Head and Neck Surgery, University of Washington, Box 357923, Seattle, Washington 98195-7923, USA
35
Souza P, Gallun F, Wright R. Contributions to Speech-Cue Weighting in Older Adults With Impaired Hearing. J Speech Lang Hear Res 2020; 63:334-344. [PMID: 31940258 PMCID: PMC7213489 DOI: 10.1044/2019_jslhr-19-00176] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
Purpose In a previous paper (Souza, Wright, Blackburn, Tatman, & Gallun, 2015), we explored the extent to which individuals with sensorineural hearing loss used different cues for speech identification when multiple cues were available. Specifically, some listeners placed the greatest weight on spectral cues (spectral shape and/or formant transition), whereas others relied on the temporal envelope. In the current study, we aimed to determine whether listeners who relied on temporal envelope did so because they were unable to discriminate the formant information at a level sufficient to use it for identification and the extent to which a brief discrimination test could predict cue weighting patterns. Method Participants were 30 older adults with bilateral sensorineural hearing loss. The first task was to label synthetic speech tokens based on the combined percept of temporal envelope rise time and formant transitions. An individual profile was derived from linear discriminant analysis of the identification responses. The second task was to discriminate differences in either temporal envelope rise time or formant transitions. The third task was to discriminate spectrotemporal modulation in a nonspeech stimulus. Results All listeners were able to discriminate temporal envelope rise time at levels sufficient for the identification task. There was wide variability in the ability to discriminate formant transitions, and that ability predicted approximately one third of the variance in the identification task. There was no relationship between performance in the identification task and either amount of hearing loss or ability to discriminate nonspeech spectrotemporal modulation. Conclusions The data suggest that listeners who rely to a greater extent on temporal cues lack the ability to discriminate fine-grained spectral information. 
The fact that the amount of hearing loss was not associated with the cue profile underscores the need to characterize individual abilities in a more nuanced way than can be captured by the pure-tone audiogram.
Affiliation(s)
- Pamela Souza: Department of Communication Sciences and Disorders and Knowles Hearing Center, Northwestern University, Evanston, IL
- Frederick Gallun: Rehabilitation Research and Development National Center for Rehabilitative Auditory Research, VA Portland Health Care System and Oregon Health and Sciences University
- Richard Wright: Department of Linguistics, University of Washington, Seattle
36
Kirby BJ, Spratford M, Klein KE, McCreery RW. Cognitive Abilities Contribute to Spectro-Temporal Discrimination in Children Who Are Hard of Hearing. Ear Hear 2019; 40:645-650. [PMID: 30130295 DOI: 10.1097/aud.0000000000000645] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
OBJECTIVES Spectral ripple discrimination tasks have received considerable interest as potential clinical tools for use with adults and children with hearing loss. Previous results have indicated that performance on ripple tasks is affected by differences in aided audibility (quantified using the Speech Intelligibility Index, SII) in children who wear hearing aids and that ripple thresholds tend to improve over time in children with and without hearing loss. Although ripple task performance is thought to depend less on language skills than common speech perception tasks, the extent to which spectral ripple discrimination might depend on other general cognitive abilities such as nonverbal intelligence and working memory is unclear. This is an important consideration for children because age-related changes in ripple test results could be due to developing cognitive ability and could obscure the effect of any changes in unaided or aided hearing over time. The purpose of this study was to establish the relationship between spectral ripple discrimination in a group of children who use hearing aids and general cognitive abilities such as nonverbal intelligence, visual and auditory working memory, and executive function. It was hypothesized that, after controlling for listener age, general cognitive ability would be associated with spectral ripple thresholds, and that performance on both auditory and visual cognitive tasks would be associated with spectral ripple thresholds. DESIGN Children who were full-time users of hearing aids for at least 1 year (n = 24, ages 6 to 13 years) participated in this study. Children completed a spectro-temporal modulated ripple discrimination task in the sound field using their personal hearing aids. Threshold was determined from the average of two repetitions of the task. Participants completed standard measurements of executive function, nonverbal intelligence, and visual and verbal working memory.
Real ear verification measures were completed for each child with their personal hearing aids to determine aided SII. RESULTS Consistent with past findings, spectro-temporal ripple thresholds improved with greater listener age. Surprisingly, aided SII was not significantly correlated with spectro-temporal ripple thresholds, potentially because this particular group of listeners had overall better hearing and greater aided SII than participants in previous studies. Partial correlations controlling for listener age revealed that greater nonverbal intelligence and visual working memory were associated with better spectro-temporal ripple discrimination thresholds. Verbal working memory, executive function, and language ability were not significantly correlated with spectro-temporal ripple discrimination thresholds. CONCLUSIONS These results indicate that greater general cognitive abilities are associated with better spectro-temporal ripple discrimination ability, independent of children's age or aided SII. It is possible that these relationships reflect the cognitive demands of the psychophysical task rather than a direct relationship of cognitive ability to spectro-temporal processing in the auditory system. Further work is needed to determine the relationships of cognitive abilities to ripple discrimination in other populations, such as children with cochlear implants or with a wider range of aided SII.
Affiliation(s)
- Benjamin J Kirby: Department of Communication Sciences and Disorders, Illinois State University, Normal, Illinois, USA
- Kelsey E Klein: Department of Communication Sciences and Disorders, University of Iowa, Iowa City, Iowa, USA
37
Halliday LF, Rosen S, Tuomainen O, Calcus A. Impaired frequency selectivity and sensitivity to temporal fine structure, but not envelope cues, in children with mild-to-moderate sensorineural hearing loss. J Acoust Soc Am 2019; 146:4299. [PMID: 31893709 DOI: 10.1121/1.5134059] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/01/2019] [Accepted: 10/24/2019] [Indexed: 06/10/2023]
Abstract
Psychophysical thresholds were measured for 8-16 year-old children with mild-to-moderate sensorineural hearing loss (MMHL; N = 46) on a battery of auditory processing tasks that included measures designed to be dependent upon frequency selectivity and sensitivity to temporal fine structure (TFS) or envelope cues. Children with MMHL who wore hearing aids were tested in both unaided and aided conditions, and all were compared to a group of normally hearing (NH) age-matched controls. Children with MMHL performed more poorly than NH controls on tasks considered to be dependent upon frequency selectivity, sensitivity to TFS, and speech discrimination (/bɑ/-/dɑ/), but not on tasks measuring sensitivity to envelope cues. Auditory processing deficits remained regardless of age, were observed in both unaided and aided conditions, and could not be attributed to differences in nonverbal IQ or attention between groups. However, better auditory processing in children with MMHL was predicted by better audiometric thresholds and, for aided tasks only, higher levels of maternal education. These results suggest that, as for adults with MMHL, children with MMHL may show deficits in frequency selectivity and sensitivity to TFS, but sensitivity to the envelope may remain intact.
Affiliation(s)
- Lorna F Halliday: Speech, Hearing, and Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London WC1N 1PF, United Kingdom
- Stuart Rosen: Speech, Hearing, and Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London WC1N 1PF, United Kingdom
- Outi Tuomainen: Speech, Hearing, and Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London WC1N 1PF, United Kingdom
- Axelle Calcus: Speech, Hearing, and Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London WC1N 1PF, United Kingdom
38
Souza P, Hoover E, Blackburn M, Gallun F. The Characteristics of Adults with Severe Hearing Loss. J Am Acad Audiol 2019; 29:764-779. [PMID: 30222545 DOI: 10.3766/jaaa.17050] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
BACKGROUND Severe hearing loss impairs communication in a wide range of listening environments. However, we lack data as to the specific objective and subjective abilities of listeners with severe hearing loss. Insight into those abilities may inform treatment choices. PURPOSE The primary goal was to describe the audiometric profiles, spectral resolution ability, and objective and subjective speech perception of a sample of adult listeners with severe hearing loss, and to consider the relationships among those measures. We also considered the typical fitting received by individuals with severe loss, in terms of hearing aid style, electroacoustic characteristics, and features, as well as supplementary device use. RESEARCH DESIGN A within-subjects design was used. STUDY SAMPLE Participants included 36 adults aged 54-93 yr with unilateral or bilateral severe hearing loss. DATA COLLECTION AND ANALYSIS Testing included a full hearing and hearing aid history; audiometric evaluation; loudness growth and dynamic range; spectral resolution; assessment of cochlear dead regions; objective and subjective assessment of speech recognition; and electroacoustic evaluation of current hearing aids. Regression models were used to analyze relationships between hearing loss, spectral resolution, and speech recognition. RESULTS For speech in quiet, 60% of the variance was approximately equally accounted for by amount of hearing loss, spectral resolution, and number of dead regions. For speech in noise, only a modest proportion of performance variance was explained by amount of hearing loss. In general, participants were wearing amplification of appropriate style and technology for their hearing loss, but the extent of assistive technology use was low. Subjective communication ratings depended on the listening situation, but in general, were similar to previously published data for adults with mild-to-moderate loss who did not wear hearing aids. 
CONCLUSIONS The present data suggest that the range of abilities of an individual can be more fully captured with comprehensive testing. Such testing also offers an opportunity for informed counseling regarding realistic expectations for hearing aid use and the availability of hearing assistive technology.
Affiliation(s)
- Pamela Souza: Department of Communication Sciences and Disorders and Knowles Hearing Center, Northwestern University, Evanston, IL
- Eric Hoover: Auditory & Speech Sciences Laboratory, University of South Florida, Tampa, FL
- Frederick Gallun: National Center for Rehabilitative Auditory Research, Portland VA Medical Center and Oregon Health Sciences University, Portland, OR
39
Jeddi Z, Lotfi Y, Moossavi A, Bakhshi E, Hashemi SB. Correlation between Auditory Spectral Resolution and Speech Perception in Children with Cochlear Implants. Iran J Med Sci 2019; 44:382-389. [PMID: 31582862 PMCID: PMC6754529 DOI: 10.30476/ijms.2019.44967] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
Abstract
Background: Variability in speech performance is a major concern for children with cochlear implants (CIs). Spectral resolution is an important acoustic component in speech perception. Considerable variability and limitations of spectral resolution in children with CIs may lead to individual differences in speech performance. The aim of this study was to assess the correlation between auditory spectral resolution and speech perception in pediatric CI users.
Methods: This cross-sectional study was conducted in Shiraz, Iran, in 2017. The frequency discrimination threshold (FDT) and the spectral-temporal modulated ripple discrimination threshold (SMRT) were measured for 75 pre-lingual hearing-impaired children with CIs (age=8-12 y). Word recognition and sentence perception tests were completed to assess speech perception. The Pearson correlation analysis and multiple linear regression analysis were used to determine the correlation between the variables and to determine the predictive variables of speech perception, respectively.
Results: There was a significant correlation between the SMRT and word recognition (r=0.573, P<0.001). The FDT was also significantly correlated with word recognition (r=0.487, P<0.001). Sentence perception had a significant correlation with both the SMRT and the FDT. Chronological age and age at implantation each correlated significantly with the SMRT but not with the FDT.
Conclusion: Auditory spectral resolution correlated well with speech perception among our children with CIs. Spectral resolution ability accounted for approximately 40% of the variance in speech perception among the children with CIs.
Affiliation(s)
- Zahra Jeddi: Department of Audiology, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran
- Younes Lotfi: Department of Audiology, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran
- Abdollah Moossavi: Department of Otolaryngology and Head and Neck Surgery, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Enayatollah Bakhshi: Department of Biostatistics, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran
- Seyed Basir Hashemi: Department of Otolaryngology, Khalili Hospital, Shiraz University of Medical Sciences, Shiraz, Iran
40
Jensen KK, Bernstein JGW. The fluctuating masker benefit for normal-hearing and hearing-impaired listeners with equal audibility at a fixed signal-to-noise ratio. J Acoust Soc Am 2019; 145:2113. [PMID: 31046298 PMCID: PMC6472958 DOI: 10.1121/1.5096641] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
Normal-hearing (NH) listeners can extract and integrate speech fragments from momentary dips in the level of a fluctuating masker, yielding a fluctuating-masker benefit (FMB) for speech understanding relative to a stationary-noise masker. Hearing-impaired (HI) listeners generally show less FMB, suggesting a dip-listening deficit attributable to suprathreshold spectral or temporal distortion. However, reduced FMB might instead result from different test signal-to-noise ratios (SNRs), reduced absolute audibility of otherwise unmasked speech segments, or age differences. This study examined the FMB for nine age-matched NH-HI listener pairs, while simultaneously equalizing audibility, SNR, and percentage-correct performance in stationary noise. Nonsense syllables were masked by stationary noise, 4- or 32-Hz sinusoidally amplitude-modulated noise (SAMN), or an opposite-gender interfering talker. Stationary-noise performance was equalized by adjusting the response-set size. Audibility was equalized by removing stimulus components falling below the HI absolute threshold. HI listeners showed a clear 4.5-dB reduction in FMB for 32-Hz SAMN, a similar FMB to NH listeners for 4-Hz SAMN, and a non-significant trend toward a 2-dB reduction in FMB for an interfering talker. These results suggest that HI listeners do not exhibit a general dip-listening deficit for all fluctuating maskers, but rather a specific temporal-resolution deficit affecting performance for high-rate modulated maskers.
Affiliation(s)
- Kenneth Kragh Jensen: National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
- Joshua G W Bernstein: National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
41
Holder JT, Reynolds SM, Sunderhaus LW, Gifford RH. Current Profile of Adults Presenting for Preoperative Cochlear Implant Evaluation. Trends Hear 2019; 22:2331216518755288. [PMID: 29441835 PMCID: PMC6027468 DOI: 10.1177/2331216518755288] [Citation(s) in RCA: 65] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2023] Open
Abstract
Considerable advancements in cochlear implant technology (e.g., electric acoustic stimulation) and assessment materials have yielded expanded criteria. Despite this, it is unclear whether individuals with better audiometric thresholds and speech understanding are being referred for cochlear implant workup and pursuing cochlear implantation. The purpose of this study was to characterize the mean auditory and demographic profile of adults presenting for preoperative cochlear implant workup. Data were collected prospectively for all adult preoperative workups at Vanderbilt from 2013 to 2015. Subjects included 287 adults (253 postlingually deafened) with a mean age of 62.3 years. Each individual was assessed using the minimum speech test battery, spectral modulation detection, subjective questionnaires, and cognitive screening. Mean consonant-nucleus-consonant word scores, AzBio sentence scores, and pure-tone averages for postlingually deafened adults were 10%, 13%, and 89 dB HL, respectively, for the ear to be implanted. Seventy-three individuals (25.4%) met labeled indications for Hybrid-L and 207 individuals (72.1%) had aidable hearing in the better hearing ear to be used in a bimodal hearing configuration. These results suggest that mean speech understanding evaluated at cochlear implant workup remains very low despite recent advancements. Greater awareness and insurance accessibility may be needed to make cochlear implant technology available to those who qualify for electric acoustic stimulation devices as well as individuals meeting conventional cochlear implant criteria.
Affiliation(s)
- Jourdan T Holder: Department of Hearing and Speech Science, Vanderbilt Bill Wilkerson Center, Vanderbilt University Medical Center, Nashville, TN, USA
- Susan M Reynolds: Department of Hearing and Speech Science, Vanderbilt Bill Wilkerson Center, Vanderbilt University Medical Center, Nashville, TN, USA
- Linsey W Sunderhaus: Department of Hearing and Speech Science, Vanderbilt Bill Wilkerson Center, Vanderbilt University Medical Center, Nashville, TN, USA
- René H Gifford: Department of Hearing and Speech Science, Vanderbilt Bill Wilkerson Center, Vanderbilt University Medical Center, Nashville, TN, USA; Advanced Bionics, Valencia, CA, USA; Cochlear Americas, Englewood, CO, USA; Frequency Therapeutics, Woburn, MA, USA
42
Gifford RH, Noble JH, Camarata SM, Sunderhaus LW, Dwyer RT, Dawant BM, Dietrich MS, Labadie RF. The Relationship Between Spectral Modulation Detection and Speech Recognition: Adult Versus Pediatric Cochlear Implant Recipients. Trends Hear 2019; 22:2331216518771176. [PMID: 29716437 PMCID: PMC5949922 DOI: 10.1177/2331216518771176] [Citation(s) in RCA: 39] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022] Open
Abstract
Adult cochlear implant (CI) recipients demonstrate a reliable relationship between spectral modulation detection and speech understanding. Prior studies documenting this relationship have focused on postlingually deafened adult CI recipients—leaving an open question regarding the relationship between spectral resolution and speech understanding for adults and children with prelingual onset of deafness. Here, we report CI performance on the measures of speech recognition and spectral modulation detection for 578 CI recipients including 477 postlingual adults, 65 prelingual adults, and 36 prelingual pediatric CI users. The results demonstrated a significant correlation between spectral modulation detection and various measures of speech understanding for 542 adult CI recipients. For 36 pediatric CI recipients, however, there was no significant correlation between spectral modulation detection and speech understanding in quiet or in noise nor was spectral modulation detection significantly correlated with listener age or age at implantation. These findings suggest that pediatric CI recipients might not depend upon spectral resolution for speech understanding in the same manner as adult CI recipients. It is possible that pediatric CI users are making use of different cues, such as those contained within the temporal envelope, to achieve high levels of speech understanding. Further investigation is warranted to investigate the relationship between spectral and temporal resolution and speech recognition to describe the underlying mechanisms driving peripheral auditory processing in pediatric CI users.
Affiliation(s)
- René H Gifford: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, TN, USA
- Jack H Noble: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
- Stephen M Camarata: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Linsey W Sunderhaus: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Robert T Dwyer: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Benoit M Dawant: Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
- Mary S Dietrich: Department of Biostatistics, Vanderbilt University Medical Center, Nashville, TN, USA
- Robert F Labadie: Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
43
Shen Y, Kern AB, Richards VM. Toward Routine Assessments of Auditory Filter Shape. J Speech Lang Hear Res 2019; 62:442-455. [PMID: 30950687 PMCID: PMC6436893 DOI: 10.1044/2018_jslhr-h-18-0092] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/16/2018] [Revised: 06/26/2018] [Accepted: 10/11/2018] [Indexed: 06/09/2023]
Abstract
Purpose A Bayesian adaptive procedure, the quick auditory filter (qAF) procedure, has been shown to improve the efficiency of estimating auditory filter shapes in listeners with normal hearing. The current study evaluates the accuracy and test-retest reliability of the qAF procedure for naïve listeners with a variety of ages and hearing status. Method Fifty listeners who were naïve to psychophysical experiments and exhibited wide ranges of age (19-70 years) and hearing threshold (-5 to 70 dB HL at 2 kHz) were recruited. Their auditory filter shapes were estimated for a 15-dB SL target tone at 2 kHz using both the qAF procedure and the traditional threshold-based procedure. The auditory filter model was defined using 3 parameters: (a) the sharpness of the tip portion of the auditory filter, p; (b) the prominence of the low-frequency tail of the filter, 10log(w); and (c) the listener's efficiency in detection, 10log(K). Results The estimated parameters of the auditory filter model were consistent between 2 qAF runs tested on 2 separate days. The parameter estimates from the 2 qAF runs also agreed well with those from the traditional procedure, even though the qAF procedure was substantially faster. Across the 3 auditory filter estimates, the dependence of the auditory filter parameters on listener age and hearing threshold was consistent across procedures, as well as consistent with previously published estimates. Conclusions The qAF procedure demonstrates satisfactory test-retest reliability and good agreement with the traditional procedure for listeners with a wide range of ages and with hearing status ranging from normal hearing to moderate hearing impairment.
Affiliation(s)
- Yi Shen: Department of Speech and Hearing Sciences, Indiana University Bloomington
- Allison B. Kern: Department of Speech and Hearing Sciences, Indiana University Bloomington
44
Yoho SE, Bosen AK. Individualized frequency importance functions for listeners with sensorineural hearing loss. J Acoust Soc Am 2019; 145:822. [PMID: 30823788 PMCID: PMC6375730 DOI: 10.1121/1.5090495] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/17/2023]
Abstract
The Speech Intelligibility Index includes a series of frequency importance functions for calculating the estimated intelligibility of speech under various conditions. Until recently, techniques to derive frequency importance required averaging data over a group of listeners, thus hindering the ability to observe individual differences due to factors such as hearing loss. In the current study, the "random combination strategy" [Bosen and Chatterjee (2016). J. Acoust. Soc. Am. 140, 3718-3727] was used to derive frequency importance functions for individual hearing-impaired listeners, and normal-hearing participants for comparison. Functions were measured by filtering sentences to contain only random subsets of frequency bands on each trial, and regressing speech recognition against the presence or absence of bands across trials. Results show that the contribution of each band to speech recognition was inversely proportional to audiometric threshold in that frequency region, likely due to reduced audibility, even though stimuli were shaped to compensate for each individual's hearing loss. The results presented in this paper demonstrate that this method is sensitive to factors that alter the shape of frequency importance functions within individuals with hearing loss, which could be used to characterize the impact of audibility or other factors related to suprathreshold deficits or hearing aid processing strategies.
Affiliation(s)
- Sarah E Yoho
- Department of Communicative Disorders and Deaf Education, Utah State University, Logan, Utah 84322, USA
- Adam K Bosen
- Boys Town National Research Hospital, Omaha, Nebraska 68131, USA
45
Miller CW, Bernstein JGW, Zhang X, Wu YH, Bentler RA, Tremblay K. The Effects of Static and Moving Spectral Ripple Sensitivity on Unaided and Aided Speech Perception in Noise. J Speech Lang Hear Res 2018; 61:3113-3126. PMID: 30515519; PMCID: PMC6440313; DOI: 10.1044/2018_jslhr-h-17-0373.
Abstract
PURPOSE This study evaluated whether certain spectral ripple conditions were more informative than others in predicting ecologically relevant unaided and aided speech outcomes. METHOD A quasi-experimental study design was used to evaluate 67 older adult hearing aid users with bilateral, symmetrical hearing loss. Speech perception in noise was tested under conditions of unaided and aided, auditory-only and auditory-visual, and 2 types of noise. Predictors included age, audiometric thresholds, audibility, hearing aid compression, and modulation depth detection thresholds for moving (4-Hz) or static (0-Hz) 2-cycle/octave spectral ripples applied to carriers of broadband noise or 2000-Hz low- or high-pass filtered noise. RESULTS A principal component analysis of the modulation detection data found that broadband and low-pass static and moving ripple detection thresholds loaded onto the first factor whereas high-pass static and moving ripple detection thresholds loaded onto a second factor. A linear mixed model revealed that audibility and the first factor (reflecting broadband and low-pass static and moving ripples) were significantly associated with speech perception performance. Similar results were found for unaided and aided speech scores. The interactions between speech conditions were not significant, suggesting that the relationship between ripples and speech perception was consistent regardless of visual cues or noise condition. High-pass ripple sensitivity was not correlated with speech understanding. CONCLUSIONS The results suggest that, for hearing aid users, poor speech understanding in noise and sensitivity to both static and slow-moving ripples may reflect deficits in the same underlying auditory processing mechanism. Significant factor loadings involving ripple stimuli with low-frequency content may suggest an impaired ability to use temporal fine structure information in the stimulus waveform. Support is provided for the use of spectral ripple testing to predict speech perception outcomes in clinical settings.
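The factor structure reported in this abstract (broadband and low-pass conditions loading together, high-pass conditions loading separately) can be sketched with a toy principal component analysis. Everything here is simulated and hypothetical: the condition names, the shared-factor structure, and the noise level are assumptions chosen only to show how such loadings arise.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 67  # number of listeners in the study

# Assumed latent abilities: one shared by broadband/low-pass conditions,
# a separate one driving the high-pass conditions.
factor_low = rng.normal(size=n)
factor_high = rng.normal(size=n)
conditions = {
    "broadband_static": factor_low + 0.3 * rng.normal(size=n),
    "broadband_moving": factor_low + 0.3 * rng.normal(size=n),
    "lowpass_static": factor_low + 0.3 * rng.normal(size=n),
    "lowpass_moving": factor_low + 0.3 * rng.normal(size=n),
    "highpass_static": factor_high + 0.3 * rng.normal(size=n),
    "highpass_moving": factor_high + 0.3 * rng.normal(size=n),
}
X = np.column_stack(list(conditions.values()))
X = (X - X.mean(0)) / X.std(0)  # standardize each condition

# PCA via SVD of the standardized data matrix; the first row of vt
# gives each condition's loading on the first principal component.
_, s, vt = np.linalg.svd(X, full_matrices=False)
loadings = vt[0]
```

In this toy version the four broadband/low-pass columns are strongly intercorrelated and dominate the first component, mirroring the grouping the study found.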
Affiliation(s)
- Christi W. Miller
- Department of Speech and Hearing Sciences, University of Washington, Seattle
- Joshua G. W. Bernstein
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD
- Xuyang Zhang
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City
- Yu-Hsiang Wu
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City
- Ruth A. Bentler
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City
- Kelly Tremblay
- Department of Speech and Hearing Sciences, University of Washington, Seattle
46
Souza P, Hoover E. The Physiologic and Psychophysical Consequences of Severe-to-Profound Hearing Loss. Semin Hear 2018; 39:349-363. PMID: 30443103; DOI: 10.1055/s-0038-1670698.
Abstract
Substantial loss of cochlear function is required to elevate pure-tone thresholds to the severe hearing loss range; yet, individuals with severe or profound hearing loss continue to rely on hearing for communication. Despite the impairment, sufficient information is encoded at the periphery to make acoustic hearing a viable option. However, the probability of significant cochlear and/or neural damage associated with the loss has consequences for sound perception and speech recognition. These consequences include degraded frequency selectivity, which can be assessed with tests including psychoacoustic tuning curves and broadband rippled stimuli. Because speech recognition depends on the ability to resolve frequency detail, a listener with severe hearing loss is likely to have impaired communication in both quiet and noisy environments. However, the extent of the impairment varies widely among individuals. A better understanding of the fundamental abilities of listeners with severe and profound hearing loss and the consequences of those abilities for communication can support directed treatment options in this population.
Affiliation(s)
- Pamela Souza
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, Illinois
- Eric Hoover
- Department of Hearing and Speech Sciences, University of Maryland, Baltimore, Maryland
47
The effect of presentation level on spectrotemporal modulation detection. Hear Res 2018; 371:11-18. PMID: 30439570; DOI: 10.1016/j.heares.2018.10.017.
Abstract
The understanding of speech in noise relies (at least partially) on spectrotemporal modulation sensitivity. This sensitivity can be measured by spectral ripple tests, which can be administered at different presentation levels. However, it is not known how presentation level affects spectrotemporal modulation thresholds. In this work, we present behavioral data for normal-hearing adults which show that at higher ripple densities (2 and 4 ripples/oct), increasing presentation level led to worse discrimination thresholds. Results of a computational model suggested that the higher thresholds could be explained by a worsening of the spectrotemporal representation in the auditory nerve due to broadening of cochlear filters and neural activity saturation. Our results demonstrate the importance of taking presentation level into account when administering spectrotemporal modulation detection tests.
48
Günel B, Thiel CM, Hildebrandt KJ. Effects of Exogenous Auditory Attention on Temporal and Spectral Resolution. Front Psychol 2018; 9:1984. PMID: 30405479; PMCID: PMC6206225; DOI: 10.3389/fpsyg.2018.01984.
Abstract
Previous research in the visual domain suggests that exogenous attention in form of peripheral cueing increases spatial but lowers temporal resolution. It is unclear whether this effect transfers to other sensory modalities. Here, we tested the effects of exogenous attention on temporal and spectral resolution in the auditory domain. Eighteen young, normal-hearing adults were tested in both gap and frequency change detection tasks with exogenous cuing. Benefits of valid cuing were only present in the gap detection task while costs of invalid cuing were observed in both tasks. Our results suggest that exogenous attention in the auditory system improves temporal resolution without compromising spectral resolution.
Affiliation(s)
- Basak Günel
- Department of Psychology, University of Oldenburg, Oldenburg, Germany
- Christiane M Thiel
- Department of Psychology, University of Oldenburg, Oldenburg, Germany; Cluster of Excellence Hearing4all, University of Oldenburg, Oldenburg, Germany
- K Jannis Hildebrandt
- Cluster of Excellence Hearing4all, University of Oldenburg, Oldenburg, Germany; Department of Neuroscience, University of Oldenburg, Oldenburg, Germany
49
Nittrouer S, Krieg LM, Lowenstein JH. Speech Recognition in Noise by Children with and without Dyslexia: How is it Related to Reading? Res Dev Disabil 2018; 77:98-113. PMID: 29724639; PMCID: PMC5947872; DOI: 10.1016/j.ridd.2018.04.014.
Abstract
PURPOSE Developmental dyslexia is commonly viewed as a phonological deficit that makes it difficult to decode written language. But children with dyslexia typically exhibit other problems, as well, including poor speech recognition in noise. The purpose of this study was to examine whether the speech-in-noise problems of children with dyslexia are related to their reading problems, and if so, if a common underlying factor might explain both. The specific hypothesis examined was that a spectral processing disorder results in these children receiving smeared signals, which could explain both the diminished sensitivity to phonological structure - leading to reading problems - and the speech recognition in noise difficulties. The alternative hypothesis tested in this study was that children with dyslexia simply have broadly based language deficits. PARTICIPANTS Ninety-seven children between the ages of 7 years 10 months and 12 years 9 months participated: 46 with dyslexia and 51 without dyslexia. METHODS Children were tested on two dependent measures: word reading and recognition in noise with two types of sentence materials: as unprocessed (UP) signals, and as spectrally smeared (SM) signals. Data were collected for four predictor variables: phonological awareness, vocabulary, grammatical knowledge, and digit span. RESULTS Children with dyslexia showed deficits on both dependent and all predictor variables. Their scores for speech recognition in noise were poorer than those of children without dyslexia for both the UP and SM signals, but by equivalent amounts across signal conditions indicating that they were not disproportionately hindered by spectral distortion. Correlation analyses on scores from children with dyslexia showed that reading ability and speech-in-noise recognition were only mildly correlated, and each skill was related to different underlying abilities. CONCLUSIONS No substantial evidence was found to support the suggestion that the reading and speech recognition in noise problems of children with dyslexia arise from a single factor that could be defined as a spectral processing disorder. The reading and speech recognition in noise deficits of these children appeared to be largely independent.
50
Buss E, Grose J. Auditory sensitivity to spectral modulation phase reversal as a function of modulation depth. PLoS One 2018; 13:e0195686. PMID: 29621338; PMCID: PMC5886689; DOI: 10.1371/journal.pone.0195686.
Abstract
The present study evaluated auditory sensitivity to spectral modulation by determining the modulation depth required to detect modulation phase reversal. This approach may be preferable to spectral modulation detection with a spectrally flat standard, since listeners appear unable to perform the task based on the detection of temporal modulation. While phase reversal thresholds are often evaluated by holding modulation depth constant and adjusting modulation rate, holding rate constant and adjusting modulation depth supports rate-specific assessment of modulation processing. Stimuli were pink noise samples, filtered into seven octave-wide bands (0.125–8 kHz) and spectrally modulated in dB. Experiment 1 measured performance as a function of modulation depth to determine appropriate units for adaptive threshold estimation. Experiment 2 compared thresholds in dB for modulation detection with a flat standard and modulation phase reversal; results supported the idea that temporal cues were available at high rates for the former but not the latter. Experiment 3 evaluated spectral modulation phase reversal thresholds for modulation that was restricted to either one or two neighboring bands. Flanking bands of unmodulated noise had a larger detrimental effect on one-band than two-band targets. Thresholds for high-rate modulation improved with increasing carrier frequency up to 2 kHz, whereas low-rate modulation appeared more consistent across frequency, particularly in the two-band condition. Experiment 4 measured spectral weights for spectral modulation phase reversal detection and found higher weights for bands in the spectral center of the stimulus than for the lowest (0.125 kHz) or highest (8 kHz) band. Experiment 5 compared performance for highly practiced and relatively naïve listeners, and found weak evidence of a larger practice effect at high than low spectral modulation rates. These results provide preliminary data for a task that may provide a better estimate of sensitivity to spectral modulation than spectral modulation detection with a flat standard.
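The stimulus class this abstract describes can be sketched in a few lines: noise whose spectrum is sinusoidally modulated in dB along a log-frequency axis, with a phase term that flips spectral peaks into valleys when reversed. This is a hypothetical illustration under assumed parameters (sample rate, band edges, 20-dB depth, 1 ripple/octave), not the authors' stimulus-generation code.

```python
import numpy as np

fs, dur = 16000, 0.5
n = int(fs * dur)
freqs = np.fft.rfftfreq(n, 1 / fs)

def ripple_noise(depth_db, rate_cyc_per_oct, phase, rng):
    # Random-phase noise with a pink-like (1/f power) magnitude
    # restricted to an assumed 125 Hz - 8 kHz passband.
    mag = np.zeros_like(freqs)
    band = (freqs >= 125) & (freqs <= 8000)
    mag[band] = 1.0 / np.sqrt(freqs[band])
    # Sinusoidal spectral modulation in dB on a log2-frequency axis;
    # `phase` controls where the spectral peaks fall.
    mod_db = np.zeros_like(freqs)
    mod_db[band] = (depth_db / 2) * np.sin(
        2 * np.pi * rate_cyc_per_oct * np.log2(freqs[band] / 125) + phase
    )
    mag = mag * 10 ** (mod_db / 20)
    spec = mag * np.exp(1j * rng.uniform(0, 2 * np.pi, freqs.size))
    x = np.fft.irfft(spec, n)
    return x / np.max(np.abs(x))  # peak-normalize the waveform

rng = np.random.default_rng(1)
standard = ripple_noise(20, 1.0, 0.0, rng)     # one modulation phase
reversed_ = ripple_noise(20, 1.0, np.pi, rng)  # phase-reversed envelope
```

A phase-reversal task then asks the listener to detect the interval whose spectral envelope is flipped relative to the others, with threshold defined by the smallest `depth_db` supporting detection.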
Affiliation(s)
- Emily Buss
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, United States of America
- John Grose
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, United States of America