1. Kates JM, Lavandier M, Muralimanohar RK, Lundberg EMH, Arehart KH. Binaural speech intelligibility for combinations of noise, reverberation, and hearing-aid signal processing. PLoS One 2025; 20:e0317266. PMID: 39813264; PMCID: PMC11734965; DOI: 10.1371/journal.pone.0317266.
Abstract
Binaural speech intelligibility in rooms is a complex process affected by many factors, including room acoustics, hearing loss, and hearing aid (HA) signal processing. This paper evaluates intelligibility for a simulated room combined with a simulated hearing aid. The test conditions comprise three spatial configurations of the speech and noise sources, simulated anechoic and concert-hall acoustics, three amounts of multitalker babble interference, the hearing status of the listeners, and three degrees of simulated HA processing provided to compensate for the noise and/or hearing loss. The impact of these factors and their interactions is considered for normal-hearing (NH) and hearing-impaired (HI) listeners using sentence stimuli. Both listener groups showed a significant reduction in intelligibility as the signal-to-noise ratio (SNR) decreased, and a further reduction in reverberation compared with anechoic listening. The noise-suppression algorithm used here produced no significant intelligibility improvement for the NH group, and the more advanced HA processing algorithms produced no significant improvement over linear amplification for the HI group in either acoustic space or at any of the three SNRs.
Affiliation(s)
- James M. Kates
- Department of Speech, Language, and Hearing Sciences, University of Colorado, Boulder, Colorado, United States of America
- Mathieu Lavandier
- ENTPE, Ecole Centrale de Lyon, CNRS, LTDS, UMR5513, Vaulx-en-Velin, France
- Ramesh Kumar Muralimanohar
- Department of Communication Sciences and Disorders, University of Northern Colorado, Greeley, Colorado, United States of America
- Emily M. H. Lundberg
- Department of Speech, Language, and Hearing Sciences, University of Colorado, Boulder, Colorado, United States of America
- Kathryn H. Arehart
- Department of Speech, Language, and Hearing Sciences, University of Colorado, Boulder, Colorado, United States of America
2. Caso A, Griffiths TD, Holmes E. Spatial selective auditory attention is preserved in older age but is degraded by peripheral hearing loss. Sci Rep 2024; 14:26243. PMID: 39482327; PMCID: PMC11527878; DOI: 10.1038/s41598-024-77102-5.
Abstract
Interest in how ageing affects attention is long-standing, although interactions between sensory and attentional processing in older age are not fully understood. Here, we examined interactions between peripheral hearing and selective attention in a spatialised cocktail-party listening paradigm, in which three talkers spoke different sentences simultaneously and participants were asked to report the sentence spoken by the talker at a particular location. By comparing a sample of older (N = 61; age = 55-80 years) and younger (N = 58; age = 18-35 years) adults, we show that, as a group, older adults benefit as much as younger adults from preparatory spatial attention. For older adults, however, this benefit is significantly reduced with greater age-related hearing loss. These results demonstrate that older adults with excellent hearing retain the ability to direct spatial selective attention, but that this ability deteriorates, in a graded manner, with age-related hearing loss. Reductions in spatial selective attention therefore likely contribute to the difficulty that older adults with age-related hearing loss experience when communicating in social settings. Overall, these findings demonstrate a relationship between mild perceptual decline and attention in older age.
Affiliation(s)
- Andrea Caso
- Department of Speech Hearing and Phonetic Sciences, Division of Psychology and Language Sciences, University College London, Chandler House, 2 Wakefield Street, London, WC1N 3PF, UK
- Timothy D Griffiths
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, UK
- Wellcome Centre for Human Neuroimaging, University College London, London, UK
- Human Brain Research Laboratory, University of Iowa, Iowa City, IA, USA
- Emma Holmes
- Department of Speech Hearing and Phonetic Sciences, Division of Psychology and Language Sciences, University College London, Chandler House, 2 Wakefield Street, London, WC1N 3PF, UK
3. Li JY, Wang X, Nie S, Zhu MY, Liu JX, Wei L, Li H, Wang NY, Zhang J. Neural encoding for spatial release from informational masking and its correlation with behavioral metrics. J Neurophysiol 2024; 132:1265-1277. PMID: 39258777; DOI: 10.1152/jn.00279.2024.
Abstract
The central auditory system encompasses two primary functions: identification and localization. Spatial release from masking (SRM) describes the improvement in speech recognition in competing noise when a spatial cue is introduced between the noise and the target speech. This assessment probes the integrity of auditory function and holds clinical significance; however, infants or pre-lingual subjects sometimes provide less reliable results. This study investigates the value of the cortical auditory evoked potential (CAEP) onset response and the acoustic change complex (ACC) as objective measures of SRM. Thirty normal-hearing young adults (11 males) were recruited. Spatial separation of signal and noise (±90° symmetrically) produced a behavioral signal-to-noise ratio (SNR) improvement of 9.00 ± 1.71 dB. Spatial separation also significantly enhanced cortical processing at all SNR levels, shortening CAEP latencies, increasing amplitudes, and yielding a greater number of measurable ACC peaks. SRM showed mild to moderate correlations with the between-condition differences in CAEP measures. A regression model combining N1'-P2' amplitude at 5 dB SNR (R2 = 0.26), P1 amplitude at 0 dB SNR (R2 = 0.14), and P1 latency at -5 dB SNR (R2 = 0.15) explained 45.3% of the variance in SRM. Our study demonstrates that introducing spatial cues can improve speech perception and enhance central auditory processing in normal-hearing young adults. CAEPs may contribute to predictions about SRM and hold potential for practical application. NEW & NOTEWORTHY: The neural encoding of spatial release from masking (SRM) can be observed in normal-hearing young adults. Spatial separation between target and masker improves speech perception in noise and enhances central auditory processing. The behavioral results showed mild-to-moderate correlations with electrophysiological measures, with acoustic change complex (ACC) amplitude being a better indicator than onset components. Cortical auditory evoked potentials (CAEPs) may contribute to predictions about spatial release from masking, especially when behavioral tests are less reliable.
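For reference, the SRM values reported here and in several entries below follow the standard definition: the difference between the threshold measured with co-located maskers and the threshold measured with spatially separated maskers, so that a positive value is an improvement. With thresholds expressed as the target-to-masker ratio (or SNR) at a fixed percent-correct point, this is

\[ \mathrm{SRM} = \mathrm{TMR}_{\text{co-located}} - \mathrm{TMR}_{\text{separated}} \]

The 9.00 dB behavioral improvement reported above is this difference between the 0° and ±90° configurations, averaged across listeners.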
Affiliation(s)
- Jia-Ying Li
- Department of Otolaryngology-Head and Neck Surgery, Beijing Chao-yang Hospital, Capital Medical University, Beijing, People's Republic of China
- Xing Wang
- Department of Otolaryngology-Head and Neck Surgery, Beijing Chao-yang Hospital, Capital Medical University, Beijing, People's Republic of China
- Shuai Nie
- Department of Otolaryngology-Head and Neck Surgery, Beijing Chao-yang Hospital, Capital Medical University, Beijing, People's Republic of China
- Meng-Yuan Zhu
- Department of Otolaryngology-Head and Neck Surgery, Beijing Chao-yang Hospital, Capital Medical University, Beijing, People's Republic of China
- Jia-Xing Liu
- Department of Otolaryngology-Head and Neck Surgery, Beijing Chao-yang Hospital, Capital Medical University, Beijing, People's Republic of China
- Lai Wei
- Department of Otolaryngology-Head and Neck Surgery, Beijing Chao-yang Hospital, Capital Medical University, Beijing, People's Republic of China
- Huan Li
- Department of Otolaryngology-Head and Neck Surgery, Beijing Chao-yang Hospital, Capital Medical University, Beijing, People's Republic of China
- Ning-Yu Wang
- Department of Otolaryngology-Head and Neck Surgery, Beijing Chao-yang Hospital, Capital Medical University, Beijing, People's Republic of China
- Juan Zhang
- Department of Otolaryngology-Head and Neck Surgery, Beijing Chao-yang Hospital, Capital Medical University, Beijing, People's Republic of China
4. Sheffield SW, Wheeler HJ, Brungart DS, Bernstein JGW. The Effect of Sound Localization on Auditory-Only and Audiovisual Speech Recognition in a Simulated Multitalker Environment. Trends Hear 2023; 27:23312165231186040. PMID: 37415497; PMCID: PMC10331332; DOI: 10.1177/23312165231186040.
Abstract
Information regarding sound-source spatial location provides several speech-perception benefits, including auditory spatial cues for perceptual talker separation and localization cues to face the talker to obtain visual speech information. These benefits have typically been examined separately. A real-time processing algorithm for sound-localization degradation (LocDeg) was used to investigate how spatial-hearing benefits interact in a multitalker environment. Normal-hearing adults performed auditory-only and auditory-visual sentence recognition with target speech and maskers presented from loudspeakers at -90°, -36°, 36°, or 90° azimuths. For auditory-visual conditions, one target and three masking talker videos (always spatially separated) were rendered virtually in rectangular windows at these locations on a head-mounted display. Auditory-only conditions presented blank windows at these locations. Auditory target speech (always spatially aligned with the target video) was presented in co-located speech-shaped noise (experiment 1) or with three co-located or spatially separated auditory interfering talkers corresponding to the masker videos (experiment 2). In the co-located conditions, the LocDeg algorithm did not affect auditory-only performance but reduced target orientation accuracy, reducing auditory-visual benefit. In the multitalker environment, two spatial-hearing benefits were observed: perceptually separating competing speech based on auditory spatial differences and orienting to the target talker to obtain visual speech cues. These two benefits were additive, and both were diminished by the LocDeg algorithm. Although visual cues always improved performance when the target was accurately localized, there was no strong evidence that they provided additional assistance in perceptually separating co-located competing speech. These results highlight the importance of sound localization in everyday communication.
Affiliation(s)
- Sterling W. Sheffield
- Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville, FL, USA
- Harley J. Wheeler
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, MN, USA
- Douglas S. Brungart
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD, USA
- Joshua G. W. Bernstein
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD, USA
5. Gibbs BE, Bernstein JGW, Brungart DS, Goupell MJ. Effects of better-ear glimpsing, binaural unmasking, and spectral resolution on spatial release from masking in cochlear-implant users. J Acoust Soc Am 2022; 152:1230. PMID: 36050186; PMCID: PMC9420049; DOI: 10.1121/10.0013746.
Abstract
Bilateral cochlear-implant (BICI) listeners obtain less spatial release from masking (SRM; speech-recognition improvement for spatially separated vs co-located conditions) than normal-hearing (NH) listeners, especially for symmetrically placed maskers that produce similar long-term target-to-masker ratios at the two ears. Two experiments examined possible causes of this deficit, including limited better-ear glimpsing (using speech information from the more advantageous ear in each time-frequency unit), limited binaural unmasking (using interaural differences to improve signal-in-noise detection), or limited spectral resolution. Listeners had NH (presented with unprocessed or vocoded stimuli) or BICIs. Experiment 1 compared natural symmetric maskers, idealized monaural better-ear masker (IMBM) stimuli that automatically performed better-ear glimpsing, and hybrid stimuli that added worse-ear information, potentially restoring binaural cues. BICI and NH-vocoded SRM was comparable to NH-unprocessed SRM for idealized stimuli but was 14%-22% lower for symmetric stimuli, suggesting limited better-ear glimpsing ability. Hybrid stimuli improved SRM for NH-unprocessed listeners but degraded SRM for BICI and NH-vocoded listeners, suggesting they experienced across-ear interference instead of binaural unmasking. In experiment 2, increasing the number of vocoder channels did not change NH-vocoded SRM. BICI SRM deficits likely reflect a combination of across-ear interference, limited better-ear glimpsing, and poorer binaural unmasking that stems from cochlear-implant-processing limitations other than reduced spectral resolution.
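Better-ear glimpsing, as isolated by the IMBM stimuli above, amounts to choosing, for every time-frequency unit, whichever ear has the more favorable target-to-masker ratio. The sketch below illustrates only that selection rule, assuming per-unit SNRs at each ear are already available; it is not the IMBM signal processing used in the study, and the function name and toy numbers are illustrative.

```python
import numpy as np

def better_ear_glimpsing_snr(snr_left_db, snr_right_db):
    """Idealized better-ear glimpsing over time-frequency (TF) units.

    snr_left_db, snr_right_db: arrays (frequency x time) of per-unit
    target-to-masker ratios in dB at the left and right ears.
    Returns the better-ear SNR in each TF unit and a boolean mask marking
    the units where the left ear was selected. Illustration only.
    """
    left_better = snr_left_db >= snr_right_db                     # per-unit ear choice
    better_snr = np.where(left_better, snr_left_db, snr_right_db)
    return better_snr, left_better

# Toy usage: two 3 x 4 TF grids of per-unit SNRs (dB).
rng = np.random.default_rng(0)
snr_l, snr_r = rng.normal(0, 6, (3, 4)), rng.normal(0, 6, (3, 4))
print(better_ear_glimpsing_snr(snr_l, snr_r)[0])
```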
Affiliation(s)
- Bobby E Gibbs
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Joshua G W Bernstein
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889, USA
- Douglas S Brungart
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889, USA
- Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
6. Best V, Baltzell LS, Colburn HS. Effects of Hearing Loss on Interaural Time Difference Sensitivity at Low and High Frequencies. Trends Hear 2022; 26:23312165221095357. PMID: 35754372; PMCID: PMC9244940; DOI: 10.1177/23312165221095357.
Abstract
While many studies have reported a loss of sensitivity to interaural time differences (ITDs) carried in the fine structure of low-frequency signals for listeners with hearing loss, relatively few data are available on the perception of ITDs carried in the envelope of high-frequency signals in this population. The relevant studies found stronger effects of hearing loss at high frequencies than at low frequencies in most cases, but small subject numbers and several confounding effects prevented strong conclusions from being drawn. In the present study, we revisited this question while addressing some of the issues identified in previous studies. Participants were ten young adults with normal hearing (NH) and twenty adults with sensorineural hearing impairment (HI) spanning a range of ages. ITD discrimination thresholds were measured for octave-band-wide “rustle” stimuli centered at 500 Hz or 4000 Hz, which were presented at 20 or 40 dB sensation level. Broadband rustle stimuli and 500-Hz pure-tone stimuli were also tested. Thresholds were poorer on average for the HI group than the NH group. The ITD deficit, relative to the NH group, was similar at low and high frequencies for most HI participants. For a small number of participants, however, the deficit was strongly frequency-dependent. These results provide new insights into the binaural perception of complex sounds and may inform binaural models that incorporate effects of hearing loss.
Affiliation(s)
- Virginia Best
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA, United States
- Lucas S Baltzell
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA, United States
- H Steven Colburn
- Department of Biomedical Engineering, Boston University, Boston, MA, United States
7. Oh Y, Hartling CL, Srinivasan NK, Diedesch AC, Gallun FJ, Reiss LAJ. Factors underlying masking release by voice-gender differences and spatial separation cues in multi-talker listening environments in listeners with and without hearing loss. Front Neurosci 2022; 16:1059639. PMID: 36507363; PMCID: PMC9726925; DOI: 10.3389/fnins.2022.1059639.
Abstract
Voice-gender differences and spatial separation are important cues for auditory object segregation. The goal of this study was to investigate the relationship of voice-gender difference benefit to the breadth of binaural pitch fusion, the perceptual integration of dichotic stimuli that evoke different pitches across ears, and the relationship of spatial separation benefit to localization acuity, the ability to identify the direction of a sound source. Twelve bilateral hearing aid (HA) users (age from 30 to 75 years) and eleven normal hearing (NH) listeners (age from 36 to 67 years) were tested in the following three experiments. First, speech-on-speech masking performance was measured as the threshold target-to-masker ratio (TMR) needed to understand a target talker in the presence of either same- or different-gender masker talkers. These target-masker gender combinations were tested with two spatial configurations (maskers co-located or 60° symmetrically spatially separated from the target) in both monaural and binaural listening conditions. Second, binaural pitch fusion range measurements were conducted using harmonic tone complexes around a 200-Hz fundamental frequency. Third, absolute localization acuity was measured using broadband (125-8000 Hz) noise and one-third octave noise bands centered at 500 and 3000 Hz. Voice-gender differences between target and maskers improved TMR thresholds for both listener groups in the binaural condition as well as both monaural (left ear and right ear) conditions, with greater benefit in co-located than spatially separated conditions. Voice-gender difference benefit was correlated with the breadth of binaural pitch fusion in the binaural condition, but not the monaural conditions, ruling out a role of monaural abilities in the relationship between binaural fusion and voice-gender difference benefits. Spatial separation benefit was not significantly correlated with absolute localization acuity. In addition, greater spatial separation benefit was observed in NH listeners than in bilateral HA users, indicating a decreased ability of HA users to benefit from spatial release from masking (SRM). These findings suggest that sharp binaural pitch fusion may be important for maximal speech perception in multi-talker environments for both NH listeners and bilateral HA users.
Affiliation(s)
- Yonghee Oh
- Department of Otolaryngology and Communicative Disorders, University of Louisville, Louisville, KY, United States
- National Center for Rehabilitative Auditory Research, VA Portland Health Care System, Portland, OR, United States
- Curtis L. Hartling
- Department of Otolaryngology, Oregon Health & Science University, Portland, OR, United States
- Nirmal Kumar Srinivasan
- Department of Speech-Language Pathology & Audiology, Towson University, Towson, MD, United States
- Anna C. Diedesch
- Department of Communication Sciences and Disorders, Western Washington University, Bellingham, WA, United States
- Frederick J. Gallun
- National Center for Rehabilitative Auditory Research, VA Portland Health Care System, Portland, OR, United States
- Department of Otolaryngology, Oregon Health & Science University, Portland, OR, United States
- Lina A. J. Reiss
- National Center for Rehabilitative Auditory Research, VA Portland Health Care System, Portland, OR, United States
- Department of Otolaryngology, Oregon Health & Science University, Portland, OR, United States
8. Theodoroff SM, Gallun FJ, McMillan GP, Molis M, Srinivasan N, Gordon J, McDermott D, Konrad-Martin D. Impacts of Diabetes, Aging, and Hearing Loss on Speech-on-Speech Masking and Spatial Release in a Large Veteran Cohort. Am J Audiol 2021; 30:1023-1036. PMID: 34633838; DOI: 10.1044/2021_aja-21-00022.
Abstract
PURPOSE: Type 2 diabetes mellitus (DM2) is associated with impaired hearing. However, the evidence is less clear if DM2 can lead to difficulty understanding speech in complex acoustic environments, independently of age and hearing loss effects. The purpose of this study was to estimate the magnitude of DM2-related effects on speech understanding in the presence of competing speech after adjusting for age and hearing. METHOD: A cross-sectional study design was used to investigate the relationship between DM2 and speech understanding in 190 Veterans (M age = 47 years, range: 25-76). Participants were classified as having no diabetes (n = 74), prediabetes (n = 19), or DM2 that was well controlled (n = 24) or poorly controlled (n = 73). A test of spatial release from masking (SRM) was presented in a virtual acoustical simulation over insert earphones, with multiple talkers using sentences from the coordinate response measure corpus, to determine the target-to-masker ratio (TMR) required for 50% correct identification of target speech. A linear mixed model of the TMR results was used to estimate SRM and separate effects of diabetes group, age, low-frequency pure-tone average (PTA-low), and high-frequency pure-tone average. A separate model estimated the effects of DM2 on PTA-low. RESULTS: After adjusting for hearing and age, diabetes-related effects remained among those whose DM2 was well controlled, showing an SRM loss of approximately 0.5 dB. Results also showed effects of hearing loss and age, consistent with the literature on people without DM2. Low-frequency hearing loss was greater among those with DM2. CONCLUSIONS: In a large cohort of Veterans, low-frequency hearing loss and older age negatively impact speech understanding. Compared with nondiabetics, individuals with controlled DM2 have additional auditory deficits beyond those associated with hearing loss or aging. These results provide a potential explanation for why individuals who have diabetes and/or are older often report difficulty understanding speech in real-world listening environments. SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.16746475.
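The METHOD's linear mixed model of TMR thresholds can be sketched as follows. This is a minimal illustration of a model of that general form using synthetic data; the column names, variable coding, and formula are assumptions for illustration, not the study's exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: one row per participant x masker configuration.
rng = np.random.default_rng(0)
n_subj = 60
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), 2),
    "condition": ["colocated", "separated"] * n_subj,        # SRM = colocated minus separated TMR
    "dm_group": np.repeat(rng.choice(["none", "pre", "controlled", "poor"], n_subj), 2),
    "age": np.repeat(rng.uniform(25, 76, n_subj), 2),
    "pta_low": np.repeat(rng.uniform(0, 40, n_subj), 2),     # low-frequency pure-tone average, dB HL
    "tmr_db": rng.normal(0, 3, 2 * n_subj),
})

# Mixed-effects model of TMR with a random intercept per participant; the
# condition effect captures SRM, adjusted for diabetes group, age, and hearing.
model = smf.mixedlm("tmr_db ~ condition * dm_group + age + pta_low",
                    data=df, groups=df["subject"])
print(model.fit().summary())
```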
Affiliation(s)
- Sarah M. Theodoroff
- VA Rehabilitation Research and Development Service, National Center for Rehabilitative Auditory Research, VA Portland Health Care System, United States Department of Veterans Affairs, OR
- Department of Otolaryngology—Head & Neck Surgery, Oregon Health & Science University, Portland
- Frederick J. Gallun
- VA Rehabilitation Research and Development Service, National Center for Rehabilitative Auditory Research, VA Portland Health Care System, United States Department of Veterans Affairs, OR
- Department of Otolaryngology—Head & Neck Surgery, Oregon Health & Science University, Portland
- Garnett P. McMillan
- VA Rehabilitation Research and Development Service, National Center for Rehabilitative Auditory Research, VA Portland Health Care System, United States Department of Veterans Affairs, OR
- Michelle Molis
- VA Rehabilitation Research and Development Service, National Center for Rehabilitative Auditory Research, VA Portland Health Care System, United States Department of Veterans Affairs, OR
- Department of Otolaryngology—Head & Neck Surgery, Oregon Health & Science University, Portland
- Nirmal Srinivasan
- Department of Speech-Language Pathology & Audiology, Towson University, MD
- Jane Gordon
- VA Rehabilitation Research and Development Service, National Center for Rehabilitative Auditory Research, VA Portland Health Care System, United States Department of Veterans Affairs, OR
- Daniel McDermott
- VA Rehabilitation Research and Development Service, National Center for Rehabilitative Auditory Research, VA Portland Health Care System, United States Department of Veterans Affairs, OR
- Dawn Konrad-Martin
- VA Rehabilitation Research and Development Service, National Center for Rehabilitative Auditory Research, VA Portland Health Care System, United States Department of Veterans Affairs, OR
- Department of Otolaryngology—Head & Neck Surgery, Oregon Health & Science University, Portland
9. Lavandier M, Mason CR, Baltzell LS, Best V. Individual differences in speech intelligibility at a cocktail party: A modeling perspective. J Acoust Soc Am 2021; 150:1076. PMID: 34470293; PMCID: PMC8561716; DOI: 10.1121/10.0005851.
Abstract
This study aimed at predicting individual differences in speech reception thresholds (SRTs) in the presence of symmetrically placed competing talkers for young listeners with sensorineural hearing loss. An existing binaural model incorporating the individual audiogram was revised to handle severe hearing losses by (a) taking as input the target speech level at SRT in a given condition and (b) introducing a floor in the model to limit extreme negative better-ear signal-to-noise ratios. The floor value was first set using SRTs measured with stationary and modulated noises. The model was then used to account for individual variations in SRTs found in two previously published data sets that used speech maskers. The model accounted well for the variation in SRTs across listeners with hearing loss, based solely on differences in audibility. When considering listeners with normal hearing, the model could predict the best SRTs, but not the poorer SRTs, suggesting that other factors limit performance when audibility (as measured with the audiogram) is not compromised.
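Revision (b) amounts to limiting how negative the better-ear SNR can become before it enters the intelligibility prediction. A minimal sketch of such a floor is shown below; the default floor value is an arbitrary placeholder rather than the value fitted in the study, and the function name is illustrative.

```python
import numpy as np

def floored_better_ear_snr(snr_left_db, snr_right_db, floor_db=-20.0):
    """Per-band better-ear SNR with a lower limit (illustration only).

    snr_left_db, snr_right_db: per-frequency-band SNRs in dB at each ear.
    floor_db: placeholder floor; the study's fitted value is not assumed here.
    """
    better_ear = np.maximum(snr_left_db, snr_right_db)   # take the better ear per band
    return np.maximum(better_ear, floor_db)              # clamp extreme negative SNRs
```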
Affiliation(s)
- Mathieu Lavandier
- Univ. Lyon, ENTPE, Laboratoire de Tribologie et Dynamique des Systèmes UMR 5513, Rue Maurice Audin, F-69518 Vaulx-en-Velin Cedex, France
- Christine R Mason
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
- Lucas S Baltzell
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
- Virginia Best
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
10. Yun D, Jennings TR, Kidd G, Goupell MJ. Benefits of triple acoustic beamforming during speech-on-speech masking and sound localization for bilateral cochlear-implant users. J Acoust Soc Am 2021; 149:3052. PMID: 34241104; PMCID: PMC8102069; DOI: 10.1121/10.0003933.
Abstract
Bilateral cochlear-implant (CI) users struggle to understand speech in noisy environments despite receiving some spatial-hearing benefits. One potential solution is to provide acoustic beamforming. A headphone-based experiment was conducted to compare speech understanding under natural CI listening conditions and for two non-adaptive beamformers: a single beam, and a binaural "triple beam" that provides an improved signal-to-noise ratio (beamforming benefit) and usable spatial cues by reintroducing interaural level differences. Speech reception thresholds (SRTs) for speech-on-speech masking were measured with target speech presented in front and two maskers either co-located with the target or at narrow or wide separations. Numerosity judgments and sound-localization performance were also measured. Natural spatial cues, single-beam, and triple-beam conditions were compared. For CI listeners, there was a negligible change in SRTs when comparing co-located to separated maskers under natural listening conditions. In contrast, there were 4.9- and 16.9-dB improvements in SRTs for the single beam and 3.5- and 12.3-dB improvements for the triple beam (narrow and wide separations, respectively). Similar results were found for normal-hearing listeners presented with vocoded stimuli. The single beam improved speech-on-speech masking performance but yielded poor sound localization. The triple beam improved both speech-on-speech masking performance (albeit less than the single beam) and sound localization. Thus, the triple beam was the most versatile across multiple spatial-hearing domains.
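For readers unfamiliar with fixed (non-adaptive) acoustic beamforming, the sketch below is a generic frequency-domain delay-and-sum beamformer steered toward a look direction. It is a textbook illustration of how a microphone array can be combined to improve SNR; it is not the single-beam or triple-beam processor evaluated in the study, and all parameter names are assumptions.

```python
import numpy as np

def delay_and_sum(mic_signals, mic_positions, look_dir, fs, c=343.0):
    """Generic non-adaptive delay-and-sum beamformer (illustration only).

    mic_signals: array (n_mics, n_samples) of raw microphone signals.
    mic_positions: array (n_mics, 3) of microphone coordinates in metres.
    look_dir: unit vector pointing from the array toward the target.
    fs: sample rate in Hz; c: speed of sound in m/s.
    """
    n_mics, n_samples = mic_signals.shape
    # A plane wave from look_dir reaches each microphone earlier by (p . u)/c
    # relative to the array origin; delaying each channel by that amount
    # time-aligns the target before summing.
    delays = mic_positions @ look_dir / c
    delays -= delays.min()                                   # keep delays non-negative
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    out = np.zeros(n_samples)
    for m in range(n_mics):
        spec = np.fft.rfft(mic_signals[m])
        spec *= np.exp(-2j * np.pi * freqs * delays[m])      # fractional-sample delay
        out += np.fft.irfft(spec, n=n_samples)
    return out / n_mics                                      # average the aligned channels
```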
Affiliation(s)
- David Yun
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Todd R Jennings
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
- Gerald Kidd
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
- Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
11. Gallun FJ. Impaired Binaural Hearing in Adults: A Selected Review of the Literature. Front Neurosci 2021; 15:610957. PMID: 33815037; PMCID: PMC8017161; DOI: 10.3389/fnins.2021.610957.
Abstract
Despite over 100 years of study, there are still many fundamental questions about binaural hearing that remain unanswered, including how impairments of binaural function are related to the mechanisms of binaural hearing. This review focuses on a number of studies that are fundamental to understanding what is known about the effects of peripheral hearing loss, aging, traumatic brain injury, strokes, brain tumors, and multiple sclerosis (MS) on binaural function. The literature reviewed makes clear that while each of these conditions has the potential to impair the binaural system, the specific abilities of a given patient cannot be known without performing multiple behavioral and/or neurophysiological measurements of binaural sensitivity. Future work in this area has the potential to bring awareness of binaural dysfunction to patients and clinicians as well as a deeper understanding of the mechanisms of binaural hearing, but it will require the integration of clinical research with animal and computational modeling approaches.
Affiliation(s)
- Frederick J. Gallun
- Oregon Hearing Research Center, Oregon Health and Science University, Portland, OR, United States
12. Baltzell LS, Cho AY, Swaminathan J, Best V. Spectro-temporal weighting of interaural time differences in speech. J Acoust Soc Am 2020; 147:3883. PMID: 32611137; PMCID: PMC7297545; DOI: 10.1121/10.0001418.
Abstract
Numerous studies have demonstrated that the perceptual weighting of interaural time differences (ITDs) is non-uniform in time and frequency, leading to reports of spectral and temporal "dominance" regions. It is unclear, however, how these dominance regions apply to spectro-temporally complex stimuli such as speech. The authors report spectro-temporal weighting functions for ITDs in a pair of naturally spoken speech tokens ("two" and "eight"). Each speech token was composed of two phonemes, and was partitioned into eight frequency regions over two time bins (one time bin for each phoneme). To derive lateralization weights, ITDs for each time-frequency bin were drawn independently from a normal distribution with a mean of 0 and a standard deviation of 200 μs, and listeners were asked to indicate whether the speech token was presented from the left or right. ITD thresholds were also obtained for each of the 16 time-frequency bins in isolation. The results suggest that spectral dominance regions apply to speech, and that ITDs carried by phonemes in the first position of the syllable contribute more strongly to lateralization judgments than ITDs carried by phonemes in the second position. The results also show that lateralization judgments are partially accounted for by ITD sensitivity across time-frequency bins.
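The weighting-function derivation described above (per-bin ITD perturbations followed by left/right judgments) is commonly analysed by regressing the binary responses onto the trial-by-trial perturbations, with the fitted coefficients taken as relative perceptual weights. The sketch below uses a simulated listener and ordinary logistic regression to illustrate that style of analysis; it is not necessarily the paper's exact estimation procedure, and all numbers are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative observer-weight analysis on simulated trials.
rng = np.random.default_rng(1)
n_trials, n_bins = 2000, 16                              # 8 frequency regions x 2 time bins
itds_us = rng.normal(0.0, 200.0, (n_trials, n_bins))     # per-bin ITDs in microseconds

# Simulated listener: responds "right" based on a noisy weighted sum of per-bin ITDs.
true_weights = rng.uniform(0.2, 1.0, n_bins)
responses = (itds_us @ true_weights + rng.normal(0, 150, n_trials)) > 0

# Logistic regression of left/right responses on the per-bin ITDs; the fitted
# coefficients (normalized) estimate the relative perceptual weight of each bin.
weights = LogisticRegression(max_iter=1000).fit(itds_us, responses).coef_.ravel()
weights /= weights.sum()
print(weights.round(3))
```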
Affiliation(s)
- Lucas S Baltzell
- Department of Speech, Language, and Hearing Sciences, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215, USA
- Adrian Y Cho
- Department of Speech, Language, and Hearing Sciences, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215, USA
- Jayaganesh Swaminathan
- Department of Speech, Language, and Hearing Sciences, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215, USA
- Virginia Best
- Department of Speech, Language, and Hearing Sciences, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215, USA
13. Moore BCJ. Effects of hearing loss and age on the binaural processing of temporal envelope and temporal fine structure information. Hear Res 2020; 402:107991. PMID: 32418682; DOI: 10.1016/j.heares.2020.107991.
Abstract
Within the cochlea, broadband sounds like speech and music are filtered into a series of narrowband signals, each with a relatively slowly varying envelope (ENV) imposed on a rapidly oscillating carrier (the temporal fine structure, TFS). Information about ENV is conveyed by the timing and short-term rate of action potentials in the auditory nerve while information about TFS is conveyed by synchronization of action potentials to a specific phase of the waveform in the cochlea (phase locking). This paper describes the effects of age and hearing loss on the binaural processing of ENV and TFS information, i.e. on the processing of differences in ENV and TFS at the two ears. The binaural processing of TFS information is adversely affected by both hearing loss and increasing age. The binaural processing of ENV information deteriorates somewhat with increasing age but is only slightly affected by hearing loss. The reduced TFS processing abilities found for older/hearing-impaired subjects may partially account for the difficulties that such subjects experience in complex listening situations when the target speech and interfering sounds come from different directions in space.
Affiliation(s)
- Brian C J Moore
- Department of Experimental Psychology, University of Cambridge, Downing Street, Cambridge, CB2 3EB, UK