1. Baltzell LS, Kokkinakis K, Li A, Yellamsetty A, Teece K, Nelson PB. Validation of a Self-Fitting Over-the-Counter Hearing Aid Intervention Compared with a Clinician-Fitted Hearing Aid Intervention: A Within-Subjects Crossover Design Using the Same Device. Trends Hear 2025; 29:23312165251328055. PMID: 40129389; PMCID: PMC11938855; DOI: 10.1177/23312165251328055.
Abstract
In October of 2022, the US Food and Drug Administration finalized regulations establishing the category of self-fitting over-the-counter (OTC) hearing aids, intended to reduce barriers to hearing aid adoption for individuals with self-perceived mild to moderate hearing loss. Since then a number of self-fitting OTC hearing aids have entered the market, and a small number of published studies have demonstrated the effectiveness of a self-fitted OTC intervention against a traditional clinician-fitted intervention. Given the variety of self-fitting approaches available, and the small number of studies demonstrating effectiveness, the goal of the present study was to evaluate the effectiveness of a commercially available self-fitting OTC hearing aid intervention against a clinician-fitted intervention. Consistent with previous studies, we found that the self-fitted intervention was not inferior to the clinician-fitted intervention for self-reported benefit and objective speech-in-noise outcomes. We found statistically significant improvements in self-fitted outcomes compared to clinician-fitted outcomes, though deviations from best audiological practices in our clinician-fitted intervention may have influenced our results. In addition to presenting our results, we discuss the state of evaluating the noninferiority of self-fitted interventions and offer some new perspectives.
Affiliation(s)
- Amy Li
- Concha Labs, San Mateo, CA, USA
- Katherine Teece
- Department of Speech-Language-Hearing Sciences, University of Minnesota–Twin Cities, Minneapolis, MN, USA
- Peggy B. Nelson
- Department of Speech-Language-Hearing Sciences, University of Minnesota–Twin Cities, Minneapolis, MN, USA
2. Zaar J, Simonsen LB, Sanchez-Lopez R, Laugesen S. The Audible Contrast Threshold (ACT) test: A clinical spectro-temporal modulation detection test. Hear Res 2024; 453:109103. PMID: 39243488; DOI: 10.1016/j.heares.2024.109103.
Abstract
Over the last decade, multiple studies have shown that hearing-impaired listeners' speech-in-noise reception ability, measured with audibility compensation, is closely associated with performance in spectro-temporal modulation (STM) detection tests. STM tests thus have the potential to provide highly relevant beyond-the-audiogram information in the clinic, but the available STM tests have not been optimized for clinical use in terms of test duration, required equipment, and procedural standardization. The present study introduces a quick-and-simple clinically viable STM test, named the Audible Contrast Threshold (ACT™) test. First, an experimenter-controlled STM measurement paradigm was developed, in which the patient is presented bilaterally with a continuous audibility-corrected noise via headphones and asked to press a pushbutton whenever they hear an STM target sound in the noise. The patient's threshold is established using a Hughson-Westlake tracking procedure with a three-out-of-five criterion and then refined by post-processing the collected data using a logistic function. Different stimulation paradigms were tested in 28 hearing-impaired participants and compared to data previously measured in the same participants with an established STM test paradigm. The best stimulation paradigm showed excellent test-retest reliability and good agreement with the established laboratory version. Second, the best stimulation paradigm with 1-second noise "waves" (windowed noise) was chosen, further optimized with respect to step size and logistic-function fitting, and tested in a population of 25 young normal-hearing participants using various types of transducers to obtain normative data. Based on these normative data, the "normalized Contrast Level" (in dB nCL) scale was defined, where 0 ± 4 dB nCL corresponds to normal performance and elevated dB nCL values indicate the degree of audible contrast loss. Overall, the results of the present study suggest that the ACT test may be considered a reliable, quick-and-simple (and thus clinically viable) test of STM sensitivity. The ACT can be measured directly after the audiogram using the same set up, adding only a few minutes to the process.
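The abstract describes refining a tracked threshold by post-processing yes/no responses with a logistic function and expressing the result on a normalized scale. As a rough illustration of that idea only (not the published ACT post-processing), here is a minimal Python sketch; the stimulus levels, responses, and the normative value used for the dB nCL conversion are all invented.

```python
# Minimal sketch: refine a tracked detection threshold by fitting a logistic
# psychometric function to yes/no data (illustrative only; not the ACT code).
import numpy as np
from scipy.optimize import curve_fit

def logistic(level_db, threshold_db, slope):
    """Detection probability as a function of stimulus level (dB)."""
    return 1.0 / (1.0 + np.exp(-slope * (level_db - threshold_db)))

# Hypothetical tracking data: contrast levels presented (dB) and whether the
# listener pressed the button (1) or not (0).
levels = np.array([0, -3, -6, -9, -12, -9, -12, -15, -12, -9], dtype=float)
responses = np.array([1, 1, 1, 1, 0, 1, 1, 0, 0, 1], dtype=float)

params, _ = curve_fit(logistic, levels, responses,
                      p0=[np.mean(levels), 1.0], maxfev=10000)
threshold_db, slope = params

# "Normalized contrast level" (dB nCL) expresses the threshold relative to a
# normative mean; the normative value below is a placeholder, not the norm.
normative_mean_db = -15.0
print(f"Refined threshold: {threshold_db:.1f} dB "
      f"(~{threshold_db - normative_mean_db:+.1f} dB nCL)")
```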
Affiliation(s)
- Johannes Zaar
- Eriksholm Research Centre, Rørtangvej 20, 3070 Snekkersten, Denmark; Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Ørsteds Plads, Building 352, 2800 Kgs. Lyngby, Denmark
- Lisbeth Birkelund Simonsen
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Ørsteds Plads, Building 352, 2800 Kgs. Lyngby, Denmark; Interacoustics Research Unit, Ørsteds Plads, Building 352, 2800 Kgs. Lyngby, Denmark
- Raul Sanchez-Lopez
- Interacoustics Research Unit, Ørsteds Plads, Building 352, 2800 Kgs. Lyngby, Denmark; Institute of Globally Distributed Open Research and Education (IGDORE)
- Søren Laugesen
- Interacoustics Research Unit, Ørsteds Plads, Building 352, 2800 Kgs. Lyngby, Denmark
3. Helfer KS, Maldonado L, Matthews LJ, Simpson AN, Dubno JR. Extended High-Frequency Thresholds: Associations With Demographic and Risk Factors, Cognitive Ability, and Hearing Outcomes in Middle-Aged and Older Adults. Ear Hear 2024; 45:1427-1443. PMID: 38987892; PMCID: PMC11493509; DOI: 10.1097/aud.0000000000001531.
Abstract
OBJECTIVES This study had two objectives: to examine associations between extended high-frequency (EHF) thresholds, demographic factors (age, sex, race/ethnicity), risk factors (cardiovascular, smoking, noise exposure, occupation), and cognitive abilities; and to determine variance explained by EHF thresholds for speech perception in noise, self-rated workload/effort, and self-reported hearing difficulties. DESIGN This study was a retrospective analysis of a data set from the MUSC Longitudinal Cohort Study of Age-related Hearing Loss. Data from 347 middle-aged adults (45 to 64 years) and 694 older adults (≥ 65 years) were analyzed for this study. Speech perception was quantified using low-context Speech Perception In Noise (SPIN) sentences. Self-rated workload/effort was measured using the effort prompt from the National Aeronautics and Space Administration-Task Load Index. Self-reported hearing difficulty was assessed using the Hearing Handicap Inventory for the Elderly/Adults. The Wisconsin Card Sorting Task and the Stroop Neuropsychological Screening Test were used to assess selected cognitive abilities. Pure-tone averages representing conventional and EHF thresholds between 9 and 12 kHz (PTA(9-12 kHz)) were utilized in simple linear regression analyses to examine relationships between thresholds and demographic and risk factors or in linear regression models to assess the contributions of PTA(9-12 kHz) to the variance among the three outcomes of interest. Further analyses were performed on a subset of individuals with thresholds ≤ 25 dB HL at all conventional frequencies to control for the influence of hearing loss on the association between PTA(9-12 kHz) and outcome measures. RESULTS PTA(9-12 kHz) was higher in males than females, and was higher in White participants than in racial Minority participants. Linear regression models showed the associations between cardiovascular risk factors and PTA(9-12 kHz) were not statistically significant. Older adults who reported a history of noise exposure had higher PTA(9-12 kHz) than those without a history, while associations between noise history and PTA(9-12 kHz) did not reach statistical significance for middle-aged participants. Linear models adjusting for age, sex, race and noise history showed that higher PTA(9-12 kHz) was associated with greater self-perceived hearing difficulty and poorer speech recognition scores in noise for both middle-aged and older participants. Workload/effort was significantly related to PTA(9-12 kHz) for middle-aged, but not older, participants, while cognitive task performance was correlated with PTA(9-12 kHz) only for older participants. In general, PTA(9-12 kHz) did not account for additional variance in outcome measures as compared to conventional pure-tone thresholds, with the exception of self-reported hearing difficulties in older participants. Linear models adjusting for age and accounting for subject-level correlations in the subset analyses revealed no association between PTA(9-12 kHz) and outcomes of interest. CONCLUSIONS EHF thresholds show age-, sex-, and race-related patterns of elevation that are similar to what is observed for conventional thresholds. The current results support the need for more research to determine the utility of adding EHF thresholds to routine audiometric assessment with middle-aged and older adults.
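The outcome analyses hinge on a pure-tone average over the 9-12 kHz region. Below is a minimal sketch of how such an average might be computed from an audiogram; the exact extended high-frequency test frequencies and the example thresholds are assumptions, not the study's data.

```python
# Minimal sketch: pure-tone average (PTA) over a frequency band.
import numpy as np

def pure_tone_average(thresholds_db_hl, frequencies_hz, band):
    """Mean threshold (dB HL) over audiometric frequencies within `band` (Hz)."""
    lo, hi = band
    selected = [t for f, t in zip(frequencies_hz, thresholds_db_hl)
                if lo <= f <= hi]
    return float(np.mean(selected))

# Example audiogram (invented values); EHF frequencies assumed to be
# 9, 10, 11.2, and 12.5 kHz.
freqs  = [500, 1000, 2000, 4000, 8000, 9000, 10000, 11200, 12500]
thresh = [10, 10, 15, 20, 30, 35, 40, 45, 50]

pta_conventional = pure_tone_average(thresh, freqs, (500, 4000))
pta_ehf = pure_tone_average(thresh, freqs, (9000, 12500))
print(f"conventional PTA = {pta_conventional:.1f} dB HL, "
      f"EHF PTA = {pta_ehf:.1f} dB HL")
```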
4. Delaram V, Miller MK, Ananthanarayana RM, Trine A, Buss E, Stecker GC, Monson BB. Gender and speech material effects on the long-term average speech spectrum, including at extended high frequencies. J Acoust Soc Am 2024; 156:3056-3066. PMID: 39499044; PMCID: PMC11540443; DOI: 10.1121/10.0034231.
Abstract
Gender and language effects on the long-term average speech spectrum (LTASS) have been reported, but typically using recordings that were bandlimited and/or failed to accurately capture extended high frequencies (EHFs). Accurate characterization of the full-band LTASS is warranted given recent data on the contribution of EHFs to speech perception. The present study characterized the LTASS for high-fidelity, anechoic recordings of males and females producing Bamford-Kowal-Bench sentences, digits, and unscripted narratives. Gender had an effect on spectral levels at both ends of the spectrum: males had higher levels than females below approximately 160 Hz, owing to lower fundamental frequencies; females had ∼4 dB higher levels at EHFs, but this effect was dependent on speech material. Gender differences were also observed at ∼300 Hz, and between 800 and 1000 Hz, as previously reported. Despite differences in phonetic content, there were only small, gender-dependent differences in EHF levels across speech materials. EHF levels were highly correlated across materials, indicating relative consistency within talkers. Our findings suggest that LTASS levels at EHFs are influenced primarily by talker and gender, highlighting the need for future research to assess whether EHF cues are more audible for female speech than for male speech.
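For readers who want to inspect the spectral content of their own recordings, here is a minimal sketch of a long-term average spectrum estimate via Welch's method; the analysis parameters and the placeholder signal are illustrative assumptions, not the procedure used in the study.

```python
# Minimal sketch: long-term average spectrum of a recording via Welch's method.
import numpy as np
from scipy.signal import welch

def ltass_db(signal, fs, nperseg=4096):
    """Return (frequencies, long-term average spectrum in dB, arbitrary ref)."""
    f, psd = welch(signal, fs=fs, window="hann", nperseg=nperseg)
    return f, 10.0 * np.log10(psd + 1e-20)

fs = 44100                           # full-band rate so EHFs (>8 kHz) are kept
signal = np.random.randn(2 * fs)     # placeholder; substitute a speech recording
f, spectrum = ltass_db(signal, fs)
ehf_level = spectrum[(f >= 8000) & (f <= 16000)].mean()
print(f"mean level in the 8-16 kHz (EHF) region: {ehf_level:.1f} dB")
```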
Affiliation(s)
- Vahid Delaram
- Department of Speech and Hearing Science, University of Illinois Urbana-Champaign, Champaign, Illinois 61820, USA
- Margaret K Miller
- Boys Town National Research Hospital, Center for Hearing Research, Omaha, Nebraska 68131, USA
- Rohit M Ananthanarayana
- Department of Speech and Hearing Science, University of Illinois Urbana-Champaign, Champaign, Illinois 61820, USA
- Allison Trine
- Department of Speech and Hearing Science, University of Illinois Urbana-Champaign, Champaign, Illinois 61820, USA
- Emily Buss
- Department of Otolaryngology/HNS, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599, USA
- G Christopher Stecker
- Boys Town National Research Hospital, Center for Hearing Research, Omaha, Nebraska 68131, USA
- Brian B Monson
- Department of Speech and Hearing Science, University of Illinois Urbana-Champaign, Champaign, Illinois 61820, USA
- Neuroscience Program, University of Illinois Urbana-Champaign, Champaign, Illinois 61820, USA
- Department of Biomedical and Translational Sciences, Carle Illinois College of Medicine, University of Illinois Urbana-Champaign, Champaign, Illinois 61820, USA
5. Urichuk M, Purcell D, Allen P, Scollie S. Validation of an integrated pressure level measured earmold wideband real-ear-to-coupler difference measurement. Int J Audiol 2024; 63:604-612. PMID: 37722804; DOI: 10.1080/14992027.2023.2254934.
Abstract
OBJECTIVE To validate measurement of predicted earmold wideband real-ear-to-coupler difference (wRECD) using an integrated pressure level (IPL) calibrated transducer and the incorporation of an acoustically measured tubing length correction. DESIGN Unilateral earmold SPL wRECD using varied hearing aid tubing length and the proposed predicted earmold IPL wRECD measurement procedure were completed on all participants and compared. STUDY SAMPLE 22 normal hearing adults with normal middle ear status were recruited. RESULTS There were no clinically significant differences between probe-microphone and predicted earmold IPL wRECD measurements between 500 and 2500 Hz. Above 5000 Hz, the predicted earmold IPL wRECD exceeded earmold SPL wRECDs due to lack of standing wave interference. Test-retest reliability of IPL wRECD measurement exceeded the reliability of earmold SPL wRECD measurement across all assessed frequencies, with the greatest improvements in the high frequencies. The acoustically measured tubing length correction largely accounted for acoustic effects of the participant's earmold. CONCLUSIONS IPL-based measurements provide a promising alternative to probe-microphone earmold wRECD procedures. Predicted earmold IPL wRECD is measured without probe-microphone placement, agrees well with earmold SPL wRECDs and is expected to extend the valid bandwidth of wRECD measurement.
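A (w)RECD is, at each frequency, the level difference between the sound pressure measured in the real ear and in a 2-cc coupler for the same signal. The sketch below uses invented values and does not reproduce either the IPL or the SPL measurement procedure compared in the study.

```python
# Minimal sketch: wideband real-ear-to-coupler difference (wRECD) as the
# per-frequency difference between real-ear and coupler SPL (invented values).
import numpy as np

freqs_hz     = np.array([250, 500, 1000, 2000, 4000, 6000, 8000])
real_ear_spl = np.array([62.0, 65.0, 68.0, 74.0, 79.0, 81.0, 83.0])
coupler_spl  = np.array([60.0, 61.0, 62.0, 65.0, 67.0, 68.0, 70.0])

wrecd_db = real_ear_spl - coupler_spl
for f, r in zip(freqs_hz, wrecd_db):
    print(f"{f:>5d} Hz: wRECD = {r:+.1f} dB")
```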
Affiliation(s)
- Matthew Urichuk
- Faculty of Health Sciences, School of Communication Sciences and Disorders, Western University, London, Ontario, Canada
- Faculty of Health Sciences, Health and Rehabilitation Sciences Graduate Program, Western University, London, Ontario, Canada
- David Purcell
- Faculty of Health Sciences, School of Communication Sciences and Disorders, Western University, London, Ontario, Canada
- Faculty of Health Sciences, Health and Rehabilitation Sciences Graduate Program, Western University, London, Ontario, Canada
- Faculty of Health Sciences, National Center for Audiology, Western University, London, Ontario, Canada
- Prudence Allen
- Faculty of Health Sciences, School of Communication Sciences and Disorders, Western University, London, Ontario, Canada
- Faculty of Health Sciences, Health and Rehabilitation Sciences Graduate Program, Western University, London, Ontario, Canada
- Faculty of Health Sciences, National Center for Audiology, Western University, London, Ontario, Canada
- Susan Scollie
- Faculty of Health Sciences, School of Communication Sciences and Disorders, Western University, London, Ontario, Canada
- Faculty of Health Sciences, Health and Rehabilitation Sciences Graduate Program, Western University, London, Ontario, Canada
- Faculty of Health Sciences, National Center for Audiology, Western University, London, Ontario, Canada
6. Wang X, Ge J, Meller L, Yang Y, Zeng FG. Speech intelligibility and talker identification with non-telephone frequencies. JASA Express Lett 2024; 4:075202. PMID: 39046893; DOI: 10.1121/10.0027938.
Abstract
Although the telephone band (0.3-3 kHz) provides sufficient information for speech recognition, the contribution of the non-telephone band (<0.3 and >3 kHz) is unclear. To investigate its contribution, speech intelligibility and talker identification were evaluated using consonants, vowels, and sentences. The non-telephone band produced relatively good intelligibility for consonants (76.0%) and sentences (77.4%), but not vowels (11.5%). The non-telephone band supported good talker identification only with sentences (74.5%), but not vowels (45.8%) or consonants (10.8%). Furthermore, the non-telephone band cannot produce satisfactory speech intelligibility in noise at the sentence level, suggesting the importance of full-band access in realistic listening.
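Below is a minimal sketch of how a signal might be split into telephone-band and non-telephone-band components with standard filters; the filter order and design are assumptions, not the processing used in the study.

```python
# Minimal sketch: isolate the telephone band (0.3-3 kHz) and its complement
# with Butterworth filters (design choices are illustrative assumptions).
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 44100
sos_band = butter(4, [300, 3000], btype="bandpass", fs=fs, output="sos")
sos_stop = butter(4, [300, 3000], btype="bandstop", fs=fs, output="sos")

x = np.random.randn(fs)                    # placeholder for a speech signal
telephone_band = sosfiltfilt(sos_band, x)  # 0.3-3 kHz content
non_telephone  = sosfiltfilt(sos_stop, x)  # <0.3 kHz and >3 kHz content
```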
Affiliation(s)
- Xianhui Wang
- Center for Hearing Research, Departments of Anatomy and Neurobiology, Biomedical Engineering, Cognitive Sciences, and Otolaryngology-Head and Neck Surgery, University of California Irvine, Irvine, California 92697, USA
- Jonathan Ge
- Warren Alpert School of Medicine, Brown University, Providence, Rhode Island 02903, USA
- Leo Meller
- School of Medicine, University of California San Diego, La Jolla, California 92093, USA
- Ye Yang
- Center for Hearing Research, Departments of Anatomy and Neurobiology, Biomedical Engineering, Cognitive Sciences, and Otolaryngology-Head and Neck Surgery, University of California Irvine, Irvine, California 92697, USA
- Fan-Gang Zeng
- Center for Hearing Research, Departments of Anatomy and Neurobiology, Biomedical Engineering, Cognitive Sciences, and Otolaryngology-Head and Neck Surgery, University of California Irvine, Irvine, California 92697, USA
7. Roy A, Bradlow A, Souza P. Effect of frequency compression on fricative perception between normal-hearing English and Mandarin listeners. J Acoust Soc Am 2024; 155:3957-3967. PMID: 38921646; DOI: 10.1121/10.0026435.
Abstract
High-frequency speech information is susceptible to inaccurate perception in even mild to moderate forms of hearing loss. Some hearing aids employ frequency-lowering methods such as nonlinear frequency compression (NFC) to help hearing-impaired individuals access high-frequency speech information in more accessible lower-frequency regions. As such techniques cause significant spectral distortion, tests such as the S-Sh Confusion Test help optimize NFC settings to provide high-frequency audibility with the least distortion. Such tests have traditionally been based on speech contrasts pertinent to English. Here, the effects of NFC processing on fricative perception between English and Mandarin listeners are assessed. Small but significant differences in fricative discrimination were observed between the groups. The study demonstrates a possible need for language-specific clinical fitting procedures for NFC.
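NFC remaps energy above a start (cutoff) frequency into a narrower output range according to a compression ratio. One common textbook-style formulation of the mapping is sketched below; it is not necessarily the algorithm used in this study or in any particular hearing aid.

```python
# Minimal sketch of a nonlinear frequency compression (NFC) mapping:
# frequencies above the start frequency are compressed on a log-frequency
# scale by the compression ratio (one common formulation; an assumption here).
import numpy as np

def nfc_map(f_in_hz, start_hz=2000.0, ratio=2.0):
    """Map input frequencies (Hz) to output frequencies after compression."""
    f_in_hz = np.asarray(f_in_hz, dtype=float)
    above = f_in_hz > start_hz
    f_out = f_in_hz.copy()
    f_out[above] = start_hz * (f_in_hz[above] / start_hz) ** (1.0 / ratio)
    return f_out

# 1 and 2 kHz are unchanged; 4 kHz maps to ~2.83 kHz; 8 kHz maps to 4 kHz.
print(nfc_map([1000, 2000, 4000, 8000]))
```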
Affiliation(s)
- Abhijit Roy
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, Illinois 60208, USA
- Ann Bradlow
- Department of Linguistics, Northwestern University, Evanston, Illinois 60208, USA
- Pamela Souza
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, Illinois 60208, USA
8. Urichuk M, Purcell D, Scollie S. Validity and reliability of integrated pressure level real-ear-to-coupler difference measurements. Int J Audiol 2024; 63:401-410. PMID: 37129231; DOI: 10.1080/14992027.2023.2205009.
Abstract
OBJECTIVES (1) To validate the measurement of foam-tip real-ear-to-coupler differences (wRECD) using an integrated pressure level (IPL) method and (2) to compare the reliability of this method to SPL-based measurement of the wRECD. DESIGN SPL-based wRECD and the proposed IPL wRECD measurement were completed bilaterally. Test-retest reliability of IPL wRECD was determined with full re-insertion into the ear canal and compared to published SPL wRECD test-retest data. STUDY SAMPLE 22 adults with normal hearing and middle ear status were recruited. RESULTS Differences between SPL-based wRECD and IPL wRECD measurements were within 1.51 dB on average below 5000 Hz. At and above 5000 Hz, IPL wRECD exceeded SPL wRECDs by 6.11 dB on average. The average test-retest difference for IPL wRECD across all assessed frequencies was 0.75 dB with the greatest improvements in reliability found below 750 Hz and above 3000 Hz. CONCLUSIONS IPL wRECD yielded improved estimates compared to SPL wRECD in high frequencies, where standing-wave interference is present. Independence from standing wave interference resulted in increased wRECD values above 4000 Hz using the IPL measurement paradigm. IPL wRECD is more reliable than SPL wRECD, does not require precise probe-microphone placement, and provides a wider valid wRECD bandwidth than SPL-based measurement.
Affiliation(s)
- Matthew Urichuk
- School of Communication Sciences and Disorders, Western University, London, Ontario, Canada
- Health and Rehabilitation Sciences Graduate Program, Western University, London, Ontario, Canada
- David Purcell
- School of Communication Sciences and Disorders, Western University, London, Ontario, Canada
- Health and Rehabilitation Sciences Graduate Program, Western University, London, Ontario, Canada
- National Center for Audiology, Western University, London, Ontario, Canada
- Susan Scollie
- School of Communication Sciences and Disorders, Western University, London, Ontario, Canada
- Health and Rehabilitation Sciences Graduate Program, Western University, London, Ontario, Canada
- National Center for Audiology, Western University, London, Ontario, Canada
9. Brennan MA, Rasetshwane DM, Kopun JG, McCreery RW. The Influence of the Stimulus Level Used to Prescribe Nonlinear Frequency Compression on Speech Perception. J Am Acad Audiol 2024; 35:135-143. PMID: 38290549; PMCID: PMC11728114; DOI: 10.1055/a-2257-2985.
Abstract
BACKGROUND Nonlinear frequency compression (NFC) is a signal processing technique designed to lower high-frequency inaudible sounds for a listener to a lower frequency that is audible. Because the maximum frequency that is audible to a listener with hearing loss will vary with the input speech level, the input level used to set NFC could impact speech recognition. PURPOSE The purpose of this study was to determine the influence of the input level used to set NFC on nonsense syllable recognition. RESEARCH DESIGN Nonsense syllable recognition was measured for three NFC fitting conditions, with NFC set based on speech input levels of 50, 60, and 70 dB SPL, respectively, as well as without NFC (restricted bandwidth condition). STUDY SAMPLE Twenty-three adults (ages 42-80 years) with hearing loss. DATA COLLECTION AND ANALYSIS Data were collected monaurally using a hearing aid simulator. The start frequency and frequency compression ratios were set based on the SoundRecover Fitting Assistant. Speech stimuli were 657 consonant-vowel-consonant nonwords presented at 50, 60, and 70 dB SPL, mixed with steady noise (6 dB signal-to-noise ratio), and scored based on the entire word, initial consonant, vowel, and final consonant. Linear mixed-effects models examined the effects of NFC fitting condition, presentation level, and scoring method on percent correct recognition. Additional predictor variables of start frequency and frequency-compression ratio were examined. RESULTS Nonsense syllable recognition increased as presentation level increased. Nonsense syllable recognition for all presentation levels was highest when NFC was set based on the 70 dB SPL input level and decreased significantly when set based on the 60 and 50 dB SPL inputs. Relative to consonant recognition, there was a greater reduction in vowel recognition. Nonsense syllable recognition across NFC fitting conditions improved with increases in the start frequency, with higher start frequencies leading to better nonsense word recognition. CONCLUSION Nonsense syllable recognition was highest when NFC was set based on a 70 dB SPL presentation level, suggesting that a high presentation level should be used to determine NFC parameters for an individual patient.
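The stimuli were mixed with steady noise at a +6 dB signal-to-noise ratio. Here is a minimal sketch of RMS-based SNR mixing; the scaling convention and the placeholder signals are assumptions, not the study's calibration procedure.

```python
# Minimal sketch: mix a speech token with steady noise at a target SNR
# (+6 dB here), scaling the noise to the required RMS level.
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise RMS ratio equals `snr_db`, then mix."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    noise = noise[: len(speech)]
    target_noise_rms = rms(speech) / (10.0 ** (snr_db / 20.0))
    return speech + noise * (target_noise_rms / rms(noise))

fs = 22050
speech = np.random.randn(fs)        # placeholder for a CVC nonword recording
noise = np.random.randn(2 * fs)     # placeholder steady noise
mixture = mix_at_snr(speech, noise, snr_db=6.0)
```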
Affiliation(s)
- Marc A Brennan
- Special Education and Communication Disorders, University of Nebraska-Lincoln, Lincoln, Nebraska
- Daniel M Rasetshwane
- Hearing and Speech Perception Research, Boys Town National Research Hospital, Omaha, Nebraska
- Judy G Kopun
- Hearing and Speech Perception Research, Boys Town National Research Hospital, Omaha, Nebraska
- Ryan W McCreery
- Hearing and Speech Perception Research, Boys Town National Research Hospital, Omaha, Nebraska
10. Job K, Wiatr A, Skladzien J, Wiatr M. The Audiometric Assessment of the Effectiveness of Surgical Treatment of Otosclerosis Depending on the Preoperative Incidence of Carhart's Notch. Ear Nose Throat J 2024; 103:241-247. PMID: 34633243; DOI: 10.1177/01455613211043685.
Abstract
Objective: The presence of Carhart's notch at 2000 Hz in otosclerosis links the changed bone conduction for this frequency with the otosclerotic process occurring in the oval window. The aim of this study is to perform an audiometric assessment of the effectiveness of surgical treatment of otosclerosis depending on the incidence of Carhart's notch. Methods: The analysis included 116 patients treated surgically for the first time due to otosclerosis. Patients were divided into 4 groups depending on the occurrence of Carhart's notch, determined by pure-tone audiometry (PTA) before the surgery and 36 months afterward. The mean value of bone conduction thresholds was calculated for 500 Hz, 1000 Hz, 2000 Hz, and 3000 Hz in the groups in which Carhart's notch was observed. This value of bone conduction (BC) was a reference point for further analysis in patients who had no preoperative or postoperative Carhart's notch. Results: The analysis indicated that Carhart's notch in preoperative PTA is a statistically significant improvement factor for average BC. It was found that over a longer observation period, the presence of Carhart's notch has adverse effects on the size of the postoperative air-bone gap, and consequently on hearing improvement after surgical treatment. A comparison between patients from the two groups without preoperative Carhart's notch found that no beneficial effects of the surgery on speech comprehension were observed regarding high-level sensorineural hearing loss (SNHL). Conclusions: (1) In a long-term observation post-stapedotomy, average BC values were found to improve. Nevertheless, the improvement is less evident in patients with preoperative Carhart's notch. (2) Disappearance of Carhart's notch after surgical treatment of otosclerosis is a good prognostic sign for improvement in speech audiometry. (3) Deep SNHL in the absence of Carhart's notch in PTA constitutes a bad prognostic factor for improvement in speech audiometry in patients qualified for surgical treatment of otosclerosis.
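The analysis rests on two simple quantities: the mean bone-conduction threshold across 0.5-3 kHz and the air-bone gap. A minimal sketch with invented audiogram values follows; it is not the study's analysis code.

```python
# Minimal sketch: four-frequency mean bone-conduction (BC) threshold and
# air-bone gap (ABG) from an audiogram (example values are invented).
import numpy as np

freqs = [500, 1000, 2000, 3000]               # Hz
ac_db = np.array([45.0, 50.0, 55.0, 50.0])    # air-conduction thresholds (dB HL)
bc_db = np.array([10.0, 15.0, 30.0, 20.0])    # bone conduction; notch at 2 kHz

mean_bc = bc_db.mean()
abg = (ac_db - bc_db).mean()                  # average air-bone gap
print(f"mean BC = {mean_bc:.1f} dB HL, mean ABG = {abg:.1f} dB")
```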
Affiliation(s)
- Katarzyna Job
- Department of Otolaryngology, Jagiellonian University Medical College in Kraków, Krakow, Poland
- Agnieszka Wiatr
- Department of Otolaryngology, Jagiellonian University Medical College in Kraków, Krakow, Poland
- Jacek Skladzien
- Department of Otolaryngology, Jagiellonian University Medical College in Kraków, Krakow, Poland
- Maciej Wiatr
- Department of Otolaryngology, Jagiellonian University Medical College in Kraków, Krakow, Poland
11. Sassi TSDS, Bucuvic EC, Castiquini EAT, Chaves JN, Kimura M, Buzo BC, Lourençone LFM. High-Frequency Gain and Maximum Output Effects on Speech Recognition in Bone-Conduction Hearing Devices: Blinded Study. Otol Neurotol 2023; 44:1045-1051. PMID: 37917961; PMCID: PMC10662602; DOI: 10.1097/mao.0000000000004043.
Abstract
INTRODUCTION A bone-conduction hearing device (BCHD) uses natural sound transmission through bone and soft tissue, directly to the cochlea, via an external processor that captures and processes sound, which is converted into mechanical vibrations. Key parameters, such as maximum power output (MPO) and frequency range (FR), must be considered when indicating a BCHD because they can be decisive for speech recognition, especially under challenging listening conditions. OBJECTIVES To compare the hearing performance and speech recognition in noise of two sound processors (SPs), with different MPO and FR characteristics, among BCHD users. MATERIALS AND METHODS This single-blinded, comparative, observational study evaluated 21 Baha 4 system users with conductive or mixed hearing impairment. The free-field audiometry and speech recognition results were blindly collected under the following conditions: unaided, with the Baha 5 SP, and with the Baha 6 Max SP. RESULTS In free-field audiometry, significant differences were observed between the SPs at 0.25, 3, 4, 6, and 8 kHz, with the Baha 6 Max outperforming the Baha 5. The Baha 6 Max provided significantly better speech recognition than the Baha 5 under all the speech-in-noise conditions evaluated. Separating the transcutaneous from the percutaneous users, the Baha 6 Max Attract SP provided the best results, with significantly lower free-field thresholds than the Baha 5 Attract. The Baha 6 Max also significantly improved speech recognition in noise among both Attract and Connect users. CONCLUSION The present study revealed that the greater MPO and broader FR of the Baha 6 Max device helped increase high-frequency gain and improved speech recognition in experienced BCHD users.
Affiliation(s)
- Byanka Cagnacci Buzo
- Cochlear Latin-American, Panama Pacifico, Panama
- Santa Casa de Sao Paulo School of Medical Science, São Paulo
- Luiz Fernando Manzoni Lourençone
- Hospital for Rehabilitation of Craniofacial Anomalies (HRAC), Bauru
- Bauru School of Dentistry, University of São Paulo, Bauru, SP, Brazil
12. Zhang VW, Hou S, Wong A, Flynn C, Oliver J, Weiss M, Milner S, Ching TYC. Audiological characteristics of children with congenital unilateral hearing loss: insights into age of reliable behavioural audiogram acquisition and change of hearing loss. Front Pediatr 2023; 11:1279673. PMID: 38027307; PMCID: PMC10663346; DOI: 10.3389/fped.2023.1279673.
Abstract
Objectives The aims of this study were to report the audiological characteristics of children with congenital unilateral hearing loss (UHL), examine the age at which the first reliable behavioural audiograms can be obtained, and investigate hearing changes from diagnosis at birth to the first reliable behavioural audiogram. Method This study included a sample of 91 children who were diagnosed with UHL via newborn hearing screening and had reliable behavioural audiograms before 7 years of age. Information about diagnosis, audiological characteristics and etiology were extracted from clinical reports. Regression analysis was used to explore the potential reasons influencing the age at which first reliable behavioural audiograms were obtained. Correlation and ANOVA analyses were conducted to examine changes in hearing at octave frequencies between 0.5 and 4 kHz. The proportions of hearing loss change, as well as the clinical characteristics of children with and without progressive hearing loss, were described according to two adopted definitions. Definition 1: criterion (1), a decrease of 10 dB or greater at two or more adjacent frequencies between 0.5 and 4 kHz, or criterion (2), a decrease of 15 dB or greater at one octave frequency in the same frequency range. Definition 2: a change of ≥20 dB in the average of pure-tone thresholds at 0.5, 1, and 2 kHz. Results The study revealed that 48 children (52.7% of the sample of 91 children) had their first reliable behavioural audiogram by 3 years of age. The mean age at the first reliable behavioural audiogram was 3.0 years (SD 1.4; IQR: 1.8, 4.1). We found a significant association between children's behaviour and the presence or absence of ongoing middle ear issues in relation to the delay in obtaining a reliable behavioural audiogram. When comparing the hearing thresholds at diagnosis with the first reliable behavioural audiogram across different frequencies, it was observed that the majority of children experienced deterioration rather than improvement in the initially impaired ear at each frequency. Notably, there were more instances of hearing changes (either deterioration or improvement) in the 500 Hz and 1,000 Hz frequency ranges compared to the 2,000 Hz and 4,000 Hz ranges. Seventy-eight percent (n = 71) of children had hearing deterioration between the diagnosis and the first behavioural audiogram at one or more frequencies between 0.5 and 4 kHz, with a high proportion of them (52 out of the 71, 73.2%) developing severe to profound hearing loss. When using the averaged three-frequency thresholds (i.e., definition 2), only 26.4% of children (n = 24) in the sample were identified as having hearing deterioration. Applying definition 2 therefore underestimates the proportion of children that experienced hearing changes. The study also reported diverse characteristics of children with or without hearing deterioration. Conclusion The finding that 78% of children diagnosed with UHL at birth showed a deterioration in hearing between the levels at first diagnosis and their first behavioural audiogram highlights the importance of monitoring hearing threshold levels after diagnosis, so that appropriate intervention can be implemented in a timely manner. For clinical management, deterioration of 15 dB at one or more frequencies that does not recover warrants action.
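The two definitions of hearing deterioration stated above translate directly into code. Here is a minimal sketch that follows the criteria as written in the abstract; the variable names and example thresholds are invented, not the study's implementation.

```python
# Minimal sketch of the two hearing-deterioration definitions described above
# (thresholds in dB HL at 0.5, 1, 2, and 4 kHz; example values are invented).
import numpy as np

FREQS = [500, 1000, 2000, 4000]

def deteriorated_def1(baseline, follow_up):
    """Definition 1: >=10 dB worsening at two or more adjacent frequencies,
    or >=15 dB worsening at any single frequency (0.5-4 kHz)."""
    drop = np.asarray(follow_up) - np.asarray(baseline)   # positive = worse
    crit1 = any(drop[i] >= 10 and drop[i + 1] >= 10
                for i in range(len(drop) - 1))
    crit2 = any(drop >= 15)
    return crit1 or crit2

def deteriorated_def2(baseline, follow_up):
    """Definition 2: >=20 dB worsening in the 0.5/1/2 kHz three-frequency average."""
    pta = lambda t: float(np.mean(np.asarray(t)[:3]))
    return pta(follow_up) - pta(baseline) >= 20

baseline  = [30, 35, 40, 45]
follow_up = [45, 50, 45, 50]
print(deteriorated_def1(baseline, follow_up))   # True: adjacent 15 dB drops
print(deteriorated_def2(baseline, follow_up))   # False: PTA change ~11.7 dB
```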
Affiliation(s)
- Vicky W. Zhang
- Audiological Science Department, National Acoustic Laboratories, Sydney, NSW, Australia
- Department of Linguistics, Macquarie University, Sydney, NSW, Australia
- Sanna Hou
- Audiological Science Department, National Acoustic Laboratories, Sydney, NSW, Australia
- Angela Wong
- Audiological Science Department, National Acoustic Laboratories, Sydney, NSW, Australia
- Christopher Flynn
- Audiological Science Department, National Acoustic Laboratories, Sydney, NSW, Australia
- Lutwyche centre, Hearing Australia, Brisbane, QLD, Australia
- Jane Oliver
- Audiological Science Department, National Acoustic Laboratories, Sydney, NSW, Australia
- Upper Mt Gravatt centre, Hearing Australia, Brisbane, QLD, Australia
- Michelle Weiss
- Audiological Science Department, National Acoustic Laboratories, Sydney, NSW, Australia
- Dandenong centre, Hearing Australia, Melbourne, VIC, Australia
- Stacey Milner
- Audiological Science Department, National Acoustic Laboratories, Sydney, NSW, Australia
- Cheltenham centre, Hearing Australia, Melbourne, VIC, Australia
- Teresa Y. C. Ching
- Audiological Science Department, National Acoustic Laboratories, Sydney, NSW, Australia
- NextSense Institute, Macquarie Park, Sydney, NSW, Australia
- Macquarie School of Education, Macquarie University, Sydney, NSW, Australia
- School of Health and Rehabilitation Sciences, University of Queensland, St Lucia, QLD, Australia
13. Visram AS, Stone MA, Purdy SC, Bell SL, Brooks J, Bruce IA, Chesnaye MA, Dillon H, Harte JM, Hudson CL, Laugesen S, Morgan RE, O’Driscoll M, Roberts SA, Roughley AJ, Simpson D, Munro KJ. Aided Cortical Auditory Evoked Potentials in Infants With Frequency-Specific Synthetic Speech Stimuli: Sensitivity, Repeatability, and Feasibility. Ear Hear 2023; 44:1157-1172. PMID: 37019441; PMCID: PMC10426785; DOI: 10.1097/aud.0000000000001352.
Abstract
OBJECTIVES The cortical auditory evoked potential (CAEP) test is a candidate for supplementing clinical practice for infant hearing aid users and others who are not developmentally ready for behavioral testing. Sensitivity of the test for given sensation levels (SLs) has been reported to some degree, but further data are needed from large numbers of infants within the target age range, including repeat data where CAEPs were not detected initially. This study aims to assess sensitivity, repeatability, acceptability, and feasibility of CAEPs as a clinical measure of aided audibility in infants. DESIGN One hundred and three infant hearing aid users were recruited from 53 pediatric audiology centers across the UK. Infants underwent aided CAEP testing at age 3 to 7 months to a mid-frequency (MF) and (mid-)high-frequency (HF) synthetic speech stimulus. CAEP testing was repeated within 7 days. When developmentally ready (aged 7-21 months), the infants underwent aided behavioral hearing testing using the same stimuli, to estimate the decibel (dB) SL (i.e., level above threshold) of those stimuli when presented at the CAEP test sessions. Percentages of CAEP detections for different dB SLs are reported using an objective detection method (Hotelling's T²). Acceptability was assessed using caregiver interviews and a questionnaire, and feasibility by recording test duration and completion rate. RESULTS The overall sensitivity for a single CAEP test when the stimuli were ≥0 dB SL (i.e., audible) was 70% for the MF stimulus and 54% for the HF stimulus. After repeat testing, this increased to 84% and 72%, respectively. For SL >10 dB, the respective MF and HF test sensitivities were 80% and 60% for a single test, increasing to 94% and 79% for the two tests combined. Clinical feasibility was demonstrated by an excellent >99% completion rate, and acceptable median test duration of 24 minutes, including preparation time. Caregivers reported overall positive experiences of the test. CONCLUSIONS By addressing the clinical need to provide data in the target age group at different SLs, we have demonstrated that aided CAEP testing can supplement existing clinical practice when infants with hearing loss are not developmentally ready for traditional behavioral assessment. Repeat testing is valuable to increase test sensitivity. For clinical application, it is important to be aware of CAEP response variability in this age group.
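Responses were detected with a Hotelling's T² statistic. Below is a minimal sketch of a one-sample Hotelling's T² detector applied to binned single-trial epochs; the feature construction, alpha level, and simulated data are assumptions rather than the study's implementation.

```python
# Minimal sketch of a one-sample Hotelling's T^2 test for detecting an evoked
# response: each epoch is reduced to a few mean-voltage bins, and the test asks
# whether the bin means differ jointly from zero.
import numpy as np
from scipy.stats import f as f_dist

def hotelling_t2_detect(epochs, n_bins=5, alpha=0.05):
    """epochs: (n_trials, n_samples) array of baseline-corrected EEG epochs."""
    n_trials, n_samples = epochs.shape
    # Average consecutive samples into n_bins voltage features per epoch.
    feats = epochs[:, : n_samples - n_samples % n_bins]
    feats = feats.reshape(n_trials, n_bins, -1).mean(axis=2)
    mean = feats.mean(axis=0)
    cov = np.cov(feats, rowvar=False)
    t2 = n_trials * mean @ np.linalg.solve(cov, mean)
    # Convert T^2 to an F statistic with (n_bins, n_trials - n_bins) df.
    f_stat = (n_trials - n_bins) / (n_bins * (n_trials - 1)) * t2
    p = 1.0 - f_dist.cdf(f_stat, n_bins, n_trials - n_bins)
    return p < alpha, p

# Simulated data: noise plus a small, consistent response shape per trial.
rng = np.random.default_rng(0)
epochs = rng.normal(size=(60, 200)) + np.hanning(200) * 0.5
print(hotelling_t2_detect(epochs))
```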
Affiliation(s)
- Anisa S. Visram
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, United Kingdom
- Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, United Kingdom
- Michael A. Stone
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, United Kingdom
- Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, United Kingdom
- Suzanne C. Purdy
- School of Psychology, University of Auckland, Auckland, New Zealand
- Steven L. Bell
- Institute of Sound and Vibration Research, Faculty of Engineering and Physical Sciences, University of Southampton, United Kingdom
- Jo Brooks
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, United Kingdom
- Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, United Kingdom
- Iain A. Bruce
- Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, United Kingdom
- Michael A. Chesnaye
- Institute of Sound and Vibration Research, Faculty of Engineering and Physical Sciences, University of Southampton, United Kingdom
- Harvey Dillon
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, United Kingdom
- Department of Linguistics, Macquarie University, Sydney, Australia
- James M. Harte
- Interacoustics Research Unit, c/o Technical University of Denmark, Denmark
- Eriksholm Research Centre, Denmark
- Caroline L. Hudson
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, United Kingdom
- Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, United Kingdom
- Søren Laugesen
- Interacoustics Research Unit, c/o Technical University of Denmark, Denmark
- Rhiannon E. Morgan
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, United Kingdom
- Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, United Kingdom
- Martin O’Driscoll
- Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, United Kingdom
- Stephen A. Roberts
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, United Kingdom
- Amber J. Roughley
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, United Kingdom
- Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, United Kingdom
- David Simpson
- Institute of Sound and Vibration Research, Faculty of Engineering and Physical Sciences, University of Southampton, United Kingdom
- Kevin J. Munro
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, United Kingdom
- Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, United Kingdom
14. Koerner TK, Gallun FJ. Speech understanding and extended high-frequency hearing sensitivity in blast-exposed veterans. J Acoust Soc Am 2023; 154:379-387. PMID: 37462921; DOI: 10.1121/10.0020174.
Abstract
Auditory difficulties reported by normal-hearing Veterans with a history of blast exposure are primarily thought to stem from processing deficits in the central nervous system. However, previous work on speech-understanding-in-noise difficulties in this patient population has only considered peripheral hearing thresholds in the standard audiometric range. Recent research suggests that variability in extended high-frequency (EHF; >8 kHz) hearing sensitivity may contribute to speech understanding deficits in normal-hearing individuals. Therefore, this work was designed to identify the effects of blast exposure on several common clinical speech understanding measures and EHF hearing sensitivity. This work also aimed to determine whether variability in EHF hearing sensitivity contributes to speech understanding difficulties in normal-hearing blast-exposed Veterans. Data from 41 normal- or near-normal-hearing Veterans with a history of blast exposure and 31 normal- or near-normal-hearing control participants with no history of head injury were employed in this study. Analysis identified an effect of blast exposure on several speech understanding measures but showed no statistically significant differences in EHF thresholds between participant groups. Data showed that variability in EHF hearing sensitivity did not contribute to group-related differences in speech understanding, although study limitations impact interpretation of these results.
Affiliation(s)
- Tess K Koerner
- Department of Veterans Affairs (VA) Rehabilitation Research and Development (RR&D), National Center for Rehabilitative Auditory Research (NCRAR), VA Portland Health Care System, Portland, Oregon 97239, USA
- Frederick J Gallun
- Department of Veterans Affairs (VA) Rehabilitation Research and Development (RR&D), National Center for Rehabilitative Auditory Research (NCRAR), VA Portland Health Care System, Portland, Oregon 97239, USA
15. Walker EA. The Importance of High-Frequency Bandwidth on Speech and Language Development in Children: A Review of Patricia Stelmachowicz's Contributions to Pediatric Audiology. Semin Hear 2023; 44:S3-S16. PMID: 36970651; PMCID: PMC10033203; DOI: 10.1055/s-0043-1764138.
Abstract
We review the literature related to Patricia Stelmachowicz's research in pediatric audiology, specifically focusing on the influence of audibility in language development and acquisition of linguistic rules. Pat Stelmachowicz spent her career increasing our awareness and understanding of children with mild to severe hearing loss who use hearing aids. Using a variety of novel experiments and stimuli, Pat and her colleagues produced a robust body of evidence to support the hypothesis that development moderates the role of frequency bandwidth on speech perception, particularly for fricative sounds. The prolific research that came out of Pat's lab had several important implications for clinical practice. First, her work highlighted that children require access to more high-frequency speech information than adults in the detection and identification of fricatives such as /s/ and /z/. These high-frequency speech sounds are important for morphological and phonological development. Consequently, the limited bandwidth of conventional hearing aids may delay the formation of linguistic rules in these two domains for children with hearing loss. Second, it emphasized the importance of not merely applying adult findings to the clinical decision-making process in pediatric amplification. Clinicians should use evidence-based practices to verify and provide maximum audibility for children who use hearing aids to acquire spoken language.
Affiliation(s)
- Elizabeth A. Walker
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, Iowa
16. Zaar J, Simonsen LB, Dau T, Laugesen S. Toward a clinically viable spectro-temporal modulation test for predicting supra-threshold speech reception in hearing-impaired listeners. Hear Res 2023; 427:108650. PMID: 36463632; DOI: 10.1016/j.heares.2022.108650.
Abstract
The ability of hearing-impaired listeners to detect spectro-temporal modulation (STM) has been shown to correlate with individual listeners' speech reception performance. However, the STM detection tests used in previous studies were overly challenging especially for elderly listeners with moderate-to-severe hearing loss. Furthermore, the speech tests considered as a reference were not optimized to yield ecologically valid outcomes that represent real-life speech reception deficits. The present study investigated an STM detection measurement paradigm with individualized audibility compensation, focusing on its clinical viability and relevance as a real-life supra-threshold speech intelligibility predictor. STM thresholds were measured in 13 elderly hearing-impaired native Danish listeners using four previously established (noise-carrier based) and two novel complex-tone carrier based STM stimulus variants. Speech reception thresholds (SRTs) were measured (i) in a realistic spatial speech-on-speech set up and (ii) using co-located stationary noise, both with individualized amplification. In contrast with previous related studies, the proposed measurement paradigm yielded robust STM thresholds for all listeners and conditions. The STM thresholds were positively correlated with the SRTs, whereby significant correlations were found for the realistic speech-test condition but not for the stationary-noise condition. Three STM stimulus variants (one noise-carrier based and two complex-tone based) yielded significant predictions of SRTs, accounting for up to 53% of the SRT variance. The results of the study could form the basis for a clinically viable STM test for quantifying supra-threshold speech reception deficits in aided hearing-impaired listeners.
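The headline result is the share of SRT variance accounted for by STM thresholds. A minimal sketch of that computation with invented data follows; the values do not come from the study.

```python
# Minimal sketch: correlation between STM thresholds and SRTs, and the
# corresponding variance explained (values are invented for illustration).
import numpy as np
from scipy.stats import pearsonr

stm_thresholds = np.array([-12.0, -10.5, -9.0, -7.5, -6.0, -5.0, -3.5])
srts_db        = np.array([ -5.0,  -4.2, -3.9, -2.5, -1.8, -1.0,  0.2])

r, p = pearsonr(stm_thresholds, srts_db)
print(f"r = {r:.2f}, p = {p:.3f}, variance explained = {r**2:.0%}")
```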
Affiliation(s)
- Johannes Zaar
- Eriksholm Research Centre, DK-3070 Snekkersten, Denmark; Hearing Systems Section, Department of Health Technology, Technical University of Denmark, DK-2800 Kgs. Lyngby, Denmark
- Torsten Dau
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, DK-2800 Kgs. Lyngby, Denmark
- Søren Laugesen
- Interacoustics Research Unit, DK-2800 Kgs. Lyngby, Denmark
17. Zheng C, Xu C, Wang M, Li X, Moore BCJ. Evaluation of deep marginal feedback cancellation for hearing aids using speech and music. Trends Hear 2023; 27:23312165231192290. PMID: 37551089; PMCID: PMC10408330; DOI: 10.1177/23312165231192290.
Abstract
Speech and music both play fundamental roles in daily life. Speech is important for communication while music is important for relaxation and social interaction. Both speech and music have a large dynamic range. This does not pose problems for listeners with normal hearing. However, for hearing-impaired listeners, elevated hearing thresholds may result in low-level portions of sound being inaudible. Hearing aids with frequency-dependent amplification and amplitude compression can partly compensate for this problem. However, the gain required for low-level portions of sound to compensate for the hearing loss can be larger than the maximum stable gain of a hearing aid, leading to acoustic feedback. Feedback control is used to avoid such instability, but this can lead to artifacts, especially when the gain is only just below the maximum stable gain. We previously proposed a deep-learning method called DeepMFC for controlling feedback and reducing artifacts and showed that when the sound source was speech DeepMFC performed much better than traditional approaches. However, its performance using music as the sound source was not assessed and the way in which it led to improved performance for speech was not determined. The present paper reveals how DeepMFC addresses feedback problems and evaluates DeepMFC using speech and music as sound sources with both objective and subjective measures. DeepMFC achieved good performance for both speech and music when it was trained with matched training materials. When combined with an adaptive feedback canceller it provided over 13 dB of additional stable gain for hearing-impaired listeners.
Affiliation(s)
- Chengshi Zheng
- Key Laboratory of Noise and Vibration Research, Institute of Acoustics, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Chenyang Xu
- Key Laboratory of Noise and Vibration Research, Institute of Acoustics, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Meihuang Wang
- Key Laboratory of Noise and Vibration Research, Institute of Acoustics, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Xiaodong Li
- Key Laboratory of Noise and Vibration Research, Institute of Acoustics, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Brian C. J. Moore
- Cambridge Hearing Group, Department of Psychology, University of Cambridge, Cambridge, UK
18. Monson BB, Buss E. On the use of the TIMIT, QuickSIN, NU-6, and other widely used bandlimited speech materials for speech perception experiments. J Acoust Soc Am 2022; 152:1639. PMID: 36182310; PMCID: PMC9473723; DOI: 10.1121/10.0013993.
Abstract
The use of spectrally degraded speech signals deprives listeners of acoustic information that is useful for speech perception. Several popular speech corpora, recorded decades ago, have spectral degradations, including limited extended high-frequency (EHF) (>8 kHz) content. Although frequency content above 8 kHz is often assumed to play little or no role in speech perception, recent research suggests that EHF content in speech can have a significant beneficial impact on speech perception under a wide range of natural listening conditions. This paper provides an analysis of the spectral content of popular speech corpora used for speech perception research to highlight the potential shortcomings of using bandlimited speech materials. Two corpora analyzed here, the TIMIT and NU-6, have substantial low-frequency spectral degradation (<500 Hz) in addition to EHF degradation. We provide an overview of the phenomena potentially missed by using bandlimited speech signals, and the factors to consider when selecting stimuli that are sensitive to these effects.
Affiliation(s)
- Brian B Monson
- Department of Speech and Hearing Science, University of Illinois Urbana-Champaign, Champaign, Illinois 61820, USA
- Emily Buss
- Department of Otolaryngology/HNS, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27514, USA
19. Idiopathic sudden sensorineural hearing loss: A critique on corticosteroid therapy. Hear Res 2022; 422:108565. PMID: 35816890; DOI: 10.1016/j.heares.2022.108565.
Abstract
Idiopathic sudden sensorineural hearing loss (ISSNHL) is a condition affecting 5-30 per 100,000 individuals with the potential to significantly reduce one's quality of life. The true incidence of this condition is not known because it often goes undiagnosed and/or recovers within a few days. ISSNHL is defined as a ≥30 dB loss of hearing over 3 consecutive audiometric octaves within 3 days with no known cause. The disorder is typically unilateral, and most cases spontaneously recover to functional hearing within 30 days. High-frequency losses, ageing, and vertigo are associated with a poorer prognosis. Multiple causes of ISSNHL have been postulated; the most commonly proposed are vascular obstruction, viral infection, and labyrinthine membrane breaks. Corticosteroids are the standard treatment option, but this practice is not without opposition. Postmortem analyses of temporal bones from ISSNHL cases have been inconclusive. This report analyzed ISSNHL studies administering corticosteroids that met strict inclusion criteria and identified a number of methodologic shortcomings that compromise the interpretation of results. We discuss these issues and conclude that the data do not support present treatment practices. The current state of evidence on ISSNHL calls for a multi-institutional, randomized, double-blind trial with validated outcome measures to provide science-based treatment guidance.
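The working definition quoted above (a ≥30 dB loss over 3 consecutive audiometric octaves within 3 days) can be expressed as a simple check on a pair of audiograms. The sketch below reads the criterion as a ≥30 dB worsening at each of three consecutive test frequencies, which is one common interpretation; the example thresholds are invented for illustration.

```python
def meets_issnhl_criterion(before_db, after_db):
    """True if any three consecutive frequencies each worsened by >= 30 dB."""
    drops = [a - b for b, a in zip(before_db, after_db)]
    return any(all(d >= 30 for d in drops[i:i + 3])
               for i in range(len(drops) - 2))

# Example thresholds (dB HL) at consecutive audiometric octaves 0.25-8 kHz (invented).
baseline = [10, 10, 15, 15, 20, 25]
sudden   = [15, 20, 50, 55, 60, 30]
print(meets_issnhl_criterion(baseline, sudden))  # True: three consecutive drops >= 30 dB
```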
Collapse
|
20
|
Jain S, Narne VK, Nataraja NP, Madhukesh S, Kumar K, Moore BCJ. The effect of age and hearing sensitivity at frequencies above 8 kHz on auditory stream segregation and speech perception. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2022; 152:716. [PMID: 35931505 DOI: 10.1121/10.0012917] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/04/2022] [Accepted: 07/07/2022] [Indexed: 06/06/2023]
Abstract
The effects of age and mild hearing loss over the extended high-frequency (EHF) range from 9000 to 16 000 Hz on speech perception and auditory stream segregation were assessed using four groups: (1) young with normal hearing threshold levels (HTLs) over both the conventional and EHF range; (2) older with audiograms matched to those for group 1; (3) young with normal HTLs over the conventional frequency range and elevated HTLs over the EHF range; (4) older with audiograms matched to those for group 3. For speech in quiet, speech recognition thresholds and speech identification scores did not differ significantly across groups. For monosyllables in noise, both greater age and hearing loss over the EHF range adversely affected performance, but the effect of age was much larger than the effect of hearing status. Stream segregation was assessed using a rapid sequence of vowel stimuli differing in fundamental frequency (F0). Larger differences in F0 were required for stream segregation for the two groups with impaired hearing in the EHF range, but there was no significant effect of age. It is argued that impaired hearing in the EHF range is associated with impaired auditory function at lower frequencies, despite normal audiometric thresholds at those frequencies.
Collapse
Affiliation(s)
- Saransh Jain
- All India Institute of Speech and Hearing, University of Mysore, Mysuru-570006 (Kar.), India
| | - Vijaya Kumar Narne
- Department of Medical Rehabilitation Sciences, College of Applied Medical Sciences, King Khalid University, Abha 61481, Saudi Arabia
| | - N P Nataraja
- JSS Institute of Speech and Hearing, University of Mysore, Mysuru-570004 (Kar.), India
| | - Sanjana Madhukesh
- Department of Speech and Hearing, Manipal College of Health Professionals, Manipal-576104 (Kar.), India
| | - Kruthika Kumar
- District Disabled Rehabilitation Centre, Chikmagalur-577126 (Kar.), India
| | - Brian C J Moore
- Cambridge Hearing Group, Department of Psychology, University of Cambridge, Cambridge, CB2 3EB, United Kingdom
| |
Collapse
|
21
|
Narne VK, Sreejith V S, Tiwari N. Long-Term Average Speech Spectra and Dynamic Ranges of 17 Indian Languages. Am J Audiol 2021; 30:1096-1107. [PMID: 34752152 DOI: 10.1044/2021_aja-21-00125] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022] Open
Abstract
PURPOSE In this work, we determined the long-term average speech spectra (LTASS) and dynamic ranges (DR) of 17 Indian languages. This work is important because LTASS and DR are language-dependent functions used to fit hearing aids, calculate the Speech Intelligibility Index, and recognize speech automatically. Currently, LTASS and DR functions for English are used to fit hearing aids in India. Our work may help improve the performance of hearing aids in the Indian context. METHOD Speech samples from native talkers were used as stimuli in this study. Each speech sample was initially cleaned of extraneous sounds and excessively long pauses. Next, LTASS and DR functions for each language were calculated for different frequency bands. A similar analysis was also performed for English for reference purposes. Two-way analysis of variance was conducted to understand the effects of important parameters on LTASS and DR. Finally, a one-sample t test was conducted to assess the significance of important statistical attributes of our data. RESULTS We showed that LTASS and DR for Indian languages are 5-10 dB and 11 dB lower, respectively, than those for English. These differences may be due to a lower rate of use of high-frequency-dominant phonemes and a preponderance of vowel-ending words in Indian languages. We also showed that LTASS and DR do not differ significantly across Indian languages. Hence, we propose a common LTASS and DR for Indian languages. CONCLUSIONS We showed that differences in LTASS and DR for Indian languages vis-à-vis English are large and significant. Such differences may be attributed to phonetic and linguistic characteristics of Indian languages.
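A minimal sketch of the kind of band-wise analysis described above is given below: short-term levels are computed in nominal one-third-octave bands, averaged to give an LTASS-like band level, and their spread taken as a dynamic range. The one-third-octave analysis, the percentile-based DR definition, and the file name are assumptions for illustration; the paper's exact analysis parameters are not reproduced here.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

def third_octave_centers():
    """Nominal one-third-octave centre frequencies from ~0.1 to 10 kHz (21 bands)."""
    return 1000.0 * 2.0 ** (np.arange(-10, 11) / 3.0)

def band_levels(x, fs, fc, frame_s=0.125):
    """Short-term RMS levels (dB) of x in one one-third-octave band centred at fc."""
    sos = butter(4, [fc * 2 ** (-1 / 6), fc * 2 ** (1 / 6)],
                 btype="bandpass", fs=fs, output="sos")
    y = sosfilt(sos, x)
    n = int(frame_s * fs)
    frames = y[: len(y) // n * n].reshape(-1, n)
    return 10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)

fs, x = wavfile.read("speech_sample.wav")                # hypothetical cleaned recording
x = x.astype(float)
for fc in third_octave_centers():
    if fc * 2 ** (1 / 6) >= fs / 2:
        continue                                         # skip bands above Nyquist
    lev = band_levels(x, fs, fc)
    ltass = 10 * np.log10(np.mean(10 ** (lev / 10)))     # long-term average band level
    dr = np.percentile(lev, 99) - np.percentile(lev, 1)  # assumed DR definition
    print(f"{fc:7.1f} Hz   LTASS {ltass:6.1f} dB   DR {dr:5.1f} dB")
```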
Collapse
Affiliation(s)
- Vijaya Kumar Narne
- Dhwani Laboratory, Department of Mechanical Engineering, Indian Institute of Technology, Kanpur, India
| | - Sreejith V S
- Dhwani Laboratory, Department of Mechanical Engineering, Indian Institute of Technology, Kanpur, India
| | - Nachiketa Tiwari
- Dhwani Laboratory, Department of Mechanical Engineering, Indian Institute of Technology, Kanpur, India
| |
Collapse
|
22
|
The Importance of Extended High-Frequency Speech Information in the Recognition of Digits, Words, and Sentences in Quiet and Noise. Ear Hear 2021; 43:913-920. [PMID: 34772838 DOI: 10.1097/aud.0000000000001142] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES In pure-tone audiometry, hearing thresholds are typically measured up to 8 kHz. Recent research has shown that extended high-frequency (EHF; frequencies >8 kHz) speech information improves speech recognition. However, it is unclear whether the EHF benefit is present for different types of speech material. This study assesses the added value of EHF information for speech recognition in noise for digit triplets, consonant-vowel-consonant (CVC) words, and sentences; and for speech recognition in quiet for CVC. DESIGN Twenty-four young adults with normal-hearing thresholds up to 16 kHz performed a listening experiment in quiet and in noise in a within-subject repeated measures design. Stimuli were presented monaurally. Steady state speech-shaped noise at a fixed signal to noise ratio was used for measurements in noise. Listening conditions varied only in terms of available EHF information. Stimuli were presented in three different conditions: (1) both speech and noise broadband, (2) speech broadband and noise low-pass filtered at 8 kHz, and (3) both speech and noise low-pass filtered at 8 kHz. In the speech-in-quiet experiment, stimuli (CVC) were high-pass filtered at 3 kHz and presented in two conditions: (1) with EHF information and (2) without EHF information. RESULTS In the speech-in-noise experiment, for all speech material, the highest scores were achieved in the condition where the noise was low-pass filtered at 8 kHz and speech unfiltered; the lowest scores were obtained in the condition where both speech and noise were low-pass filtered at 8 kHz. Adding speech frequencies above 8 kHz improved the median recognition scores by 75.0%, 21.8%, and 23.8% for digit triplets, words, and sentences, respectively, at a fixed signal to noise ratio. In the speech-in-quiet experiment, median recognition scores were 7.8% higher in the condition where the EHF information was available, as opposed to when it was not. CONCLUSIONS Speech information for frequencies above 8 kHz contributes to speech recognition in noise. It also contributes to speech recognition in quiet when information below 3 kHz is absent. Our results suggest that EHFs may be relevant in challenging listening conditions and should be measured in pure-tone audiometry to get a complete picture of a person's hearing. Further, results of speech recognition tests may vary when different recording and/or measurement equipment is used with different frequency responses above 8 kHz.
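The filtering conditions described above can be approximated with a few lines of signal processing: speech and speech-shaped noise are mixed at a fixed SNR, with either signal optionally low-pass filtered at 8 kHz. The file names, SNR, filter order, and the assumption of a sampling rate well above 16 kHz are illustrative and do not reproduce the study's stimuli.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

def lowpass_8k(x, fs):
    """Low-pass filter at 8 kHz (assumes fs well above 16 kHz, e.g. 44.1 kHz)."""
    sos = butter(8, 8000, btype="lowpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so that the speech-to-noise RMS ratio equals snr_db."""
    noise = noise[: len(speech)]
    scale = np.sqrt(np.mean(speech ** 2) / np.mean(noise ** 2)) * 10 ** (-snr_db / 20)
    return speech + scale * noise

fs, speech = wavfile.read("cvc_word.wav")               # hypothetical stimuli
_, noise = wavfile.read("speech_shaped_noise.wav")
speech, noise = speech.astype(float), noise.astype(float)

conditions = {
    "broadband speech + broadband noise": mix_at_snr(speech, noise, -5),
    "broadband speech + LP noise": mix_at_snr(speech, lowpass_8k(noise, fs), -5),
    "LP speech + LP noise": mix_at_snr(lowpass_8k(speech, fs), lowpass_8k(noise, fs), -5),
}
for name, sig in conditions.items():
    print(name, "-> mean power (dB):", round(float(10 * np.log10(np.mean(sig ** 2))), 1))
```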
Collapse
|
23
|
Narne VK, Tiwari N. Cross-language comparison of long-term average speech spectrum and dynamic range for three Indian languages and British English. CLINICAL ARCHIVES OF COMMUNICATION DISORDERS 2021; 6:127-134. [DOI: 10.21849/cacd.2021.00465] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/03/2021] [Accepted: 08/31/2021] [Indexed: 11/19/2022]
Abstract
Purpose: The Long-Term Average Speech Spectrum (LTASS) and Dynamic Range (DR) of speech strongly influence estimates of the Speech Intelligibility Index (SII) and the gain and compression required for hearing aid fitting. It is also known that the acoustic and linguistic characteristics of a language have a bearing on its LTASS and DR. Thus, there is a need to estimate LTASS and DR for Indian languages. The present work on three Indian languages fills this gap and contrasts the LTASS and DR attributes of these languages against British English. Methods: For this purpose, LTASS and DR were measured for 21 one-third octave bands in the frequency range of 0.1 to 10 kHz for Hindi, Kannada, Indian English and British English. Results: Our work shows that the DR of the Indian languages studied is 7-10 dB smaller than that of British English. We also report that LTASS levels for Indian languages are 7 dB lower than those for British English at frequencies above 1 kHz. Finally, we observed that LTASS and DR attributes were largely the same across genders. Conclusions: Given the evidence presented in this work that the LTASS and DR characteristics of the Indian languages analyzed are markedly different from those of British English, there is a need to determine Indian-language-specific SII, gain, and compression parameters for use in hearing aids.
Collapse
|
24
|
McLean WJ, Hinton AS, Herby JT, Salt AN, Hartsock JJ, Wilson S, Lucchino DL, Lenarz T, Warnecke A, Prenzler N, Schmitt H, King S, Jackson LE, Rosenbloom J, Atiee G, Bear M, Runge CL, Gifford RH, Rauch SD, Lee DJ, Langer R, Karp JM, Loose C, LeBel C. Improved Speech Intelligibility in Subjects With Stable Sensorineural Hearing Loss Following Intratympanic Dosing of FX-322 in a Phase 1b Study. Otol Neurotol 2021; 42:e849-e857. [PMID: 33617194 PMCID: PMC8279894 DOI: 10.1097/mao.0000000000003120] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/25/2023]
Abstract
OBJECTIVES There are no approved pharmacologic therapies for chronic sensorineural hearing loss (SNHL). The combination of CHIR99021+valproic acid (CV, FX-322) has been shown to regenerate mammalian cochlear hair cells ex vivo. The objectives were to characterize the cochlear pharmacokinetic profile of CV in guinea pigs, then measure FX-322 in human perilymph samples, and finally assess safety and audiometric effects of FX-322 in humans with chronic SNHL. STUDY DESIGNS Middle ear residence, cochlear distribution, and elimination profiles of FX-322 were assessed in guinea pigs. Human perilymph sampling following intratympanic FX-322 dosing was performed in an open-label study in cochlear implant subjects. Unilateral intratympanic FX-322 was assessed in a Phase 1b prospective, randomized, double-blinded, placebo-controlled clinical trial. SETTING Three private otolaryngology practices in the US. PATIENTS Individuals diagnosed with mild to moderately severe chronic SNHL (≤70 dB standard pure-tone average) in one or both ears that was stable for ≥6 months, medical histories consistent with noise-induced or idiopathic sudden SNHL, and no significant vestibular symptoms. INTERVENTIONS Intratympanic FX-322. MAIN OUTCOME MEASURES Pharmacokinetics of FX-322 in perilymph and safety and audiometric effects. RESULTS After intratympanic delivery in guinea pigs and humans, FX-322 levels in the cochlear extended high-frequency region were observed and projected to be pharmacologically active in humans. A single dose of FX-322 in SNHL subjects was well tolerated with mild, transient treatment-related adverse events (n = 15 FX-322 vs 8 placebo). Of the six patients treated with FX-322 who had baseline word recognition in quiet scores below 90%, four showed clinically meaningful improvements (absolute word recognition improved 18-42%, exceeding the 95% confidence interval determined by previously published criteria). No significant changes in placebo-injected ears were observed. At the group level, FX-322 subjects outperformed placebo group in word recognition in quiet when averaged across all time points, with a mean improvement from baseline of 18.9% (p = 0.029). For words in noise, the treated group showed a mean 1.3 dB signal-to-noise ratio improvement (p = 0.012) relative to their baseline scores while placebo-treated subjects did not (-0.21 dB, p = 0.71). CONCLUSIONS Delivery of FX-322 to the extended high-frequency region of the cochlea is well tolerated and enhances speech recognition performance in multiple subjects with stable chronic hearing loss.
Collapse
Affiliation(s)
- Will J. McLean
- Frequency Therapeutics, Woburn, MA & Farmington, CT
- Department of Surgery, University of Connecticut School of Medicine, Farmington, CT
| | | | | | - Alec N. Salt
- Department of Otolaryngology, Central Institute for the Deaf, Fay and Carl Simons Center for Hearing and Deafness, Washington University School of Medicine, Saint Louis, MO
| | - Jared J. Hartsock
- Department of Otolaryngology, Central Institute for the Deaf, Fay and Carl Simons Center for Hearing and Deafness, Washington University School of Medicine, Saint Louis, MO
| | - Sam Wilson
- Frequency Therapeutics, Woburn, MA & Farmington, CT
| | | | - Thomas Lenarz
- Department of Otolaryngology and Cluster of Excellence of the German Research Foundation “Hearing4all”, Hannover Medical School, Hannover, Germany
| | - Athanasia Warnecke
- Department of Otolaryngology and Cluster of Excellence of the German Research Foundation “Hearing4all”, Hannover Medical School, Hannover, Germany
| | - Nils Prenzler
- Department of Otolaryngology and Cluster of Excellence of the German Research Foundation “Hearing4all”, Hannover Medical School, Hannover, Germany
| | - Heike Schmitt
- Department of Otolaryngology and Cluster of Excellence of the German Research Foundation “Hearing4all”, Hannover Medical School, Hannover, Germany
| | | | | | | | | | | | - Christina L. Runge
- Department of Otolaryngology and Communication Sciences, Medical College of Wisconsin, Milwaukee, WI
| | - René H. Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
| | - Steven D. Rauch
- Department of Otolaryngology, Harvard Medical School and Massachusetts Eye and Ear, Boston
| | - Daniel J. Lee
- Department of Otolaryngology, Harvard Medical School and Massachusetts Eye and Ear, Boston
| | - Robert Langer
- Department of Biological Engineering, Massachusetts Institute of Technology, Cambridge, MA
| | - Jeffrey M. Karp
- Center for Nanomedicine, Department of Anesthesiology, Perioperative and Pain Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Harvard-MIT Division of Health Science and Technology
- Harvard Stem Cell Institute, Harvard University, Cambridge, MA, USA
- Broad Institute of MIT and Harvard, Cambridge, MA
| | | | - Carl LeBel
- Frequency Therapeutics, Woburn, MA & Farmington, CT
| |
Collapse
|
25
|
Wiatr A, Wiatr M. Influence of Changes in Bone-Conduction Thresholds on Speech Audiometry in Patients Who Underwent Surgery for Otosclerosis. J Int Adv Otol 2020; 16:353-357. [PMID: 33136015 DOI: 10.5152/iao.2020.8139] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
OBJECTIVES Otosclerosis is an underlying disease of the bony labyrinth that results in hearing loss. In some cases, the involvement of the bony part of the cochlea results in mixed hearing loss. The aim of this analysis was to seek a correlation between the results of speech audiometry tests and the changes in bone-conduction thresholds observed after surgical treatment. MATERIALS AND METHODS The analysis included 140 patients who were hospitalized and surgically treated for otosclerosis. The patients who were treated with stapedotomy were divided into subgroups based on the value of the bone-conduction threshold before the surgery. An audiological assessment was performed, with pure-tone threshold audiometry and speech audiometry tests taken into account. RESULTS The effectiveness of the surgery was judged by the change in the speech audiometry test results after 12 months of observation. After the surgery, it was found that a significant improvement, characterized as achieving 100% understanding of speech, occurred in 61.90% of the patients. CONCLUSION There is a correlation between the improvement in speech audiometry tests and bone-conduction curve after stapedotomy. The changes achieved in the bone-conduction curve at the frequency range up to 3,000 Hz (hertz) had a significant impact on the improvements in speech audiometry test results. Higher frequencies provide more data for improving the hearing process. A mean bone-conduction threshold between 21 and 40 dB (decibels) in the pure-tone audiometry examination performed before surgery is a favorable prognostic factor in the improvement of the bone-conduction threshold after surgery.
Collapse
Affiliation(s)
- Agnieszka Wiatr
- Department of Otolaryngology, Jagiellonian University Medical College in Kraków, Poland
| | - Maciej Wiatr
- Department of Otolaryngology, Jagiellonian University Medical College in Kraków, Poland
| |
Collapse
|
26
|
Hunter LL, Monson BB, Moore DR, Dhar S, Wright BA, Munro KJ, Zadeh LM, Blankenship CM, Stiepan SM, Siegel JH. Extended high frequency hearing and speech perception implications in adults and children. Hear Res 2020; 397:107922. [PMID: 32111404 PMCID: PMC7431381 DOI: 10.1016/j.heares.2020.107922] [Citation(s) in RCA: 114] [Impact Index Per Article: 22.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/04/2019] [Revised: 02/10/2020] [Accepted: 02/11/2020] [Indexed: 01/09/2023]
Abstract
Extended high frequencies (EHF), above 8 kHz, represent a region of the human hearing spectrum that is generally ignored by clinicians and researchers alike. This article is a compilation of contributions that, together, make the case for an essential role of EHF in both normal hearing and auditory dysfunction. We start with the fundamentals of biological and acoustic determinism - humans have EHF hearing for a purpose, for example, the detection of prey, predators, and mates. EHF hearing may also provide a boost to speech perception in challenging conditions and its loss, conversely, might help explain difficulty with the same task. However, it could be that EHF are a marker for damage in the conventional frequency region that is more related to speech perception difficulties. Measurement of EHF hearing in concert with otoacoustic emissions could provide an early warning of age-related hearing loss. In early life, when EHF hearing sensitivity is optimal, we can use it for enhanced phonetic identification during language learning, but we are also susceptible to diseases that can prematurely damage it. EHF audiometry techniques and standardization are reviewed, providing evidence that they are reliable to measure and provide important information for early detection, monitoring and possible prevention of hearing loss in populations at-risk. To better understand the full contribution of EHF to human hearing, clinicians and researchers can contribute by including its measurement, along with measures of speech in noise and self-report of hearing difficulties and tinnitus in clinical evaluations and studies.
Collapse
Affiliation(s)
- Lisa L Hunter
- Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, USA; Department of Otolaryngology, University of Cincinnati, USA.
| | - Brian B Monson
- Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign, USA; Neuroscience Program, University of Illinois at Urbana-Champaign, USA
| | - David R Moore
- Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, USA; Department of Otolaryngology, University of Cincinnati, USA; Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, UK
| | - Sumitrajit Dhar
- Roxelyn & Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA; Knowles Hearing Center, Northwestern University, Evanston, IL, USA
| | - Beverly A Wright
- Roxelyn & Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
| | - Kevin J Munro
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, UK
| | - Lina Motlagh Zadeh
- Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, USA
| | - Chelsea M Blankenship
- Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, USA
| | - Samantha M Stiepan
- Roxelyn & Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA; Knowles Hearing Center, Northwestern University, Evanston, IL, USA
| | - Jonathan H Siegel
- Roxelyn & Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA; Knowles Hearing Center, Northwestern University, Evanston, IL, USA
| |
Collapse
|
27
|
Fontan L, Le Coz M, Azzopardi C, Stone MA, Füllgrabe C. Improving hearing-aid gains based on automatic speech recognition. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2020; 148:EL227. [PMID: 33003882 DOI: 10.1121/10.0001866] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/14/2020] [Accepted: 08/13/2020] [Indexed: 06/11/2023]
Abstract
This study provides proof of concept that automatic speech recognition (ASR) can be used to improve hearing aid (HA) fitting. A signal-processing chain consisting of a HA simulator, a hearing-loss simulator, and an ASR system normalizing the intensity of input signals was used to find HA-gain functions yielding the highest ASR intelligibility scores for individual audiometric profiles of 24 listeners with age-related hearing loss. Significantly higher aided speech intelligibility scores and subjective ratings of speech pleasantness were observed when the participants were fitted with ASR-established gains than when fitted with the gains recommended by the CAM2 fitting rule.
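The approach described above amounts to a search over candidate gain functions, scored by an ASR system operating on simulated aided-and-impaired speech. The sketch below shows only the shape of that search loop; the scorer is a crude audibility proxy standing in for a real ASR back end, and the bands, levels, thresholds, and candidate gains are invented for illustration.

```python
import itertools
import numpy as np

bands_hz = np.array([250, 500, 1000, 2000, 4000])  # assumed analysis bands
speech_levels = np.array([55, 58, 54, 48, 42])     # assumed unaided speech band levels (dB)
thresholds = np.array([20, 30, 45, 60, 70])        # assumed hearing profile (same dB scale)

def proxy_score(gains):
    """Stand-in for the ASR intelligibility score: fraction of bands made audible,
    minus a small penalty for gain beyond a 15 dB sensation-level target."""
    aided = speech_levels + np.asarray(gains)
    audible = aided > thresholds
    excess = np.clip(aided - (thresholds + 15), 0, None)
    return audible.mean() - 0.01 * excess.sum()

candidate_gains = range(0, 45, 5)                  # per-band gains to try (dB)
best = max(itertools.product(candidate_gains, repeat=len(bands_hz)), key=proxy_score)
print("selected gain function (dB per band):", dict(zip(bands_hz.tolist(), best)))
```

In the study itself the scorer is an ASR system fed through hearing-aid and hearing-loss simulators; the grid search here simply illustrates how gains can be chosen to maximize such a score.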
Collapse
Affiliation(s)
- Lionel Fontan
- Archean LABS, 20 place Prax-Paris, 82000 Montauban, France
| | - Maxime Le Coz
- Archean LABS, 20 place Prax-Paris, 82000 Montauban, France
| | - Charlotte Azzopardi
- Ecole d'Audioprothèse de Cahors, Université Toulouse III Paul Sabatier, 31062 Toulouse, France
| | - Michael A Stone
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, Oxford Road, Manchester, M139PL, United Kingdom
| | - Christian Füllgrabe
- School of Sport, Exercise and Health Sciences, Ashby Road, Loughborough University, Loughborough LE11 3TU, United Kingdom
| |
Collapse
|
28
|
Wang W, Stipp PN, Ouaras K, Fathi S, Huang YYS. Broad Bandwidth, Self-Powered Acoustic Sensor Created by Dynamic Near-Field Electrospinning of Suspended, Transparent Piezoelectric Nanofiber Mesh. SMALL (WEINHEIM AN DER BERGSTRASSE, GERMANY) 2020; 16:e2000581. [PMID: 32510871 DOI: 10.1002/smll.202000581] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/29/2020] [Revised: 04/18/2020] [Accepted: 04/20/2020] [Indexed: 06/11/2023]
Abstract
Freely suspended nanofibers, such as spider silk, harnessing their small diameter (sub-micrometer) and spanning fiber morphology, behave as a nonresonating acoustic sensor. The associated sensing characteristics, departing from conventional resonant acoustic sensors, could be of tremendous interest for the development of high sensitivity, broadband audible sensors for applications in environmental monitoring, biomedical diagnostics, and internet-of-things. Herein, a low packing density, freely suspended nanofiber mesh with a piezoelectric active polymer is fabricated, demonstrating a self-powered acoustic sensing platform with broad sensitivity bandwidth covering 200-5000 Hz at hearing-safe sound pressure levels. Dynamic near-field electrospinning is developed to fabricate in situ poled poly(vinylidene fluoride-co-trifluoroethylene) (P(VDF-TrFE)) nanofiber mesh (average fiber diameter ≈307 nm), exhibiting visible light transparency greater than 97%. With the ability to span the nanomesh across a suspension distance of 3 mm with minimized fiber stacking (≈18% fiber packing density), individual nanofibers can freely imitate the acoustic-driven fluctuation of airflow in a collective manner, where piezoelectricity is harvested at two-terminal electrodes for direct signal collection. Applications of the nanofiber mesh in music recording with good signal fidelity are demonstrated.
Collapse
Affiliation(s)
- Wenyu Wang
- The Nanoscience Center, Department of Engineering, University of Cambridge, Cambridge, CB3 0FF, UK
| | - Patrick N Stipp
- The Nanoscience Center, Department of Engineering, University of Cambridge, Cambridge, CB3 0FF, UK
- Institute of Robotics and Intelligent Systems, Swiss Federal Institute of Technology Zurich (ETH), Rämistrasse 101, Zürich, 8092, Switzerland
| | - Karim Ouaras
- The Nanoscience Center, Department of Engineering, University of Cambridge, Cambridge, CB3 0FF, UK
| | - Saeed Fathi
- The Nanoscience Center, Department of Engineering, University of Cambridge, Cambridge, CB3 0FF, UK
| | - Yan Yan Shery Huang
- The Nanoscience Center, Department of Engineering, University of Cambridge, Cambridge, CB3 0FF, UK
| |
Collapse
|
29
|
Van Eeckhoutte M, Folkeard P, Glista D, Scollie S. Speech recognition, loudness, and preference with extended bandwidth hearing aids for adult hearing aid users. Int J Audiol 2020; 59:780-791. [PMID: 32309996 DOI: 10.1080/14992027.2020.1750718] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
Objective: In contrast to the past, some current hearing aids can provide gain for frequencies above 4-5 kHz. This study assessed the effect of wider bandwidth on outcome measures using hearing aids fitted with the DSL v5.0 prescription. Design: There were two conditions: an extended bandwidth condition, for which the maximum available bandwidth was provided, and a restricted bandwidth condition, in which gain was reduced for frequencies above 4.5 kHz. Outcome measures were assessed in both conditions. Study sample: Twenty-four participants with mild-to-moderately-severe sensorineural high-frequency sloping hearing loss. Results: Providing extended bandwidth resulted in maximum audible output frequency values of 7.5 kHz on average for an input level of 65 dB SPL. An improvement in consonant discrimination scores (4.1%), attributable to better perception of /s/, /z/, and /t/ phonemes, was found in the extended bandwidth condition, but no significant change in loudness perception or preferred listening levels was found. Most listeners (79%) had either no preference (33%) or some preference for the extended bandwidth condition (46%). Conclusions: The results suggest that providing the maximum bandwidth available with modern hearing aids fitted with DSL v5.0, using targets from 0.25 to 8 kHz, can be beneficial for the tested population.
Collapse
Affiliation(s)
| | - Paula Folkeard
- National Centre for Audiology, Western University, London, Canada
| | - Danielle Glista
- National Centre for Audiology, Western University, London, Canada; Communication Sciences and Disorders, Faculty of Health Sciences, Western University, London, Canada
| | - Susan Scollie
- National Centre for Audiology, Western University, London, Canada; Communication Sciences and Disorders, Faculty of Health Sciences, Western University, London, Canada
| |
Collapse
|
30
|
Moore DR, Whiston H, Lough M, Marsden A, Dillon H, Munro KJ, Stone MA. FreeHear: A New Sound-Field Speech-in-Babble Hearing Assessment Tool. Trends Hear 2020; 23:2331216519872378. [PMID: 31599206 PMCID: PMC6787881 DOI: 10.1177/2331216519872378] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022] Open
Abstract
Pure-tone threshold audiometry is currently the standard test of hearing. However, in everyday life, we are more concerned with listening to speech of moderate loudness and, specifically, listening to a particular talker against a background of other talkers. FreeHear delivers strings of three spoken digits (0–9, not 7) against a background babble via three loudspeakers placed in front and to either side of a listener. FreeHear is designed as a rapid, quantitative initial assessment of hearing using an adaptive algorithm. It is designed especially for children and for testing listeners who are using hearing devices. In this first report on FreeHear, we present developmental considerations and protocols and results of testing 100 children (4–13 years old) and 23 adults (18–30 years old). Two of the six 4-year-olds and 91% of all older children completed full testing. Speech reception thresholds (SRTs) for digits and noise colocated at 0° or separated by 90° both improved linearly across 4 to 12 years old by 6 to 7 dB, with a further 2 dB improvement for the adults. These data suggested full maturation at approximately 15 years old. SRTs at 90° digits/noise separation were better by approximately 6 dB than SRTs colocated at 0°. This spatial release from masking did not change significantly across age. Test–retest reliability was similar for children and adults (standard deviation of 2.05–2.91 dB SRT), with a mean practice improvement of 0.04–0.98 dB. FreeHear shows promise as a clinical test for both children and adults. Further trials in people with hearing impairment are ongoing.
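The abstract does not specify FreeHear's adaptive algorithm, so the sketch below illustrates the general idea with a generic one-down/one-up staircase run against a simulated listener whose digit-triplet psychometric function is assumed; such a track converges on the SNR giving 50% correct, which is one common definition of an SRT.

```python
import numpy as np

rng = np.random.default_rng(1)

def listener_correct(snr_db, srt_true=-8.0, slope=0.8):
    """Simulated probability of repeating the digit triplet correctly (assumed shape)."""
    p = 1.0 / (1.0 + np.exp(-slope * (snr_db - srt_true)))
    return rng.random() < p

snr, step = 0.0, 2.0            # starting SNR and step size (dB), both assumed
reversals, direction = [], None
while len(reversals) < 8:       # stop after eight reversals
    correct = listener_correct(snr)
    new_direction = "down" if correct else "up"
    if direction is not None and new_direction != direction:
        reversals.append(snr)
    direction = new_direction
    snr += -step if correct else step

srt_estimate = float(np.mean(reversals[-6:]))
print(f"estimated SRT: {srt_estimate:.1f} dB SNR (simulated true SRT: -8.0 dB)")
```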
Collapse
Affiliation(s)
- David R Moore
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, UK; Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, OH, USA; Department of Otolaryngology, University of Cincinnati College of Medicine, OH, USA
| | - Helen Whiston
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, UK; Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, UK
| | - Melanie Lough
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, UK; Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, UK
| | - Antonia Marsden
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, UK; Centre for Biostatistics, School of Health Sciences, The University of Manchester, UK
| | - Harvey Dillon
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, UK; Australian Hearing Hub, Macquarie University, Macquarie Park, Australia
| | - Kevin J Munro
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, UK; Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, UK
| | - Michael A Stone
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, UK; Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, UK
| |
Collapse
|
31
|
Salorio-Corbetto M, Baer T, Stone MA, Moore BCJ. Effect of the number of amplitude-compression channels and compression speed on speech recognition by listeners with mild to moderate sensorineural hearing loss. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2020; 147:1344. [PMID: 32237835 DOI: 10.1121/10.0000804] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/08/2019] [Accepted: 02/09/2020] [Indexed: 06/11/2023]
Abstract
The use of a large number of amplitude-compression channels in hearing aids has potential advantages, such as the ability to compensate for variations in loudness recruitment across frequency and provide appropriate frequency-response shaping. However, sound quality and speech intelligibility could be adversely affected due to reduction of spectro-temporal contrast and distortion, especially when fast-acting compression is used. This study assessed the effect of the number of channels and compression speed on speech recognition when the multichannel processing was used solely to implement amplitude compression, and not for frequency-response shaping. Computer-simulated hearing aids were used. The frequency-dependent insertion gains for speech with a level of 65 dB sound pressure level were applied using a single filter before the signal was filtered into compression channels. Fast-acting (attack, 10 ms; release, 100 ms) or slow-acting (attack, 50 ms; release, 3000 ms) compression using 3, 6, 12, and 22 channels was applied subsequently. Using a sentence recognition task with speech in two- and eight-talker babble at three different signal-to-babble ratios (SBRs), 20 adults with sensorineural hearing loss were tested. The number of channels and compression speed had no significant effect on speech recognition, regardless of babble type or SBR.
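A minimal sketch of the kind of per-channel compression compared in the study is shown below: each band's envelope is smoothed with the stated fast-acting attack and release times (10 ms and 100 ms), and gain is reduced above a compression threshold. The filter design, compression threshold, ratio, and test signal are illustrative assumptions, not the simulated hearing aids used by the authors.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def smooth_envelope(env_db, fs, attack_s, release_s):
    """One-pole attack/release smoothing of a dB envelope."""
    a_att = np.exp(-1.0 / (attack_s * fs))
    a_rel = np.exp(-1.0 / (release_s * fs))
    out = np.empty_like(env_db)
    state = env_db[0]
    for i, e in enumerate(env_db):
        a = a_att if e > state else a_rel
        state = a * state + (1 - a) * e
        out[i] = state
    return out

def compress_band(x, fs, attack_s, release_s, thresh_db=50.0, ratio=2.0):
    """Apply compression above thresh_db with the given ratio to one band."""
    env_db = 20 * np.log10(np.abs(hilbert(x)) + 1e-9)
    env_db = smooth_envelope(env_db, fs, attack_s, release_s)
    gain_db = np.where(env_db > thresh_db, (thresh_db - env_db) * (1 - 1 / ratio), 0.0)
    return x * 10 ** (gain_db / 20)

fs = 16000
t = np.arange(fs) / fs
x = 10 ** (65 / 20) * np.sin(2 * np.pi * 1000 * t)     # 1 kHz test tone at a nominal 65 dB
edges = [(500, 1000), (1000, 2000), (2000, 4000)]      # three of the compression channels
y = np.zeros_like(x)
for lo, hi in edges:
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    y += compress_band(sosfilt(sos, x), fs, attack_s=0.010, release_s=0.100)  # fast-acting
print("input / output level (dB):",
      round(20 * np.log10(np.std(x)), 1), "/", round(20 * np.log10(np.std(y)), 1))
```

Slow-acting compression corresponds to changing the attack and release arguments to the 50 ms and 3000 ms values quoted in the abstract.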
Collapse
Affiliation(s)
- Marina Salorio-Corbetto
- Department of Experimental Psychology, University of Cambridge, Downing Street, Cambridge CB2 3EB, United Kingdom
| | - Thomas Baer
- Department of Experimental Psychology, University of Cambridge, Downing Street, Cambridge CB2 3EB, United Kingdom
| | - Michael A Stone
- Division of Human Communication, Development and Hearing, University of Manchester, Oxford Road, Manchester M13 9PL, United Kingdom
| | - Brian C J Moore
- Department of Experimental Psychology, University of Cambridge, Downing Street, Cambridge CB2 3EB, United Kingdom
| |
Collapse
|
32
|
Stone MA, Prendergast G, Canavan S. Measuring access to high-modulation-rate envelope speech cues in clinically fitted auditory prostheses. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2020; 147:1284. [PMID: 32113270 DOI: 10.1121/10.0000673] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/26/2019] [Accepted: 01/15/2020] [Indexed: 06/10/2023]
Abstract
The signal processing used to increase intelligibility for the hearing-impaired listener introduces distortions in the modulation patterns of a signal, so trade-offs have to be made between improved audibility and the loss of fidelity. Acoustic hearing impairment can cause reduced access to temporal fine structure (TFS), while cochlear implant processing, used to treat profound hearing impairment, has reduced ability to convey TFS, hence forcing greater reliance on modulation cues. Target speech mixed with a competing talker was split into 8-22 frequency channels. From each channel, separate low-rate (EModL, <16 Hz) and high-rate (EModH, <300 Hz) versions of the envelope modulation were extracted, which resulted in low or high intelligibility, respectively. The EModL modulations were preserved in channel valleys and cross-faded to EModH in channel peaks. The cross-faded signal modulated a tone carrier in each channel. The modulated carriers were summed across channels and presented to hearing aid (HA) and cochlear implant users. Their ability to access high-rate modulation cues, and the dynamic range of this access, was assessed. Clinically fitted hearing aids resulted in 10% lower intelligibility than simulated high-quality aids. Encouragingly, cochlear implantees were able to extract high-rate information over a dynamic range similar to that for the HA users.
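For a single channel, the envelope manipulation described above can be sketched as follows: a low-rate (<16 Hz) and a high-rate (<300 Hz) envelope are extracted, cross-faded so that the high-rate version dominates in channel peaks, and used to modulate a tone carrier at the channel centre frequency. The band limits, the peak/valley criterion, and the carrier are illustrative assumptions rather than the authors' processing.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 16000
t = np.arange(2 * fs) / fs
rng = np.random.default_rng(2)
band = sosfiltfilt(butter(4, [900, 1100], btype="bandpass", fs=fs, output="sos"),
                   rng.standard_normal(len(t)))        # stand-in for one channel's signal

env = np.abs(hilbert(band))                            # raw envelope of the channel
emod_l = sosfiltfilt(butter(4, 16, btype="lowpass", fs=fs, output="sos"), env)
emod_h = sosfiltfilt(butter(4, 300, btype="lowpass", fs=fs, output="sos"), env)

level_db = 20 * np.log10(np.abs(emod_l) + 1e-9)        # slow level track of the channel
alpha = np.clip((level_db - np.median(level_db)) / 10.0 + 0.5, 0.0, 1.0)  # ~1 in peaks
mixed = (1 - alpha) * emod_l + alpha * emod_h          # cross-faded envelope

carrier = np.sin(2 * np.pi * 1000 * t)                 # tone carrier at the channel centre
channel_out = np.clip(mixed, 0.0, None) * carrier
print("channel output RMS:", round(float(np.std(channel_out)), 3))
```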
Collapse
Affiliation(s)
- Michael A Stone
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, M13 9PL, United Kingdom
| | - Garreth Prendergast
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, M13 9PL, United Kingdom
| | - Shanelle Canavan
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, M13 9PL, United Kingdom
| |
Collapse
|
33
|
Abstract
Supplemental Digital Content is available in the text. Objectives: The objective of this study was to test the ability to achieve, maintain, and subjectively benefit from extended high-frequency amplification in a real-world use scenario, with a device that restores audibility for frequencies up to 10 kHz. Design: A total of 78 participants (149 ears) with mild to moderately-severe sensorineural hearing loss completed one of two studies conducted across eight clinical sites. Participants were fitted with a light-driven contact hearing aid (the Earlens system) that directly drives the tympanic membrane, allowing extended high-frequency output and amplification with minimal acoustic feedback. Cambridge Method for Loudness Equalization 2 - High Frequency (CAM2)-prescribed gains for experienced users were used for initial fitting, and adjustments were made when required according to participant preferences for loudness and comfort or when measures of functional gain (FG) indicated that more or less gain was needed. Participants wore the devices for an extended period. Prescribed versus adjusted output and gain, frequency-specific FG, and self-perceived benefit assessed with the Abbreviated Profile of Hearing Aid Benefit, and a custom questionnaire were documented. Self-perceived benefit results were compared with those for unaided listening and to ratings with participants’ own acoustic hearing aids. Results: The prescribed low-level insertion gain from 6 to 10 kHz averaged 53 dB across all ears, with a range from 26 to 86 dB. After adjustment, the gain from 6 to 10 kHz decreased to an average of 45 dB with a range from 16 to 86 dB. Measured FG averaged 39 dB from 6 to 10 kHz with a range from 11 to 62 dB. Abbreviated Profile of Hearing Aid Benefit results revealed a significant improvement in communication relative to unaided listening, averaging 28 to 32 percentage points for the background noise, reverberation, and ease of communication subscales. Relative to participants’ own hearing aids, the subscales ease of communication and aversiveness showed small but significant improvements for Earlens ranging from 6 to 7 percentage points. For the custom satisfaction questionnaire, most participants rated the Earlens system as better than their own hearing aids in most situations. Conclusions: Participants used and reported subjective benefit from the Earlens system. Most participants preferred slightly less gain at 6 to 10 kHz than prescribed for experienced users by CAM2, preferring similar gains to those prescribed for inexperienced users, but gains over the extended high frequencies were high relative to those that are currently available with acoustic hearing aids.
Collapse
|
34
|
Salorio-Corbetto M, Baer T, Moore BCJ. Comparison of Frequency Transposition and Frequency Compression for People With Extensive Dead Regions in the Cochlea. Trends Hear 2019; 23:2331216518822206. [PMID: 30803386 PMCID: PMC6330725 DOI: 10.1177/2331216518822206] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022] Open
Abstract
The objective was to determine the effects of two frequency-lowering algorithms (frequency transposition, FT, and frequency compression, FC) on audibility, speech identification, and subjective benefit, for people with high-frequency hearing loss and extensive dead regions (DRs) in the cochlea. A single-blind randomized crossover design was used. FT and FC were compared with each other and with a control condition (denoted ‘Control’) without frequency lowering, using hearing aids that were otherwise identical. Data were collected after at least 6 weeks of experience with a condition. Outcome measures were audibility, scores for consonant identification, scores for word-final /s, z/ detection (S test), sentence-in-noise intelligibility, and a questionnaire assessing self-perceived benefit (Spatial and Qualities of Hearing Scale). Ten adults with steeply sloping high-frequency hearing loss and extensive DRs were tested. FT and FC improved the audibility of some high-frequency sounds for 7 and 9 participants out of 10, respectively. At the group level, performance for FT and FC did not differ significantly from that for Control for any of the outcome measures. However, the pattern of consonant confusions varied across conditions. Bayesian analysis of the confusion matrices revealed a trend for FT to lead to more consistent error patterns than FC and Control. Thus, FT may have the potential to give greater benefit than Control or FC following extended experience or training.
Collapse
Affiliation(s)
| | - Thomas Baer
- 1 Department of Experimental Psychology, University of Cambridge, UK
| | - Brian C J Moore
- 1 Department of Experimental Psychology, University of Cambridge, UK
| |
Collapse
|
35
|
Monson BB, Rock J, Schulz A, Hoffman E, Buss E. Ecological cocktail party listening reveals the utility of extended high-frequency hearing. Hear Res 2019; 381:107773. [PMID: 31404807 DOI: 10.1016/j.heares.2019.107773] [Citation(s) in RCA: 61] [Impact Index Per Article: 10.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/22/2019] [Revised: 07/19/2019] [Accepted: 07/27/2019] [Indexed: 10/26/2022]
Abstract
A fundamental principle of neuroscience is that each species' and individual's sensory systems are tailored to meet the demands placed upon them by their environments and experiences. What has driven the upper limit of the human frequency range of hearing? The traditional view is that sensitivity to the highest frequencies (i.e., beyond 8 kHz) facilitates localization of sounds in the environment. However, this has yet to be demonstrated for naturally occurring non-speech sounds. An alternative view is that, for social species such as humans, the biological relevance of conspecific vocalizations has driven the development and retention of auditory system features. Here, we provide evidence for the latter theory. We evaluated the contribution of extended high-frequency (EHF) hearing to common ecological speech perception tasks. We found that restricting access to EHFs reduced listeners' discrimination of talker head orientation by approximately 34%. Furthermore, access to EHFs significantly improved speech recognition under listening conditions in which the target talker's head was facing the listener while co-located background talkers faced away from the listener. Our findings raise the possibility that sensitivity to the highest audio frequencies fosters communication and socialization of the human species. These findings suggest that loss of sensitivity to the highest frequencies may lead to deficits in speech perception. Such EHF hearing loss typically goes undiagnosed, but is widespread among the middle-aged population.
Collapse
Affiliation(s)
- Brian B Monson
- Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign, United States; Neuroscience Program, University of Illinois at Urbana-Champaign, United States.
| | - Jenna Rock
- Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign, United States
| | - Anneliese Schulz
- Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign, United States
| | - Elissa Hoffman
- Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign, United States
| | - Emily Buss
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, United States
| |
Collapse
|
36
|
Jesteadt W, Wróblewski M, High R. Contribution of frequency bands to the loudness of broadband sounds: Tonal and noise stimuli. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2019; 145:3586. [PMID: 31255128 PMCID: PMC6584171 DOI: 10.1121/1.5111751] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/21/2019] [Revised: 05/06/2019] [Accepted: 05/28/2019] [Indexed: 06/09/2023]
Abstract
Contributions of individual frequency bands to judgments of total loudness can be assessed by varying the level of each band independently from one presentation to the next and determining the relation between the change in level of each band and the loudness judgment. In a previous study, measures of perceptual weight obtained in this way for noise stimuli consisting of 15 bands showed greater weight associated with the highest and lowest bands than loudness models would predict. This was true even for noise with the long-term average speech spectrum, where the highest band contained little energy. One explanation is that listeners were basing decisions on some attribute other than loudness. The current study replicated earlier results for noise stimuli and included conditions using 15 tones located at the center frequencies of the noise bands. Although the two types of stimuli sound very different, the patterns of perceptual weight were nearly identical, suggesting that both sets of results are based on loudness judgments and that the edge bands play an important role in those judgments. The importance of the highest band was confirmed in a loudness-matching task involving all combinations of noise and tonal stimuli.
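The weighting method summarized above can be illustrated with a small simulation: band levels are jittered independently on each presentation, a simulated listener compares weighted sums of the levels, and regressing the judgments on the per-band level differences recovers the weights. All parameters below are invented, and the simple linear-probability regression is only one of several ways such weights can be estimated.

```python
import numpy as np

rng = np.random.default_rng(3)
n_bands, n_trials = 15, 2000
true_w = np.linspace(0.5, 1.5, n_bands)           # assumed internal weights
true_w /= true_w.sum()

# Independent level jitter per band (dB): difference between the two intervals.
dL = rng.normal(0.0, 2.0, size=(n_trials, n_bands))
internal = dL @ true_w + rng.normal(0.0, 0.5, n_trials)   # weighted difference + noise
choice = (internal > 0).astype(float)                     # 1 if interval 2 judged louder

# Recover the weights by regressing the judgments on the per-band level differences.
X = np.column_stack([np.ones(n_trials), dL])
beta, *_ = np.linalg.lstsq(X, choice, rcond=None)         # linear-probability approximation
est_w = beta[1:] / beta[1:].sum()
print("correlation between estimated and true weights:",
      round(float(np.corrcoef(est_w, true_w)[0, 1]), 2))
```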
Collapse
Affiliation(s)
- Walt Jesteadt
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska 68131, USA
| | - Marcin Wróblewski
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska 68131, USA
| | - Robin High
- Department of Biostatistics, College of Public Health, University of Nebraska Medical Center, Omaha, Nebraska 68198, USA
| |
Collapse
|
37
|
Moore BCJ, Shaw S, Griffiths S, Stone MA, Sherlock Z. Evaluation of a system for enhancing mobile telephone communication for people with hearing loss. Int J Audiol 2019; 58:417-426. [DOI: 10.1080/14992027.2019.1590655] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Affiliation(s)
| | | | - Stephen Griffiths
- Department of Health and Social Care Audiology Service, Nobles Hospital, Douglas, Isle of Man
| | - Michael A. Stone
- Manchester Centre for Audiology and Deafness, Manchester Academic Health Science Centre, University of Manchester, Manchester, UK
| | | |
Collapse
|
38
|
Abstract
OBJECTIVE The main objective of this study is to obtain data assessing normative scores, test-retest reliability, critical differences, and the effect of age for two closed-set consonant-discrimination tests. DESIGN The two tests are intended for use with children aged 2 to 8 years. The tests were evaluated using normal-hearing children within the appropriate age range. The tests were (1) the closed-set consonant confusion test (CCT) and (2) the consonant-discrimination subtest of the closed-set Chear Auditory Perception Test (CAPT). Both were word-identification tests using stimuli presented at a low fixed level, chosen to avoid ceiling effects while avoiding the use of background noise. Each test was administered twice. RESULTS All children in the age range 3 years 2 months to 8 years 11 months gave meaningful scores and were able to respond reliably using a computer mouse or a touch screen to select one of four response options displayed on a screen for each trial. Assessment of test-retest reliability showed strong agreement between the two test runs (intraclass correlation ≥ 0.8 for both tests). The critical differences were similar to those for other monosyllabic speech tests. Tables of these differences for the CCT and CAPT are provided for clinical use of the measures. Performance tended to improve with increasing age, especially for the CCT. Regression equations relating mean performance to age are given. CONCLUSIONS The CCT is appropriate for children with developmental age in the range 2 to 4.5 years and the CAPT is appropriate as a follow-on test from the CCT. If a child scores 80% or more on the CCT, they can be further tested using the CAPT, which contains more advanced vocabulary and more difficult contrasts. This allows the assessment of consonant perception ability and of changes over time or after an intervention.
Collapse
|
39
|
Stone MA, Visram A, Harte JM, Munro KJ. A Set of Time-and-Frequency-Localized Short-Duration Speech-Like Stimuli for Assessing Hearing-Aid Performance via Cortical Auditory-Evoked Potentials. Trends Hear 2019; 23:2331216519885568. [PMID: 31858885 PMCID: PMC6967206 DOI: 10.1177/2331216519885568] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2019] [Revised: 08/27/2019] [Accepted: 09/23/2019] [Indexed: 11/17/2022] Open
Abstract
Short-duration speech-like stimuli, for example those excised from running speech, can be used in the clinical setting to assess the integrity of the human auditory pathway at the level of the cortex. Modeling of the cochlear response to these stimuli demonstrated an imprecision in the location of the spectrotemporal energy, giving rise to uncertainty as to which part of a stimulus, and at what time, caused any evoked electrophysiological response. This article reports the development and assessment of four short-duration, limited-bandwidth stimuli centered at low, mid, mid-high, and high frequencies, suitable for free-field delivery and, in addition, reproduction via hearing aids. The durations were determined by the British Society of Audiology recommended procedure for measuring Cortical Auditory-Evoked Potentials. The levels and bandwidths were chosen via a computational model to produce uniform cochlear excitation over a width exceeding that likely in a worst-case hearing-impaired listener. These parameters produce robustness against errors in insertion gains, and variation in frequency responses, due to transducer imperfections, room modes, and age-related variation in meatal resonances. The parameter choice predicts large spectral separation between adjacent stimuli on the cochlea. Analysis of the signals processed by examples of recent digital hearing aids mostly shows similar levels of gain applied to each stimulus, independent of whether the stimulus was presented in isolation, in bursts, continuously, or embedded in continuous speech. These stimuli seem to be suitable for measuring hearing-aided Cortical Auditory-Evoked Potentials and have the potential to be of benefit in the clinical setting.
Collapse
Affiliation(s)
- Michael A. Stone
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, UK
- Manchester University Hospitals NHS Foundation Trust, UK
| | - Anisa Visram
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, UK
- Manchester University Hospitals NHS Foundation Trust, UK
| | - James M. Harte
- Interacoustics Research Unit, c/o Technical University of Denmark, Lyngby, Denmark
| | - Kevin J. Munro
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, UK
- Manchester University Hospitals NHS Foundation Trust, UK
| |
Collapse
|
40
|
Brennan MA, Lewis D, McCreery R, Kopun J, Alexander JM. Listening Effort and Speech Recognition with Frequency Compression Amplification for Children and Adults with Hearing Loss. J Am Acad Audiol 2018; 28:823-837. [PMID: 28972471 DOI: 10.3766/jaaa.16158] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
BACKGROUND Nonlinear frequency compression (NFC) can improve the audibility of high-frequency sounds by lowering them to a frequency where audibility is better; however, this lowering results in spectral distortion. Consequently, performance is a combination of the effects of increased access to high-frequency sounds and the detrimental effects of spectral distortion. Previous work has demonstrated positive benefits of NFC on speech recognition when NFC is set to improve audibility while minimizing distortion. However, the extent to which NFC impacts listening effort is not well understood, especially for children with sensorineural hearing loss (SNHL). PURPOSE To examine the impact of NFC on recognition and listening effort for speech in adults and children with SNHL. RESEARCH DESIGN Within-subject, quasi-experimental study. Participants listened to amplified nonsense words that were (1) frequency-lowered using NFC, (2) low-pass filtered at 5 kHz to simulate the restricted bandwidth (RBW) of conventional hearing aid processing, or (3) low-pass filtered at 10 kHz to simulate extended bandwidth (EBW) amplification. STUDY SAMPLE Fourteen children (8-16 yr) and 14 adults (19-65 yr) with mild-to-severe SNHL. INTERVENTION Participants listened to speech processed by a hearing aid simulator that amplified input signals to fit a prescriptive target fitting procedure. DATA COLLECTION AND ANALYSIS Participants were blinded to the type of processing. Participants' responses to each nonsense word were analyzed for accuracy and verbal-response time (VRT; listening effort). A multivariate analysis of variance and linear mixed model were used to determine the effect of hearing-aid signal processing on nonsense word recognition and VRT. RESULTS Both children and adults identified the nonsense words and initial consonants better with EBW and NFC than with RBW. The type of processing did not affect the identification of the vowels or final consonants. There was no effect of age on recognition of the nonsense words, initial consonants, medial vowels, or final consonants. VRT did not change significantly with the type of processing or age. CONCLUSION Both adults and children demonstrated improved speech recognition with access to the high-frequency sounds in speech. Listening effort as measured by VRT was not affected by access to high-frequency sounds.
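Nonlinear frequency compression, as discussed above, remaps input frequencies above a cut-off into a narrower output range. One commonly described form compresses log-frequency above the cut-off by a fixed ratio, which is what the sketch below implements; the cut-off and ratio are illustrative and are not the settings used in this study.

```python
import numpy as np

def nfc_map(f_in_hz, cutoff_hz=2000.0, ratio=2.0):
    """Map input frequencies to output frequencies under log-domain compression
    above cutoff_hz; frequencies below the cut-off are left unchanged."""
    f_in_hz = np.atleast_1d(np.asarray(f_in_hz, dtype=float))
    f_out = f_in_hz.copy()
    above = f_in_hz > cutoff_hz
    f_out[above] = cutoff_hz * (f_in_hz[above] / cutoff_hz) ** (1.0 / ratio)
    return f_out

freqs = np.array([1000.0, 2000.0, 4000.0, 6000.0, 8000.0, 10000.0])
for f_in, f_out in zip(freqs, nfc_map(freqs)):
    print(f"{f_in:7.0f} Hz -> {f_out:7.1f} Hz")
```

With these example settings, energy up to 10 kHz is mapped below about 4.5 kHz, which illustrates how lowering trades audibility of high-frequency cues against the spectral distortion discussed in the abstract.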
Affiliation(s)
- Marc A Brennan: Amplification and Perception Laboratory, Boys Town National Research Hospital, Omaha, NE
- Dawna Lewis: Amplification and Perception Laboratory, Boys Town National Research Hospital, Omaha, NE
- Ryan McCreery: Amplification and Perception Laboratory, Boys Town National Research Hospital, Omaha, NE
- Judy Kopun: Amplification and Perception Laboratory, Boys Town National Research Hospital, Omaha, NE
41
Abstract
OBJECTIVE Thresholds in the extended high-frequency (EHF) range (> 8 kHz) often worsen after otherwise successful stapedectomy. The aims of this study were to document the prevalence of hearing loss from 0.25 to 16 kHz after stapedectomy and the relative rates of transient and permanent EHF hearing loss. STUDY DESIGN Prospective, observational, longitudinal. SETTING Tertiary referral center. PATIENTS Thirty-nine patients who underwent 44 primary or revision stapes surgeries. INTERVENTION Hearing thresholds were measured at 0.25 to 16 kHz preoperatively, and at approximately 1 week, 1, 3, 6, and 12 months postoperatively. MAIN OUTCOME MEASURES Average threshold changes in bands of frequencies (0.25-1, 2-8, 9-11.2, 12.5-16 kHz) and the percentage of patients with a change in the highest frequency at which a hearing threshold could be measured were evaluated at each assessment. RESULTS A mean hearing loss was documented in the EHF range at all postoperative assessments. There was a decrease in the highest frequency at which a hearing threshold was measurable in 77% of patients at the first postoperative assessment, and despite some improvement over time, in 50% of patients 12 months postoperatively. CONCLUSION There is a significant incidence of EHF loss after stapedectomy. Although partial recovery often occurs, more than half of patients retain an EHF hearing loss 12 months postoperatively. As hearing loss in the EHF range is more common than loss at 4 kHz, EHF measurements may be a more sensitive model to compare surgical factors and evaluate pharmacologic interventions.
42
Effects of Modified Hearing Aid Fittings on Loudness and Tone Quality for Different Acoustic Scenes. Ear Hear 2018; 37:483-91. [PMID: 26928003 DOI: 10.1097/aud.0000000000000285] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVE To compare loudness and tone-quality ratings for sounds processed via a simulated five-channel compression hearing aid fitted using NAL-NL2 or using a modification of the fitting designed to be appropriate for the type of listening situation: speech in quiet, speech in noise, music, and noise alone. DESIGN Ratings of loudness and tone quality were obtained for stimuli presented via a loudspeaker in front of the participant. For normal-hearing participants, levels of 50, 65, and 80 dB SPL were used. For hearing-impaired participants, the stimuli were processed via a simulated hearing aid with five-channel fast-acting compression fitted using NAL-NL2 or using a modified fitting. Input levels to the simulated hearing aid were 50, 65, and 80 dB SPL. All participants listened with one ear plugged. For speech in quiet, the modified fitting was based on the CAM2B method. For speech in noise, the modified fitting used slightly (0 to 2 dB) decreased gains at low frequencies. For music, the modified fitting used increased gains (by 5 to 14 dB) at low frequencies. For noise alone, the modified fitting used decreased gains at all frequencies (by a mean of 1 dB at low frequencies increasing to 8 dB at high frequencies). RESULTS For speech in quiet, ratings of loudness with the NAL-NL2 fitting were slightly lower than the mean ratings for normal-hearing participants for all levels, while ratings with CAM2B were close to normal for the two lower levels, and slightly greater than normal for the highest level. Ratings of tone quality were close to the optimum value ("just right") for both fittings, except that the CAM2B fitting was rated as very slightly boomy for the 80-dB SPL level. For speech in noise, the ratings of loudness were very close to the normal values and the ratings of tone quality were close to the optimal value for both fittings and for all levels. For music, the ratings of loudness were close to the normal values for NAL-NL2 and slightly above normal for the modified fitting. The tone quality was rated as very slightly tinny for NAL-NL2 and very slightly boomy for the modified fitting. For noise alone, the NAL-NL2 fitting was rated as slightly louder than normal for all levels, while the modified fitting was rated as close to normal. Tone quality was rated as slightly sharper for the NAL-NL2 fitting than for the modified fitting. CONCLUSIONS Loudness and tone quality can sometimes be made slightly closer to "normal" by modifying gains for different listening situations. The modification for music required to achieve "normal" tone quality appears to be less than used in this study.
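Purely as a schematic illustration of the scene-dependent fitting modifications described above (not the study's fitting software or the CAM2B/NAL-NL2 prescriptions themselves), the sketch below applies per-band gain offsets to a baseline five-channel gain vector; the band layout, offset values, and function names are assumptions loosely based on the directions of change reported in the abstract.

```python
import numpy as np

# Illustrative per-band gain offsets (dB), low to high frequency; assumed values.
SCENE_OFFSETS_DB = {
    "speech_in_quiet": np.array([0, 0, 0, 0, 0], float),      # baseline fit left unchanged here
    "speech_in_noise": np.array([-2, -1, 0, 0, 0], float),    # slightly reduced low-frequency gain
    "music":           np.array([8, 5, 0, 0, 0], float),      # increased low-frequency gain
    "noise_alone":     np.array([-1, -2, -4, -6, -8], float), # gain reduced, more so at high frequencies
}

def scene_adjusted_gains(baseline_gains_db, scene):
    """Return modified insertion gains for a five-channel compressor (sketch only)."""
    return np.asarray(baseline_gains_db, float) + SCENE_OFFSETS_DB[scene]

# Example: a hypothetical baseline gain vector adjusted for the music scene
baseline = [20, 25, 30, 35, 38]  # dB, placeholder values
print(scene_adjusted_gains(baseline, "music"))
```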
43
Jesteadt W, Walker SM, Ogun OA, Ohlrich B, Brunette KE, Wróblewski M, Schmid KK. Relative contributions of specific frequency bands to the loudness of broadband sounds. J Acoust Soc Am 2017; 142:1597. [PMID: 28964048 PMCID: PMC5612800 DOI: 10.1121/1.5003778] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Listeners with normal hearing (NH) and sensorineural hearing loss (SNHL) were asked to compare pairs of noise stimuli and choose the louder noise in each pair. Each noise was made up of 15 frequency bands, each two ERBN (equivalent rectangular bandwidths) wide, whose levels varied independently over a 12-dB range from one presentation to the next. Mean levels of the bands followed the long-term average speech spectrum (LTASS) or were set to 43, 51, or 59 dB sound pressure level (SPL). The relative contribution of each band to the total loudness of the noise was determined by computing the correlation between the difference in levels for a given band on every trial and the listener's decision on that trial. Weights for SNHL listeners were governed by audibility and the spectrum of the noise stimuli, with bands near the spectral peak of the LTASS noise receiving greatest weight. NH listeners assigned greater weight to the lowest and highest bands, an effect that increased with overall level, but did not assign greater weight to bands near the LTASS peak. Additional loudness-matching and paired-comparison studies using stimuli missing one of the 15 bands showed a significant contribution by the highest band, but properties other than loudness may have contributed to the decisions.
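The weighting analysis described above lends itself to a compact sketch: for each band, correlate the trial-by-trial level difference between the two intervals with the listener's binary decision. The simulation below is illustrative only; the trial count, the "true" weights, and the decision-noise model are assumptions, not the study's data or code.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_bands = 1000, 15

# Per-trial level difference (dB) between the two intervals for each band;
# band levels vary independently over a 12-dB range in each interval.
d_levels = (rng.uniform(-6, 6, (n_trials, n_bands))
            - rng.uniform(-6, 6, (n_trials, n_bands)))

# Hypothetical listener whose judgement is a noisy weighted sum of the
# band-level differences (weights assumed here for illustration only).
true_w = np.linspace(1.0, 2.0, n_bands)
decision = (d_levels @ true_w + rng.normal(0, 10, n_trials)) > 0  # True: interval 1 louder

# Relative weight of each band: correlation between that band's level
# difference and the binary decision, computed across trials.
weights = np.array([np.corrcoef(d_levels[:, b], decision.astype(float))[0, 1]
                    for b in range(n_bands)])
weights /= weights.sum()  # normalize so the weights sum to 1
print(np.round(weights, 3))
```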
Affiliation(s)
- Walt Jesteadt: Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska 68131, USA
- Sara M Walker: Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska 68131, USA
- Oluwaseye A Ogun: Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska 68131, USA
- Brenda Ohlrich: Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska 68131, USA
- Katyarina E Brunette: Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska 68131, USA
- Marcin Wróblewski: Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska 68131, USA
- Kendra K Schmid: Department of Biostatistics, College of Public Health, University of Nebraska Medical Center, Omaha, Nebraska 68198, USA
44
Salorio-Corbetto M, Baer T, Moore BCJ. Quality ratings of frequency-compressed speech by participants with extensive high-frequency dead regions in the cochlea. Int J Audiol 2017; 56:106-120. [PMID: 27724057 PMCID: PMC5283379 DOI: 10.1080/14992027.2016.1234071] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2015] [Revised: 02/19/2016] [Accepted: 08/30/2016] [Indexed: 11/28/2022]
Abstract
OBJECTIVE The objective was to assess the degradation of speech sound quality produced by frequency compression for listeners with extensive high-frequency dead regions (DRs). DESIGN Quality ratings were obtained using values of the starting frequency (Sf) of the frequency compression both below and above the estimated edge frequency, fe, of each DR. Thus, the value of Sf often fell below the lowest value currently used in clinical practice. Several compression ratios (CRs) were used for each value of Sf. Stimuli were sentences processed via a prototype hearing aid based on the Phonak Exélia Art P. STUDY SAMPLE Five participants (eight ears) with extensive high-frequency DRs were tested. RESULTS Reductions of sound quality produced by frequency compression were small to moderate. Ratings decreased significantly with decreasing Sf and increasing CR. The mean ratings were lowest for the lowest Sf and highest CR. Ratings varied across participants, with one participant rating frequency compression lower than no frequency compression even when Sf was above fe. CONCLUSIONS Frequency compression degraded sound quality somewhat for this small group of participants with extensive high-frequency DRs. The degradation was greater for lower values of Sf relative to fe, and for greater values of CR. Results varied across participants.
Affiliation(s)
- Thomas Baer: Department of Experimental Psychology, University of Cambridge, Cambridge, UK
- Brian C. J. Moore: Department of Experimental Psychology, University of Cambridge, Cambridge, UK
45
Salorio-Corbetto M, Baer T, Moore BCJ. Evaluation of a Frequency-Lowering Algorithm for Adults With High-Frequency Hearing Loss. Trends Hear 2017; 21:2331216517734455. [PMID: 29027511 PMCID: PMC5642012 DOI: 10.1177/2331216517734455] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2017] [Revised: 08/31/2017] [Accepted: 09/02/2017] [Indexed: 11/15/2022] Open
Abstract
The objective was to determine the effects of a frequency-lowering algorithm (frequency composition, Fcomp) on consonant identification, word-final /s, z/ detection, the intelligibility of sentences in noise, and subjective benefit, for people with high-frequency hearing loss, including people with dead regions (DRs) in the cochlea. A single-blind randomized crossover design was used. Performance with Bernafon Acriva 9 hearing aids was compared with Fcomp off and Fcomp on. Participants wore the hearing aids in each condition in a counterbalanced order. Data were collected after at least 8 weeks of experience with a condition. Outcome measures were audibility, scores from the speech perception tests, and scores from a questionnaire comparing self-perceived hearing ability with Fcomp off and Fcomp on. Ten adults with mild to severe high-frequency hearing loss (seven with extensive DRs, one with patchy or restricted DRs, and two with no DR) were tested. Fcomp improved the audibility of high-frequency sounds for 6 out of 10 participants. There was no overall effect of Fcomp on consonant identification, but the pattern of consonant confusions varied across conditions and participants. For word-final /s, z/ detection, performance was significantly better with Fcomp on than with Fcomp off. Questionnaire scores showed no differences between conditions. In summary, Fcomp improved word-final /s, z/ detection. No benefit was found for the other measures.
Affiliation(s)
- Thomas Baer: Department of Experimental Psychology, University of Cambridge, UK
46
Schlittenlacher J, Moore BCJ. Discrimination of amplitude-modulation depth by subjects with normal and impaired hearing. J Acoust Soc Am 2016; 140:3487. [PMID: 27908066 DOI: 10.1121/1.4966117] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/28/2023]
Abstract
The loudness recruitment associated with cochlear hearing loss increases the perceived amount of amplitude modulation (AM), called "fluctuation strength." For normal-hearing (NH) subjects, fluctuation strength "saturates" when the AM depth is high. If such saturation occurs for hearing-impaired (HI) subjects, they may show poorer AM depth discrimination than NH subjects when the reference AM depth is high. To test this hypothesis, AM depth discrimination of a 4-kHz sinusoidal carrier, modulated at a rate of 4 or 16 Hz, was measured in a two-alternative forced-choice task for reference modulation depths, mref, of 0.5, 0.6, and 0.7. AM detection was assessed using mref = 0. Ten older HI subjects, and five young and five older NH subjects were tested. Psychometric functions were measured using five target modulation depths for each mref. For AM depth discrimination, the HI subjects performed more poorly than the NH subjects, both at 30 dB sensation level (SL) and 75 dB sound pressure level (SPL). However, for AM detection, the HI subjects performed better than the NH subjects at 30 dB SL; there was no significant difference between the HI and NH groups at 75 dB SPL. The results for the NH subjects were not affected by age.
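For reference, sinusoidal amplitude modulation of a carrier with depth m is conventionally written s(t) = [1 + m sin(2π f_m t)] sin(2π f_c t). The sketch below generates stimuli of the kind described (4-kHz carrier, 4- or 16-Hz modulation rate) for a given depth; the sampling rate and duration are assumptions, and the study's level calibration and gating are not shown.

```python
import numpy as np

def am_tone(m, fm=4.0, fc=4000.0, duration=1.0, fs=48000):
    """Sinusoidally amplitude-modulated tone with modulation depth m (0..1)."""
    t = np.arange(int(duration * fs)) / fs
    return (1.0 + m * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

# Reference and target intervals for a 2AFC depth-discrimination trial
reference = am_tone(m=0.5, fm=16)  # m_ref = 0.5
target = am_tone(m=0.6, fm=16)     # one of several target depths
```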
Affiliation(s)
- Josef Schlittenlacher: Department of Experimental Psychology, University of Cambridge, Downing Street, Cambridge CB2 3EB, England
- Brian C J Moore: Department of Experimental Psychology, University of Cambridge, Downing Street, Cambridge CB2 3EB, England
47
Moore BCJ, Sęk A. Preferred Compression Speed for Speech and Music and Its Relationship to Sensitivity to Temporal Fine Structure. Trends Hear 2016; 20:2331216516640486. [PMID: 27604778 PMCID: PMC5017572 DOI: 10.1177/2331216516640486] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2015] [Revised: 03/02/2016] [Accepted: 03/02/2016] [Indexed: 11/30/2022] Open
Abstract
Multichannel amplitude compression is widely used in hearing aids. The preferred compression speed varies across individuals. Moore (2008) suggested that reduced sensitivity to temporal fine structure (TFS) may be associated with preference for slow compression. This idea was tested using a simulated hearing aid. It was also assessed whether preferences for compression speed depend on the type of stimulus: speech or music. Twenty-two hearing-impaired subjects were tested, and the simulated hearing aid was fitted individually using the CAM2A method. On each trial, a given segment of speech or music was presented twice. One segment was processed with fast compression and the other with slow compression, and the order was balanced across trials. The subject indicated which segment was preferred and by how much. On average, slow compression was preferred over fast compression, more so for music, but there were distinct individual differences, which were highly correlated for speech and music. Sensitivity to TFS was assessed using the difference limen for frequency at 2000 Hz and by two measures of sensitivity to interaural phase at low frequencies. The results for the difference limens for frequency, but not the measures of sensitivity to interaural phase, supported the suggestion that preference for compression speed is affected by sensitivity to TFS.
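"Fast" versus "slow" compression refers to the attack and release time constants of the level estimator driving the gain. The sketch below is a minimal single-channel compressor with first-order attack/release smoothing, shown only to illustrate that distinction; it is not the simulated hearing aid or the CAM2A fitting used in the study, and all parameter values are assumptions.

```python
import numpy as np

def compress(x, fs, ratio=2.0, threshold_db=-30.0, attack=0.005, release=0.050):
    """Minimal single-channel compressor (illustrative only). Threshold is in
    dB re full scale; 'fast' vs 'slow' corresponds to short vs long attack/release."""
    a_att = np.exp(-1.0 / (attack * fs))
    a_rel = np.exp(-1.0 / (release * fs))
    env_db = -100.0
    y = np.empty_like(x)
    for i, s in enumerate(x):
        level_db = 20.0 * np.log10(abs(s) + 1e-9)
        coef = a_att if level_db > env_db else a_rel  # attack when the level rises
        env_db = coef * env_db + (1.0 - coef) * level_db
        # Gain reduction above threshold, set by the compression ratio
        gain_db = min(0.0, (threshold_db - env_db) * (1.0 - 1.0 / ratio))
        y[i] = s * 10.0 ** (gain_db / 20.0)
    return y

# Assumed example time constants: fast = 5 ms / 50 ms, slow = 50 ms / 3 s
fs = 16000
x = 0.5 * np.sin(2 * np.pi * 500 * np.arange(fs) / fs)
y_fast = compress(x, fs, attack=0.005, release=0.050)
y_slow = compress(x, fs, attack=0.050, release=3.0)
```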
Affiliation(s)
- Brian C J Moore: Department of Experimental Psychology, University of Cambridge, UK
- Aleksander Sęk: Department of Experimental Psychology, University of Cambridge, UK; Institute of Acoustics, Adam Mickiewicz University, Poznan, Poland
48
Lőcsei G, Pedersen JH, Laugesen S, Santurette S, Dau T, MacDonald EN. Temporal Fine-Structure Coding and Lateralized Speech Perception in Normal-Hearing and Hearing-Impaired Listeners. Trends Hear 2016; 20:2331216516660962. [PMID: 27601071 PMCID: PMC5014088 DOI: 10.1177/2331216516660962] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2015] [Accepted: 07/01/2016] [Indexed: 11/16/2022] Open
Abstract
This study investigated the relationship between speech perception performance in spatially complex, lateralized listening scenarios and temporal fine-structure (TFS) coding at low frequencies. Young normal-hearing (NH) listeners and two groups of elderly hearing-impaired (HI) listeners with mild or moderate hearing loss above 1.5 kHz participated in the study. Speech reception thresholds (SRTs) were estimated in the presence of either speech-shaped noise, two-, four-, or eight-talker babble played reversed, or a nonreversed two-talker masker. Target audibility was ensured by applying individualized linear gains to the stimuli, which were presented over headphones. The target and masker streams were lateralized to the same or to opposite sides of the head by introducing 0.7-ms interaural time differences between the ears. TFS coding was assessed by measuring frequency discrimination thresholds and interaural phase difference thresholds at 250 Hz. NH listeners had clearly better SRTs than the HI listeners. However, when maskers were spatially separated from the target, the amount of SRT benefit due to binaural unmasking differed only slightly between the groups. The frequency discrimination thresholds did not correlate with the SRTs, and the interaural phase difference thresholds did not correlate with the amount of masking release due to binaural unmasking. The results suggest that, although HI listeners with normal hearing thresholds below 1.5 kHz experienced difficulties with speech understanding in spatially complex environments, these limitations were unrelated to TFS coding abilities and were only weakly associated with a reduction in binaural-unmasking benefit for spatially separated competing sources.
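Lateralizing a source by a fixed interaural time difference amounts to delaying the waveform at one ear. As a rough illustration (not the study's stimulus code), the sketch below applies a 0.7-ms delay using an integer number of samples; a fractional-delay filter would be needed for exact values, and the sampling rate is an assumption.

```python
import numpy as np

def lateralize(x, itd_s=0.0007, fs=48000, lead="left"):
    """Create a stereo signal in which one ear leads the other by itd_s seconds.
    Integer-sample delay for simplicity (illustrative only)."""
    d = int(round(itd_s * fs))  # 0.7 ms is about 34 samples at 48 kHz
    delayed = np.concatenate([np.zeros(d), x])
    leading = np.concatenate([x, np.zeros(d)])
    left, right = (leading, delayed) if lead == "left" else (delayed, leading)
    return np.column_stack([left, right])

# Hypothetical usage: target lateralized left, masker right (opposite-side configuration)
# target_stereo = lateralize(target_mono, lead="left")
# masker_stereo = lateralize(masker_mono, lead="right")
```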
Affiliation(s)
- Gusztáv Lőcsei: Department of Electrical Engineering, Technical University of Denmark, Kongens Lyngby, Denmark
- Søren Laugesen: Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
- Sébastien Santurette: Department of Electrical Engineering, Technical University of Denmark, Kongens Lyngby, Denmark
- Torsten Dau: Department of Electrical Engineering, Technical University of Denmark, Kongens Lyngby, Denmark
- Ewen N MacDonald: Department of Electrical Engineering, Technical University of Denmark, Kongens Lyngby, Denmark
49
Extended High-Frequency Bandwidth Improves Speech Reception in the Presence of Spatially Separated Masking Speech. Ear Hear 2016; 36:e214-24. [PMID: 25856543 DOI: 10.1097/aud.0000000000000161] [Citation(s) in RCA: 63] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES The hypothesis that extending the audible frequency bandwidth beyond the range currently implemented in most hearing aids can improve speech understanding was tested for normal-hearing and hearing-impaired participants using target sentences and spatially separated masking speech. DESIGN The Hearing In Speech Test (HIST) speech corpus was re-recorded, and four masking talkers were recorded at a sample rate of 44.1 kHz. All talkers were male native speakers of American English. For each subject, the reception threshold for sentences (RTS) was measured in two spatial configurations. In the asymmetric configuration, the target was presented from -45° azimuth and two colocated masking talkers were presented from +45° azimuth. In the diffuse configuration, the target was presented from 0° azimuth and four masking talkers were each presented from a different azimuth: +45°, +135°, -135°, and -45°. The new speech sentences, masking materials, and configurations were presented using low-pass filter cutoff frequencies of 4, 6, 8, and 10 kHz. For the normal-hearing participants, stimuli were presented in the sound field using loudspeakers. For the hearing-impaired participants, the spatial configurations were simulated using earphones, and a multiband wide-dynamic-range compressor with a modified CAM2 fitting algorithm was used to compensate for each participant's hearing loss. RESULTS For the normal-hearing participants (N = 24, mean age 40 years), the RTS improved significantly by 3.0 dB when the bandwidth was increased from 4 to 10 kHz, and a significant improvement of 1.3 dB was obtained from extending the bandwidth from 6 to 10 kHz, in both spatial configurations. Hearing-impaired participants (N = 25, mean age 71 years) also showed a significant improvement in RTS with extended bandwidth, but the effect was smaller than for the normal-hearing participants. The mean decrease in RTS when the bandwidth was increased from 4 to 10 kHz was 1.3 dB for the asymmetric condition and 0.5 dB for the diffuse condition. CONCLUSIONS Extending bandwidth from 4 to 10 kHz can improve the ability of normal-hearing and hearing-impaired participants to understand target speech in the presence of spatially separated masking speech. Future studies of the benefits of extended high-frequency amplification should investigate other realistic listening situations, masker types, spatial configurations, and room reverberation conditions, to determine added value in overcoming the technical challenges associated with implementing a device capable of providing extended high-frequency amplification.
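The bandwidth conditions amount to low-pass filtering the full-bandwidth, 44.1-kHz-sampled speech at 4, 6, 8, or 10 kHz. A minimal sketch using a Butterworth filter is shown below; the filter type, order, and zero-phase filtering are assumptions, not the study's exact processing.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandwidth_condition(x, cutoff_hz, fs=44100, order=8):
    """Low-pass filter speech to simulate a restricted audible bandwidth (sketch)."""
    sos = butter(order, cutoff_hz, btype="lowpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

# Hypothetical usage: generate the four bandwidth conditions from a speech array
# conditions = {fc: bandwidth_condition(speech, fc) for fc in (4000, 6000, 8000, 10000)}
```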
50
Alexander JM. Nonlinear frequency compression: Influence of start frequency and input bandwidth on consonant and vowel recognition. J Acoust Soc Am 2016; 139:938-57. [PMID: 26936574 PMCID: PMC4769266 DOI: 10.1121/1.4941916] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/11/2015] [Revised: 01/26/2016] [Accepted: 02/01/2016] [Indexed: 05/28/2023]
Abstract
By varying parameters that control nonlinear frequency compression (NFC), this study examined how different ways of compressing inaudible mid- and/or high-frequency information at lower frequencies influences perception of consonants and vowels. Twenty-eight listeners with mild to moderately severe hearing loss identified consonants and vowels from nonsense syllables in noise following amplification via a hearing aid simulator. Low-pass filtering and the selection of NFC parameters fixed the output bandwidth at a frequency representing a moderately severe (3.3 kHz, group MS) or a mild-to-moderate (5.0 kHz, group MM) high-frequency loss. For each group (n = 14), effects of six combinations of NFC start frequency (SF) and input bandwidth [by varying the compression ratio (CR)] were examined. For both groups, the 1.6 kHz SF significantly reduced vowel and consonant recognition, especially as CR increased; whereas, recognition was generally unaffected if SF increased at the expense of a higher CR. Vowel recognition detriments for group MS were moderately correlated with the size of the second formant frequency shift following NFC. For both groups, significant improvement (33%-50%) with NFC was confined to final /s/ and /z/ and to some VCV tokens, perhaps because of listeners' limited exposure to each setting. No set of parameters simultaneously maximized recognition across all tokens.
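A common way to formalize NFC is an input-output frequency map that is linear below the start frequency (SF) and compressive above it; one widely cited form compresses log-frequency distance above SF by the compression ratio (CR). The sketch below implements that generic mapping only to show how SF and CR jointly set the output bandwidth; it is not necessarily the specific hearing-aid algorithm simulated in the study.

```python
import numpy as np

def nfc_map(f_in, sf, cr):
    """Illustrative NFC input-output map: linear below the start frequency sf,
    log-domain compression by the ratio cr above it."""
    f_in = np.asarray(f_in, dtype=float)
    compressed = sf * (f_in / sf) ** (1.0 / cr)
    return np.where(f_in <= sf, f_in, compressed)

# Example: with SF = 1.6 kHz and CR = 3, a 10-kHz input maps to roughly 2.95 kHz,
# keeping the output within a low output-bandwidth limit such as 3.3 kHz.
print(nfc_map([1000, 1600, 4000, 10000], sf=1600, cr=3.0))
```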
Affiliation(s)
- Joshua M Alexander: Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana 47907, USA