1
Ayas M, Muzaffar J, Phillips V, Bance M. Comparison of behind-the-ear vs. off-the-ear speech processors in cochlear implants: A systematic review and narrative synthesis. PLoS One 2025; 20:e0318218. PMID: 39869588; PMCID: PMC11771919; DOI: 10.1371/journal.pone.0318218.
Abstract
BACKGROUND Cochlear implants (CI) with off-the-ear (OTE) and behind-the-ear (BTE) speech processors differ in user experience and audiological performance, impacting speech perception, comfort, and satisfaction. OBJECTIVES This systematic review explores audiological outcomes (speech perception in quiet and noise) and non-audiological factors (device handling, comfort, cosmetics, overall satisfaction) of OTE and BTE speech processors in CI recipients. METHODS We conducted a systematic review following PRISMA-S guidelines, examining Medline, Embase, Cochrane Library, Scopus, and ProQuest Dissertations and Theses. Data encompassed recipient characteristics, processor usage, speech perception, and non-audiological factors. Studies were assessed for quality and risk of bias using the Newcastle-Ottawa Scale (NOS). RESULTS Nine studies involving 204 CI recipients, with a mean age of 49.01 years and 6.62 years of processor use, were included. Audiological results indicated comparable performance in quiet environments, with a slight preference for OTE in noisy conditions. For non-audiological factors, OTE processors excelled in comfort, handling, and aesthetics, leading to higher satisfaction. More data on medical complications and long-term implications are needed. CONCLUSION OTE processors may offer comparable performance to BTE processors in certain conditions, though not universally across all audiological outcomes. Interpretation depends on settings, processor generation, and testing paradigms. However, non-audiological factors might favour OTE. Understanding the current literature may guide professionals in selecting suitable processors for CI recipients.
Affiliation(s)
- Muhammed Ayas
- College of Health Sciences, University of Sharjah, Sharjah, United Arab Emirates
- Department of Clinical Neurosciences, University of Cambridge, Cambridge, United Kingdom
- Cambridge Hearing Group, University of Cambridge, Cambridge, United Kingdom
- Research Institute of Medical and Health Sciences (RIMHS), University of Sharjah, Sharjah, United Arab Emirates
- Jameel Muzaffar
- Cambridge Hearing Group, University of Cambridge, Cambridge, United Kingdom
- Department of ENT, University Hospitals Birmingham NHS Foundation Trust, Birmingham, United Kingdom
- Department of Applied Health Sciences, University of Birmingham, Birmingham, United Kingdom
- Veronica Phillips
- Medical Library, University of Cambridge School of Clinical Medicine, Cambridge, United Kingdom
- Manohar Bance
- Department of Clinical Neurosciences, University of Cambridge, Cambridge, United Kingdom
- Cambridge Hearing Group, University of Cambridge, Cambridge, United Kingdom
- Department of ENT, Cambridge University Hospitals NHS Foundation Trust, Cambridge, United Kingdom
2
Goverts ST, Best V, Bouwmeester J, Smits C, Colburn HS. Acoustic Realism of Clinical Speech-in-Noise Testing: Parameter Ranges of Speech-Likeness, Interaural Coherence, and Interaural Differences. Trends Hear 2025; 29:23312165251336625. PMID: 40329585; PMCID: PMC12059433; DOI: 10.1177/23312165251336625.
Abstract
Speech-in-noise testing is a valuable component of audiological examination that can provide estimates of a listener's ability to communicate in their everyday life. It has long been recognized, however, that the acoustics of real-world environments are complex and variable and not well represented by a typical clinical test setup. The first aim of this study was to quantify real-world environments in terms of several acoustic parameters that may be relevant for speech understanding (namely speech-likeness, interaural coherence, and interaural time and level differences). Earlier acoustic analyses of binaural recordings in natural environments were extended to binaural re-creations of natural environments that included conversational speech embedded in recorded backgrounds and allowed a systematic manipulation of signal-to-noise ratio. The second aim of the study was to examine these same parameters in typical clinical speech-in-noise tests and consider the "acoustic realism" of such tests. We confirmed that the parameter spaces of natural environments are poorly covered by those of the most commonly used clinical test with one frontal loudspeaker. We also demonstrated that a simple variation of the clinical test, which uses two spatially separated loudspeakers to present speech and noise, leads to better coverage of the parameter spaces of natural environments. Overall, the results provide a framework for characterizing different listening environments that may guide future efforts to increase the real-world relevance of clinical speech-in-noise testing.
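The parameters examined in this study (interaural coherence, interaural time and level differences) are commonly estimated frame by frame from binaural recordings. As a hedged illustration only, not the authors' analysis code, the Python sketch below computes broadband estimates for a single frame; the ±1 ms lag range, the circular-shift correlation, and the function name are assumptions made here.

```python
# Illustrative sketch, not the study's implementation: broadband interaural
# parameters for one frame of a binaural recording. `left` and `right` are
# equal-length 1-D arrays sampled at `fs` Hz.
import numpy as np

def interaural_parameters(left, right, fs, max_itd_ms=1.0):
    """Return (coherence, itd_seconds, ild_db) for one binaural frame."""
    max_lag = int(round(max_itd_ms * 1e-3 * fs))
    lags = np.arange(-max_lag, max_lag + 1)
    denom = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2)) + 1e-12
    # Normalized cross-correlation over physiologically plausible ITD lags
    # (circular shift is used for brevity; adequate when the frame is much
    # longer than the lag range).
    xcorr = np.array([np.sum(left * np.roll(right, k)) for k in lags]) / denom
    coherence = float(np.max(np.abs(xcorr)))                # interaural coherence
    itd = float(lags[int(np.argmax(np.abs(xcorr)))]) / fs   # interaural time difference (s)
    rms_left = np.sqrt(np.mean(left ** 2)) + 1e-12
    rms_right = np.sqrt(np.mean(right ** 2)) + 1e-12
    ild = 20.0 * np.log10(rms_left / rms_right)             # interaural level difference (dB)
    return coherence, itd, ild
```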
Affiliation(s)
- S Theo Goverts
- Department of Otolaryngology-Head and Neck Surgery, Section Ear & Hearing, Amsterdam Public Health Research Institute, Amsterdam UMC Location Vrije Universiteit, Amsterdam, the Netherlands
- Virginia Best
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA, USA
- Julia Bouwmeester
- Department of Otolaryngology-Head and Neck Surgery, Section Ear & Hearing, Amsterdam Public Health Research Institute, Amsterdam UMC Location Vrije Universiteit, Amsterdam, the Netherlands
- Cas Smits
- Department of Otolaryngology-Head and Neck Surgery, Amsterdam Public Health Research Institute, Amsterdam UMC Location University of Amsterdam, Amsterdam, the Netherlands
- H Steven Colburn
- Department of Biomedical Engineering, Boston University, Boston, MA, USA
3
Parmar BJ, Salorio-Corbetto M, Picinali L, Mahon M, Nightingale R, Somerset S, Cullington H, Driver S, Rocca C, Jiang D, Vickers D. Virtual reality games for spatial hearing training in children and young people with bilateral cochlear implants: the "Both Ears (BEARS)" approach. Front Neurosci 2024; 18:1491954. PMID: 39697774; PMCID: PMC11653081; DOI: 10.3389/fnins.2024.1491954.
Abstract
Spatial hearing relies on the encoding of perceptual sound location cues in space. It is critical for communicating in background noise and for understanding where sounds are coming from (sound localization). Although there are some monaural spatial hearing cues (i.e., from one ear), most of our spatial hearing skills require binaural hearing (i.e., from two ears). Cochlear implants (CIs) are often the most appropriate rehabilitation for individuals with severe-to-profound hearing loss, with those aged 18 years and younger typically receiving bilateral implants (one in each ear). As experience with bilateral hearing increases, individuals tend to improve their spatial hearing skills. Extensive research demonstrates that training can enhance sound localization, speech understanding in noise, and music perception. The BEARS (Both Ears) approach utilizes Virtual Reality (VR) games specifically designed for young people with bilateral CIs to train and improve spatial hearing skills. This paper outlines the BEARS approach by: (i) emphasizing the need for more robust and engaging rehabilitation techniques, (ii) presenting the BEARS logic model that underpins the intervention, and (iii) detailing the assessment tools that will be employed in a clinical trial to evaluate the effectiveness of BEARS in alignment with the logic model.
Affiliation(s)
- Bhavisha J. Parmar
- SOUND Lab, Department of Clinical Neurosciences, University of Cambridge, Cambridge, United Kingdom
- Ear Institute, University College London, London, United Kingdom
- Marina Salorio-Corbetto
- SOUND Lab, Department of Clinical Neurosciences, University of Cambridge, Cambridge, United Kingdom
- Lorenzo Picinali
- Dyson School of Design Engineering, Faculty of Engineering, Imperial College London, London, United Kingdom
- Merle Mahon
- Division of Psychology and Language Sciences, Faculty of Brain Sciences, University College London, London, United Kingdom
- Ruth Nightingale
- Division of Psychology and Language Sciences, Faculty of Brain Sciences, University College London, London, United Kingdom
- Sarah Somerset
- School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Nottingham Hearing Biomedical Research Centre, University of Nottingham, Nottingham, United Kingdom
- Helen Cullington
- Auditory Implant Service, University of Southampton, Southampton, United Kingdom
- Sandra Driver
- St Thomas' Hearing Implant Centre, Guy's and St Thomas' NHS Foundation Trust, London, United Kingdom
- Christine Rocca
- St Thomas' Hearing Implant Centre, Guy's and St Thomas' NHS Foundation Trust, London, United Kingdom
- Dan Jiang
- St Thomas' Hearing Implant Centre, Guy's and St Thomas' NHS Foundation Trust, London, United Kingdom
- Centre for Craniofacial and Regenerative Biology, Faculty of Dentistry, Oral & Craniofacial Sciences, King's College London, London, United Kingdom
- Deborah Vickers
- SOUND Lab, Department of Clinical Neurosciences, University of Cambridge, Cambridge, United Kingdom
4
Wagner TM, Wagner L, Plontke SK, Rahne T. Enhancing Cochlear Implant Outcomes across Age Groups: The Interplay of Forward Focus and Advanced Combination Encoder Coding Strategies in Noisy Conditions. J Clin Med 2024; 13:1399. PMID: 38592239; PMCID: PMC10931918; DOI: 10.3390/jcm13051399.
Abstract
Background: Hearing in noise is challenging for cochlear implant users and requires significant listening effort. This study investigated the influence of ForwardFocus and the number of maxima of the Advanced Combination Encoder (ACE) strategy, as well as age, on speech recognition threshold and listening effort in noise. Methods: A total of 33 cochlear implant recipients were included (age ≤ 40 years: n = 15, >40 years: n = 18). The Oldenburg Sentence Test was used to measure 50% speech recognition thresholds (SRT50) in fluctuating and stationary noise. Speech was presented frontally, while three frontal or rear noise sources were used, and the number of ACE maxima varied between 8 and 12. Results: ForwardFocus significantly improved the SRT50 when noise was presented from the back, independent of subject age. The use of 12 maxima further improved the SRT50 when ForwardFocus was activated and when noise and speech were presented frontally. Listening effort was significantly worse in the older age group compared to the younger age group and was reduced by ForwardFocus but not by increasing the number of ACE maxima. Conclusion: ForwardFocus can improve speech recognition in noisy environments and reduce listening effort, especially in older cochlear implant users.
Affiliation(s)
- Telse M. Wagner
- Department of Otorhinolaryngology, University Medicine Halle, Ernst-Grube-Straße 40, 06120 Halle (Saale), Germany
5
Miles K, Best V, Buchholz JM. Feasibility of an Adaptive Version of the Everyday Conversational Sentences in Noise Test. J Speech Lang Hear Res 2024; 67:680-687. PMID: 38324271; PMCID: PMC11000810; DOI: 10.1044/2023_jslhr-23-00507.
Abstract
PURPOSE To investigate potential reasons for the mismatch between laboratory/clinic-based sentence-in-noise performance and real-world listening abilities, we recently developed a corpus of natural, spontaneously spoken speech with three vocal effort levels (Everyday Conversational Sentences in Noise [ECO-SiN]). Here, we examined the feasibility of using the ECO-SiN corpus for adaptive speech-in-noise testing, which might be a desirable format in certain situations (e.g., during a clinical visit). METHOD Ten young, normal-hearing adults, along with 20 older adults with hearing loss, participated in the study. Speech reception thresholds (SRTs) were obtained using ECO-SiN sentences and systematically compared with SRTs obtained using traditional Bamford-Kowal-Bench-like sentences. RESULTS The properties of the test compared favorably with those of a standard test based on scripted and clearly spoken sentences. Moreover, whereas normal-hearing listeners received a benefit from an increase in vocal effort, the participants with hearing loss showed a disbenefit that increased with increasing hearing loss. CONCLUSION The adaptive version of the ECO-SiN test is feasible for research and clinical testing. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.25146338.
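For context, an adaptive speech-in-noise test adjusts the SNR from trial to trial until it converges on the listener's SRT. The sketch below is a generic one-up/one-down track targeting 50% sentence correctness, under assumed parameters (2 dB step, 20 trials, a simulated listener); it is not the procedure used for the ECO-SiN test.

```python
# Generic sketch of an adaptive speech-reception-threshold (SRT) track,
# not the ECO-SiN procedure: a one-up/one-down rule targeting 50% correct.
import random

def adaptive_srt(score_sentence, start_snr=0.0, step_db=2.0, n_trials=20):
    """Run an adaptive track; `score_sentence(snr)` presents and scores one sentence."""
    snr, track = start_snr, []
    for _ in range(n_trials):
        correct = score_sentence(snr)          # True if the sentence was repeated correctly
        track.append(snr)
        snr += -step_db if correct else step_db
    tail = track[len(track) // 2:]             # average the later, converged part of the track
    return sum(tail) / len(tail)

# Toy usage: a simulated listener whose true SRT is -4 dB SNR.
def simulated_listener(snr, true_srt=-4.0, slope_db=4.0):
    p_correct = 1.0 / (1.0 + 10 ** (-(snr - true_srt) / slope_db))
    return random.random() < p_correct

print(f"Estimated SRT: {adaptive_srt(simulated_listener):.1f} dB SNR")
```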
Affiliation(s)
- Kelly Miles
- ECHO Laboratory, Macquarie University, Sydney, New South Wales, Australia
- Virginia Best
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA, USA
- Jörg M. Buchholz
- ECHO Laboratory, Macquarie University, Sydney, New South Wales, Australia
6
Jorgensen E, Wu YH. Effects of entropy in real-world noise on speech perception in listeners with normal hearing and hearing loss. J Acoust Soc Am 2023; 154:3627-3643. PMID: 38051522; PMCID: PMC10699887; DOI: 10.1121/10.0022577.
Abstract
Hearing aids show more benefit in traditional laboratory speech-in-noise tests than in real-world noisy environments. Real-world noise comprises a large range of acoustic properties that vary randomly and rapidly between and within environments, making quantifying real-world noise and using it in experiments and clinical tests challenging. One approach is to use acoustic features and statistics to quantify acoustic properties of real-world noise and control for them or measure their relationship to listening performance. In this study, the complexity of real-world noise from different environments was quantified using entropy in both the time and frequency domains. A distribution of noise segments from low to high entropy was extracted. Using a trial-by-trial design, listeners with normal hearing and hearing loss (in aided and unaided conditions) repeated back sentences embedded in these noise segments. Entropy significantly affected speech perception, with a larger effect of entropy in the time domain than the frequency domain, a larger effect for listeners with normal hearing than for listeners with hearing loss, and a larger effect for listeners with hearing loss in the aided than unaided condition. Speech perception also differed between most environment types. Combining entropy with the environment type improved predictions of speech perception above the environment type alone.
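Entropy of a noise segment can be defined in several ways; purely as an illustration (the study's exact definitions may differ), the sketch below computes a time-domain entropy from the sample-amplitude histogram and a frequency-domain entropy from the normalized power spectrum of a segment.

```python
# Illustrative sketch only; the paper's exact entropy definitions may differ.
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (bits) of a discrete probability distribution."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def segment_entropies(segment, n_bins=64):
    """Return (time-domain entropy, frequency-domain entropy) for one noise segment."""
    # Time domain: entropy of the sample-amplitude distribution
    hist, _ = np.histogram(segment, bins=n_bins)
    p_time = hist / max(hist.sum(), 1)
    # Frequency domain: entropy of the normalized power spectrum
    power = np.abs(np.fft.rfft(segment)) ** 2
    p_freq = power / max(power.sum(), 1e-12)
    return shannon_entropy(p_time), shannon_entropy(p_freq)
```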
Affiliation(s)
- Erik Jorgensen
- Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, Wisconsin 53706, USA
- Yu-Hsiang Wu
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, Iowa 52242, USA
7
Ruiz Callejo D, Boets B. A systematic review on speech-in-noise perception in autism. Neurosci Biobehav Rev 2023; 154:105406. PMID: 37797728; DOI: 10.1016/j.neubiorev.2023.105406.
Abstract
Individuals with autism spectrum disorder (ASD) exhibit atypical speech-in-noise (SiN) perception, but the scope of these impairments has not been clearly defined. We conducted a systematic review of the behavioural research on SiN perception in ASD, using a comprehensive search strategy across databases (Embase, Pubmed, Web of Science, APA PsycArticles, LLBA, clinicaltrials.gov and PsyArXiv). We retained 20 studies, which generally revealed intact speech perception in stationary noise, while impairments in speech discrimination were found in temporally modulated noise, concurrent speech, and audiovisual speech perception. An association with auditory temporal processing deficits, exacerbated by suboptimal language skills, is shown. Speech-in-speech perception might be further impaired due to deficient top-down processing of speech. Further research is needed to address remaining challenges and gaps in our understanding of these impairments, including the developmental aspects of SiN processing in ASD, and the impact of gender and social attentional orienting on this ability. Our findings have important implications for improving communication in ASD, both in daily interactions and in clinical and educational settings.
Affiliation(s)
- Diego Ruiz Callejo
- University Psychiatric Center KU Leuven, Leuven, Belgium; Center for Developmental Psychiatry, Department of Neurosciences, KU Leuven, Leuven, Belgium.
- Bart Boets
- University Psychiatric Center KU Leuven, Leuven, Belgium; Center for Developmental Psychiatry, Department of Neurosciences, KU Leuven, Leuven, Belgium; Leuven Autism Research (LauRes), KU Leuven, Leuven, Belgium; Leuven Brain Institute (LBI), KU Leuven, Leuven, Belgium
8
Hey M, Mewes A, Hocke T. Speech comprehension in noise-considerations for ecologically valid assessment of communication skills ability with cochlear implants. HNO 2023; 71:26-34. PMID: 36480047; PMCID: PMC10409840; DOI: 10.1007/s00106-022-01232-3.
Abstract
BACKGROUND Nowadays, cochlear implant (CI) patients mostly show good to very good speech comprehension in quiet, but there are known problems with communication in everyday noisy situations. There is thus a need for ecologically valid measurements of speech comprehension in real-life listening situations for hearing-impaired patients. The additional methodological effort must be balanced with clinical human and spatial resources. This study investigates possible simplifications of a complex measurement setup. METHODS The study included 20 adults with postlingual onset of hearing impairment, recruited from long-term follow-up after CI fitting. The complexity of the investigated listening situations was varied by changing the spatiality of the noise sources and the temporal characteristics of the noise. To compare different measurement setups, speech reception thresholds (SRT) were measured unilaterally with different CI processors and settings. Ten normal-hearing subjects served as a reference. RESULTS In a complex listening situation with four loudspeakers, SRTs for CI subjects were up to 8 dB poorer than for the control group. For CI subjects, this SRT correlated with the SRT for the condition with a frontal speech signal and a fluctuating interfering signal from the side, with R² = 0.69. For conditions with stationary interfering signals, R² values < 0.2 were found. CONCLUSION There is no universal solution for all audiometric questions with respect to the spatiality and temporal characteristics of noise sources. In the investigated context, simplification of the complex spatial audiometric setting was possible when fluctuating competing signals were used.
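As a small worked example of the statistic reported here, the snippet below computes the coefficient of determination (R²) between SRTs measured in two setups for the same subjects; the SRT values are invented purely for illustration and are not the study's data.

```python
# Hypothetical SRT values (dB SNR) for the same subjects in two setups;
# the numbers are invented for illustration only.
import numpy as np

complex_setup = np.array([-2.1, 0.5, 3.2, -1.0, 4.8, 1.1])
simple_setup = np.array([-1.5, 1.0, 2.9, -0.4, 5.3, 0.7])

r = np.corrcoef(complex_setup, simple_setup)[0, 1]   # Pearson correlation
print(f"R^2 = {r ** 2:.2f}")                         # coefficient of determination
```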
Affiliation(s)
- Matthias Hey
- Department of Otorhinolaryngology, Head and Neck Surgery, Audiology, UKSH, Campus Kiel, Arnold-Heller-Straße 14, 24105, Kiel, Germany.
- Alexander Mewes
- Department of Otorhinolaryngology, Head and Neck Surgery, Audiology, UKSH, Campus Kiel, Arnold-Heller-Straße 14, 24105, Kiel, Germany
9
Hey M, Mewes A, Hocke T. [Speech comprehension in noise-considerations for ecologically valid assessment of communication skills ability with cochlear implants. German version]. HNO 2022; 70:861-869. PMID: 36301326; PMCID: PMC9691490; DOI: 10.1007/s00106-022-01234-1.
Affiliation(s)
- Matthias Hey
- Department of Otorhinolaryngology, Head and Neck Surgery, Audiology, UKSH, Campus Kiel, Arnold-Heller-Straße 14, 24105 Kiel, Germany.
- Alexander Mewes
- Department of Otorhinolaryngology, Head and Neck Surgery, Audiology, UKSH, Campus Kiel, Arnold-Heller-Straße 14, 24105 Kiel, Germany
10
Hey M, Hersbach AA, Hocke T, Mauger SJ, Böhnke B, Mewes A. Ecological Momentary Assessment to Obtain Signal Processing Technology Preference in Cochlear Implant Users. J Clin Med 2022; 11:2941. PMID: 35629065; PMCID: PMC9147494; DOI: 10.3390/jcm11102941.
Abstract
Background: To assess the performance of cochlear implant users, speech comprehension benefits are generally measured in controlled sound room environments in the laboratory. For field-based assessment of preference, questionnaires are generally used. Since questionnaires are typically administered at the end of an experimental period, they can be inaccurate due to retrospective recall. An alternative known as ecological momentary assessment (EMA) has begun to be used for clinical research. The objective of this study was to determine the feasibility of using EMA to obtain in-the-moment responses from cochlear implant users describing their technology preference in specific acoustic listening situations. Methods: Over a two-week period, eleven adult cochlear implant users compared two listening programs containing different sound processing technologies during everyday take-home use. Their task was to compare the programs and vote for the one they preferred. Results: A total of 205 votes were collected from acoustic environments that were classified into six listening scenes. The analysis yielded different patterns of voting among the subjects. Two subjects had a consistent preference for one sound processing technology across all acoustic scenes, three subjects changed their preference based on the acoustic scene, and six subjects had no conclusive preference for either technology. Conclusion: The results show that EMA is suitable for quantifying real-world self-reported preference, revealing inter-subject variability across listening environments. However, there is a risk that patients will not provide sufficient spontaneous feedback; one improvement for future research would be to prompt participants for responses to improve response rates.
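To make the analysis concrete, the sketch below tallies in-the-moment preference votes per listening scene; the scene labels, program names, and vote records are hypothetical and only illustrate the kind of data structure such an EMA comparison produces.

```python
# Hypothetical tally of EMA preference votes per listening scene; the scene
# names and vote records are illustrative assumptions only.
from collections import Counter, defaultdict

votes = [
    ("speech in quiet", "program A"),
    ("speech in noise", "program B"),
    ("speech in noise", "program B"),
    ("music", "program A"),
]

by_scene = defaultdict(Counter)
for scene, program in votes:
    by_scene[scene][program] += 1            # count votes per scene and program

for scene, counts in by_scene.items():
    print(scene, dict(counts))
```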
Affiliation(s)
- Matthias Hey
- Audiology, ENT Clinic, UKSH, 24105 Kiel, Germany
- Correspondence: Tel.: +49-431-500-21857
- Adam A. Hersbach
- Research and Development, Cochlear Limited, Melbourne, VIC 3000, Australia
- Thomas Hocke
- Research, Cochlear Deutschland, 30625 Hannover, Germany
- Britta Böhnke
- Audiology, ENT Clinic, UKSH, 24105 Kiel, Germany
- Alexander Mewes
- Audiology, ENT Clinic, UKSH, 24105 Kiel, Germany
11
Miles K, Beechey T, Best V, Buchholz J. Measuring Speech Intelligibility and Hearing-Aid Benefit Using Everyday Conversational Sentences in Real-World Environments. Front Neurosci 2022; 16:789565. PMID: 35368279; PMCID: PMC8970270; DOI: 10.3389/fnins.2022.789565.
Abstract
Laboratory- and clinic-based assessments of speech intelligibility must evolve to better predict real-world speech intelligibility. One way of approaching this goal is to develop speech intelligibility tasks that are more representative of everyday speech communication outside the laboratory. Here, we evaluate speech intelligibility using both a standard sentence recall task based on clear, read speech (BKB sentences) and a sentence recall task consisting of spontaneously produced speech excised from conversations that took place in realistic background noises (ECO-SiN sentences). The sentences were embedded at natural speaking levels in six realistic background noises that differed in their overall level, which resulted in a range of fixed signal-to-noise ratios. Ten young, normal-hearing participants took part in the study, along with 20 older participants with a range of levels of hearing loss who were tested with and without hearing-aid amplification. We found that scores were driven by hearing loss and the characteristics of the background noise, as expected, but also strongly by the speech materials. Scores obtained with the more realistic sentences were generally lower than those obtained with the standard sentences, which reduced ceiling effects for the majority of environments/listeners (but introduced floor effects in some cases). Because ceiling and floor effects limit the potential for observing changes in performance, benefits of amplification were highly dependent on the speech materials for a given background noise and participant group. Overall, the more realistic speech task offered a better dynamic range for capturing individual performance and hearing-aid benefit across the range of real-world environments we examined.
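Since the sentences were presented at their natural levels within each recorded background, the resulting SNRs are a property of the materials rather than being set by the experimenter. A minimal sketch of how such a broadband SNR can be characterized from the separate speech and noise signals is given below; the RMS-based definition is an assumption, not necessarily the measure used in the study.

```python
# Minimal sketch; the RMS-based SNR definition is an assumption, not taken
# from the paper.
import numpy as np

def broadband_snr_db(speech, noise):
    """Broadband SNR in dB from the RMS levels of the speech and noise signals."""
    def rms(x):
        return float(np.sqrt(np.mean(np.asarray(x, dtype=float) ** 2))) + 1e-12
    return 20.0 * np.log10(rms(speech) / rms(noise))

# Example: speech at half the RMS of the noise corresponds to roughly -6 dB SNR.
rng = np.random.default_rng(0)
noise = rng.standard_normal(48000)
speech = 0.5 * rng.standard_normal(48000)
print(f"{broadband_snr_db(speech, noise):.1f} dB")
```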
Affiliation(s)
- Kelly Miles
- ECHO Laboratory, Department of Linguistics, Macquarie University, Sydney, NSW, Australia
- Timothy Beechey
- Hearing Sciences – Scottish Section, School of Medicine, University of Nottingham, Glasgow, United Kingdom
- Virginia Best
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA, United States
- Jörg Buchholz
- ECHO Laboratory, Department of Linguistics, Macquarie University, Sydney, NSW, Australia