51
Parmar BJ, Rajasingam SL, Bizley JK, Vickers DA. Factors Affecting the Use of Speech Testing in Adult Audiology. Am J Audiol 2022; 31:528-540. PMID: 35737980; PMCID: PMC7613483; DOI: 10.1044/2022_aja-21-00233.
Abstract
OBJECTIVE The aim of this study was to evaluate hearing health care professionals' (HHPs) speech testing practices in routine adult audiology services and better understand the facilitators and barriers to speech testing provision. DESIGN A cross-sectional questionnaire study was conducted. STUDY SAMPLE A sample (N = 306) of HHPs from the public (64%) and private (36%) sectors in the United Kingdom completed the survey. RESULTS In the United Kingdom, speech testing practice varied significantly between health sectors. Speech testing was carried out during the audiology assessment by 73.4% of private sector HHPs and 20.4% of those from the public sector. During the hearing aid intervention stage, speech testing was carried out by 56.5% and 26.5% of HHPs from the private and public sectors, respectively. Recognized benefits of speech testing included (a) providing patients with relatable assessment information, (b) guiding hearing aid fitting, and (c) supporting a diagnostic test battery. A lack of clinical time was a key barrier to uptake. CONCLUSIONS Use of speech testing varies in adult audiology. Results from this study found that the percentage of U.K. HHPs making use of speech tests was low compared to that of other countries. HHPs recognized different benefits of speech testing in audiology practice, but the barriers limiting uptake were often driven by factors derived from decision makers rather than clinical rationale. Privately funded HHPs used speech tests more frequently than those working in the public sector where time and resources are under greater pressure and governed by guidance that does not include a recommendation for speech testing. Therefore, the inclusion of speech testing in national clinical guidelines could increase the consistency of use and facilitate the comparison of practice trends across centers. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.20044457.
Affiliation(s)
- Bhavisha J. Parmar
- UCL Ear Institute, University College London, United Kingdom
- Sound Lab, Cambridge Hearing Group, Department of Clinical Neurosciences, University of Cambridge, United Kingdom
- Deborah A. Vickers
- Sound Lab, Cambridge Hearing Group, Department of Clinical Neurosciences, University of Cambridge, United Kingdom

52
Francis AL. Adding noise is a confounded nuisance. J Acoust Soc Am 2022; 152:1375. PMID: 36182286; DOI: 10.1121/10.0013874.
Abstract
A wide variety of research and clinical assessments involve presenting speech stimuli in the presence of some kind of noise. Here, I selectively review two theoretical perspectives and discuss ways in which these perspectives may help researchers understand the consequences for listeners of adding noise to a speech signal. I argue that adding noise changes more about the listening task than merely making the signal more difficult to perceive. To fully understand the effects of an added noise on speech perception, we must consider not just how much the noise affects task difficulty, but also how it affects all of the systems involved in understanding speech: increasing message uncertainty, modifying attentional demand, altering affective response, and changing motivation to perform the task.
Affiliation(s)
- Alexander L Francis
- Department of Speech, Language, and Hearing Sciences, Purdue University, 715 Clinic Drive, West Lafayette, Indiana 47907, USA

53
Comment on the Point of View “Ecological Validity, External Validity and Mundane Realism in Hearing Science”. Ear Hear 2022; 43:1601-1602. DOI: 10.1097/aud.0000000000001241.

54
Beechey T. Is speech intelligibility what speech intelligibility tests test? J Acoust Soc Am 2022; 152:1573. PMID: 36182275; DOI: 10.1121/10.0013896.
Abstract
Natural, conversational speech signals contain sources of symbolic and iconic information, both of which are necessary for the full understanding of speech. But speech intelligibility tests, which are generally derived from written language, present only symbolic information sources, including lexical semantics and syntactic structures. Speech intelligibility tests exclude almost all sources of information about talkers, including their communicative intentions and their cognitive states and processes. There is no reason to suspect that either hearing impairment or noise selectively affects perception of only symbolic information. We must therefore conclude that diagnosis of good or poor speech intelligibility on the basis of standard speech tests is based on measurement of only a fraction of the task of speech perception. This paper presents a descriptive comparison of the information sources present in three widely used speech intelligibility tests and in spontaneous, conversational speech elicited using a referential communication task. The aim of this comparison is to draw attention to the differences not just in the signals, but in the tasks of listeners perceiving these different speech signals, and to highlight the implications of these differences for the interpretation and generalizability of speech intelligibility test results.
Affiliation(s)
- Timothy Beechey
- Hearing Sciences-Scottish Section, School of Medicine, The University of Nottingham, Glasgow G31 2ER, United Kingdom

55
Ewert SD. A filter representation of diffraction at infinite and finite wedges. JASA Express Lett 2022; 2:092401. PMID: 36182340; DOI: 10.1121/10.0013686.
Abstract
Diffraction of sound occurs at sound barriers and at building and room corners in urban and indoor environments. Here, a unified parametric filter representation of the singly diffracted field at arbitrary wedges is suggested, connecting existing asymptotic and exact solutions within the framework of geometrical acoustics. Depending on the underlying asymptotic (high-frequency) solution, a combination of up to four half-order lowpass filters represents the diffracted field. Compact transfer-function and impulse-response expressions are proposed, providing errors below ±0.1 dB. To approximate the exact solution, a further asymptotic lowpass filter valid at low frequencies is suggested and combined with the high-frequency filter.
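The "half-order lowpass" building block mentioned in the abstract can be illustrated numerically. The sketch below uses the generic half-order magnitude response |H(f)| = (1 + (f/fc)^2)^(-1/4), which rolls off at roughly 3 dB per octave (half the slope of a first-order lowpass); the paper's actual transfer functions and corner frequencies are not given in the abstract, so this is only a generic illustration:

```python
import numpy as np

def half_order_lowpass_mag(f, fc):
    """Generic half-order lowpass magnitude: |H(f)| = (1 + (f/fc)^2)^(-1/4).

    Well above the corner frequency fc this decays at ~3 dB/octave,
    i.e., half the 6 dB/octave slope of a first-order lowpass."""
    f = np.asarray(f, dtype=float)
    return (1.0 + (f / fc) ** 2) ** -0.25

fc = 1000.0  # illustrative corner frequency in Hz
freqs = np.array([0.0, fc, 10 * fc, 20 * fc])
mags_db = 20 * np.log10(half_order_lowpass_mag(freqs, fc))
print(np.round(mags_db, 2))  # 0 dB at DC, about -1.5 dB at fc, then ~3 dB/octave
```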
Affiliation(s)
- Stephan D Ewert
- Medizinische Physik and Cluster of Excellence Hearing4all, Universität Oldenburg, 26111 Oldenburg, Germany

56
Boisvert I, Ferguson M, van Wieringen A, Ricketts TA. Editorial: Outcome Measures to Assess the Benefit of Interventions for Adults With Hearing Loss: From Research to Clinical Application. Front Neurosci 2022; 16:955189. PMID: 36061602; PMCID: PMC9434333; DOI: 10.3389/fnins.2022.955189.
Affiliation(s)
- Isabelle Boisvert
- Sydney School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Camperdown, NSW, Australia
- *Correspondence: Isabelle Boisvert
- Melanie Ferguson
- Curtin enAble Institute, Faculty of Health Sciences, Curtin University, Perth, WA, Australia
- Brain and Hearing Research Group, Ear Science Institute Australia, Perth, WA, Australia
- Astrid van Wieringen
- Research Group Experimental Oto-Rino-Laryngologie, Department of Neurosciences, University of Leuven, Leuven, Belgium
- Todd Andrew Ricketts
- Maddox Memorial Hearing Aid Research Laboratory, Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Vanderbilt University, Nashville, TN, United States

57
Skoglund MA, Andersen M, Shiell MM, Keidser G, Rank ML, Rotger-Griful S. Comparing In-ear EOG for Eye-Movement Estimation With Eye-Tracking: Accuracy, Calibration, and Speech Comprehension. Front Neurosci 2022; 16:873201. PMID: 35844213; PMCID: PMC9279575; DOI: 10.3389/fnins.2022.873201.
Abstract
This study details and evaluates a method for estimating the attended speaker during a two-person conversation by means of in-ear electro-oculography (EOG). Twenty-five hearing-impaired participants were fitted with molds equipped with EOG electrodes (in-ear EOG) and wore eye-tracking glasses while watching a video of two life-size people in a dialog solving a Diapix task. The dialogue was presented directionally, together with background noise in the frontal hemisphere, at 60 dB SPL. During three conditions of steering (none, in-ear EOG, conventional eye-tracking), participants' comprehension was periodically measured using multiple-choice questions. Based on eye-movement detection by in-ear EOG or conventional eye-tracking, the estimated attended speaker was amplified by 6 dB. In the in-ear EOG condition, the estimate was based on one selected channel pair out of 36 possible electrodes. A novel calibration procedure introducing three different metrics was used to select the measurement channel. The in-ear EOG attended-speaker estimates were compared to those of the eye-tracker. Across participants, the mean accuracy of in-ear EOG estimation of the attended speaker was 68%, ranging from 50 to 89%. Based on offline simulation, it was established that higher-scoring metrics obtained for a channel with the calibration procedure were significantly associated with better data quality. Results showed a statistically significant improvement in comprehension of about 10% in both steering conditions relative to the no-steering condition. Comprehension in the two steering conditions was not significantly different. Further, better comprehension obtained under the in-ear EOG condition was significantly correlated with more accurate estimation of the attended speaker. In conclusion, this study shows promising results in the use of in-ear EOG for visual attention estimation, with potential applicability in hearing assistive devices.
Affiliation(s)
- Martin A. Skoglund
- Division of Automatic Control, Department of Electrical Engineering, The Institute of Technology, Linköping University, Linköping, Sweden
- Eriksholm Research Centre, Part of Oticon A/S, Snekkersten, Denmark
- *Correspondence: Martin A. Skoglund
- Martha M. Shiell
- Eriksholm Research Centre, Part of Oticon A/S, Snekkersten, Denmark
- Gitte Keidser
- Eriksholm Research Centre, Part of Oticon A/S, Snekkersten, Denmark
- Department of Behavioral Sciences and Learning, Linnaeus Centre HEAD, Linköping University, Linköping, Sweden

58
Shen J, Wu J. Speech Recognition in Noise Performance Measured Remotely Versus In-Laboratory From Older and Younger Listeners. J Speech Lang Hear Res 2022; 65:2391-2397. PMID: 35442717; PMCID: PMC9567433; DOI: 10.1044/2022_jslhr-21-00557.
Abstract
PURPOSE This study examined the performance difference between remote and in-laboratory test modalities with a speech recognition in noise task in older and younger adults. METHOD Four groups of participants (younger remote, younger in-laboratory, older remote, and older in-laboratory) were tested on a speech recognition in noise protocol with 72 sentences. RESULTS While the younger remote group performed more poorly than the younger in-laboratory group, older participants' performance was comparable between the two modality groups, particularly in the easy to moderately difficult conditions. These results persisted after controlling for demographic variables (e.g., age, gender, and education). CONCLUSION While these findings generally support the feasibility of remote data collection with older participants for research on speech perception, they also suggest that technological proficiency is an important factor that affects performance on remote testing in the aging population.
Affiliation(s)
- Jing Shen
- Department of Communication Sciences and Disorders, College of Public Health, Temple University, Philadelphia, PA
- Jingwei Wu
- Department of Epidemiology and Biostatistics, College of Public Health, Temple University, Philadelphia, PA

59
Hey M, Hersbach AA, Hocke T, Mauger SJ, Böhnke B, Mewes A. Ecological Momentary Assessment to Obtain Signal Processing Technology Preference in Cochlear Implant Users. J Clin Med 2022; 11:2941. PMID: 35629065; PMCID: PMC9147494; DOI: 10.3390/jcm11102941.
Abstract
Background: To assess the performance of cochlear implant users, speech comprehension benefits are generally measured in controlled sound-room environments in the laboratory. For field-based assessment of preference, questionnaires are generally used. Since questionnaires are typically administered at the end of an experimental period, they can be inaccurate due to retrospective recall. An alternative known as ecological momentary assessment (EMA) has begun to be used for clinical research. The objective of this study was to determine the feasibility of using EMA to obtain in-the-moment responses from cochlear implant users describing their technology preference in specific acoustic listening situations. Methods: Over a two-week period, eleven adult cochlear implant users compared two listening programs containing different sound-processing technologies during everyday take-home use. Their task was to compare and vote for their preferred program. Results: A total of 205 votes were collected from acoustic environments that were classified into six listening scenes. The analysis yielded different patterns of voting among the subjects. Two subjects had a consistent preference for one sound-processing technology across all acoustic scenes, three subjects changed their preference based on the acoustic scene, and six subjects had no conclusive preference for either technology. Conclusion: Results show that EMA is suitable for quantifying real-world self-reported preference, revealing inter-subject variability across listening environments. However, there is a risk that patients will not provide sufficient spontaneous feedback. One improvement for future research would be a forced prompt to participants to improve response rates.
Affiliation(s)
- Matthias Hey
- Audiology, ENT Clinic, UKSH, 24105 Kiel, Germany
- Correspondence: Tel.: +49-431-500-21857
- Adam A. Hersbach
- Research and Development, Cochlear Limited, Melbourne, VIC 3000, Australia
- Thomas Hocke
- Research, Cochlear Deutschland, 30625 Hannover, Germany
- Britta Böhnke
- Audiology, ENT Clinic, UKSH, 24105 Kiel, Germany
- Alexander Mewes
- Audiology, ENT Clinic, UKSH, 24105 Kiel, Germany

60
Miles K, Beechey T, Best V, Buchholz J. Measuring Speech Intelligibility and Hearing-Aid Benefit Using Everyday Conversational Sentences in Real-World Environments. Front Neurosci 2022; 16:789565. PMID: 35368279; PMCID: PMC8970270; DOI: 10.3389/fnins.2022.789565.
Abstract
Laboratory and clinical-based assessments of speech intelligibility must evolve to better predict real-world speech intelligibility. One way of approaching this goal is to develop speech intelligibility tasks that are more representative of everyday speech communication outside the laboratory. Here, we evaluate speech intelligibility using both a standard sentence recall task based on clear, read speech (BKB sentences), and a sentence recall task consisting of spontaneously produced speech excised from conversations which took place in realistic background noises (ECO-SiN sentences). The sentences were embedded at natural speaking levels in six realistic background noises that differed in their overall level, which resulted in a range of fixed signal-to-noise ratios. Ten young, normal hearing participants took part in the study, along with 20 older participants with a range of levels of hearing loss who were tested with and without hearing-aid amplification. We found that scores were driven by hearing loss and the characteristics of the background noise, as expected, but also strongly by the speech materials. Scores obtained with the more realistic sentences were generally lower than those obtained with the standard sentences, which reduced ceiling effects for the majority of environments/listeners (but introduced floor effects in some cases). Because ceiling and floor effects limit the potential for observing changes in performance, benefits of amplification were highly dependent on the speech materials for a given background noise and participant group. Overall, the more realistic speech task offered a better dynamic range for capturing individual performance and hearing-aid benefit across the range of real-world environments we examined.
Affiliation(s)
- Kelly Miles
- ECHO Laboratory, Department of Linguistics, Macquarie University, Sydney, NSW, Australia
- Timothy Beechey
- Hearing Sciences – Scottish Section, School of Medicine, University of Nottingham, Glasgow, United Kingdom
- Virginia Best
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA, United States
- Jörg Buchholz
- ECHO Laboratory, Department of Linguistics, Macquarie University, Sydney, NSW, Australia

61
Salorio-Corbetto M, Williges B, Lamping W, Picinali L, Vickers D. Evaluating Spatial Hearing Using a Dual-Task Approach in a Virtual-Acoustics Environment. Front Neurosci 2022; 16:787153. PMID: 35350560; PMCID: PMC8957784; DOI: 10.3389/fnins.2022.787153.
Abstract
Spatial hearing is critical for communication in everyday sound-rich environments. It is important to gain an understanding of how well users of bilateral hearing devices function in these conditions. The purpose of this work was to evaluate a Virtual Acoustics (VA) version of the Spatial Speech in Noise (SSiN) test, the SSiN-VA. This implementation uses relatively inexpensive equipment and can be performed outside the clinic, allowing for regular monitoring of spatial-hearing performance. The SSiN-VA simultaneously assesses speech discrimination and relative localization with changing source locations in the presence of noise. The use of simultaneous tasks increases the cognitive load to better represent the difficulties faced by listeners in noisy real-world environments. Current clinical assessments may require costly equipment which has a large footprint. Consequently, spatial-hearing assessments may not be conducted at all. Additionally, as patients take greater control of their healthcare outcomes and a greater number of clinical appointments are conducted remotely, outcome measures that allow patients to carry out assessments at home are becoming more relevant. The SSiN-VA was implemented using the 3D Tune-In Toolkit, simulating seven loudspeaker locations spaced at 30° intervals with azimuths between -90° and +90°, and rendered for headphone playback using the binaural spatialization technique. Twelve normal-hearing participants were assessed to evaluate if SSiN-VA produced patterns of responses for relative localization and speech discrimination as a function of azimuth similar to those previously obtained using loudspeaker arrays. Additionally, the effects of the signal-to-noise ratio (SNR), the direction of the shift from target to reference, and the target phonetic contrast on performance were investigated. SSiN-VA led to similar patterns of performance as a function of spatial location compared to loudspeaker setups for both relative localization and speech discrimination. Performance for relative localization was significantly better at the highest SNR than at the lowest SNR tested, and a target shift to the right was associated with an increased likelihood of a correct response. For word discrimination, there was an interaction between SNR and word group. Overall, these outcomes support the use of virtual audio for speech discrimination and relative localization testing in noise.
Affiliation(s)
- Marina Salorio-Corbetto
- SOUND Laboratory, Cambridge Hearing Group, Clinical Neurosciences, University of Cambridge, Cambridge, United Kingdom
- Audio Experience Design, Dyson School of Design Engineering, Imperial College London, London, United Kingdom
- Wolfson College, Cambridge, United Kingdom
- Ben Williges
- SOUND Laboratory, Cambridge Hearing Group, Clinical Neurosciences, University of Cambridge, Cambridge, United Kingdom
- Wiebke Lamping
- SOUND Laboratory, Cambridge Hearing Group, Clinical Neurosciences, University of Cambridge, Cambridge, United Kingdom
- Lorenzo Picinali
- Audio Experience Design, Dyson School of Design Engineering, Imperial College London, London, United Kingdom
- Deborah Vickers
- SOUND Laboratory, Cambridge Hearing Group, Clinical Neurosciences, University of Cambridge, Cambridge, United Kingdom

62
Petersen EB, MacDonald EN, Sørensen AJM. The Effects of Hearing-Aid Amplification and Noise on Conversational Dynamics Between Normal-Hearing and Hearing-Impaired Talkers. Trends Hear 2022; 26:23312165221103340. PMID: 35862280; PMCID: PMC9310272; DOI: 10.1177/23312165221103340.
Abstract
There is a long-standing tradition of assessing hearing-aid benefit using lab-based speech intelligibility tests. Moving toward a more everyday-like scenario, the current study investigated the effects of hearing-aid amplification and noise on face-to-face communication between two conversational partners. Eleven pairs, each consisting of a younger normal-hearing (NH) and an older hearing-impaired (HI) participant, solved spot-the-difference tasks while their conversations were recorded. In a two-block randomized design, the tasks were solved in quiet or in noise, both with and without the HI participant receiving hearing-aid amplification with active occlusion cancellation. In the presence of 70 dB SPL babble noise, participants had fewer, slower, and less well-timed turn-starts, while speaking louder with longer inter-pausal units (IPUs, stretches of continuous speech surrounded by silence) and reducing their articulation rates. All these changes are indicative of increased communication effort. The timing of turn-starts by the HI participants exhibited more variability than that of their NH conversational partners. In the presence of background noise, the timing of turn-starts by the HI participants became even more variable, and their NH partners spoke louder. When the HI participants were provided with hearing-aid amplification, their timing of turn-starts became faster, they increased their articulation rate, and they produced shorter IPUs, all indicating reduced communication effort. In conclusion, measures of the conversational dynamics showed that background noise increased communication effort, especially for the HI participants, and that providing hearing-aid amplification caused the HI participants to behave more like their NH conversational partners, especially in quiet situations.
Affiliation(s)
| | - Ewen N. MacDonald
- Hearing Systems Group, Dept. of Health Technology, Technical University of Denmark, Kongens Lyngby, Denmark
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- A. Josefine Munch Sørensen
- Hearing Systems Group, Dept. of Health Technology, Technical University of Denmark, Kongens Lyngby, Denmark

63
Kayser H, Herzke T, Maanen P, Zimmermann M, Grimm G, Hohmann V. Open community platform for hearing aid algorithm research: open Master Hearing Aid (openMHA). SoftwareX 2022; 17:100953. PMID: 35465173; PMCID: PMC9022875; DOI: 10.1016/j.softx.2021.100953.
Abstract
open Master Hearing Aid (openMHA) was developed and provided to the hearing aid research community as an open-source software platform, with the aim of supporting sustainable and reproducible research toward improved and new types of assistive hearing systems that are not limited by proprietary software. The software offers a flexible framework that allows users to conduct hearing aid research with the tools and signal-processing plugins provided with the software, as well as to implement their own methods. openMHA is hardware-independent and supports the Linux, macOS, and Windows operating systems, as well as 32-bit and 64-bit ARM-based architectures such as those used in small portable integrated systems. www.openmha.org.
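As a concrete illustration of the plugin-chain configuration style openMHA uses, the fragment below applies a simple gain plugin to a stereo file. It is adapted from memory of the introductory examples in the openMHA documentation, so the variable names and values should be checked against the manuals at www.openmha.org before use:

```
# Illustrative openMHA configuration: process input.wav through a two-channel
# chain containing only the gain plugin, writing the result to output.wav.
srate = 44100                # sampling rate in Hz
fragsize = 64                # audio block size in samples
nchannels_in = 2             # stereo input
iolib = MHAIOFile            # file-to-file processing backend
io.in = input.wav
io.out = output.wav
mhalib = mhachain            # run plugins in a serial chain
mha.algos = [gain]           # the chain holds a single gain plugin
mha.gain.gains = [-10 -10]   # per-channel gain in dB
```

Such a file would typically be loaded and run with the `mha` command-line application (e.g., `mha ?read:gain.cfg cmd=start cmd=quit`).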
Affiliation(s)
- Hendrik Kayser
- Carl von Ossietzky Universität Oldenburg, Department of Medical Physics and Acoustics - Auditory Signal Processing and Hearing Devices, D-26111 Oldenburg, Germany
- Hörzentrum Oldenburg gGmbH, Marie-Curie-Str. 2, 26129 Oldenburg, Germany
- Cluster of Excellence “Hearing4all”, Germany
- Tobias Herzke
- Hörzentrum Oldenburg gGmbH, Marie-Curie-Str. 2, 26129 Oldenburg, Germany
- Cluster of Excellence “Hearing4all”, Germany
- Paul Maanen
- Hörzentrum Oldenburg gGmbH, Marie-Curie-Str. 2, 26129 Oldenburg, Germany
- Cluster of Excellence “Hearing4all”, Germany
- Max Zimmermann
- Hörzentrum Oldenburg gGmbH, Marie-Curie-Str. 2, 26129 Oldenburg, Germany
- Cluster of Excellence “Hearing4all”, Germany
- Giso Grimm
- Carl von Ossietzky Universität Oldenburg, Department of Medical Physics and Acoustics - Auditory Signal Processing and Hearing Devices, D-26111 Oldenburg, Germany
- Hörzentrum Oldenburg gGmbH, Marie-Curie-Str. 2, 26129 Oldenburg, Germany
- Cluster of Excellence “Hearing4all”, Germany
- Volker Hohmann
- Carl von Ossietzky Universität Oldenburg, Department of Medical Physics and Acoustics - Auditory Signal Processing and Hearing Devices, D-26111 Oldenburg, Germany
- Hörzentrum Oldenburg gGmbH, Marie-Curie-Str. 2, 26129 Oldenburg, Germany
- Cluster of Excellence “Hearing4all”, Germany

64
Heeren J, Nuesse T, Latzel M, Holube I, Hohmann V, Wagener KC, Schulte M. The Concurrent OLSA Test: A Method for Speech Recognition in Multi-talker Situations at Fixed SNR. Trends Hear 2022; 26:23312165221108257. PMID: 35702051; PMCID: PMC9208053; DOI: 10.1177/23312165221108257.
Abstract
A multi-talker paradigm is introduced that uses different attentional processes to adjust speech-recognition scores with the goal of conducting measurements at high signal-to-noise ratios (SNR). The basic idea is to simulate a group conversation with three talkers. Talkers alternately speak sentences of the German matrix test OLSA. Each time a sentence begins with the name “Kerstin” (call sign), the participant is addressed and instructed to repeat the last words of all sentences from that talker, until another talker begins a sentence with “Kerstin”. The alternation of the talkers is implemented with an adjustable overlap time that causes an overlap between the call sign “Kerstin” and the target words to be repeated. Thus, the two tasks of detecting “Kerstin” and repeating target words are to be done at the same time. The paradigm was tested with 22 young normal-hearing participants (YNH) for three overlap times (0.6 s, 0.8 s, 1.0 s). Results for these overlap times show significant differences, with median target word recognition scores of 88%, 82%, and 77%, respectively (including call-sign and dual-task effects). A comparison of the dual task with the corresponding single tasks suggests that the observed effects reflect an increased cognitive load.
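The call-sign rule that drives attention switching in this paradigm can be sketched in a few lines (an illustrative reimplementation of the rule as described in the abstract, not the authors' code):

```python
# Concurrent OLSA attention rule: the listener repeats target words from the
# talker who most recently began a sentence with the call sign "Kerstin",
# until another talker begins a sentence with the call sign.

def attended_talkers(sentences):
    """sentences: list of (talker_id, starts_with_callsign) in temporal order.
    Returns, per sentence, the talker the listener should currently attend to
    (None before the first call sign has occurred)."""
    attended = None
    result = []
    for talker, has_callsign in sentences:
        if has_callsign:
            attended = talker  # attention switches to this talker
        result.append(attended)
    return result

stream = [("A", True), ("B", False), ("A", False),
          ("C", True), ("B", False), ("C", False)]
print(attended_talkers(stream))  # ['A', 'A', 'A', 'C', 'C', 'C']
```

Only sentences whose talker matches the currently attended talker contribute target words to be repeated, which is what couples the call-sign detection and word-repetition tasks into a dual task.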
Affiliation(s)
- Jan Heeren
- Hörzentrum Oldenburg gGmbH, Oldenburg, Germany
- Cluster of Excellence Hearing4All, Oldenburg, Germany
- Theresa Nuesse
- Cluster of Excellence Hearing4All, Oldenburg, Germany
- Institute of Hearing Technology and Audiology, Jade University of Applied Sciences, Oldenburg, Germany
- Inga Holube
- Cluster of Excellence Hearing4All, Oldenburg, Germany
- Institute of Hearing Technology and Audiology, Jade University of Applied Sciences, Oldenburg, Germany
- Volker Hohmann
- Hörzentrum Oldenburg gGmbH, Oldenburg, Germany
- Cluster of Excellence Hearing4All, Oldenburg, Germany
- Auditory Signal Processing, Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
- Kirsten C Wagener
- Hörzentrum Oldenburg gGmbH, Oldenburg, Germany
- Cluster of Excellence Hearing4All, Oldenburg, Germany
- Michael Schulte
- Hörzentrum Oldenburg gGmbH, Oldenburg, Germany
- Cluster of Excellence Hearing4All, Oldenburg, Germany

65
Smeds K, Larsson J, Dahlquist M, Wolters F, Herrlin P. Live Evaluation of Auditory Preference, a Laboratory Test for Evaluating Auditory Preference. J Am Acad Audiol 2021; 32:487-500. PMID: 34965595; DOI: 10.1055/s-0041-1735213.
Abstract
BACKGROUND Many laboratory tests are performed under unrealistic conditions: tasks such as repeating words or sentences are performed in simple loudspeaker setups. Currently, many research groups focus on realistic audiovisual laboratory setups; fewer focus on the tasks performed during testing. PURPOSE A semicontrolled laboratory test method focusing on the tasks performed, the Live Evaluation of Auditory Preference (LEAP), was evaluated. LEAP was developed to evaluate hearing-instrument performance in test scenarios that represent everyday listening situations. RESEARCH DESIGN LEAP was evaluated in a feasibility study. The method comprises conversations between a test participant and one or two test leaders, enabling evaluation of the test participant's own voice. The method allows for visual cues (when relevant) and introduces social pressure to participate in the conversation. In addition, other everyday listening tasks, such as watching television and listening to the radio, are included. In this study, LEAP was used to assess preference for two hearing aid settings using paired comparisons. STUDY SAMPLE Nineteen experienced hearing aid users (13 females and 6 males; mean age: 74 years) participated in the study. DATA COLLECTION AND ANALYSIS LEAP was performed at three visits to the laboratory. In addition, participants conducted a field trial in which the two hearing aid programs were compared using Ecological Momentary Assessment (EMA). During LEAP testing, six mandatory test cases were used, representing commonly occurring everyday listening situations. Individual test cases, selected from listening situations experienced during the field trial, were also included. Within- and between-session reliability of the LEAP test was investigated; validity was investigated by comparing the LEAP and EMA results. RESULTS For the current signal-processing evaluation, the test was judged to have acceptable reliability and validity. The inclusion of individually selected test cases increased the representativeness of the LEAP test but did not substantially alter the results in the current study. CONCLUSION LEAP in its current implementation seems suitable for investigating signal-processing preference in the laboratory in a way that is indicative of everyday preference. The LEAP method represents one step forward in bringing the real world into the laboratory.
|
66
|
Kirsch C, Poppitz J, Wendt T, van de Par S, Ewert SD. Spatial Resolution of Late Reverberation in Virtual Acoustic Environments. Trends Hear 2021; 25:23312165211054924. [PMID: 34935544 PMCID: PMC8721423 DOI: 10.1177/23312165211054924] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
Late reverberation involves the superposition of many sound reflections, approaching the properties of a diffuse sound field. Since the spatially resolved perception of individual late reflections is impossible, simplifications can potentially be made for modelling late reverberation in room acoustics simulations with reduced spatial resolution. Such simplifications are desired for interactive, real-time virtual acoustic environments with applications in hearing research and for the evaluation of hearing supportive devices. In this context, the number and spatial arrangement of loudspeakers used for playback additionally affect spatial resolution. The current study assessed the minimum number of spatially evenly distributed virtual late reverberation sources required to perceptually approximate spatially highly resolved isotropic and anisotropic late reverberation and to technically approximate a spherically isotropic sound field. The spatial resolution of the rendering was systematically reduced by using subsets of the loudspeakers of an 86-channel spherical loudspeaker array in an anechoic chamber, onto which virtual reverberation sources were mapped using vector base amplitude panning. It was tested whether listeners can distinguish lower spatial resolutions of reproduction of late reverberation from the highest achievable spatial resolution in different simulated rooms. The rendering of early reflections remained unchanged. The coherence of the sound field across a pair of microphones at ear and behind-the-ear hearing device distance was assessed to separate the effects of number of virtual sources and loudspeaker array geometry. Results show that between 12 and 24 reverberation sources are required for the rendering of late reverberation in virtual acoustic environments.
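The abstract above maps virtual reverberation sources onto loudspeakers using vector base amplitude panning (VBAP). As a minimal illustration of that panning law only (a two-loudspeaker 2D sketch, not the study's actual 86-channel spherical implementation), the gains for a virtual source between two loudspeakers can be computed by solving a small linear system and normalizing for constant power:

```python
import numpy as np

def vbap_gains_2d(source_deg: float, spk_a_deg: float, spk_b_deg: float) -> np.ndarray:
    """Power-normalized 2D VBAP gains for a virtual source panned
    between two loudspeakers at the given azimuths (degrees)."""
    def unit(deg: float) -> np.ndarray:
        rad = np.radians(deg)
        return np.array([np.cos(rad), np.sin(rad)])

    # Columns of L are the loudspeaker direction vectors; solve L @ g = p
    # so that the gain-weighted sum of speaker directions points at the source.
    L = np.column_stack([unit(spk_a_deg), unit(spk_b_deg)])
    g = np.linalg.solve(L, unit(source_deg))
    return g / np.linalg.norm(g)  # normalize to constant total power

# A source midway between loudspeakers at +45° and -45° receives equal gains.
g = vbap_gains_2d(0.0, 45.0, -45.0)
```

In the study, this pairwise idea generalizes to triplets of loudspeakers on a sphere; reducing the number of virtual reverberation sources reduces how many such panned sources must be rendered.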
Affiliation(s)
- Christoph Kirsch: Medizinische Physik and Cluster of Excellence Hearing4All, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Josef Poppitz: Akustik and Cluster of Excellence Hearing4All, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Torben Wendt: Medizinische Physik, Akustik, and Cluster of Excellence Hearing4All, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Steven van de Par: Akustik and Cluster of Excellence Hearing4All, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Stephan D Ewert: Medizinische Physik and Cluster of Excellence Hearing4All, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
|
67
|
Parmar BJ, Mehta K, Vickers DA, Bizley JK. Experienced hearing aid users' perspectives of assessment and communication within audiology: a qualitative study using digital methods. Int J Audiol 2021; 61:956-964. [PMID: 34821527 DOI: 10.1080/14992027.2021.1998839] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
OBJECTIVE To explore experienced hearing aid users' perspectives of audiological assessments and the patient-audiologist communication dynamic during clinical interactions. DESIGN A qualitative study was implemented, incorporating both an online focus group and online semi-structured interviews. Sessions were audio-recorded and transcribed verbatim. Iterative-inductive thematic analysis was carried out to identify themes related to assessment and communication within audiology practice. STUDY SAMPLE Seven experienced hearing aid users took part in an online focus group and 14 participated in online semi-structured interviews (age range: 22-86 years; 9 males, 11 females). RESULTS Themes related to assessment included the unaided and aided testing procedure and relating tests to real-world hearing difficulties. Themes related to communication included the importance of deaf-aware communication strategies, explanation of test results, and patient-centred care in audiology. CONCLUSION To ensure hearing aid services meet the needs of service users, we should explore user perspectives and proactively adapt service delivery. This approach should be ongoing, in response to advances in hearing aid technology. Within audiology, experienced hearing aid users value (1) comprehensive, relatable hearing assessment, (2) deaf-aware patient-audiologist communication, (3) accessible services, and (4) a personalised approach to recommending suitable technology and addressing patient-specific aspects of hearing loss.
Affiliation(s)
- Kinjal Mehta: St Ann's Hospital, Whittington Health NHS Trust, London, UK
- Deborah A Vickers: Sound Lab, Cambridge Hearing Group, University of Cambridge, Cambridge, UK
|
68
|
Van Canneyt J, Wouters J, Francart T. Cortical compensation for hearing loss, but not age, in neural tracking of the fundamental frequency of the voice. J Neurophysiol 2021; 126:791-802. [PMID: 34232756 DOI: 10.1152/jn.00156.2021] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/15/2023] Open
Abstract
Auditory processing is affected by advancing age and hearing loss, but the underlying mechanisms are still unclear. We investigated the effects of age and hearing loss on temporal processing of naturalistic stimuli in the auditory system. We used a recently developed objective measure for neural phase-locking to the fundamental frequency of the voice (f0) which uses continuous natural speech as a stimulus, that is, "f0-tracking." The f0-tracking responses from 54 normal-hearing and 14 hearing-impaired adults of varying ages were analyzed. The responses were evoked by a Flemish story with a male talker and contained contributions from both subcortical and cortical sources. Results indicated that advancing age was related to smaller responses with less cortical response contributions. This is consistent with an age-related decrease in neural phase-locking ability at frequencies in the range of the f0, possibly due to decreased inhibition in the auditory system. Conversely, hearing-impaired subjects displayed larger responses compared with age-matched normal-hearing controls. This was due to additional cortical response contributions in the 38- to 50-ms latency range, which were stronger for participants with more severe hearing loss. This is consistent with hearing-loss-induced cortical reorganization and recruitment of additional neural resources to aid in speech perception.NEW & NOTEWORTHY Previous studies disagree on the effects of age and hearing loss on the neurophysiological processing of the fundamental frequency of the voice (f0), in part due to confounding effects. Using a novel electrophysiological technique, natural speech stimuli, and controlled study design, we quantified and disentangled the effects of age and hearing loss on neural f0 processing. We uncovered evidence for underlying neurophysiological mechanisms, including a cortical compensation mechanism for hearing loss, but not for age.
Affiliation(s)
- Jan Wouters: ExpORL, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Tom Francart: ExpORL, Department of Neurosciences, KU Leuven, Leuven, Belgium
|
69
|
Hohmann V, Paluch R, Krueger M, Meis M, Grimm G. The Virtual Reality Lab: Realization and Application of Virtual Sound Environments. Ear Hear 2021; 41 Suppl 1:31S-38S. [PMID: 33105257 PMCID: PMC7676619 DOI: 10.1097/aud.0000000000000945] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2020] [Accepted: 07/15/2020] [Indexed: 12/23/2022]
Abstract
To assess perception with and performance of modern and future hearing devices with advanced adaptive signal processing capabilities, novel evaluation methods are required that go beyond already established methods. These novel methods will simulate, to a certain extent, the complexity and variability of acoustic conditions and acoustic communication styles in real life. This article discusses the current state and the perspectives of virtual reality technology use in the lab for designing complex audiovisual communication environments for hearing assessment and hearing device design and evaluation. In an effort to increase the ecological validity of lab experiments, that is, to increase the degree to which lab data reflect real-life hearing-related function, and to support the development of improved hearing-related procedures and interventions, this virtual reality lab marks a transition from conventional (audio-only) lab experiments to the field. The first part of the article introduces and discusses the notion of the communication loop as a theoretical basis for understanding the factors that are relevant for acoustic communication in real life. From this, requirements are derived that allow an assessment of the extent to which a virtual reality lab reflects these factors, and which may be used as a proxy for ecological validity. The most important factor of real-life communication identified is a closed communication loop among the actively behaving participants. The second part of the article gives an overview of the current developments towards a virtual reality lab at Oldenburg University that aims at interactive and reproducible testing of subjects with and without hearing devices in challenging communication conditions. The extent to which the virtual reality lab in its current state meets the requirements defined in the first part is discussed, along with its limitations and potential further developments. Finally, data are presented from a qualitative study that compared subject behavior and performance in two audiovisual environments presented in the virtual reality lab (a street and a cafeteria) with the corresponding field environments. The results show similarities and differences in subject behavior and performance between the lab and the field, indicating that the virtual reality lab in its current state marks a step towards more ecological validity in lab-based hearing and hearing device research, but requires further development towards higher levels of ecological validity.
Affiliation(s)
- Volker Hohmann: Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany; HörTech gGmbH, Oldenburg, Germany; Cluster of Excellence Hearing4all, Oldenburg, Germany
- Richard Paluch: Cluster of Excellence Hearing4all, Oldenburg, Germany; Department of Social Sciences, University of Oldenburg, Oldenburg, Germany
- Melanie Krueger: HörTech gGmbH, Oldenburg, Germany; Cluster of Excellence Hearing4all, Oldenburg, Germany
- Markus Meis: Cluster of Excellence Hearing4all, Oldenburg, Germany; Hörzentrum Oldenburg GmbH, Oldenburg, Germany
- Giso Grimm: Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany; HörTech gGmbH, Oldenburg, Germany; Cluster of Excellence Hearing4all, Oldenburg, Germany
|
70
|
Lunner T, Alickovic E, Graversen C, Ng EHN, Wendt D, Keidser G. Three New Outcome Measures That Tap Into Cognitive Processes Required for Real-Life Communication. Ear Hear 2021; 41 Suppl 1:39S-47S. [PMID: 33105258 PMCID: PMC7676869 DOI: 10.1097/aud.0000000000000941] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2020] [Accepted: 07/11/2020] [Indexed: 11/29/2022]
Abstract
To increase the ecological validity of outcomes from laboratory evaluations of hearing and hearing devices, it is desirable to introduce more realistic outcome measures in the laboratory. This article presents and discusses three outcome measures that have been designed to go beyond traditional speech-in-noise measures to better reflect realistic everyday challenges. The outcome measures reviewed are: the Sentence-final Word Identification and Recall (SWIR) test that measures working memory performance while listening to speech in noise at ceiling performance; a neural tracking method that produces a quantitative measure of selective speech attention in noise; and pupillometry that measures changes in pupil dilation to assess listening effort while listening to speech in noise. According to evaluation data, the SWIR test provides a sensitive measure in situations where speech perception performance might be unaffected. Similarly, pupil dilation has also shown sensitivity in situations where traditional speech-in-noise measures are insensitive. Changes in working memory capacity and effort mobilization were found at positive signal-to-noise ratios (SNR), that is, at SNRs that might reflect everyday situations. Using stimulus reconstruction, it has been demonstrated that neural tracking is a robust method at determining to what degree a listener is attending to a specific talker in a typical cocktail party situation. Using both established and commercially available noise reduction schemes, data have further shown that all three measures are sensitive to variation in SNR. In summary, the new outcome measures seem suitable for testing hearing and hearing devices under more realistic and demanding everyday conditions than traditional speech-in-noise tests.
Affiliation(s)
- Thomas Lunner: Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark; Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Linköping University, Linköping, Sweden; Department of Electrical Engineering, Division Automatic Control, Linköping University, Linköping, Sweden; Department of Health Technology, Hearing Systems, Technical University of Denmark, Lyngby, Denmark
- Emina Alickovic: Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark; Department of Electrical Engineering, Division Automatic Control, Linköping University, Linköping, Sweden
- Elaine Hoi Ning Ng: Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Linköping University, Linköping, Sweden; Oticon A/S, Kongebakken, Denmark
- Dorothea Wendt: Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark; Department of Health Technology, Hearing Systems, Technical University of Denmark, Lyngby, Denmark
- Gitte Keidser: Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark; Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Linköping University, Linköping, Sweden
|
71
|
Potential of Augmented Reality Platforms to Improve Individual Hearing Aids and to Support More Ecologically Valid Research. Ear Hear 2021; 41 Suppl 1:140S-146S. [PMID: 33105268 PMCID: PMC7676615 DOI: 10.1097/aud.0000000000000961] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
An augmented reality (AR) platform combines several technologies in a system that can render individual “digital objects” that can be manipulated for a given purpose. In the audio domain, these may, for example, be generated by speaker separation, noise suppression, and signal enhancement. Access to the “digital objects” could be used to augment auditory objects that the user wants to hear better. Such AR platforms in conjunction with traditional hearing aids may contribute to closing the gap for people with hearing loss through multimodal sensor integration, leveraging extensive current artificial intelligence research, and machine-learning frameworks. This could take the form of an attention-driven signal enhancement and noise suppression platform, together with context awareness, which would improve the interpersonal communication experience in complex real-life situations. In that sense, an AR platform could serve as a frontend to current and future hearing solutions. The AR device would enhance the signals to be attended, but the hearing amplification would still be handled by hearing aids. In this article, suggestions are made about why AR platforms may offer ideal affordances to compensate for hearing loss, and how research-focused AR platforms could help toward better understanding of the role of hearing in everyday life.
|
72
|
Abstract
The objective of this study was to obtain a normative database of speech intelligibility data for young normal-hearing listeners communicating in public spaces. A total of 174 listeners participated in an interactive speech intelligibility task that required four-person groups to conduct a live version of the Modified Rhyme Test in noisy public spaces. The public spaces tested included a college library, a college cafeteria, a casual dining restaurant during lunch hour, and a crowded bar during happy hour. At the start of each trial, one of the participants was randomly selected as the talker, and a tablet computer was used to prompt them to say a word aloud from the Modified Rhyme Test. The other three participants were then required to select this word from one of six rhyming alternatives displayed on three other tablet computers. The tablet computers were also used to record the sound pressure level (SPL) at each listener location during and after the interval in which the target talker was speaking. These SPL measurements were used to estimate the signal-to-noise ratio (SNR) in each trial of the experiment. As expected, the results show that speech intelligibility decreases, response time increases, and perceived difficulty increases as the background noise level increases. There was also a systematic decrease in SNR with increasing background noise, with SNR decreasing by 0.44 dB for every 1 dB increase in ambient noise level above 60 dB. Overall, the results of this study have demonstrated how low-cost tablet computer-based data collection systems can be used to collect live-talker speech intelligibility data in real-world environments. We believe these techniques could be adapted for use in future studies focused on obtaining ecologically valid assessments of the effects of age, hearing impairment, amplification, and other factors on speech intelligibility performance in real-world environments.
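The linear trend reported in this abstract (SNR falling 0.44 dB for every 1 dB of ambient noise above 60 dB) can be sketched as a small helper. Note that the intercept, the SNR at 60 dB ambient level, is not stated in the abstract, so it is left here as a caller-supplied assumption rather than a value from the study:

```python
def estimated_snr_db(ambient_db: float, snr_at_60_db: float) -> float:
    """Estimate talker-to-noise SNR from ambient level using the reported
    linear trend: SNR drops 0.44 dB per 1 dB of ambient noise above 60 dB.
    snr_at_60_db is an assumed intercept, not a value from the abstract."""
    if ambient_db <= 60.0:
        # The trend is reported only for levels above 60 dB.
        return snr_at_60_db
    return snr_at_60_db - 0.44 * (ambient_db - 60.0)
```

For example, with an assumed 10 dB SNR at 60 dB ambient level, a 70 dB environment would yield an estimated SNR of about 5.6 dB, reflecting how talkers only partially compensate for rising noise.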
|
73
|
Abstract
OBJECTIVES The aim of this study was to modify a speech perception in noise test to assess whether the presence of another individual (copresence), relative to being alone, affected listening performance and effort expenditure. Furthermore, this study assessed whether the effect of the other individual's presence on listening effort was influenced by the difficulty of the task and by whether participants had to repeat the sentences they listened to. DESIGN Thirty-four young, normal-hearing participants (mean age: 24.7 years) listened to spoken Dutch sentences that were masked with a stationary noise masker and presented through a loudspeaker. The participants alternated between repeating sentences (active condition) and not repeating sentences (passive condition). They did this either alone or together with another participant in the booth. When together, participants took turns repeating sentences. The speech-in-noise test was performed adaptively at three intelligibility levels (20%, 50%, and 80% sentences correct) in a block-wise fashion. During testing, pupil size was recorded as an objective outcome measure of listening effort. RESULTS Lower speech intelligibility levels were associated with increased peak pupil dilation (PPD), and doing the task in the presence of another individual (compared with doing it alone) significantly increased PPD. No interaction effect between intelligibility and copresence on PPD was found. The results suggested that the change in PPD between doing the task alone and together was especially apparent for people who started the experiment in the presence of another individual. Furthermore, PPD was significantly lower during passive listening compared with active listening. Finally, performance seemed unaffected by copresence. CONCLUSION The increased PPD during listening in the presence of another participant suggests that more effort was invested during the task. However, the additional effort did not appear to change performance. This study showed that at least one aspect of the social context of a listening situation (in this case, copresence) can affect listening effort, indicating that social context might be important to consider in future cognitive hearing research.
|
74
|
Yancey CM, Barrett ME, Gordon-Salant S, Brungart DS. Binaural advantages in a real-world environment on speech intelligibility, response time, and subjective listening difficulty. JASA EXPRESS LETTERS 2021; 1:014406. [PMID: 36154099 DOI: 10.1121/10.0003193] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
This study examined the speech-related advantages of binaural listening for individuals conversing in a noisy restaurant. Young, normal-hearing adults were tested in groups of four during monaural and binaural listening conditions. Monosyllabic word stimuli were presented in a closed-set format. Speech intelligibility, response time (RT), and self-reported difficulty were measured. Results showed a speech intelligibility advantage of 17%, a 0.26 s decrease in RT, and a reduction in reported difficulty in binaural compared to monaural listening. These data suggest the binaural advantage obtained in real-world settings compares favorably with that observed in the laboratory, indicating that speech testing in laboratories approximates real-world performance.
Affiliation(s)
- Calli M Yancey: Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Mary E Barrett: Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Sandra Gordon-Salant: Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Douglas S Brungart: Walter Reed National Military Medical Center, Bethesda, Maryland 20814, USA
|
75
|
Editorial: Eriksholm Workshop on Ecologically Valid Assessments of Hearing and Hearing Devices. Ear Hear 2020; 41 Suppl 1:1S-4S. [PMID: 33105254 DOI: 10.1097/aud.0000000000000933] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
77
|
Ecological Momentary Assessment in Hearing Research: Current State, Challenges, and Future Directions. Ear Hear 2020; 41 Suppl 1:79S-90S. [DOI: 10.1097/aud.0000000000000934] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/28/2022]
|
81
|
Frameworks for Change in Hearing Research: Valuing Qualitative Methods in the Real World. Ear Hear 2020; 41 Suppl 1:91S-98S. [DOI: 10.1097/aud.0000000000000932] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
|
82
|
Conversational Interaction Is the Brain in Action: Implications for the Evaluation of Hearing and Hearing Interventions. Ear Hear 2020; 41 Suppl 1:56S-67S. [DOI: 10.1097/aud.0000000000000939] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
84
|
Physiological Monitoring and Hearing Loss: Toward a More Integrated and Ecologically Validated Health Mapping. Ear Hear 2020; 41 Suppl 1:120S-130S. [DOI: 10.1097/aud.0000000000000960] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/27/2022]
|