1
Lladó P, Hyvärinen P, Pulkki V. The impact of head-worn devices in an auditory-aided visual search task. J Acoust Soc Am 2024; 155:2460-2469. [PMID: 38578178] [DOI: 10.1121/10.0025542]
Abstract
Head-worn devices (HWDs) interfere with the natural transmission of sound from the source to the ears of the listener, worsening their localization abilities. The localization errors introduced by HWDs have been mostly studied in static scenarios, but these errors are reduced if head movements are allowed. We studied the effect of 12 HWDs on an auditory-cued visual search task, where head movements were not restricted. In this task, a visual target had to be identified in a three-dimensional space with the help of an acoustic stimulus emitted from the same location as the visual target. The results showed an increase in the search time caused by the HWDs. Acoustic measurements of a dummy head wearing the studied HWDs showed evidence of impaired localization cues, which were used to estimate the perceived localization errors using computational auditory models of static localization. These models were able to explain the search-time differences in the perceptual task, showing the influence of quadrant errors in the auditory-aided visual search task. These results indicate that HWDs have an impact on sound-source localization even when head movements are possible, which may compromise the safety and the quality of experience of the wearer.
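The impaired localization cues measured on the dummy head include the interaural time difference (ITD), which HWDs can distort. As an illustration of the cue itself (not of the paper's measurements), a first-order spherical-head model of the ITD is Woodworth's formula; the head radius below is a conventional assumed value.

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """First-order spherical-head (Woodworth) model of the interaural
    time difference for a source in the frontal horizontal plane."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# ITD grows from zero at the midline to roughly 0.66 ms at the side
for az in (0, 30, 60, 90):
    print(f"{az:3d} deg -> {woodworth_itd(az) * 1e6:6.1f} us")
```

Deviations of the measured ITD (and of the level and spectral cues) from such open-ear predictions are what static localization models use to estimate the perceived errors.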
Affiliation(s)
- Pedro Lladó
- Acoustics Lab, Department of Information and Communication Engineering, Aalto University, Espoo, 00076, Finland
- Petteri Hyvärinen
- Acoustics Lab, Department of Information and Communication Engineering, Aalto University, Espoo, 00076, Finland
- Ville Pulkki
- Acoustics Lab, Department of Information and Communication Engineering, Aalto University, Espoo, 00076, Finland

2
Kumar S, Nayak S, Kanagokar V, Pitchai Muthu AN. Does bilateral hearing aid fitting improve spatial hearing ability: a systematic review and meta-analysis. Disabil Rehabil Assist Technol 2024:1-13. [PMID: 38385777] [DOI: 10.1080/17483107.2024.2316293]
Abstract
Objectives: The ability to localize sound sources is crucial for everyday listening, as it contributes to spatial awareness and the detection of warning signs. Individuals with hearing impairment have poorer localization abilities, which further deteriorate when they are fitted with a hearing aid. Although numerous studies have addressed this phenomenon, there is a lack of systematic evidence. The aim of the current systematic review is to address the following research question: "Do behavioural measures of spatial hearing ability improve with bilateral hearing aid fitting compared to the unaided hearing condition?" Design: A comprehensive search of various electronic databases, covering the period 1965 to 2022, was conducted by two independent authors. The inclusion and exclusion criteria were formulated using the Population, Intervention, Comparison, Outcome, and Study design (PICOS) format, and the certainty of evidence was determined using the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) guidelines. Results: The comprehensive search yielded 2199 studies, of which 17 were included for qualitative synthesis and 15 for quantitative synthesis. The collected data were divided into two groups: vertical and horizontal localization. The results of the quantitative analysis indicate that localization performance was significantly better in the unaided condition in both the vertical and horizontal planes. The certainty of the evidence was judged to be moderate, meaning that "we are moderately confident in the effect estimate. The true effect is likely to be close to the estimate of the effect, but there is a possibility that it is substantially different". Conclusion: The review findings demonstrate that bilateral fitting of hearing aids did not effectively preserve spatial cues, which resulted in poorer localization performance irrespective of the plane of assessment. Review Registration: Prospective Register of Systematic Reviews (PROSPERO); CRD42022358164.
Affiliation(s)
- Sathish Kumar
- Department of Audiology and Speech-Language Pathology, Kasturba Medical College Mangalore, Manipal Academy of Higher Education, Manipal, India
- Srikanth Nayak
- Department of Audiology and Speech-Language Pathology, Yenepoya Medical College, Yenepoya University (Deemed to be University), Mangalore, India
- Vibha Kanagokar
- Department of Audiology and Speech-Language Pathology, Kasturba Medical College Mangalore, Manipal Academy of Higher Education, Manipal, India
- Arivudai Nambi Pitchai Muthu
- Department of Audiology and Speech-Language Pathology, Kasturba Medical College Mangalore, Manipal Academy of Higher Education, Manipal, India

3
Denk F, Wiederschein L, Kemper M, Husstedt H. (Why) Do Transparent Hearing Devices Impair Speech Perception in Collocated Noise? Trends Hear 2024; 28:23312165241246597. [PMID: 38629486] [PMCID: PMC11025430] [DOI: 10.1177/23312165241246597]
Abstract
Hearing aids and other hearing devices should provide the user with a benefit, for example, compensating for the effects of a hearing loss or cancelling undesired sounds. However, wearing hearing devices can also have negative effects on perception, previously demonstrated mostly for spatial hearing, sound quality, and the perception of one's own voice. When hearing devices are set to transparency, that is, provide no gain and resemble open-ear listening as closely as possible, these side effects can be studied in isolation. In the present work, we conducted a series of experiments concerned with the effect of transparent hearing devices on speech perception in a collocated speech-in-noise task. In such a situation, listening through a hearing device is not expected to have any negative effect, since both speech and noise undergo identical processing, so that the signal-to-noise ratio at the ear is not altered and spatial effects are irrelevant. However, we found a consistent hearing-device disadvantage for speech intelligibility, and similar trends for rated listening effort. Several hypotheses for the possible origin of this disadvantage were tested by including several different devices, gain settings, and stimulus levels. While effects of self-noise and nonlinear distortions were ruled out, the exact reason for the hearing-device disadvantage in speech perception remains unclear. However, a significant relation to auditory model predictions demonstrates that the speech intelligibility disadvantage is related to sound quality, and is most probably caused by insufficient equalization, artifacts of frequency-dependent signal processing, and processing delays.
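The abstract's premise, that collocated speech and noise passed through the same processing leave the signal-to-noise ratio unchanged, can be checked numerically for linear time-invariant processing of broadband signals with the same spectral shape. A minimal sketch (the signal names and FIR coefficients are illustrative, not the devices used in the study):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 16000
speech = rng.standard_normal(fs)          # surrogate "speech"
noise = 0.5 * rng.standard_normal(fs)     # collocated noise at a lower level

def snr_db(s, n):
    """Signal-to-noise ratio in dB from signal and noise waveforms."""
    return 10 * np.log10(np.sum(s**2) / np.sum(n**2))

# Identical linear processing applied to both components, as when a
# transparent device picks up the collocated mixture
h = np.array([0.3, 0.5, 0.2])             # arbitrary FIR "hearing device"
speech_out = np.convolve(speech, h)
noise_out = np.convolve(noise, h)

print(f"SNR in:  {snr_db(speech, noise):.2f} dB")
print(f"SNR out: {snr_db(speech_out, noise_out):.2f} dB")  # essentially unchanged
```

Since the SNR is preserved, any intelligibility deficit must come from elsewhere, which is what makes the reported disadvantage surprising.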
Affiliation(s)
- Florian Denk
- German Institute of Hearing Aids, Lübeck, Germany
- Markus Kemper
- German Institute of Hearing Aids, Lübeck, Germany
- Department of Psychology and Center of Brain, Behavior and Metabolism (CBBM), University of Lübeck, Lübeck, Germany

4
Ausili SA, Snapp HA. Contralateral Routing of Signal Disrupts Monaural Sound Localization. Audiol Res 2023; 13:586-599. [PMID: 37622927] [PMCID: PMC10451350] [DOI: 10.3390/audiolres13040051]
Abstract
OBJECTIVES In the absence of binaural hearing, individuals with single-sided deafness can adapt to use monaural level and spectral cues to improve their spatial hearing abilities. Contralateral routing of signal is the most common form of rehabilitation for individuals with single-sided deafness. However, little is known about how these devices affect the monaural localization cues that single-sided deafness listeners may become reliant on. This study aimed to investigate the effects of contralateral routing of signal hearing aids on localization performance in azimuth and elevation under monaural listening conditions. DESIGN Localization was assessed in 10 normal-hearing adults under three listening conditions: (1) normal hearing (NH), (2) unilateral plug (NH-plug), and (3) unilateral plug and CROS aided (NH-plug + CROS). Monaural hearing simulation was achieved by plugging the ear with E-A-Rsoft™ FX™ foam earplugs. Stimuli consisted of 150 ms high-pass noise bursts (3-20 kHz), presented in random order from fifty locations spanning ±70° in the horizontal plane and ±30° in the vertical plane at 45, 55, and 65 dBA. RESULTS In the unilaterally plugged listening condition, participants demonstrated good localization in elevation and a response bias in azimuth toward the open ear. A significant decrease in elevation performance occurred with the contralateral routing of signal device on, evidenced by significant reductions in response gain and low r² values. Additionally, performance in azimuth was further reduced for contralateral routing of signal aided localization compared to the simulated unilateral hearing loss condition. Use of the contralateral routing of signal device also resulted in reduced promptness of the listener's responses and increased response variability.
CONCLUSIONS Results suggest that contralateral routing of signal hearing aids disrupt monaural spectral and level cues, which leads to detriments in localization performance in both the horizontal and vertical dimensions. Increased reaction time and increased variability in responses suggest that localization is more effortful when wearing the contralateral routing of signal device.
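The "response gain" and r² used above as localization metrics are conventionally obtained by regressing response angles onto target angles: a slope near 1 with high r² indicates accurate localization, while a slope near 0 means responses no longer track the target. A sketch with made-up data (the target range and noise levels are illustrative, not the study's):

```python
import numpy as np

def localization_fit(target_deg, response_deg):
    """Regress response angle onto target angle; returns (gain, bias, r2)."""
    t = np.asarray(target_deg, float)
    r = np.asarray(response_deg, float)
    gain, bias = np.polyfit(t, r, 1)       # slope = response gain
    pred = gain * t + bias
    ss_res = np.sum((r - pred) ** 2)
    ss_tot = np.sum((r - r.mean()) ** 2)
    return gain, bias, 1 - ss_res / ss_tot

rng = np.random.default_rng(1)
targets = np.tile(np.linspace(-30, 30, 7), 10)     # elevation targets, repeated
good = targets + rng.normal(0, 3, targets.size)    # listener with intact cues
flat = rng.normal(0, 10, targets.size)             # collapsed elevation cues

g1, _, r2_1 = localization_fit(targets, good)
g2, _, r2_2 = localization_fit(targets, flat)
print(f"intact cues:   gain={g1:.2f}, r2={r2_1:.2f}")
print(f"degraded cues: gain={g2:.2f}, r2={r2_2:.2f}")
```

The drop from the first to the second fit mirrors the kind of gain and r² reduction the study reports when the CROS device masks monaural spectral cues.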
Affiliation(s)
- Sebastian A. Ausili
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 Nijmegen, The Netherlands
- Department of Otolaryngology, University of Miami, 1120 NW 14th Street, 5th Floor, Miami, FL 33136, USA
- Hillary A. Snapp
- Department of Otolaryngology, University of Miami, 1120 NW 14th Street, 5th Floor, Miami, FL 33136, USA

5
Stone MA, Lough M, Kühnel V, Biggins AE, Whiston H, Dillon H. Perceived Sound Quality of Hearing Aids With Varying Placements of Microphone and Receiver. Am J Audiol 2023; 32:135-149. [PMID: 36580494] [PMCID: PMC10166191] [DOI: 10.1044/2022_aja-22-00061]
Abstract
PURPOSE Perceived sound quality was compared between unaided listening and aiding with three models of hearing aid that varied in microphone position around the pinna, depth of the receiver in the auditory meatus, degree of meatal occlusion, and processing sophistication. The hearing aids were modern designs, commercially available at the time of testing. METHOD Binaural recordings of multichannel, spatially separated speech and music excerpts were made on a manikin, either open-ear or aided. Recordings were presented offline over wide-bandwidth, high-quality insert earphones. Participants listened to pairs of the recordings and made preference ratings by both clarity and externality (a proxy for "spaciousness"). Two separate groups of adults were tested: 20 with audiometrically normal hearing (NH) and 20 with mild-to-moderate sensorineural hearing loss (hearing impaired [HI]). RESULTS For ratings of speech clarity, the NH group expressed no preference between the open ear and a deeply inserted occluding aid, both of which were preferred to a low-pass filtered output of the same aid. For the music signal, a small preference emerged for the open-ear recording over that of the aid. For the HI group, clarity of the deeply inserted aid was similar to that of in-the-ear and behind-the-ear devices for speech, but worse for music. Ratings of spaciousness produced no clear result in either group, which can be attributed to study limitations and/or participant factors. CONCLUSION Based on clarity, a wide bandwidth, particularly beyond 5 kHz generally and below 300 Hz for music, is desirable, independent of hearing aid design.
Affiliation(s)
- Michael A. Stone
- Manchester Centre for Audiology and Deafness, The University of Manchester, United Kingdom
- Hearing Device Research Centre, Hearing Health, National Institute of Health Research Manchester Biomedical Research Centre, United Kingdom
- Melanie Lough
- Manchester Centre for Audiology and Deafness, The University of Manchester, United Kingdom
- Hearing Device Research Centre, Hearing Health, National Institute of Health Research Manchester Biomedical Research Centre, United Kingdom
- Helen Whiston
- Manchester Centre for Audiology and Deafness, The University of Manchester, United Kingdom
- Hearing Device Research Centre, Hearing Health, National Institute of Health Research Manchester Biomedical Research Centre, United Kingdom
- Harvey Dillon
- Manchester Centre for Audiology and Deafness, The University of Manchester, United Kingdom
- Hearing Device Research Centre, Hearing Health, National Institute of Health Research Manchester Biomedical Research Centre, United Kingdom
- Department of Linguistics, Macquarie University, Sydney, New South Wales, Australia

6
Nisha KV, Uppunda AK, Kumar RT. Spatial rehabilitation using virtual auditory space training paradigm in individuals with sensorineural hearing impairment. Front Neurosci 2023; 16:1080398. [PMID: 36733923] [PMCID: PMC9887142] [DOI: 10.3389/fnins.2022.1080398]
Abstract
Purpose The present study aimed to quantify the effects of spatial training using virtual sources on a battery of spatial acuity measures in listeners with sensorineural hearing impairment (SNHI). Methods An intervention-based time-series comparison design involving 82 participants divided into three groups was adopted. Group I (n = 27, SNHI, spatially trained) and group II (n = 25, SNHI, untrained) consisted of SNHI listeners, while group III (n = 30) had listeners with normal hearing (NH). The study was conducted in three phases. In the pre-training phase, all participants underwent a comprehensive assessment of their spatial processing abilities using a battery of tests, including spatial acuity in free-field and closed-field scenarios, tests of binaural processing abilities (interaural time difference [ITD] and interaural level difference [ILD] thresholds), and subjective ratings. While spatial acuity in the free field was assessed using a loudspeaker-based localization test, the closed-field source identification test was performed using virtual stimuli delivered through headphones. The ITD and ILD thresholds were obtained using a MATLAB psychoacoustic toolbox, while participant ratings on the spatial subsection of the speech, spatial, and qualities questionnaire in Kannada were used for the subjective ratings. Group I listeners underwent virtual auditory spatial training (VAST) following the pre-evaluation assessments. All tests were re-administered to group I listeners halfway through training (mid-training evaluation phase) and after training completion (post-training evaluation phase), whereas group II underwent these tests without any training at the same time intervals. Results and discussion Statistical analysis showed a main effect of group in all tests at the pre-training evaluation phase, with post hoc comparisons revealing group equivalency in the spatial performance of the two SNHI groups (groups I and II). The effect of VAST in group I was evident on all tests, with the localization test showing the highest predictive power for capturing VAST-related changes in Fisher discriminant analysis (FDA). In contrast, group II demonstrated no changes in spatial acuity across the measurement timelines. FDA revealed increased errors in the categorization of NH as SNHI-trained at the post-training evaluation compared to the pre-training evaluation, as the spatial performance of the latter improved with VAST in the post-training phase. Conclusion The study demonstrated positive outcomes of spatial training using VAST in listeners with SNHI. The utility of this training program can be extended to other clinical populations with spatial auditory processing deficits, such as auditory neuropathy spectrum disorder, cochlear implant use, and central auditory processing disorders.
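ITD and ILD thresholds of the kind measured above are typically obtained with an adaptive staircase; the abstract does not name the exact procedure, so the sketch below uses a generic transformed up-down (2-down-1-up) track, which converges on the 70.7%-correct point of the psychometric function. The simulated listener and its roughly 50 µs ITD threshold are hypothetical:

```python
import random

def two_down_one_up(prob_correct, start=100.0, step=0.8,
                    reversals_needed=8, seed=0):
    """Generic 2-down-1-up track: the level is multiplied by `step` after
    two consecutive correct trials and divided by it after each error.
    Returns the mean of the last six reversal levels as the threshold."""
    rng = random.Random(seed)
    level, correct_run, direction = start, 0, 0
    reversal_levels = []
    while len(reversal_levels) < reversals_needed:
        correct = rng.random() < prob_correct(level)
        if correct:
            correct_run += 1
            if correct_run < 2:
                continue                 # need two in a row to step down
            correct_run, new_dir = 0, -1
        else:
            correct_run, new_dir = 0, +1
        if direction and new_dir != direction:
            reversal_levels.append(level)  # direction change = reversal
        direction = new_dir
        level = level * step if new_dir == -1 else level / step
    return sum(reversal_levels[-6:]) / 6

# Hypothetical listener: performance rises from chance (0.5) with ITD size
def listener(itd_us, threshold=50.0, slope=4.0):
    return 0.5 + 0.5 / (1.0 + (threshold / itd_us) ** slope)

print(f"estimated ITD threshold: {two_down_one_up(listener):.1f} us")
```

The same track can estimate an ILD threshold by expressing `level` in dB instead of microseconds.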
7
Sheffield SW, Wheeler HJ, Brungart DS, Bernstein JGW. The Effect of Sound Localization on Auditory-Only and Audiovisual Speech Recognition in a Simulated Multitalker Environment. Trends Hear 2023; 27:23312165231186040. [PMID: 37415497] [PMCID: PMC10331332] [DOI: 10.1177/23312165231186040]
Abstract
Information regarding sound-source spatial location provides several speech-perception benefits, including auditory spatial cues for perceptual talker separation and localization cues to face the talker to obtain visual speech information. These benefits have typically been examined separately. A real-time processing algorithm for sound-localization degradation (LocDeg) was used to investigate how spatial-hearing benefits interact in a multitalker environment. Normal-hearing adults performed auditory-only and auditory-visual sentence recognition with target speech and maskers presented from loudspeakers at -90°, -36°, 36°, or 90° azimuths. For auditory-visual conditions, one target and three masking talker videos (always spatially separated) were rendered virtually in rectangular windows at these locations on a head-mounted display. Auditory-only conditions presented blank windows at these locations. Auditory target speech (always spatially aligned with the target video) was presented in co-located speech-shaped noise (experiment 1) or with three co-located or spatially separated auditory interfering talkers corresponding to the masker videos (experiment 2). In the co-located conditions, the LocDeg algorithm did not affect auditory-only performance but reduced target orientation accuracy, reducing auditory-visual benefit. In the multitalker environment, two spatial-hearing benefits were observed: perceptually separating competing speech based on auditory spatial differences and orienting to the target talker to obtain visual speech cues. These two benefits were additive, and both were diminished by the LocDeg algorithm. Although visual cues always improved performance when the target was accurately localized, there was no strong evidence that they provided additional assistance in perceptually separating co-located competing speech. These results highlight the importance of sound localization in everyday communication.
Affiliation(s)
- Sterling W. Sheffield
- Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville, FL, USA
- Harley J. Wheeler
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, MN, USA
- Douglas S. Brungart
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD, USA
- Joshua G. W. Bernstein
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD, USA

8
Voss SC, Pichora-Fuller MK, Ishida I, Pereira A, Seiter J, El Guindi N, Kuehnel V, Qian J. Evaluating the benefit of hearing aids with motion-based beamformer adaptation in a real-world setup. Int J Audiol 2021; 61:642-654. [PMID: 34369262] [DOI: 10.1080/14992027.2021.1948120]
Abstract
OBJECTIVE Conventional directional hearing aid microphone technology may interfere with listening intentions when the talker and listener walk side by side. The purpose of the current study was to evaluate hearing aids that use a motion sensor to address listening needs during walking. DESIGN Each participant completed two walks in randomised order, one with each of two hearing aid programs: (1) conventional beamformer adaptation, which activated an adaptive, multiband beamformer in loud environments, and (2) motion-based beamformer adaptation, which activated a pinna-mimicking microphone setting when walking was detected. Participants walked along a pre-defined track and completed tasks assessing speech understanding and environmental awareness. STUDY SAMPLE Participants were 22 older adults with moderate-to-severe hearing loss and experience using hearing aids. RESULTS More participants preferred the motion-based than the conventional beamformer adaptation for speech understanding, environmental awareness, overall listening, and sound quality (p < 0.05). Measures of speech understanding (p < 0.01) and localisation of sound stimuli (p < 0.05) were significantly better with motion-based than with conventional beamformer adaptation. CONCLUSIONS The results suggest that hearing aid users can benefit from beamforming that uses motion-sensor input to adapt the signal processing according to the user's activity, although the real-world setup of this study had limitations.
Affiliation(s)
- Solveig C Voss
- Innovation Centre Toronto, Sonova Canada Inc, Mississauga, Canada
- Ieda Ishida
- Innovation Centre Toronto, Sonova Canada Inc, Mississauga, Canada
- April Pereira
- Department of Psychology, University of Toronto, Mississauga, Canada
- Department of Psychology, University of Waterloo, Waterloo, Canada
- Jinyu Qian
- Innovation Centre Toronto, Sonova Canada Inc, Mississauga, Canada
- Department of Communicative Disorders and Sciences, University at Buffalo, State University of New York, Buffalo, NY, USA

9
Alam F, Usman M, Alkhammash HI, Wajid M. Improved Direction-of-Arrival Estimation of an Acoustic Source Using Support Vector Regression and Signal Correlation. Sensors 2021; 21:2692. [PMID: 33920360] [PMCID: PMC8070369] [DOI: 10.3390/s21082692]
Abstract
The direction of arrival (DoA) of an acoustic source can be estimated with a uniform linear array using classical techniques such as generalized cross-correlation, beamforming, and subspace methods. However, these methods require a search over the angular space and also have a higher angular error at the end-fire. In this paper, we propose the use of regression techniques to improve the results of DoA estimation at all angles, including the end-fire. The proposed methodology employs curve-fitting on the received multi-channel microphone signals, which, when applied in tandem with support vector regression (SVR), provides a better estimate of DoA than the conventional techniques and other polynomial regression techniques. A multilevel regression technique is also proposed, which further improves the estimation accuracy at the end-fire by applying linear regression to the results obtained from SVR. The techniques employed here yielded an overall 63% improvement over the classical generalized cross-correlation technique.
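The core regression idea can be sketched with scikit-learn's SVR: learn the mapping from an inter-channel time delay to the arrival angle. This is a simplified stand-in, not the paper's pipeline (which fits curves to multi-channel signals and adds a multilevel linear-regression stage); the two-microphone geometry, noise level, and far-field delay model below are assumptions.

```python
import numpy as np
from sklearn.svm import SVR

c = 343.0   # speed of sound (m/s)
d = 0.2     # microphone spacing (m), assumed geometry

def tdoa_us(angle_deg):
    """Far-field time delay between two microphones, in microseconds."""
    return (d / c) * np.sin(np.radians(angle_deg)) * 1e6

# Training data: known angles paired with slightly noisy delay estimates
rng = np.random.default_rng(0)
angles = np.linspace(-90, 90, 181)
delays = tdoa_us(angles) + rng.normal(0, 2, angles.size)

model = SVR(kernel="rbf", C=100.0, epsilon=0.5)
model.fit(delays.reshape(-1, 1), angles)

# Estimate the DoA of an unseen source at 40 degrees
est = model.predict([[tdoa_us(40.0)]])[0]
print(f"estimated DoA: {est:.1f} deg")
```

Because the delay-to-angle mapping flattens near ±90°, the same noise produces larger angular errors at the end-fire, which is the regime the paper's multilevel stage targets.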
Affiliation(s)
- Faisal Alam
- Department of Computer Engineering, Z.H.C.E.T., Aligarh Muslim University, Aligarh 202002, India
- Mohammed Usman
- Department of Electrical Engineering, King Khalid University, Abha 61411, Saudi Arabia
- Hend I. Alkhammash
- Department of Electrical Engineering, College of Engineering, Taif University, Taif 21944, Saudi Arabia
- Mohd Wajid
- Department of Electronics Engineering, Z.H.C.E.T., Aligarh Muslim University, Aligarh 202002, India
- Correspondence: ; Tel.: +91-571-270-0922 (ext. 3147)

10
Jactat B. Mechanics of the Peripheral Auditory System: Foundations for Embodied Listening Using Dynamic Systems Theory and the Coupling Devices as a Metaphor. F1000Res 2021; 10:193. [PMID: 34249336] [PMCID: PMC8258707] [DOI: 10.12688/f1000research.51125.2]
Abstract
Current approaches to listening are built on standard cognitive science, which considers the brain the locus of all cognitive activity. This work investigates listening as a phenomenon occurring within a brain, a body (embodiment), and an environment (situatedness). Drawing on insights from physiology, acoustics, and audiology, this essay presents listening as an interdependent brain-body-environment construct grounded in dynamic systems theory. Coupling, self-organization, and attractors are the central characteristics of dynamic systems. This article reviews the first of these aspects in order to develop a fuller understanding of how embodied auditory perception occurs. It introduces the mind-body problem before reviewing dynamic systems theory and exploring the notion of coupling in human hearing by way of current and original analogies drawn from engineering. It posits that the current use of the Watt governor as an analogy for coupling is too simplistic to account for the coupling phenomena in the human ear. In light of this review of the physiological characteristics of the peripheral auditory system, coupling in hearing appears more variegated than originally thought and accounts for the diversity of perception among individuals, a cause of individual variance in how the mind emerges, which in turn affects academic performance. Understanding the constraints and affordances of the physical ear with regard to incoming sound supports the embodied listening paradigm.
Affiliation(s)
- Bruno Jactat
- Faculty of Humanities and Social Sciences, University of Tsukuba, Tsukuba, Ibaraki, 305-8577, Japan

11
Bell L, Peng ZE, Pausch F, Reindl V, Neuschaefer-Rube C, Fels J, Konrad K. fNIRS Assessment of Speech Comprehension in Children with Normal Hearing and Children with Hearing Aids in Virtual Acoustic Environments: Pilot Data and Practical Recommendations. Children (Basel) 2020; 7:219. [PMID: 33171753] [PMCID: PMC7695031] [DOI: 10.3390/children7110219]
Abstract
The integration of virtual acoustic environments (VAEs) with functional near-infrared spectroscopy (fNIRS) offers novel avenues to investigate behavioral and neural processes of speech-in-noise (SIN) comprehension in complex auditory scenes. Particularly in children with hearing aids (HAs), the combined application might offer new insights into the neural mechanism of SIN perception in simulated real-life acoustic scenarios. Here, we present first pilot data from six children with normal hearing (NH) and three children with bilateral HAs to explore the potential applicability of this novel approach. Children with NH received a speech recognition benefit from low room reverberation and target-distractors' spatial separation, particularly when the pitch of the target and the distractors was similar. On the neural level, the left inferior frontal gyrus appeared to support SIN comprehension during effortful listening. Children with HAs showed decreased SIN perception across conditions. The VAE-fNIRS approach is critically compared to traditional SIN assessments. Although the current study shows that feasibility still needs to be improved, the combined application potentially offers a promising tool to investigate novel research questions in simulated real-life listening. Future modified VAE-fNIRS applications are warranted to replicate the current findings and to validate its application in research and clinical settings.
Affiliation(s)
- Laura Bell
- Child Neuropsychology Section, Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, Medical Faculty, RWTH Aachen University, 52074 Aachen, Germany
- Z. Ellen Peng
- Teaching and Research Area of Medical Acoustics, Institute of Technical Acoustics, RWTH Aachen University, 52074 Aachen, Germany
- Waisman Center, University of Wisconsin-Madison, Madison, WI 53705, USA
- Florian Pausch
- Teaching and Research Area of Medical Acoustics, Institute of Technical Acoustics, RWTH Aachen University, 52074 Aachen, Germany
- Vanessa Reindl
- Child Neuropsychology Section, Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, Medical Faculty, RWTH Aachen University, 52074 Aachen, Germany
- JARA-Brain Institute II, Molecular Neuroscience and Neuroimaging, RWTH Aachen & Research Centre Juelich, 52428 Juelich, Germany
- Christiane Neuschaefer-Rube
- Clinic of Phoniatrics, Pedaudiology, and Communication Disorders, Medical Faculty, RWTH Aachen University, 52074 Aachen, Germany
- Janina Fels
- Teaching and Research Area of Medical Acoustics, Institute of Technical Acoustics, RWTH Aachen University, 52074 Aachen, Germany
- Kerstin Konrad
- Child Neuropsychology Section, Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, Medical Faculty, RWTH Aachen University, 52074 Aachen, Germany
- JARA-Brain Institute II, Molecular Neuroscience and Neuroimaging, RWTH Aachen & Research Centre Juelich, 52428 Juelich, Germany

12
Pinna-Imitating Microphone Directionality Improves Sound Localization and Discrimination in Bilateral Cochlear Implant Users. Ear Hear 2020; 42:214-222. [PMID: 32701730] [PMCID: PMC7757747] [DOI: 10.1097/aud.0000000000000912]
Abstract
OBJECTIVES To compare the sound-source localization, discrimination, and tracking performance of bilateral cochlear implant users with omnidirectional (OMNI) and pinna-imitating (PI) microphone directionality modes. DESIGN Twelve experienced bilateral cochlear implant users participated in the study. Their audio processors were fitted with two different programs featuring either the OMNI or PI mode. Each subject performed static and dynamic sound field spatial hearing tests in the horizontal plane. The static tests consisted of an absolute sound localization test and a minimum audible angle test, measured at eight azimuth directions. Dynamic sound tracking ability was evaluated by having the subject indicate the direction of a stimulus moving along either of two circular paths around the subject. RESULTS PI mode led to statistically significant improvements in sound localization and discrimination. For static sound localization, the greatest benefit was a reduction in the number of front-back confusions: the front-back confusion rate fell from 47% with OMNI mode to 35% with PI mode (p = 0.03). Discriminating sound sources directly to the sides (90° and 270° azimuth) was possible only with PI mode; the minimum audible angle averaged across these two positions decreased from 75.5° to 37.7° when PI mode was used (p < 0.001). Furthermore, a non-significant trend towards improved tracking of moving sound sources was observed for both trajectories tested (p = 0.34 and p = 0.27). CONCLUSIONS Our results demonstrate that PI mode can lead to improved spatial hearing performance in bilateral cochlear implant users, mainly as a consequence of improved front-back discrimination.

13. Gorodensky JH, Alemu RZ, Gill SS, Sandor MT, Papsin BC, Cushing SL, Gordon KA. Binaural hearing is impaired in children with hearing loss who use bilateral hearing aids. J Acoust Soc Am 2019; 146:4352. [PMID: 31893744] [DOI: 10.1121/1.5139212]
Abstract
This paper asked whether children fitted with bilateral hearing aids (BHAs) develop normal perception of the binaural cues that form the basis of spatial hearing. Data from children with BHAs (n = 26, age = 12.6 ± 2.84 years) were compared to data from a control group (n = 12, age = 12.36 ± 2.83 years). Stimuli were 250 Hz click-trains of 36 ms and a 40 ms consonant-vowel /da/, presented at 1 Hz through ER3A insert-earphones unilaterally or bilaterally. Bilateral stimuli were presented at different interaural level difference (ILD) and interaural timing difference (ITD) conditions. Participants indicated whether the sound came from the left or right side (lateralization) or whether one sound or two could be heard (binaural fusion). BHA children lateralized ILDs similarly to the control group but had impaired lateralization of ITDs. Longer response times relative to controls suggest that lateralization of ITDs was challenging for children with BHAs. Most, but not all, of the BHA group were able to fuse click and speech sounds similarly to controls; those unable to fuse showed particularly poor ITD lateralization. Results indicate that ITD perception is abnormal in children using BHAs, pointing to persistent effects of hearing loss that are not remediated by present clinical rehabilitation protocols.
Affiliation(s)
- Jonah H Gorodensky, Robel Z Alemu, Simrat S Gill, Mark T Sandor, Blake C Papsin, Sharon L Cushing, Karen A Gordon: Archie's Cochlear Implant Laboratory, The Hospital for Sick Children, Toronto, Ontario M5G 1X8, Canada

14. Denk F, Ewert SD, Kollmeier B. On the limitations of sound localization with hearing devices. J Acoust Soc Am 2019; 146:1732. [PMID: 31590539] [DOI: 10.1121/1.5126521]
Abstract
Limited abilities to localize sound sources and other reduced spatial hearing capabilities remain a largely unsolved issue in hearing devices such as hearing aids or hear-through headphones. This study therefore addressed the impact of the microphone location, signal bandwidth, and different equalization approaches, as well as processing delays in superposition with direct sound leaking through a vent. A localization experiment was performed with normal-hearing subjects using individual binaural synthesis to separately assess the above-mentioned potential limiting issues for localization in the horizontal and vertical planes with linear hearing devices. To this end, listening through hearing devices was simulated utilizing transfer functions for six different microphone locations, measured both individually and on a dummy head. Results show that the microphone location is the governing factor for localization abilities with linear hearing devices: non-optimal microphone locations have a disruptive influence on localization in the vertical domain and also affect lateral sound localization. Processing delays cause additional detrimental effects for lateral sound localization, and diffuse-field equalization to the open-ear response leads to better localization performance than free-field equalization. Stimuli derived from dummy head measurements are unsuited for evaluating individual localization abilities with a hearing device.
Affiliation(s)
- Florian Denk, Stephan D Ewert, Birger Kollmeier: Medizinische Physik and Cluster of Excellence "Hearing4all," Universität Oldenburg, Küpkersweg 74, 26129 Oldenburg, Germany

15. Pastore MT, Natale S, Yost W, Dorman MF. Head Movements Allow Listeners Bilaterally Implanted With Cochlear Implants to Resolve Front-Back Confusions. Ear Hear 2019; 39:1224-1231. [PMID: 29664750] [PMCID: PMC6191386] [DOI: 10.1097/aud.0000000000000581]
Abstract
OBJECTIVES We report on the ability of patients fitted with bilateral cochlear implants (CIs) to distinguish the front-back location of sound sources both with and without head movements. At issue was (i) whether CI patients are more prone to front-back confusions than normal-hearing listeners for wideband, high-frequency stimuli; and (ii) whether CI patients can utilize dynamic binaural difference cues, in tandem with their own head rotation, to resolve these front-back confusions. Front-back confusions offer a binary metric for gaining insight into CI patients' ability to localize sound sources under dynamic conditions not generally measured in laboratory settings, where both the sound source and patient are static. DESIGN Three-second Gaussian noise samples were bandpass filtered to 2 to 8 kHz and presented from one of six loudspeaker locations spaced 60° apart, surrounding the listener. Perceived sound-source location for seven listeners bilaterally implanted with CIs was tested under conditions where the patient faced forward and did not move their head, and under conditions where they were encouraged to moderately rotate their head. The same conditions were repeated for five of the patients with one implant turned off (the implant at the better ear remained on). A control group of normal-hearing listeners was also tested as a baseline for comparison. RESULTS All seven CI patients demonstrated a high rate of front-back confusions when their head was stationary (41.9%). The proportion of front-back confusions was reduced to 6.7% when these patients were allowed to rotate their head within a range of approximately ±30°. When only one implant was turned on, listeners' localization acuity suffered greatly, and head movement or the lack thereof made little difference to performance.
CONCLUSIONS Bilateral implantation can offer CI listeners the ability to track dynamic auditory spatial difference cues and compare these changes to changes in their own head position, resulting in a reduced rate of front-back confusions. This suggests that, for these patients, estimates of auditory acuity based solely on static laboratory settings may underestimate their real-world localization abilities.

16. Eurich B, Klenzner T, Oehler M. Impact of room acoustic parameters on speech and music perception among participants with cochlear implants. Hear Res 2019; 377:122-132. [PMID: 30933704] [DOI: 10.1016/j.heares.2019.03.012]
Abstract
OBJECTIVES Besides numerous other factors, the listening experience with cochlear implants is substantially impaired by room acoustics. Even for persons without hearing impairment, the perception of auditory scenes, for example with respect to speech intelligibility, acoustic quality, or audibility, is considerably influenced by room acoustics; for CI users, complex listening environments usually entail severe perceptual losses. The aim of the present study was to determine room acoustic criteria that particularly influence speech pleasantness for CI users. DESIGN Speech material from the Oldenburg Sentence Test (Oldenburger Satztest, OLSA) as well as basic music material (major and minor triads) was auralized using the software Auratorium, which allows auralization of simulated rooms. The rooms constructed for the speech stimuli were based on the standard DIN 18041:2016-03 on acoustic quality in rooms, the binding standard referred to by room acoustic consultants in Germany, which also includes specifications for inclusive applications in schools. For the music perception tests, two typical concert halls of different sizes were modelled. The auralized test stimuli were presented unilaterally to 10 CI users via their auxiliary input as well as to 18 participants with typical hearing via headphones (control group). Speech pleasantness was evaluated using modified MUSHRA tests. For music perception, chord discrimination was tested using paired comparisons. RESULTS CI users showed a strong preference for small source-to-listener distances, but no significant preference for room acoustic attenuation exceeding the level recommended for inclusive applications in schools. Analyses of the energy-time structures suggested that a dense concentration of early reflections has a beneficial impact on CI listeners' pleasantness ratings.
Music materials were distinguished most consistently without any room acoustic influence, whereas any room acoustic influence led to performance close to chance level, probably due to spectral smearing caused by reverberation. CONCLUSIONS These results suggest that, in terms of speech pleasantness for CI users, source-to-listener distance is a more influential parameter than room attenuation beyond the German standard recommendation. Reflections from which CI users can benefit seem to occur much earlier than those from which NH listeners benefit. Future studies on chord discrimination with respect to room acoustics are needed.
Affiliation(s)
- Bernhard Eurich: Institute for Sound and Vibration Engineering, University of Applied Sciences Düsseldorf, Düsseldorf, Germany
- Thomas Klenzner: Hörzentrum, Dept. Otorhinolaryngology, Head & Neck Surgery, University Hospital Düsseldorf, Heinrich-Heine-Universität, Düsseldorf, Germany
- Michael Oehler: Music & Media Technology Department, Osnabrück University, Osnabrück, Germany

17. Jia M, Wu Y, Bao C, Wang J. Multiple Sound Sources Localization with Frame-by-Frame Component Removal of Statistically Dominant Source. Sensors 2018; 18:3613. [PMID: 30356014] [PMCID: PMC6264069] [DOI: 10.3390/s18113613]
Abstract
Localization of multiple sound sources is a hot topic in audio signal processing and is widely utilized in many application areas. This paper proposes a multiple-sound-source localization method based on a statistically dominant source component removal (SDSCR) algorithm using a soundfield microphone. The existence of a statistically weak source (SWS) among the soundfield microphone signals is validated by statistical analysis. The SDSCR algorithm, which combines intra-frame and inter-frame statistically dominant source (SDS) discrimination, is designed to remove the SDS component while preserving the SWS component, thereby resolving the degradation of localization accuracy caused by the presence of the SWS. The proposed method is objectively evaluated in simulated and real environments. The results show that it outperforms the conventional SSZ-based method in both source localization and source counting.
Affiliation(s)
- Maoshen Jia, Yuxuan Wu, Changchun Bao: Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Jing Wang: School of Information and Electronic, Beijing Institute of Technology, Beijing 100081, China

18. Denk F, Ewert SD, Kollmeier B. Spectral directional cues captured by hearing device microphones in individual human ears. J Acoust Soc Am 2018; 144:2072. [PMID: 30404454] [DOI: 10.1121/1.5056173]
Abstract
Spatial hearing abilities with hearing devices ultimately depend on how well acoustic directional cues are captured by the microphone(s) of the device. A comprehensive objective evaluation of monaural spectral directional cues captured at 9 microphone locations integrated in 5 hearing device styles is presented, utilizing a recent database of head-related transfer functions (HRTFs) that includes data from 16 human and 3 artificial ear pairs. Differences between HRTFs to the eardrum and to the hearing device microphones were assessed by descriptive analyses and quantitative metrics, and compared to differences between individual ears. Directional information exploited for vertical sound localization was evaluated by means of computational models. Directional information at microphone locations inside the pinna is significantly biased and qualitatively poorer compared to locations in the ear canal; behind-the-ear microphones capture almost no directional cues. These errors are expected to impair vertical sound localization, even if the new cues were optimally mapped to locations. Differences between HRTFs to the eardrum and to the hearing device microphones are qualitatively different from between-subject differences and can be described as a partial destruction rather than an alteration of relevant cues, although spectral difference metrics produce similar results. Dummy heads do not fully reflect the results with individual subjects.
Affiliation(s)
- Florian Denk, Stephan D Ewert, Birger Kollmeier: Medizinische Physik and Cluster of Excellence "Hearing4all," University of Oldenburg, Küpkersweg 74, 26129 Oldenburg, Germany

19.
Abstract
OBJECTIVE Compared to basic-feature hearing aids, premium-feature hearing aids have more advanced technologies and sophisticated features. The objective of this study was to explore the difference between premium-feature and basic-feature hearing aids in horizontal sound localization in both laboratory and daily life environments. We hypothesized that premium-feature hearing aids would yield better localization performance than basic-feature hearing aids. DESIGN Exemplars of premium-feature and basic-feature hearing aids from two major manufacturers were evaluated. Forty-five older adults (mean age 70.3 years) with essentially symmetrical mild to moderate sensorineural hearing loss were bilaterally fitted with each of the four pairs of hearing aids. Each pair of hearing aids was worn during a 4-week field trial and then evaluated using laboratory localization tests and a standardized questionnaire. Laboratory localization tests were conducted in a sound-treated room with a 360°, 24-loudspeaker array. Test stimuli were high frequency and low frequency filtered short sentences. The localization test in quiet was designed to assess the accuracy of front/back localization, while the localization test in noise was designed to assess the accuracy of locating sound sources throughout a 360° azimuth in the horizontal plane. RESULTS Laboratory data showed that unaided localization was not significantly different from aided localization when all hearing aids were combined. Questionnaire data showed that aided localization was significantly better than unaided localization in everyday situations. Regarding the difference between premium-feature and basic-feature hearing aids, laboratory data showed that, overall, the premium-feature hearing aids yielded more accurate localization than the basic-feature hearing aids when high-frequency stimuli were used, and the listening environment was quiet. 
Otherwise, the premium-feature and basic-feature hearing aids yielded essentially the same performance in other laboratory tests and in daily life. The findings were consistent for both manufacturers. CONCLUSIONS Laboratory tests for two of six major manufacturers showed that premium-feature hearing aids yielded better localization performance than basic-feature hearing aids in one out of four laboratory conditions. There was no difference between the two feature levels in self-reported everyday localization. Effectiveness research with different hearing aid technologies is necessary, and more research with other manufacturers' products is needed. Furthermore, these results confirm previous observations that research findings in laboratory conditions might not translate to everyday life.

20. Brimijoin WO, Akeroyd MA. The Effects of Hearing Impairment, Age, and Hearing Aids on the Use of Self-Motion for Determining Front/Back Location. J Am Acad Audiol 2018; 27:588-600. [PMID: 27406664] [DOI: 10.3766/jaaa.15101]
Abstract
BACKGROUND There are two cues that listeners use to disambiguate the front/back location of a sound source: high-frequency spectral cues associated with the head and pinnae, and self-motion-related binaural cues. The use of these cues can be compromised in listeners with hearing impairment and users of hearing aids. PURPOSE To determine how age, hearing impairment, and the use of hearing aids affect a listener's ability to determine front from back based on both self-motion and spectral cues. RESEARCH DESIGN We used a previously published front/back illusion: signals whose physical source location is rotated around the head at twice the angular rate of the listener's head movements are perceptually located in the opposite hemifield from where they physically are. In normal-hearing listeners, the strength of this illusion decreases as a function of low-pass filter cutoff frequency; this is the result of a conflict between spectral cues and dynamic binaural cues for sound-source location. The illusion was used as an assay of self-motion processing in listeners with hearing impairment and users of hearing aids. STUDY SAMPLE We recruited 40 hearing-impaired participants, with an average age of 62 yr. The data for three listeners were discarded because they did not move their heads enough during the experiment. DATA COLLECTION AND ANALYSIS Listeners sat at the center of a ring of 24 loudspeakers, turned their heads back and forth, and used a wireless keypad to report the front/back location of statically presented signals and of dynamically moving signals with illusory locations. Front/back accuracy for static signals, the strength of front/back illusions, and the minimum audible movement angle were measured for each listener in each condition, both aided and unaided. RESULTS Hearing-impaired listeners were less accurate at front/back discrimination in both static and illusory conditions.
Neither static nor illusory conditions were affected by high-frequency content. Hearing aids had heterogeneous effects from listener to listener, but on average, independent of other factors, listeners wearing aids exhibited a spectrally dependent increase in "front" responses: the more high-frequency energy in the signal, the more likely they were to report it as coming from the front. CONCLUSIONS Hearing impairment was associated with a decrease in the accuracy of self-motion processing for both static and moving signals. Hearing aids may not always reproduce dynamic self-motion-related cues with sufficient fidelity to allow reliable front/back discrimination.
Affiliation(s)
- W Owen Brimijoin: MRC/CSO Institute of Hearing Research (Scottish Section), Glasgow Royal Infirmary, Glasgow, UK

21. Mauger SJ, Jones M, Nel E, Del Dot J. Clinical outcomes with the Kanso™ off-the-ear cochlear implant sound processor. Int J Audiol 2017; 56:267-276. [DOI: 10.1080/14992027.2016.1265156]
Affiliation(s)
- Esti Nel: Cochlear Limited, Sydney, Australia

22. Kim H, Nam KW, Kim J, Yook S, Jang DP, Kim IY. Effect of variations in microphone cover shape and wearing position on the performance of a hearing-support device mounted on ear: simulation study. J Mech Med Biol 2016. [DOI: 10.1142/s0219519416500482]
Abstract
Hearing-support (HS) devices are utilized to support the damaged hearing ability of persons with sensorineural hearing impairment, and morphological and positional factors can affect their actual performance. However, there have been few studies that experimentally demonstrated the effects of variations in such design factors on the frequency response of the device. In this study, the effects of design variations in the shape of the microphone cover on the housing and of the wearing position of the device on the ear, both on the input frequency response of the device and on the performance of an embedded beamforming algorithm, were investigated in computer simulation using a human upper-body model, a hearing aid housing model, and an acoustic environment model. Experimental results showed that the implemented simulator could reproduce the actual acoustic situations (differences of less than 5 dB in the audible frequency range) and that the response patterns of both the device and the beamforming algorithm varied with the shape of the microphone cover and the mounting position of the device on the ear. These results demonstrate the necessity of additional design and algorithm fine-tuning of each HS device to improve its actual speech enhancement performance.
Affiliation(s)
- Heepyung Kim, Kyoung Won Nam, Dong Pyo Jang, In Young Kim: Department of Biomedical Engineering, Hanyang University, Seoul, Korea
- Jinryoul Kim: Department of Otorhinolaryngology, Samsung Medical Center, Seoul, Korea
- Sunhyun Yook: Department of Medical Device Management & Research, Samsung Advanced Institute for Health Science & Technology, Sungkyunkwan University, Seoul, Korea

23. Weller T, Buchholz JM, Best V. Auditory masking of speech in reverberant multi-talker environments. J Acoust Soc Am 2016; 139:1303-1313. [PMID: 27036267] [DOI: 10.1121/1.4944568]
Abstract
Auditory localization research needs to be performed in more realistic testing environments to better capture the real-world abilities of listeners and their hearing devices. However, there are significant challenges involved in controlling the audibility of relevant target signals in realistic environments. To understand the important aspects influencing target detection in more complex environments, a reverberant room with a multi-talker background was simulated and presented to the listener in a loudspeaker-based virtual sound environment. Masked thresholds of a short speech stimulus were measured adaptively for multiple target source locations in this scenario. It was found that both distance and azimuth of the target source have a strong influence on the masked threshold. Subsequently, a functional model was applied to analyze the factors influencing target detectability. The model is comprised of an auditory front-end that generates an internal representation of the stimuli in both ears, followed by a decision device combining d' information across time, frequency and both ears. The model predictions of the masked thresholds were overall in very good agreement with the experimental results. An analysis of the model processes showed that head shadow effects, signal spectrum, and reverberation have a strong impact on target audibility in the given scenario.
Affiliation(s)
- Tobias Weller, Jörg M Buchholz: Department of Linguistics, Macquarie University, New South Wales 2109, Australia
- Virginia Best: Boston University Hearing Research Center, Boston, Massachusetts 02215, USA

24.

25. Speech Intelligibility in Noise With a Single-Unit Cochlear Implant Audio Processor. Otol Neurotol 2015; 36:1197-1202. [DOI: 10.1097/mao.0000000000000775]
26. Xu J, Han W. Improvement of Adult BTE Hearing Aid Wearers' Front/Back Localization Performance Using Digital Pinna-Cue Preserving Technologies: An Evidence-Based Review. Korean J Audiol 2014; 18:97-104. [PMID: 25558403] [PMCID: PMC4280762] [DOI: 10.7874/kja.2014.18.3.97]
Abstract
This systematic review evaluated the impact of digital pinna-cue preserving technologies (PPT) on front/back sound localization for adult hearing aid users. Two peer-reviewed studies and two non-peer-reviewed studies were included, and both lab-based and self-report outcomes were assessed. The overall findings suggested that PPT was superior to omnidirectional and full directional settings in a relatively quiet, well-controlled laboratory environment but not in the real world. However, individual differences observed in self-report measures suggested that PPT is potentially beneficial to certain hearing aid users. PPT candidacy was discussed, and the importance of a pre-fitting interview/consultation was emphasized to assist clinicians in making a solid, evidence-based, and cost-effective decision when prescribing hearing aids to adults with hearing impairment.
Affiliation(s)
- Jingjing Xu: School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA
- Woojae Han: Division of Speech Pathology and Audiology, Research Institute of Audiology and Speech Pathology, College of Natural Sciences, Hallym University, Chuncheon, Korea

27. Kuk F, Korhonen P, Lau C, Keenan D, Norgaard M. Evaluation of a pinna compensation algorithm for sound localization and speech perception in noise. Am J Audiol 2014; 22:84-93. [PMID: 23275583] [DOI: 10.1044/1059-0889(2012/12-0043)]
Abstract
PURPOSE This study was designed to evaluate the effect of a pinna compensation (PC) algorithm on localization performance in the horizontal plane and speech intelligibility in noise. METHOD Nine and 18 experienced hearing aid users with bilaterally symmetrical sensorineural hearing loss participated in the localization study and the speech-in-noise study, respectively. Performance was evaluated unaided, aided with a behind-the-ear (BTE) hearing aid with an omnidirectional microphone (Omni), and aided with the same hearing aid with the PC algorithm (Omni+PC). Localization performance was measured using 12 loudspeakers spaced 30° apart on a horizontal plane. Speech-in-noise performance was measured with speech presented from 0° or 180°. A single-blinded, repeated measures design was used. RESULTS Significant improvement in localization accuracy was found when comparing the Omni+PC condition to the Omni condition. Also, the Omni+PC condition improved the signal-to-noise ratio by 2.4 dB when compared to the Omni condition when speech was presented from the front in a diffuse noise background. CONCLUSION Use of the PC algorithm improved localization on the horizontal plane and speech-in-noise performance. These results support use of the PC algorithm in BTE hearing aid fittings.
28
Jensen NS, Neher T, Laugesen S, Johannesson RB, Kragelund L. Laboratory and field study of the potential benefits of pinna cue-preserving hearing aids. Trends Amplif 2013; 17:171-88. [PMID: 24216771 DOI: 10.1177/1084713813510977] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.0] [Indexed: 11/16/2022]
Abstract
The potential benefits of preserving high-frequency spectral cues created by the pinna in hearing-aid fittings were investigated in a combined laboratory and field test. In a single-blind crossover design, two settings of an experimental hearing aid were compared. One setting was characterized by a pinna cue-preserving microphone position, whereas the other was characterized by a microphone position not preserving pinna cues. Participants were allowed 1 month of acclimatization to each setting before measurements of localization and spatial release from speech-on-speech masking were completed in the laboratory. Real-world experience with the two settings was assessed by means of questionnaires. Seventeen participants with mild to moderate sensorineural hearing impairments completed the study. An inconsistent pinna cue benefit pattern was observed across the outcome measures. In the localization test, the pinna cue-preserving setting provided a significant mean reduction of 22° in the root mean square (RMS) error in the front-back dimension, with 13 of the 17 participants showing a reduction of at least 15°. No significant mean difference in RMS error between settings was observed in the left-right dimension. No significant differences between settings were observed in the spatial-unmasking test conditions. The questionnaire data indicated a small, but nonsignificant, benefit of the pinna cue-preserving setting in certain real-life situations, which corresponded with a general preference for that setting. No significant real-life localization benefit was observed. The results suggest that preserving pinna cues can offer benefit in some conditions for individual hearing-aid users with mild to moderate hearing loss and is unlikely to harm performance for others.
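The front-back RMS error reported above treats responses in a folded dimension where left and right mirror images are collapsed together, so only front-versus-back displacement counts. A sketch of one common way to compute such a measure (a hypothetical scoring scheme assuming azimuths in degrees with 0° at the front; the paper's exact analysis may differ):

```python
import math

def front_back_angle_deg(azimuth_deg):
    """Fold an azimuth onto the front-back dimension: 0 = straight ahead,
    180 = straight behind. Left/right mirror images collapse together,
    so 30 and 330 both map to 30, and 150 and 210 both map to 150."""
    a = azimuth_deg % 360.0
    return min(a, 360.0 - a)

def front_back_rms_error_deg(trials):
    """RMS error between target and response in the front-back dimension only."""
    errors = [front_back_angle_deg(r) - front_back_angle_deg(t) for t, r in trials]
    return math.sqrt(sum(e * e for e in errors) / len(errors))
```

Under this folding, a pure left-right mistake (30° reported as 330°) contributes no front-back error, while a front-back confusion (30° reported as 150°) contributes the full 120°.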
29
Pavlidi D, Griffin A, Puigt M, Mouchtaris A. Real-Time Multiple Sound Source Localization and Counting Using a Circular Microphone Array. IEEE Trans Audio Speech Lang Process 2013. [DOI: 10.1109/tasl.2013.2272524] [Citation(s) in RCA: 150] [Impact Index Per Article: 13.6] [Indexed: 11/05/2022]
30
Ibrahim I, Parsa V, Macpherson E, Cheesman M. Evaluation of Speech Intelligibility and Sound Localization Abilities with Hearing Aids Using Binaural Wireless Technology. Audiol Res 2012; 3:e1. [PMID: 26557339 PMCID: PMC4627128 DOI: 10.4081/audiores.2013.e1] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.7] [Received: 02/27/2012] [Revised: 10/15/2012] [Accepted: 11/19/2012] [Indexed: 11/23/2022]
Abstract
Wireless synchronization of the digital signal processing (DSP) features between two hearing aids in a bilateral hearing aid fitting is a fairly new technology. This technology is expected to preserve the differences in time and intensity between the two ears by co-ordinating the bilateral DSP features such as multichannel compression, noise reduction, and adaptive directionality. The purpose of this study was to evaluate the benefits of wireless communication as implemented in two commercially available hearing aids. More specifically, this study measured speech intelligibility and sound localization abilities of normal hearing and hearing impaired listeners using bilateral hearing aids with wireless synchronization of multichannel Wide Dynamic Range Compression (WDRC). Twenty subjects participated; 8 had normal hearing and 12 had bilaterally symmetrical sensorineural hearing loss. Each individual completed the Hearing in Noise Test (HINT) and a sound localization test with two types of stimuli. No specific benefit from wireless WDRC synchronization was observed for the HINT; however, hearing impaired listeners had better localization with the wireless synchronization. Binaural wireless technology in hearing aids may improve localization abilities although the possible effect appears to be small at the initial fitting. With adaptation, the hearing aids with synchronized signal processing may lead to an improvement in localization and speech intelligibility. Further research is required to demonstrate the effect of adaptation to the hearing aids with synchronized signal processing on different aspects of auditory performance.
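The rationale for synchronizing WDRC across the two ears can be illustrated with a static compressor model: when each ear compresses independently, the louder (near) ear receives less gain than the quieter (far) ear, shrinking the interaural level difference (ILD) that supports localization. A toy sketch with hypothetical parameter values (not the devices' actual processing):

```python
def wdrc_gain_db(input_db, threshold_db=45.0, ratio=2.0, max_gain_db=20.0):
    """Static WDRC gain: full gain below the compression threshold,
    reduced gain above it according to the compression ratio."""
    if input_db <= threshold_db:
        return max_gain_db
    return max_gain_db - (input_db - threshold_db) * (1.0 - 1.0 / ratio)

left_in, right_in = 70.0, 60.0  # source to the left: 10 dB input ILD

# independent (unsynchronized) compression: each ear computes its own gain,
# so the louder left ear gets less gain and the ILD shrinks
ild_independent = (left_in + wdrc_gain_db(left_in)) - (right_in + wdrc_gain_db(right_in))

# synchronized compression: both ears apply one shared gain (here, the louder ear's),
# so the input ILD is carried through unchanged
shared_gain = wdrc_gain_db(max(left_in, right_in))
ild_synchronized = (left_in + shared_gain) - (right_in + shared_gain)
```

With a 2:1 ratio above threshold, the independent scheme halves the 10 dB ILD to 5 dB, while the synchronized scheme preserves the full 10 dB.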
Affiliation(s)
- Iman Ibrahim
- Faculty of Health Sciences, Western University, London, Canada
- Vijay Parsa
- National Centre for Audiology, Western University, London, Canada
- Ewan Macpherson
- National Centre for Audiology, Western University, London, Canada
31
Mueller MF, Kegel A, Schimmel SM, Dillier N, Hofbauer M. Localization of virtual sound sources with bilateral hearing aids in realistic acoustical scenes. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2012; 131:4732-4742. [PMID: 22712946 DOI: 10.1121/1.4705292] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Indexed: 06/01/2023]
Abstract
Sound localization with hearing aids has traditionally been investigated in artificial laboratory settings, which are not representative of the environments in which hearing aids are actually used. With individual Head-Related Transfer Functions (HRTFs) and room simulations, realistic environments can be reproduced and the performance of hearing-aid algorithms can be evaluated. In this study, four different environments with background noise were implemented, in which listeners had to localize different sound sources. The HRTFs were measured inside the ear canals of the test subjects and by the microphones of Behind-The-Ear (BTE) hearing aids. In the first experiment, the system for virtual acoustics was evaluated by comparing perceptual sound-localization results for the four scenes presented in a real room and in its simulation. In the second experiment, sound localization with three BTE algorithms (an omnidirectional microphone, a monaural cardioid-shaped beamformer, and a monaural noise canceler) was examined. The results showed that the system for generating virtual environments is a reliable tool for evaluating sound localization with hearing aids. With BTE hearing aids, localization performance decreased and the number of front-back confusions was at chance level. The beamformer, owing to its directivity characteristics, allowed listeners to resolve the front-back ambiguity.
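The finding that the cardioid beamformer resolved the front-back ambiguity follows directly from its directivity: an omnidirectional microphone outputs the same level for a source in front and for its mirror image behind, whereas a first-order cardioid attenuates the rear source. A minimal sketch of the ideal pattern (an idealization for illustration, not the hearing aids' actual beamformer):

```python
import math

def cardioid_gain(azimuth_deg):
    """Ideal first-order cardioid magnitude response:
    1.0 toward the front (0 deg), a null toward the back (180 deg)."""
    return 0.5 * (1.0 + math.cos(math.radians(azimuth_deg)))

# 60 deg and 120 deg are front-back mirror images across the interaural axis;
# an omni mic cannot tell them apart, but the cardioid passes the front
# source at a clearly higher level, breaking the ambiguity
front_level = cardioid_gain(60)   # 0.75
rear_level = cardioid_gain(120)   # 0.25
```

The level difference between mirror-image positions gives the listener (or an algorithm) a cue that monaural omnidirectional pickup cannot provide.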
Affiliation(s)
- Martin F Mueller
- Laboratory for Experimental Audiology, University Hospital Zurich, Zurich 8091, Switzerland.