1
Ikeda K, Campbell TA. Binaural interaction in human auditory brainstem and middle-latency responses affected by sound frequency band, lateralization predictability, and attended modality. Hear Res 2024; 452:109089. PMID: 39137721. DOI: 10.1016/j.heares.2024.109089.
Abstract
The binaural interaction component (BIC) of the auditory evoked potential is the difference between the waveform of the binaural response and the sum of the left and right monaural responses. This investigation examined BICs of the auditory brainstem (ABR) and middle-latency (MLR) responses with three objectives: 1) to identify the level of the auditory system at which low-frequency dominance in BIC amplitudes begins, given that binaural temporal fine structure is more influential for lower- than for higher-frequency content; 2) to determine how BICs vary as a function of frequency and lateralization predictability, as could relate to the improved lateralization of high-frequency sounds; 3) to establish how attention affects BICs. Sixteen right-handed participants were presented with either low-passed (< 1000 Hz) or high-passed (> 2000 Hz) clicks at 30 dB SL in a 38 dB(A) masking noise, at a stimulus onset asynchrony of 180 ms. Further, this repeated-measures design manipulated stimulus presentation (binaural, left monaural, right monaural), lateralization predictability (unpredictable, predictable), and attended modality (auditory or visual). For the three objectives, respectively, the results were: 1) whereas low-frequency dominance in BIC amplitudes began during, and continued after, the Na-BIC, binaural (center) as well as summed monaural (left and right) amplitudes revealed low-frequency dominance only after the Na wave; 2) with a fixed, predictable position, no BIC exhibited equivalent amplitudes between low- and high-passed clicks; 3) whether clicks were low- or high-passed, selective attention affected the ABR-BIC but not the MLR-BICs. These findings indicate that low-frequency dominance in lateralization begins at the Na latency, independently of the efferent cortico-collicular pathway's influence.
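As defined above, the BIC is simply a difference waveform: the binaural response minus the sum of the two monaural responses. A minimal sketch in Python (the waveforms here are invented illustrative arrays, not data from the study):

```python
import numpy as np

# Hypothetical averaged evoked-potential waveforms on a common time base (in µV).
binaural = np.array([0.0, 0.4, 0.9, 0.6, 0.2])   # both ears stimulated
left_mon = np.array([0.0, 0.3, 0.5, 0.4, 0.1])   # left ear alone
right_mon = np.array([0.0, 0.2, 0.6, 0.3, 0.1])  # right ear alone

# Binaural interaction component: binaural minus the summed monaural responses.
# A nonzero BIC indicates that the binaural response is not a linear sum of the
# monaural ones, i.e. that binaural interaction occurs somewhere along the pathway.
bic = binaural - (left_mon + right_mon)
```

A negative-going BIC deflection at a given latency (as at the third sample here) is the usual signature reported in such studies.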
Affiliation(s)
- Kazunari Ikeda
- Laboratory of Cognitive Psychophysiology, Tokyo Gakugei University, Koganei, Tokyo 184-8501, Japan.
- Tom A Campbell
- Faculty of Information Technology and Communication Sciences, Tampere University, 33720 Tampere, Finland
2
Fink N, Levitas R, Eisenkraft A, Wagnert-Avraham L, Gertz SD, Fostick L. Perforated Concave Earplug (pCEP): A Proof-of-Concept Earplug to Improve Sound Localization without Compromising Noise Attenuation. Sensors (Basel) 2023; 23:7410. PMID: 37687865. PMCID: PMC10490414. DOI: 10.3390/s23177410.
Abstract
Combat soldiers currently face a trade-off: using a hearing-protection device (HPD) comes at the cost of adequately detecting critical signals that affect mission success. The current study tested the performance of the Perforated Concave Earplug (pCEP), a proof-of-concept passive HPD consisting of a concave, bowl-like rigid structure attached to a commercial roll-down earplug, designed to improve sound localization with minimal compromise of noise attenuation. Primarily intended for combat and military-training settings, the aim was to evaluate localization of relevant sound sources (single/multiple gunfire, continuous noise, a spoken word) compared to 3M™ Combat Arms™ 4.1 earplugs in open mode and 3M™ E-A-R™ Classic™ earplugs. Ninety normal-hearing participants, aged 20-35 years, were asked to localize stimuli delivered from loudspeakers evenly distributed around them, in no-HPD and with-HPD conditions. The results showed that (1) localization abilities worsened when using HPDs; (2) the spoken word was localized less accurately than the other stimuli; (3) mean root-mean-square errors (RMSEs) were largest for stimuli emanating from rear loudspeakers; and (4) localization abilities corresponded to HPD attenuation levels (largest attenuation and mean RMSE: 3M™ E-A-R™ Classic™; smallest attenuation and mean RMSE: 3M™ Combat Arms™ 4.1; the pCEP was mid-range on both). These findings suggest that the pCEP may be beneficial in military settings, providing better sound localization than the 3M™ E-A-R™ Classic™ and higher attenuation than the 3M™ Combat Arms™ 4.1, and its use in noisy environments is therefore recommended.
Affiliation(s)
- Nir Fink
- Department of Communication Disorders, Acoustics and Noise Research Lab in the Name of Laurent Levy, Ariel University, Ariel 40700, Israel
- Israel Defense Forces Medical Corps, Hakirya 6473424, Israel
- Rachel Levitas
- Israel Defense Forces Medical Corps, Hakirya 6473424, Israel
- Arik Eisenkraft
- Institute for Research in Military Medicine (IRMM), Faculty of Medicine of The Hebrew University of Jerusalem and the Israel Defense Forces Medical Corps, Jerusalem 9112102, Israel
- Linn Wagnert-Avraham
- Institute for Research in Military Medicine (IRMM), Faculty of Medicine of The Hebrew University of Jerusalem and the Israel Defense Forces Medical Corps, Jerusalem 9112102, Israel
- S. David Gertz
- Institute for Research in Military Medicine (IRMM), Faculty of Medicine of The Hebrew University of Jerusalem and the Israel Defense Forces Medical Corps, Jerusalem 9112102, Israel
- The Saul and Joyce Brandman Hub for Cardiovascular Research and the Department of Medical Neurobiology, Institute for Medical Research (IMRIC), Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem 9112102, Israel
- Leah Fostick
- Department of Communication Disorders, Auditory Perception Lab in the Name of Laurent Levy, Ariel University, Ariel 40700, Israel
3
Guérineau C, Lõoke M, Broseghini A, Dehesh G, Mongillo P, Marinelli L. Sound Localization Ability in Dogs. Vet Sci 2022; 9:619. PMID: 36356096. PMCID: PMC9694642. DOI: 10.3390/vetsci9110619.
Abstract
The minimum audible angle (MAA), defined as the smallest detectable difference between the azimuths of two identical sound sources, is a standard measure of spatial auditory acuity in animals. Few studies have explored the MAA of dogs, and those that have used methods that did not allow for improvement over the course of the assessment and tested very few dogs. To overcome these limits, we adopted a staircase method with 10 dogs, using a two-alternative forced-choice procedure with two sound sources and testing angles of separation from 60° down to 1°. The staircase method permits the level of difficulty to be continuously adapted to each dog and allows improvement over time to be observed. The dogs' average MAA was 7.6°, although with large interindividual variability, ranging from 1.3° to 13.2°. A global improvement was observed across the procedure, substantiated by a gradual lowering of the MAA and of choice latency across sessions. The results indicate that the staircase method is feasible and reliable for assessing auditory spatial localization in dogs, highlighting the importance of using a method that allows improvement over time in a sensory discrimination task. The results also reveal that the MAA of dogs is more variable than previously reported, potentially reaching values lower than 2°. Although no clear patterns of association emerged between MAA and dog characteristics such as ear shape, head shape, or age, the results suggest the value of larger-scale studies to determine whether these or other factors influence sound localization abilities in dogs.
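The adaptive staircase logic described above can be sketched in a few lines. This is a toy 2-down/1-up transformed staircase with step halving at reversals and a deterministic simulated listener whose true MAA is 7.6° (all of these choices are illustrative assumptions, not the paper's exact protocol):

```python
# Toy adaptive staircase for estimating a minimum audible angle (MAA).
# Assumed rule: two correct responses -> smaller angle; one error -> larger
# angle; step size halves at each reversal; threshold = mean of last reversals.
def run_staircase(true_maa=7.6, start=60.0, first_step=16.0,
                  min_step=0.5, floor=0.5, n_reversals=8):
    angle, step = start, first_step
    last_direction, streak, reversals = None, 0, []
    while len(reversals) < n_reversals:
        correct = angle >= true_maa          # deterministic simulated listener
        if correct:
            streak += 1
            if streak < 2:
                continue                     # need two correct to step down
            streak, direction = 0, "down"
        else:
            streak, direction = 0, "up"
        if last_direction is not None and direction != last_direction:
            reversals.append(angle)          # record the reversal angle
            step = max(step / 2.0, min_step) # halve step, keep a minimum
        last_direction = direction
        angle = angle - step if direction == "down" else angle + step
        angle = max(angle, floor)            # angles cannot go below the floor
    tail = reversals[-4:]                    # estimate from late reversals
    return sum(tail) / len(tail)
```

With a real subject the response would be probabilistic, so the estimate converges to the angle yielding about 70.7% correct under this rule; here the deterministic listener makes the oscillation around the true threshold easy to see.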
Affiliation(s)
- Cécile Guérineau
- Laboratory of Applied Ethology, Dipartimento di Biomedicina Comparata e Alimentazione, University of Padova, Viale dell’Università 16, 35020 Legnaro, PD, Italy
- Miina Lõoke
- Laboratory of Applied Ethology, Dipartimento di Biomedicina Comparata e Alimentazione, University of Padova, Viale dell’Università 16, 35020 Legnaro, PD, Italy
- Anna Broseghini
- Laboratory of Applied Ethology, Dipartimento di Biomedicina Comparata e Alimentazione, University of Padova, Viale dell’Università 16, 35020 Legnaro, PD, Italy
- Giulio Dehesh
- Independent Researcher, Via Chiesanuova 139, 35136 Padova, PD, Italy
- Paolo Mongillo
- Laboratory of Applied Ethology, Dipartimento di Biomedicina Comparata e Alimentazione, University of Padova, Viale dell’Università 16, 35020 Legnaro, PD, Italy
- Lieta Marinelli
- Laboratory of Applied Ethology, Dipartimento di Biomedicina Comparata e Alimentazione, University of Padova, Viale dell’Università 16, 35020 Legnaro, PD, Italy
4
Zheng Y, Swanson J, Koehnke J, Guan J. Sound Localization of Listeners With Normal Hearing, Impaired Hearing, Hearing Aids, Bone-Anchored Hearing Instruments, and Cochlear Implants: A Review. Am J Audiol 2022; 31:819-834. PMID: 35917460. DOI: 10.1044/2022_aja-22-00006.
Abstract
PURPOSE This article reviews contemporary studies of localization ability in different populations and listening environments and suggests possible future research directions. CONCLUSIONS The ability to accurately localize a sound source, which relies on three cues (interaural time difference, interaural level difference, and spectral cues), is important for communication, learning, and safety. Confounding factors present in common listening environments, including noise and reverberation, mask or alter localization cues and degrade localization performance. Hearing loss, a common public health issue, also affects localization accuracy. Although hearing devices have been developed to provide excellent audibility of speech signals, less attention has been paid to preserving and replicating crucial localization cues. Unique challenges are faced by users of various hearing devices, including hearing aids, bone-anchored hearing instruments, and cochlear implants. Hearing aids have failed to consistently improve localization performance and, in some cases, significantly impair sound localization. Bone-conduction hearing instruments show little to no benefit for sound localization in most cases, although some improvement is seen in bilateral users. Although cochlear implants provide great hearing benefit to individuals with severe-to-profound sensorineural hearing loss, cochlear implant users have significant difficulty localizing sound, even with two implants. However, technologies in each of these areas are advancing to reduce interference with desired sound signals and preserve localization cues, helping users achieve better hearing and sound localization in real-life environments.
Affiliation(s)
- Yunfang Zheng
- Department of Communication Sciences and Disorders, Central Michigan University, Mount Pleasant, MI
- Jacob Swanson
- Department of Communication Sciences and Disorders, Central Michigan University, Mount Pleasant, MI
- Janet Koehnke
- Department of Communication Sciences and Disorders, Montclair State University, Bloomfield, NJ
- Jianwei Guan
- Department of Communication Sciences and Disorders, Central Michigan University, Mount Pleasant, MI
5
The Impact of Synchronized Cochlear Implant Sampling and Stimulation on Free-Field Spatial Hearing Outcomes: Comparing the ciPDA Research Processor to Clinical Processors. Ear Hear 2022; 43:1262-1272. PMID: 34882619. PMCID: PMC9174346. DOI: 10.1097/aud.0000000000001179.
Abstract
OBJECTIVES Bilateral cochlear implant (BiCI) listeners use independent processors in each ear. This independence and lack of shared hardware prevent control of the timing of sampling and stimulation across ears, which precludes the development of bilaterally coordinated signal-processing strategies. As a result, these devices potentially reduce access to binaural cues and introduce disruptive artifacts: for example, measurements from two clinical processors demonstrate that independently running processors introduce interaural incoherence. These issues are typically avoided in the laboratory by using research processors with bilaterally synchronized hardware. However, such research processors typically do not run in real time and, given their benchtop nature, are difficult to take into the real world. Hence, it has been difficult to establish whether hardware synchronization alone, by reducing bilateral stimulation artifacts, can improve functional spatial hearing performance. The CI personal digital assistant (ciPDA) research processor, which uses one clock to drive both processors, presented an opportunity to examine whether hardware synchronization can affect spatial hearing performance. DESIGN Free-field sound localization and spatial release from masking (SRM) were assessed in 10 BiCI listeners using both their clinical processors and the synchronized ciPDA processor. For sound localization, accuracy was compared within subjects across the two processor types. For SRM, speech reception thresholds were compared for spatially separated and co-located configurations, and the amount of unmasking was compared between synchronized and unsynchronized hardware. There were no deliberate changes to the sound-processing strategy on the ciPDA to restore or improve binaural cues. RESULTS There was no significant difference in localization accuracy between unsynchronized and synchronized hardware (p = 0.62). Speech reception thresholds were higher with the ciPDA. In addition, although five of eight participants demonstrated improved SRM with synchronized hardware, there was no significant difference in the amount of unmasking due to spatial separation between synchronized and unsynchronized hardware (p = 0.21). CONCLUSIONS Using processors with synchronized hardware did not improve sound localization or SRM for all individuals, suggesting that hardware synchronization alone is not sufficient to improve spatial hearing outcomes. Further work is needed to improve sound-coding strategies to facilitate access to spatial hearing cues. This study provides a benchmark for spatial hearing performance with real-time, bilaterally synchronized research processors.
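The SRM measure used in this abstract is a simple difference of speech reception thresholds. A hypothetical worked example (the threshold values are invented for illustration):

```python
# Spatial release from masking (SRM): the improvement, in dB, of the speech
# reception threshold (SRT) when target and masker are spatially separated
# rather than co-located. A positive SRM means spatial unmasking occurred.
srt_colocated = -2.0   # hypothetical SRT (dB SNR) with masker at the target
srt_separated = -6.5   # hypothetical SRT (dB SNR) with masker to the side
srm = srt_colocated - srt_separated
print(srm)  # 4.5 dB of spatial unmasking
```

Because lower (more negative) SRTs are better, subtracting the separated threshold from the co-located one yields a positive number when separation helps.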
6
Situational Awareness: The Effect of Stimulus Type and Hearing Protection on Sound Localization. Sensors (Basel) 2021; 21:7044. PMID: 34770351. PMCID: PMC8587889. DOI: 10.3390/s21217044.
Abstract
The purpose of the current study was to test sound localization of a spoken word, a stimulus rarely studied in the context of localization, compared to pink noise and a gunshot, while taking into account the source position and the effect of different hearing protection devices (HPDs) used by the listener. Ninety participants were divided into three groups, each using a different HPD. Participants were tested twice, under with-HPD and no-HPD conditions, and were requested to localize the different stimuli, which were delivered from one of eight speakers evenly distributed around them (starting at 22.5°). Localization of the word stimulus was more difficult than that of the other stimuli. HPD usage resulted in a larger mean root-mean-square error (RMSE) and increased mirror-image reversal errors for all stimuli. In addition, HPD usage increased the mean RMSE and mirror-image reversal errors more for stimuli delivered from the front and back than for stimuli delivered from the left and right. HPDs affect localization both through attenuation and, in the case of earmuffs, through limitation of pinna cues. The difficulty of localizing the spoken word should be considered when assessing auditory functionality and should be further investigated, extending to HPDs with different attenuation spectra and levels and to further types of speech stimuli.
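The RMSE reported in studies like this one is computed from angular localization errors, which must be wrapped so that a response just across the 0°/360° boundary is not scored as a huge miss. A minimal sketch (the speaker layout matches the eight-speaker circle described above, but the trial data are invented):

```python
import math

def angular_error(response_deg, target_deg):
    """Smallest signed angular difference in degrees, wrapped to (-180, 180]."""
    d = (response_deg - target_deg) % 360.0
    return d - 360.0 if d > 180.0 else d

def localization_rmse(responses, targets):
    """Root-mean-square localization error over a list of trials."""
    errors = [angular_error(r, t) for r, t in zip(responses, targets)]
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Hypothetical trials: targets lie on an 8-speaker circle (45° spacing,
# starting at 22.5°); the responses are invented for illustration.
targets = [22.5, 67.5, 202.5, 337.5]
responses = [22.5, 45.0, 247.5, 315.0]
```

Note that a pure RMSE mixes small errors with mirror-image (front/back) reversals; that is why studies such as this one report reversal errors separately.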
7
Körtje M, Baumann U, Stöver T, Weissgerber T. Sensitivity to interaural time differences and localization accuracy in cochlear implant users with combined electric-acoustic stimulation. PLoS One 2020; 15:e0241015. PMID: 33075114. PMCID: PMC7571672. DOI: 10.1371/journal.pone.0241015.
Abstract
OBJECTIVES In this study, localization accuracy and sensitivity to acoustic interaural time differences (ITDs) were assessed in subjects using cochlear implants with combined electric-acoustic stimulation (EAS) and compared with the results of a normal-hearing control group. METHODS Eight CI users with EAS (2 bilaterally implanted, 6 unilaterally implanted) with symmetric binaural acoustic hearing and 24 normal-hearing subjects participated in the study. The first experiment determined the mean localization error (MLE) for different angles of sound incidence between ±60° (frontal and dorsal presentation). The stimuli were low-pass, high-pass, or broadband noise bursts. In a second experiment, just-noticeable differences (JNDs) in ITD were measured for pure tones of 125 Hz, 250 Hz, and 500 Hz (headphone presentation). RESULTS Experiment 1: The MLE of the EAS subjects was 8.5°, 14.3°, and 14.7° (low-pass, high-pass, and broadband stimuli, respectively). In the control group, the MLE was 1.8° (broadband stimuli). In differentiating between sound incidence from front and back, EAS subjects performed at chance level. Experiment 2: The JND-ITDs of the EAS subjects were 88.7 μs at 125 Hz, 48.8 μs at 250 Hz, and 52.9 μs at 500 Hz. Compared with the control group, the JND-ITD at 125 Hz was at the same level of performance. No statistically significant correlation was found between MLE and JND-ITD in the EAS cohort. CONCLUSIONS Near-normal ITD sensitivity in low-frequency acoustic hearing was demonstrated in a cohort of EAS users. However, in an acoustic localization task, the majority of the subjects did not reach the accuracy of normal hearing. Presumably, differences in signal-processing time delay between the devices used on the two sides deteriorate the transfer of precise binaural timing cues.
Affiliation(s)
- Monika Körtje
- Audiological Acoustics, ENT Department, University Hospital Frankfurt, Goethe University Frankfurt, Frankfurt am Main, Germany
- Uwe Baumann
- Audiological Acoustics, ENT Department, University Hospital Frankfurt, Goethe University Frankfurt, Frankfurt am Main, Germany
- Timo Stöver
- ENT Department, University Hospital Frankfurt, Goethe University Frankfurt, Frankfurt am Main, Germany
- Tobias Weissgerber
- Audiological Acoustics, ENT Department, University Hospital Frankfurt, Goethe University Frankfurt, Frankfurt am Main, Germany
8
Yost WA, Pastore MT, Dorman MF. Sound source localization is a multisystem process. Acoust Sci Technol 2020; 41:113-120. PMID: 34305431. PMCID: PMC8297655. DOI: 10.1250/ast.41.113.
Abstract
This paper reviews data published or presented by the authors on sound source localization when listeners move, from two populations of subjects: normal-hearing listeners and patients fit with cochlear implants (CIs). The overall theme of the review is that sound source localization requires an integration of auditory-spatial and head-position cues and is therefore a multisystem process. Research with normal-hearing listeners includes work on the Wallach Azimuth Illusion and further aspects of sound source localization when listeners and sound sources rotate. Research with CI patients covers sound source localization performance by patients fit with a single CI, bilateral CIs, a CI and a hearing aid (bimodal patients), and single-sided-deaf patients with one normally functioning ear and a CI in the other ear. Past research involving stationary CI patients and more recent data on CI patients' use of head rotation to localize sound sources are summarized.
Affiliation(s)
- William A. Yost
- Spatial Hearing Laboratory, Speech and Hearing Science, Arizona State University, PO Box 870102, Tempe, Arizona, 85287, USA
- M. Torben Pastore
- Spatial Hearing Laboratory, Speech and Hearing Science, Arizona State University, PO Box 870102, Tempe, Arizona, 85287, USA
- Michael F. Dorman
- Cochlear Implant Laboratory, Speech and Hearing Science, Arizona State University, PO Box 870102, Tempe, Arizona, 85287, USA
9
Risoud M, Hanson JN, Gauvrit F, Renard C, Lemesre PE, Bonne NX, Vincent C. Sound source localization. Eur Ann Otorhinolaryngol Head Neck Dis 2018; 135:259-264. PMID: 29731298. DOI: 10.1016/j.anorl.2018.04.009.
Abstract
Sound source localization, which determines the position of a sound source in three dimensions (azimuth, elevation, and distance), is paramount for comfort of life. It is based on three types of cue: two binaural (the interaural time difference and the interaural level difference) and one monaural spectral cue (the head-related transfer function). These cues are complementary and vary according to the acoustic characteristics of the incident sound. The objective of this report is to update the current state of knowledge on the physical basis of spatial sound localization.
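For the interaural time difference cue listed above, a standard back-of-the-envelope model is Woodworth's rigid-spherical-head approximation, ITD ≈ (a/c)(θ + sin θ). A sketch with typical textbook values (the head radius and speed of sound are assumptions for illustration, not values from this report):

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Approximate interaural time difference (seconds) for a rigid spherical head.

    Woodworth's model: ITD = (a / c) * (theta + sin(theta)), where theta is
    the source azimuth in radians, a the head radius, and c the speed of sound.
    It assumes a distant source and is most accurate at high frequencies.
    """
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# The ITD grows from 0 at the midline to roughly 650 µs for a source at 90°,
# which is the order of magnitude of the largest ITDs humans experience.
```

The monotonic growth of ITD with azimuth is what makes it a usable lateral-position cue; the complementary interaural level difference dominates at high frequencies, where the head shadows the far ear.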
Affiliation(s)
- M Risoud
- Department of otology and neurotology, CHU de Lille, 59000 Lille, France; Inserm U1008 - controlled drug delivery systems and biomaterials, université de Lille 2, CHU de Lille, 59000 Lille, France
- J-N Hanson
- Department of otology and neurotology, CHU de Lille, 59000 Lille, France
- F Gauvrit
- Department of otology and neurotology, CHU de Lille, 59000 Lille, France
- C Renard
- Department of otology and neurotology, CHU de Lille, 59000 Lille, France
- P-E Lemesre
- Department of otology and neurotology, CHU de Lille, 59000 Lille, France
- N-X Bonne
- Department of otology and neurotology, CHU de Lille, 59000 Lille, France; Inserm U1008 - controlled drug delivery systems and biomaterials, université de Lille 2, CHU de Lille, 59000 Lille, France
- C Vincent
- Department of otology and neurotology, CHU de Lille, 59000 Lille, France; Inserm U1008 - controlled drug delivery systems and biomaterials, université de Lille 2, CHU de Lille, 59000 Lille, France