1
Mazo C, Baeta M, Petreanu L. Auditory cortex conveys non-topographic sound localization signals to visual cortex. Nat Commun 2024; 15:3116. PMID: 38600132; PMCID: PMC11006897; DOI: 10.1038/s41467-024-47546-4.
Abstract
Spatiotemporally congruent sensory stimuli are fused into a unified percept. The auditory cortex (AC) sends projections to the primary visual cortex (V1), which could provide signals for binding spatially corresponding audio-visual stimuli. However, whether AC inputs in V1 encode sound location remains unknown. Using two-photon axonal calcium imaging and a speaker array, we measured the auditory spatial information transmitted from AC to layer 1 of V1. AC conveys information about the location of ipsilateral and contralateral sound sources to V1. Sound location could be accurately decoded by sampling AC axons in V1, providing a substrate for making location-specific audiovisual associations. However, AC inputs were not retinotopically arranged in V1, and audio-visual modulations of V1 neurons did not depend on the spatial congruency of the sound and light stimuli. The non-topographic sound localization signals provided by AC might allow the association of specific audiovisual spatial patterns in V1 neurons.
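The decoding result summarized above can be illustrated with a toy nearest-template readout. Everything below is hypothetical: the simulated "axon" responses, the tuning profiles, the noise level, and the decoder are illustrative stand-ins, not the paper's data or analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated responses of a sampled population of "axons" to sounds from
# several speaker locations (all values hypothetical).
n_locations, n_axons, n_trials = 8, 50, 20
tuning = rng.normal(size=(n_locations, n_axons))      # mean response per location
trials = tuning[:, None, :] + 0.5 * rng.normal(size=(n_locations, n_trials, n_axons))

# Build per-location templates from half the trials; decode the rest by
# assigning each population response to the nearest template.
train, held_out = trials[:, :10], trials[:, 10:]
templates = train.mean(axis=1)                        # (n_locations, n_axons)

def decode(pop_response):
    """Nearest-template (Euclidean) readout of sound-source location."""
    return int(np.argmin(np.linalg.norm(templates - pop_response, axis=1)))

predictions = np.array([[decode(r) for r in held_out[loc]]
                        for loc in range(n_locations)])
accuracy = (predictions == np.arange(n_locations)[:, None]).mean()
```

With well-separated tuning profiles and modest noise, this simple readout decodes location nearly perfectly, illustrating how sampling a population of inputs can support location-specific associations.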
Affiliation(s)
- Camille Mazo
- Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal
- Margarida Baeta
- Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal
- Leopoldo Petreanu
- Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal
2
Kayser C, Debats N, Heuer H. Both stimulus-specific and configurational features of multiple visual stimuli shape the spatial ventriloquism effect. Eur J Neurosci 2024; 59:1770-1788. PMID: 38230578; DOI: 10.1111/ejn.16251.
Abstract
Studies on multisensory perception often focus on simplistic conditions in which a single stimulus is presented per modality. Yet, in everyday life, we usually encounter multiple signals per modality. To understand how multiple signals within and across the senses are combined, we extended the classical audio-visual spatial ventriloquism paradigm to combine two visual stimuli with one sound. The individual visual stimuli presented in the same trial differed in their relative timing and spatial offsets to the sound, allowing us to contrast their individual and combined influence on sound localization judgements. We find that the ventriloquism bias is not dominated by a single visual stimulus but rather is shaped by the collective multisensory evidence. In particular, the contribution of an individual visual stimulus to the ventriloquism bias depends not only on its own spatio-temporal alignment to the sound but also on that of the other visual stimulus. We propose that this pattern of multi-stimulus multisensory integration reflects the evolution of evidence for sensory causal relations during individual trials, calling for established models of multisensory causal inference to be extended to more naturalistic conditions. Our data also suggest that this pattern of multisensory interactions extends to the ventriloquism aftereffect, a bias in sound localization observed in unisensory judgements following a multisensory stimulus.
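The reliability-weighted integration underlying ventriloquism-style biases is often formalized as minimum-variance (maximum-likelihood) cue combination. The sketch below is the generic textbook single-stimulus model, not the extended multi-stimulus model the authors call for; all names and values are illustrative.

```python
def fused_estimate(mu_a, var_a, mu_v, var_v):
    """Minimum-variance fusion of an auditory and a visual location estimate.
    Each cue is weighted by its reliability (inverse variance)."""
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_a)
    return w_v * mu_v + (1.0 - w_v) * mu_a

# A sound at 10 deg paired with a more reliable visual stimulus at 0 deg:
# the fused percept is pulled strongly toward the visual location.
biased = fused_estimate(mu_a=10.0, var_a=4.0, mu_v=0.0, var_v=1.0)
```

Here the visual cue is four times more reliable than the auditory one, so it receives 80% of the weight and the perceived sound location shifts most of the way toward the visual stimulus.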
Affiliation(s)
- Christoph Kayser
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Nienke Debats
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Herbert Heuer
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
3
Lladó P, Hyvärinen P, Pulkki V. The impact of head-worn devices in an auditory-aided visual search task. J Acoust Soc Am 2024; 155:2460-2469. PMID: 38578178; DOI: 10.1121/10.0025542.
Abstract
Head-worn devices (HWDs) interfere with the natural transmission of sound from the source to the ears of the listener, worsening their localization abilities. The localization errors introduced by HWDs have been mostly studied in static scenarios, but these errors are reduced if head movements are allowed. We studied the effect of 12 HWDs on an auditory-cued visual search task, where head movements were not restricted. In this task, a visual target had to be identified in a three-dimensional space with the help of an acoustic stimulus emitted from the same location as the visual target. The results showed an increase in the search time caused by the HWDs. Acoustic measurements of a dummy head wearing the studied HWDs showed evidence of impaired localization cues, which were used to estimate the perceived localization errors using computational auditory models of static localization. These models were able to explain the search-time differences in the perceptual task, showing the influence of quadrant errors in the auditory-aided visual search task. These results indicate that HWDs have an impact on sound-source localization even when head movements are possible, which may compromise the safety and the quality of experience of the wearer.
Affiliation(s)
- Pedro Lladó
- Acoustics Lab, Department of Information and Communication Engineering, Aalto University, Espoo, 00076, Finland
- Petteri Hyvärinen
- Acoustics Lab, Department of Information and Communication Engineering, Aalto University, Espoo, 00076, Finland
- Ville Pulkki
- Acoustics Lab, Department of Information and Communication Engineering, Aalto University, Espoo, 00076, Finland
4
Li JY, Wang NY, Wang X, Li BN, Nie S, Li H, Zhang J. [Horizontal sound localization in presence of noise in normal-hearing young adults]. Zhonghua Er Bi Yan Hou Tou Jing Wai Ke Za Zhi 2024; 59:204-211. PMID: 38561257; DOI: 10.3760/cma.j.cn115330-20231010-00132.
Abstract
Objective: This study investigates the effect of signal-to-noise ratio (SNR), frequency, and bandwidth on horizontal sound localization accuracy in normal-hearing young adults. Methods: From August 2022 to December 2022, a total of 20 normal-hearing young adults (7 males and 13 females, aged 20 to 35 years, mean age 25.4 years) participated in horizontal azimuth recognition tests under both quiet and noisy conditions. Six narrowband filtered noise stimuli were used, with central frequencies (CF) of 250, 2000, and 4000 Hz and bandwidths of 1/6 and 1 octave. Continuous broadband white noise was used as the background masker, at SNRs of 0, -3, and -12 dB. The root-mean-square error (RMS error) was used to measure sound localization accuracy, with smaller values indicating higher accuracy. The Friedman test was used to compare the effects of SNR and CF on sound localization accuracy, and the Wilcoxon signed-rank test was used to compare the impact of the two bandwidths on sound localization accuracy in noise. Results: In a quiet environment, the RMS error in horizontal azimuth in normal-hearing young adults ranged from 4.3 to 8.1 degrees. Sound localization accuracy decreased with decreasing SNR: at 0 dB SNR (range: 5.3-12.9 degrees), the difference from the quiet condition was not significant (P>0.05); however, at -3 dB (range: 7.3-16.8 degrees) and -12 dB SNR (range: 9.4-41.2 degrees), sound localization accuracy decreased significantly compared to the quiet condition (all P<0.01). Under noisy conditions, sound localization accuracy differed among stimuli of different frequencies and bandwidths: higher frequencies performed the worst, followed by middle frequencies, with lower frequencies performing the best (all P<0.01). Sound localization accuracy for 1/6-octave stimuli was more susceptible to noise interference than for 1-octave stimuli (all P<0.01). Conclusions: The ability of normal-hearing young adults to localize sound in the horizontal plane in the presence of noise is influenced by SNR, CF, and bandwidth. Noise at SNRs of ≤-3 dB can lead to decreased accuracy in narrowband sound localization. Higher-CF signals and narrower bandwidths are more susceptible to noise interference.
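The RMS-error measure used in this study is straightforward to compute. A minimal sketch, with made-up target azimuths and pointing responses (not the study's measurements):

```python
import numpy as np

def rms_localization_error(target_az, response_az):
    """Root-mean-square localization error in degrees: a standard measure of
    horizontal sound localization accuracy (smaller = more accurate)."""
    diff = np.asarray(response_az, dtype=float) - np.asarray(target_az, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))

# Hypothetical example: five target azimuths and one listener's responses.
targets = [-60.0, -30.0, 0.0, 30.0, 60.0]
responses = [-55.0, -33.0, 2.0, 36.0, 52.0]
error = rms_localization_error(targets, responses)
```

Because errors are squared before averaging, a few large mislocalizations (as reported at -12 dB SNR) inflate the RMS error far more than many small ones.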
Affiliation(s)
- J Y Li
- Department of Otolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital of Capital Medical University, Beijing 100020, China
- N Y Wang
- Department of Otolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital of Capital Medical University, Beijing 100020, China
- X Wang
- Department of Otolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital of Capital Medical University, Beijing 100020, China
- B N Li
- Department of Otolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital of Capital Medical University, Beijing 100020, China
- S Nie
- Department of Otolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital of Capital Medical University, Beijing 100020, China
- H Li
- Department of Otolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital of Capital Medical University, Beijing 100020, China
- J Zhang
- Department of Otolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital of Capital Medical University, Beijing 100020, China
5
Zelger P, Zorowka P, Schmutzhard J, Galvan O, Rossi S, Stephan K, Seebacher J. Localization of Low- and High-Frequency Sounds in Cochlear Implant Recipients Using a Contralateral Hearing Aid. Otol Neurotol 2024; 45:e228-e233. PMID: 38238908; DOI: 10.1097/mao.0000000000004090.
Abstract
BACKGROUND AND OBJECTIVES The ability to localize sounds is partly recovered in patients using a cochlear implant (CI) in one ear and a hearing aid (HA) on the contralateral side. Binaural processing seems effective at least to some extent, despite the difference between electric and acoustic stimulation in each ear. To obtain further insights into the mechanisms of binaural hearing in these listeners, localization of low- and high-frequency sounds was tested. STUDY DESIGN The study used a within-subject design in which participants localized sound sources in the horizontal plane. The experiment was conducted in an anechoic chamber, where an array of seven loudspeakers was mounted along the azimuthal span from -90° to +90°. Stimuli of different frequency content were applied: broadband, high-frequency, and low-frequency noise. SUBJECTS Ten CI recipients participated in the study. All had an asymmetric hearing loss with a CI in the poorer ear and an HA on the contralateral side. MAIN OUTCOME MEASURES Accuracy of sound localization in terms of angular error and percentage of correct localization scores. RESULTS The median angular error was 40° in bimodal conditions for both broadband noise and high-frequency noise stimuli. The angular error increased to 47° for low-frequency noise stimuli. In the unilaterally aided condition with an HA only, a median angular error of 78° was observed. CONCLUSIONS Irrespective of the frequency composition of the stimuli, this group of bimodal listeners showed some ability to localize sounds. Angular errors were larger than those reported in the literature for bilateral CI users or single-sided deaf listeners with a CI. In the unilateral listening condition with the HA only, localization of sounds was not possible for most subjects.
Affiliation(s)
- Joachim Schmutzhard
- Department of Otorhinolaryngology, Medical University Innsbruck, Innsbruck, Austria
- Sonja Rossi
- Department for Hearing, Speech and Voice Disorders
- Kurt Stephan
- Department for Hearing, Speech and Voice Disorders
6
Bae A, Peña JL. Barn owls specialized sound-driven behavior: Lessons in optimal processing and coding by the auditory system. Hear Res 2024; 443:108952. PMID: 38242019; DOI: 10.1016/j.heares.2024.108952.
Abstract
The barn owl, a nocturnal raptor with remarkably efficient prey-capturing abilities, was one of the first animal models used in research on the brain mechanisms underlying sound localization. Seminal findings made in its specialized sound-localizing auditory system include the discovery of a midbrain map of auditory space, mechanisms of spatial cue detection underlying sound-driven orienting behavior, and circuit-level changes supporting development and experience-dependent plasticity. These findings have explained properties of vital hearing functions and inspired theories in spatial hearing that extend across diverse animal species, cementing the barn owl's legacy as a powerful experimental system for elucidating fundamental brain mechanisms. This concise review provides an overview of how the barn owl model system has exemplified the strength of investigating both the diversity and the similarity of brain mechanisms across species. First, we discuss key findings in the barn owl's specialized system that elucidated brain mechanisms for detecting auditory cues for spatial hearing. We then examine how the barn owl has validated mathematical computations and theories underlying optimal hearing across species. Lastly, we review how the barn owl has advanced investigations of developmental and experience-dependent plasticity in sound localization, and we outline avenues for future research toward bridging commonalities across species. Analogous to the way astrophysics deepens our understanding of nature through exploration of planets, stars, and galaxies across the universe, research across different animal species pursues a broad understanding of natural brain mechanisms and behavior.
Affiliation(s)
- Andrea Bae
- Albert Einstein College of Medicine, NY, USA
- Jose L Peña
- Albert Einstein College of Medicine, NY, USA
7
Lively S, Agrawal S, Stewart M, Dwyer RT, Strobel L, Marcinkevich P, Hetlinger C, Croce J. CROS or hearing aid? Selecting the ideal solution for unilateral CI patients with limited aidable hearing in the contralateral ear. PLoS One 2024; 19:e0293811. PMID: 38394286; PMCID: PMC10890777; DOI: 10.1371/journal.pone.0293811.
Abstract
A hearing aid and a contralateral routing of signal (CROS) device are both options for unilateral cochlear implant listeners with limited hearing in the unimplanted ear; however, it is uncertain which device provides greater benefit beyond unilateral listening alone. Eighteen unilateral cochlear implant listeners participated in this prospective, within-participants, repeated-measures study. Participants were tested in the cochlear implant alone, cochlear implant + hearing aid, and cochlear implant + contralateral routing of signal device configurations, with a one-month take-home period between each in-person visit. Audiograms, speech perception in noise, and lateralization were evaluated. Subjective feedback was obtained via questionnaires. Marked improvements in speech perception in noise and in lateralization accuracy toward the non-implanted ear were observed with the addition of a contralateral hearing aid. There were no significant differences in speech recognition between listening configurations. However, the chronic device use questionnaires and the final device selection showed a clear preference for the hearing aid in the spatial awareness and communication domains. Individuals with limited hearing in their unimplanted ears demonstrate significant improvement with the addition of a contralateral device. Subjective questionnaires somewhat contrast with clinic-based outcome measures, highlighting the delicate decision-making process involved in clinically advising one device or another to maximize communication benefits.
Affiliation(s)
- Sarah Lively
- Department of Otolaryngology, Thomas Jefferson University Hospital, Philadelphia, PA, United States of America
- Smita Agrawal
- Collaborative Research Group, Clinical Research, Advanced Bionics, Valencia, CA, United States of America
- Matthew Stewart
- Department of Otolaryngology, Thomas Jefferson University Hospital, Philadelphia, PA, United States of America
- Robert T. Dwyer
- Collaborative Research Group, Clinical Research, Advanced Bionics, Valencia, CA, United States of America
- Laura Strobel
- Department of Otolaryngology, Thomas Jefferson University Hospital, Philadelphia, PA, United States of America
- Paula Marcinkevich
- Department of Otolaryngology, Thomas Jefferson University Hospital, Philadelphia, PA, United States of America
- Chris Hetlinger
- Research and Technology Group, Advanced Bionics, Valencia, CA, United States of America
- Julia Croce
- Department of Otolaryngology, Thomas Jefferson University Hospital, Philadelphia, PA, United States of America
8
Liu H, Bai Y, Xu Z, Liu J, Ni G, Ming D. The scalp time-varying network of auditory spatial attention in "cocktail-party" situations. Hear Res 2024; 442:108946. PMID: 38150794; DOI: 10.1016/j.heares.2023.108946.
Abstract
Sound source localization in "cocktail-party" situations is a remarkable ability of the human auditory system. However, the neural mechanisms underlying auditory spatial attention are still largely unknown. In this study, "cocktail-party" situations are simulated with multiple sound sources and presented through head-related transfer functions and headphones. The scalp time-varying network of auditory spatial attention is then constructed from high-temporal-resolution electroencephalography, and its network properties are measured quantitatively using graph theory. The results show that the time-varying network of auditory spatial attention in "cocktail-party" situations is more complex than, and partly different from, that in simple acoustic situations, especially in the early- and middle-latency periods. The network coupling strength increases continuously over time, and the network hub shifts from the posterior temporal lobe to the parietal lobe and then to the frontal lobe. In addition, the right hemisphere shows stronger network strength for processing auditory spatial information in "cocktail-party" situations: it has higher clustering levels, higher transmission efficiency, and higher node degrees during the early- and middle-latency periods, whereas this asymmetry disappears and the network becomes symmetric during the late-latency period. These findings reveal distinct network patterns and properties of auditory spatial attention in "cocktail-party" situations across periods and demonstrate the dominance of the right hemisphere in the dynamic processing of auditory spatial information.
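The graph-theoretic quantities mentioned above (node degree, clustering level) can be computed directly from an adjacency structure. The electrode network below is purely illustrative, with made-up couplings rather than the study's measured data:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical scalp network: electrode sites as nodes, above-threshold
# couplings as undirected edges (illustrative, not the study's data).
edges = [
    ("F4", "C4"), ("C4", "P4"), ("P4", "T8"), ("F4", "P4"),  # right hemisphere
    ("F3", "C3"), ("C3", "P3"),                              # left hemisphere
    ("C3", "C4"),                                            # interhemispheric link
]
adj = defaultdict(set)
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

def local_clustering(node):
    """Fraction of a node's neighbour pairs that are themselves connected."""
    nbrs = adj[node]
    if len(nbrs) < 2:
        return 0.0
    links = sum(1 for u, v in combinations(sorted(nbrs), 2) if v in adj[u])
    return links / (len(nbrs) * (len(nbrs) - 1) / 2)

# Hemispheric comparison via summed node degree, in the spirit of the
# right-vs-left contrasts reported in graph-theory analyses.
right_degree = sum(len(adj[n]) for n in ("F4", "C4", "P4", "T8"))
left_degree = sum(len(adj[n]) for n in ("F3", "C3", "P3"))
```

In this toy network the right hemisphere has both higher summed degree and a fully clustered frontal node, mirroring the kind of asymmetry the study quantifies.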
Affiliation(s)
- Hongxing Liu
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, China
- Yanru Bai
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, China; Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin 300072, China; Haihe Laboratory of Brain-Computer Interaction and Human-Machine Integration, Tianjin 300392, China
- Zihao Xu
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, China
- Jihan Liu
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, China
- Guangjian Ni
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, China; Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin 300072, China; Haihe Laboratory of Brain-Computer Interaction and Human-Machine Integration, Tianjin 300392, China
- Dong Ming
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, China; Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin 300072, China; Haihe Laboratory of Brain-Computer Interaction and Human-Machine Integration, Tianjin 300392, China
9
Valzolgher C, Capra S, Gessa E, Rosi T, Giovanelli E, Pavani F. Sound localization in noisy contexts: performance, metacognitive evaluations and head movements. Cogn Res Princ Implic 2024; 9:4. PMID: 38191869; PMCID: PMC10774233; DOI: 10.1186/s41235-023-00530-w.
Abstract
Localizing sounds in noisy environments can be challenging. Here, we reproduce real-life soundscapes to investigate the effects of environmental noise on the sound localization experience. We evaluated participants' performance and metacognitive assessments, including measures of sound localization effort and confidence, while also tracking their spontaneous head movements. Normal-hearing participants (N = 30) performed a speech-localization task in three common soundscapes of progressively increasing complexity: nature, traffic, and a cocktail-party setting. To control visual information and measure behavior, we used visual virtual reality technology. The results revealed that the complexity of the soundscape affected both performance errors and metacognitive evaluations: participants reported increased effort and reduced confidence for sound localization in more complex noise environments. By contrast, the level of soundscape complexity did not influence the use of spontaneous exploratory head-related behaviors. We also observed that, irrespective of the noise condition, participants who made more head rotations and explored a wider extent of space by rotating their heads showed lower localization errors. Interestingly, we found preliminary evidence that an increase in spontaneous head movements, specifically in the extent of head rotation, leads to a decrease in perceived effort and an increase in confidence at the single-trial level. These findings expand previous observations on sound localization in noisy environments by broadening the perspective to include metacognitive evaluations, exploratory behaviors, and their interactions.
Affiliation(s)
- Chiara Valzolgher
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068, Rovereto, TN, Italy
- Sara Capra
- Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Trento, Italy
- Elena Gessa
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068, Rovereto, TN, Italy
- Tommaso Rosi
- Department of Physics, University of Trento, Trento, Italy
- Elena Giovanelli
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068, Rovereto, TN, Italy
- Francesco Pavani
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068, Rovereto, TN, Italy
- Centro Interuniversitario di Ricerca "Cognizione, Linguaggio e Sordità" (CIRCLeS), Trento, Italy
10
Barot P, Mombaur K, MacDonald EN. Estimating speaker direction on a humanoid robot with binaural acoustic signals. PLoS One 2024; 19:e0296452. PMID: 38165991; PMCID: PMC10760655; DOI: 10.1371/journal.pone.0296452.
Abstract
To achieve human-like behaviour during speech interactions, a humanoid robot must estimate the location of a human talker. Here, we present a method to optimize the parameters used for direction of arrival (DOA) estimation while also considering real-time applications for human-robot interaction scenarios. The method is applied to a binaural sound source localization framework on a humanoid robotic head. Real data were collected and annotated for this work. Optimizations are performed via a brute-force method and a Bayesian model-based method; the results are validated and discussed, and the effects on latency for real-time use are also explored.
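A common building block for binaural DOA estimation of the kind described above is GCC-PHAT: estimate the interchannel time delay from the phase of the cross-spectrum, then map it to an azimuth with a free-field two-microphone model. This is a generic sketch, not the paper's specific framework; the microphone spacing and sampling rate are assumed values.

```python
import numpy as np

def gcc_phat_tdoa(sig_l, sig_r, fs):
    """Estimate the time difference of arrival (seconds) between two channels
    using generalized cross-correlation with phase transform (GCC-PHAT).
    Positive values mean the left channel lags (sound arrived right-side first)."""
    n = len(sig_l) + len(sig_r)
    cross = np.fft.rfft(sig_l, n=n) * np.conj(np.fft.rfft(sig_r, n=n))
    cross /= np.abs(cross) + 1e-12          # PHAT: keep phase, discard magnitude
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

def azimuth_from_tdoa(tdoa, mic_distance=0.18, c=343.0):
    """Map a TDOA to an azimuth (degrees, positive to the right), assuming a
    free-field model with two microphones mic_distance metres apart."""
    return float(np.degrees(np.arcsin(np.clip(tdoa * c / mic_distance, -1.0, 1.0))))

# Simulated check: delay the left channel by 8 samples at 16 kHz,
# i.e. the source is to the right of the head.
fs = 16000
rng = np.random.default_rng(0)
sig_r = rng.standard_normal(4096)
sig_l = np.concatenate((np.zeros(8), sig_r[:-8]))  # left lags by 8 samples
tdoa = gcc_phat_tdoa(sig_l, sig_r, fs)
azimuth = azimuth_from_tdoa(tdoa)
```

In a real binaural framework, the head's shadowing makes the free-field sine model only approximate, which is one reason parameter optimization and learned mappings, as the paper explores, can help.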
Affiliation(s)
- Pranav Barot
- Department of Systems Design Engineering, University of Waterloo, Waterloo, Ontario, Canada
- Katja Mombaur
- Department of Systems Design Engineering, University of Waterloo, Waterloo, Ontario, Canada
- Karlsruhe Institute of Technology (KIT), Institute of Anthropomatics and Robotics (IAR), Optimization and Biomechanics for Human-Centred Robotics, Karlsruhe, Germany
- Ewen N. MacDonald
- Department of Systems Design Engineering, University of Waterloo, Waterloo, Ontario, Canada
11
Alemu RZ, Papsin BC, Harrison RV, Blakeman A, Gordon KA. Head and Eye Movements Reveal Compensatory Strategies for Acute Binaural Deficits During Sound Localization. Trends Hear 2024; 28:23312165231217910. PMID: 38297817; PMCID: PMC10832417; DOI: 10.1177/23312165231217910.
Abstract
The present study aimed to define use of head and eye movements during sound localization in children and adults to: (1) assess effects of stationary versus moving sound and (2) define effects of binaural cues degraded through acute monaural ear plugging. Thirty-three youth (MAge = 12.9 years) and seventeen adults (MAge = 24.6 years) with typical hearing were recruited and asked to localize white noise anywhere within a horizontal arc from -60° (left) to +60° (right) azimuth in two conditions (typical binaural and right ear plugged). In each trial, sound was presented at an initial stationary position (L1) and then while moving at ∼4°/s until reaching a second position (L2). Sound moved in five conditions (±40°, ±20°, or 0°). Participants adjusted a laser pointer to indicate L1 and L2 positions. Unrestricted head and eye movements were collected with gyroscopic sensors on the head and eye-tracking glasses, respectively. Results confirmed that accurate sound localization of both stationary and moving sound is disrupted by acute monaural ear plugging. Eye movements preceded head movements for sound localization in normal binaural listening and head movements were larger than eye movements during monaural plugging. Head movements favored the unplugged left ear when stationary sounds were presented in the right hemifield and during sound motion in both hemifields regardless of the movement direction. Disrupted binaural cues have greater effects on localization of moving than stationary sound. Head movements reveal preferential use of the better-hearing ear and relatively stable eye positions likely reflect normal vestibular-ocular reflexes.
Affiliation(s)
- Robel Z. Alemu
- Archie's Cochlear Implant Laboratory, The Hospital for Sick Children, Toronto, ON, Canada
- Institute of Medical Science, The University of Toronto, Toronto, ON, Canada
- Blake C. Papsin
- Archie's Cochlear Implant Laboratory, The Hospital for Sick Children, Toronto, ON, Canada
- Institute of Medical Science, The University of Toronto, Toronto, ON, Canada
- Department of Otolaryngology-Head & Neck Surgery, University of Toronto, Toronto, ON, Canada
- Department of Otolaryngology, The Hospital for Sick Children, Toronto, ON, Canada
- Program in Neuroscience and Mental Health, Research Institute, Toronto, ON, Canada
- Robert V. Harrison
- Institute of Medical Science, The University of Toronto, Toronto, ON, Canada
- Department of Otolaryngology-Head & Neck Surgery, University of Toronto, Toronto, ON, Canada
- Program in Neuroscience and Mental Health, Research Institute, Toronto, ON, Canada
- Al Blakeman
- Archie's Cochlear Implant Laboratory, The Hospital for Sick Children, Toronto, ON, Canada
- Karen A. Gordon
- Archie's Cochlear Implant Laboratory, The Hospital for Sick Children, Toronto, ON, Canada
- Institute of Medical Science, The University of Toronto, Toronto, ON, Canada
- Department of Otolaryngology-Head & Neck Surgery, University of Toronto, Toronto, ON, Canada
- Program in Neuroscience and Mental Health, Research Institute, Toronto, ON, Canada
- Department of Communication Disorders, The Hospital for Sick Children, Toronto, ON, Canada
12
Best V, Roverud E. Externalization of Speech When Listening With Hearing Aids. Trends Hear 2024; 28:23312165241229572. PMID: 38347733; PMCID: PMC10865954; DOI: 10.1177/23312165241229572.
Abstract
Subjective reports indicate that hearing aids can disrupt sound externalization and/or reduce the perceived distance of sounds. Here we conducted an experiment to explore this phenomenon and to quantify how frequently it occurs for different hearing-aid styles. Of particular interest were the effects of microphone position (behind the ear vs. in the ear) and dome type (closed vs. open). Participants were young adults with normal hearing or with bilateral hearing loss, who were fitted with hearing aids that allowed variations in the microphone position and the dome type. They were seated in a large sound-treated booth and presented with monosyllabic words from loudspeakers at a distance of 1.5 m. Their task was to rate the perceived externalization of each word using a rating scale that ranged from 10 (at the loudspeaker in front) to 0 (in the head) to -10 (behind the listener). On average, compared to unaided listening, hearing aids tended to reduce perceived distance and lead to more in-the-head responses. This was especially true for closed domes in combination with behind-the-ear microphones. The behavioral data along with acoustical recordings made in the ear canals of a manikin suggest that increased low-frequency ear-canal levels (with closed domes) and ambiguous spatial cues (with behind-the-ear microphones) may both contribute to breakdowns of externalization.
Affiliation(s)
- Virginia Best
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA 02215, USA
- Elin Roverud
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA 02215, USA
13
Ramírez M, Arend JM, von Gablenz P, Liesefeld HR, Pörschmann C. Toward Sound Localization Testing in Virtual Reality to Aid in the Screening of Auditory Processing Disorders. Trends Hear 2024; 28:23312165241235463. [PMID: 38425297] [PMCID: PMC10908240] [DOI: 10.1177/23312165241235463]
Abstract
Sound localization testing is key for comprehensive hearing evaluations, particularly in cases of suspected auditory processing disorders. However, sound localization is not commonly assessed in clinical practice, likely due to the complexity and size of conventional measurement systems, which require semicircular loudspeaker arrays in large and acoustically treated rooms. To address this issue, we investigated the feasibility of testing sound localization in virtual reality (VR). Previous research has shown that virtualization can lead to an increase in localization blur. To measure these effects, we conducted a study with a group of normal-hearing adults, comparing sound localization performance in different augmented reality and VR scenarios. We started with a conventional loudspeaker-based measurement setup and gradually moved to a virtual audiovisual environment, testing sound localization in each scenario using a within-participant design. The loudspeaker-based experiment yielded results comparable to those reported in the literature, and the results of the virtual localization test provided new insights into localization performance in state-of-the-art VR environments. By comparing localization performance between the loudspeaker-based and virtual conditions, we were able to estimate the increase in localization blur induced by virtualization relative to a conventional test setup. Notably, our study provides the first proxy normative cutoff values for sound localization testing in VR. As an outlook, we discuss the potential of a VR-based sound localization test as a suitable, accessible, and portable alternative to conventional setups and how it could serve as a time- and resource-saving prescreening tool to avoid unnecessarily extensive and complex laboratory testing.
Affiliation(s)
- Melissa Ramírez
- Institute of Computer and Communication Technology, TH Köln University of Applied Sciences, Cologne, Germany
- Audio Communication Group, Technische Universität Berlin, Berlin, Germany
- Johannes M. Arend
- Audio Communication Group, Technische Universität Berlin, Berlin, Germany
- Petra von Gablenz
- Institute of Hearing Technology and Audiology, Jade University of Applied Sciences and Cluster of Excellence ‘Hearing4all’, Oldenburg, Germany
- Christoph Pörschmann
- Institute of Computer and Communication Technology, TH Köln University of Applied Sciences, Cologne, Germany
14
Day ML. Head-related transfer functions of rabbits within the front horizontal plane. Hear Res 2024; 441:108924. [PMID: 38061267] [PMCID: PMC10872353] [DOI: 10.1016/j.heares.2023.108924]
Abstract
The head-related transfer function (HRTF) describes the direction-dependent acoustic filtering by the head that occurs between a source signal in free-field space and the signal at the tympanic membrane. HRTFs contain information on sound source location via interaural differences of their magnitude or phase spectra and via the shapes of their magnitude spectra. The present study characterized HRTFs for source locations in the front horizontal plane for nine rabbits, which are a species commonly used in studies of the central auditory system. HRTF magnitude spectra shared several features across individuals, including a broad spectral peak at 2.6 kHz that increased gain by 12 to 23 dB depending on source azimuth, and a notch at 7.6 kHz and peak at 9.8 kHz visible for most azimuths. Overall, frequencies above 4 kHz were amplified for sources ipsilateral to the ear and progressively attenuated for frontal and contralateral azimuths. The slope of the magnitude spectrum between 3 and 5 kHz was found to be an unambiguous monaural cue for source azimuths ipsilateral to the ear. Average interaural level difference (ILD) between 5 and 16 kHz varied monotonically with azimuth over ±31 dB despite a relatively small head size. Interaural time differences (ITDs) at 0.5 kHz and 1.5 kHz also varied monotonically with azimuth, over ±358 μs and ±260 μs, respectively. Remeasurement of HRTFs after pinna removal revealed that the large pinnae of rabbits were responsible for all spectral peaks and notches in magnitude spectra and were the main contribution to high-frequency ILDs (5-16 kHz), whereas the rest of the head was the main contribution to ITDs and low-frequency ILDs (0.2-1.5 kHz). Lastly, inter-individual differences in magnitude spectra were found to be small enough that deviations of individual HRTFs from an average HRTF were comparable in size to measurement error. Therefore, the average HRTF may be acceptable for use in neural or behavioral studies of rabbits implementing virtual acoustic space when measurement of individualized HRTFs is not possible.
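The monotonic azimuth dependence of ITDs described in this abstract can be illustrated with the classic Woodworth spherical-head approximation, ITD ≈ (a/c)(θ + sin θ). This is a sketch, not the method used in the paper: the head radius below is a made-up round number, and real heads (especially in large-pinnae species such as rabbits) deviate from a sphere.

```python
import math

def woodworth_itd_us(azimuth_deg, head_radius_m=0.03, c=343.0):
    """Woodworth spherical-head estimate of the interaural time
    difference (ITD), in microseconds, for a source at the given
    azimuth from the midline (0-90 degrees).

    head_radius_m is an illustrative guess, not a measured value."""
    theta = math.radians(azimuth_deg)
    return 1e6 * (head_radius_m / c) * (theta + math.sin(theta))

# ITD grows monotonically from the midline toward the side,
# qualitatively matching the azimuth dependence in the abstract.
itds = [woodworth_itd_us(a) for a in (0, 30, 60, 90)]
```

Measured low-frequency ITDs (e.g., the ±358 μs reported at 0.5 kHz) can exceed this simple estimate, since low-frequency diffraction and actual head shape are not captured by a rigid sphere.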
Affiliation(s)
- Mitchell L Day
- Department of Biological Sciences, Ohio University, Athens, OH 45701, USA.
15
Colas T, Farrugia N, Hendrickx E, Paquier M. Sound externalization in dynamic binaural listening: A comparative behavioral and EEG study. Hear Res 2023; 440:108912. [PMID: 37952369] [DOI: 10.1016/j.heares.2023.108912]
Abstract
Binaural reproduction aims at recreating a realistic sound scene at the ears of the listener using headphones. Unfortunately, externalization for frontal and rear sources is often poor (virtual sources are perceived inside the head instead of outside it). Nevertheless, previous studies have shown that large head-tracked movements can substantially improve externalization and that this improvement persists once the subject has stopped moving his/her head. The present study investigates the relation between externalization and event-related potentials (ERPs) by performing behavioral and EEG measurements under the same experimental conditions. Different degrees of externalization were achieved by preceding measurements with 1) head-tracked movements, 2) untracked head movements, and 3) no head movement. Results showed that performing a head movement, whether head tracking was active or not, increased the amplitude of ERP components after 100 ms, which suggests that preceding head movements alter auditory processing. Moreover, untracked head movements elicited a larger N1 amplitude, which might be a marker of a break in consistency with the real world. While externalization scores were higher after head-tracked movements in the behavioral experiment, no marker of externalization could be found in the EEG results.
Affiliation(s)
- Tom Colas
- University of Brest, CNRS Lab-STICC UMR 6285, 6 avenue Victor Le Gorgeu, CS 93837, 29238 Brest Cedex 3, France.
- Nicolas Farrugia
- IMT Atlantique, CNRS Lab-STICC UMR 6285, 655 avenue du Technopole, 29280 Plouzane, France
- Etienne Hendrickx
- University of Brest, CNRS Lab-STICC UMR 6285, 6 avenue Victor Le Gorgeu, CS 93837, 29238 Brest Cedex 3, France
- Mathieu Paquier
- University of Brest, CNRS Lab-STICC UMR 6285, 6 avenue Victor Le Gorgeu, CS 93837, 29238 Brest Cedex 3, France
16
Omichi R, Kariya S, Maeda Y, Fukushima K, Kataoka Y, Sugaya A, Nishizaki K, Ando M. Cochlear Implantation in the Poorer-Hearing Ear Is a Reasonable Choice. Acta Med Okayama 2023; 77:589-593. [PMID: 38145932] [DOI: 10.18926/amo/66150]
Abstract
Choosing the optimal side for cochlear implantation (CI) remains a major challenge because of the lack of evidence. We investigated the choice of the surgery side for CI (i.e., the better- or poorer-hearing ear) in patients with asymmetric hearing. Audiological records of 74 adults with a unilateral hearing aid who had undergone surgery at Okayama University Hospital were reviewed. The aided ear was defined as the better-hearing ear, and the unaided ear was considered the poorer-hearing ear. We performed a multiple regression analysis to identify potential predictors of speech recognition performance after unilateral CI. Fifty-two patients underwent CI in the poorer-hearing ear. The post-CI bimodal hearing rate was far higher in the poorer-ear group (77.8% vs. 22.2%). A multivariate analysis revealed that prelingual hearing loss and the patient's age at CI significantly affected the speech recognition outcome (beta coefficients: 24.6 and -0.33; 95% confidence intervals [11.75 to 37.45] and [-0.58 to -0.09], respectively), but the CI surgery side did not (-6.76, [-14.92 to 1.39]). Unilateral CI in the poorer-hearing ear may therefore be a reasonable choice for adult patients with postlingual severe hearing loss, providing a greater opportunity for postoperative bimodal hearing.
Affiliation(s)
- Ryotaro Omichi
- Department of Otolaryngology-Head and Neck Surgery, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences
- Shin Kariya
- Department of Otolaryngology-Head and Neck Surgery, Kawasaki Medical University
- Yukihide Maeda
- Department of Otolaryngology-Head and Neck Surgery, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences
- Yuko Kataoka
- Department of Otolaryngology-Head and Neck Surgery, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences
- Akiko Sugaya
- Department of Otolaryngology-Head and Neck Surgery, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences
- Kazunori Nishizaki
- Department of Otolaryngology-Head and Neck Surgery, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences
- Mizuo Ando
- Department of Otolaryngology-Head and Neck Surgery, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences
17
O'Donohue M, Lacherez P, Yamamoto N. Audiovisual spatial ventriloquism is reduced in musicians. Hear Res 2023; 440:108918. [PMID: 37992516] [DOI: 10.1016/j.heares.2023.108918]
Abstract
There is great scientific and public interest in claims that musical training improves general cognitive and perceptual abilities. While this is controversial, recent and rather convincing evidence suggests that musical training refines the temporal integration of auditory and visual stimuli at a general level. We investigated whether musical training also affects integration in the spatial domain, via an auditory localisation experiment that measured ventriloquism (where localisation is biased towards visual stimuli on audiovisual trials) and recalibration (a unimodal localisation aftereffect). While musicians (n = 22) and non-musicians (n = 22) did not have significantly different unimodal precision or accuracy, musicians were significantly less susceptible than non-musicians to ventriloquism, with large effect sizes. We replicated these results in another experiment with an independent sample of 24 musicians and 21 non-musicians. Across both experiments, spatial recalibration did not significantly differ between the groups even though musicians resisted ventriloquism. Our results suggest that the multisensory expertise afforded by musical training refines spatial integration, a process that underpins multisensory perception.
Affiliation(s)
- Matthew O'Donohue
- Queensland University of Technology (QUT), School of Psychology and Counselling, Kelvin Grove, QLD 4059, Australia.
- Philippe Lacherez
- Queensland University of Technology (QUT), School of Psychology and Counselling, Kelvin Grove, QLD 4059, Australia
- Naohide Yamamoto
- Queensland University of Technology (QUT), School of Psychology and Counselling, Kelvin Grove, QLD 4059, Australia; Queensland University of Technology (QUT), Centre for Vision and Eye Research, Kelvin Grove, QLD 4059, Australia
18
Körtje M, Stöver T, Baumann U, Weissgerber T. Impact of processing-latency induced interaural delay and level discrepancy on sensitivity to interaural level differences in cochlear implant users. Eur Arch Otorhinolaryngol 2023; 280:5241-5249. [PMID: 37219685] [PMCID: PMC10620283] [DOI: 10.1007/s00405-023-08013-w]
Abstract
PURPOSE This study investigated whether an interaural delay, e.g. caused by the processing latency of a hearing device, can affect sensitivity to interaural level differences (ILDs) in normal hearing subjects or cochlear implant (CI) users with contralateral normal hearing (SSD-CI). METHODS Sensitivity to ILD was measured in 10 SSD-CI subjects and in 24 normal hearing subjects. The stimulus was a noise burst presented via headphones and via a direct cable connection (CI). ILD sensitivity was measured for different interaural delays in the range induced by hearing devices. ILD sensitivity was correlated with results obtained in a sound localization task using seven loudspeakers in the frontal horizontal plane. RESULTS In the normal hearing subjects the sensitivity to interaural level differences deteriorated significantly with increasing interaural delays. In the CI group, no significant effect of interaural delays on ILD sensitivity was found. The NH subjects were significantly more sensitive to ILDs. The mean localization error in the CI group was 10.8° higher than in the normal hearing group. No correlation between sound localization ability and ILD sensitivity was found. CONCLUSION Interaural delays influence the perception of ILDs. For normal hearing subjects a significant decrement in sensitivity to ILD was measured. The effect could not be confirmed in the tested SSD-CI group, probably due to a small subject group with large variations. The temporal matching of the two sides may be beneficial for ILD processing and thus sound localization for CI patients. However, further studies are needed for verification.
Affiliation(s)
- Monika Körtje
- ENT Department, Audiological Acoustics, Goethe University Frankfurt, University Hospital Frankfurt, Theodor-Stern-Kai 7, 60590, Frankfurt (Main), Germany.
- Timo Stöver
- ENT Department, Goethe University Frankfurt, University Hospital Frankfurt, Frankfurt (Main), Germany
- Uwe Baumann
- ENT Department, Audiological Acoustics, Goethe University Frankfurt, University Hospital Frankfurt, Theodor-Stern-Kai 7, 60590, Frankfurt (Main), Germany
- Tobias Weissgerber
- ENT Department, Audiological Acoustics, Goethe University Frankfurt, University Hospital Frankfurt, Theodor-Stern-Kai 7, 60590, Frankfurt (Main), Germany
19
Orr J, Ebel W, Gai Y. Localizing concurrent sound sources with binaural microphones: A simulation study. Hear Res 2023; 439:108884. [PMID: 37748242] [DOI: 10.1016/j.heares.2023.108884]
Abstract
The human auditory system can localize multiple sound sources using time, intensity, and frequency cues in the sound received by the two ears. Being able to spatially segregate the sources helps perception in challenging conditions when multiple sounds coexist. This study used model simulations to explore an algorithm for localizing multiple sources in azimuth with binaural (i.e., two) microphones. The algorithm relies on the "sparseness" property of everyday signals in the time-frequency domain: sound coming from different locations carries unique spatial features and therefore forms distinct clusters. Based on an interaural normalization procedure, the model generated spiral patterns for sound sources in the frontal hemifield. The model itself was created using broadband noise for better accuracy, because speech typically has sporadic energy at high frequencies. The model at an arbitrary frequency can be used to predict locations of speech and music occurring alone or concurrently, and a classification algorithm was applied to measure the localization error. Under anechoic conditions, averaged errors in azimuth increased from 4.5° to 19°, with RMS errors ranging from 6.4° to 26.7°, as model frequency increased from 300 to 3000 Hz. The low-frequency model performance using short speech sounds was notably better than that of the generalized cross-correlation model. Two types of room reverberation were then introduced to simulate difficult listening conditions. Model performance under reverberation was more resilient at low frequencies than at high frequencies. Overall, our study presents a spiral model for rapidly predicting horizontal locations of concurrent sounds that is suitable for real-world scenarios.
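The clustering idea behind this abstract — interaural features of disjointly active ("sparse") sources form separable clusters — can be sketched in toy form. The signals, the ±6 dB gains, and the two-means clustering below are illustrative assumptions, not the authors' spiral model.

```python
import numpy as np

def per_frame_ild(left, right, frame=256):
    """Frame-wise interaural level difference (dB): a crude stand-in
    for the per-time-frequency-bin analysis described in the abstract."""
    n = min(len(left), len(right)) // frame
    l = left[:n * frame].reshape(n, frame)
    r = right[:n * frame].reshape(n, frame)
    eps = 1e-12  # avoid log of zero on silent frames
    rms_l = np.sqrt((l ** 2).mean(axis=1)) + eps
    rms_r = np.sqrt((r ** 2).mean(axis=1)) + eps
    return 20 * np.log10(rms_l / rms_r)

def db_gain(db):
    return 10 ** (db / 20)

rng = np.random.default_rng(0)
# Two "sparse" sources: each active in disjoint time frames, mimicking
# the time-frequency disjointness that lets clustering separate them.
src_a = rng.standard_normal(4096)
src_b = rng.standard_normal(4096)
# Hypothetical ILDs: +6 dB (source A left of center), -6 dB (source B).
left = np.concatenate([src_a * db_gain(+3), src_b * db_gain(-3)])
right = np.concatenate([src_a * db_gain(-3), src_b * db_gain(+3)])

ilds = per_frame_ild(left, right)
# Two-means clustering on the 1-D per-frame ILD feature.
centers = np.array([ilds.min(), ilds.max()])
for _ in range(10):
    assign = np.abs(ilds[:, None] - centers[None, :]).argmin(axis=1)
    centers = np.array([ilds[assign == k].mean() for k in (0, 1)])
# The recovered cluster centers sit near the true ILDs of -6 and +6 dB.
```

Real mixtures overlap in time, which is why the paper works per frequency bin and adds an interaural normalization; this sketch only shows why sparseness makes the clusters separable at all.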
Affiliation(s)
- Jakeh Orr
- School of Science and Engineering, Saint Louis University, Saint Louis, MO 63105, USA.
- William Ebel
- School of Science and Engineering, Saint Louis University, Saint Louis, MO 63105, USA
- Yan Gai
- School of Science and Engineering, Saint Louis University, Saint Louis, MO 63105, USA
20
Sitdikov VM, Gvozdeva AP, Andreeva IG. A quick method for determining the relative minimum audible distance using sound images. Atten Percept Psychophys 2023; 85:2718-2730. [PMID: 36949259] [DOI: 10.3758/s13414-023-02663-y]
Abstract
Auditory localization plays an essential role in various tasks, including spatial orientation, locomotion, attention and memory. Optimization of experimental routine is important for preliminary assessment of the subject's sound localization ability. In the present study, a new quick technique for estimating the relative minimum audible distance (RMAD) using sound images is introduced. Twenty adults with normal hearing took part in six RMAD measurements in free field. The reference RMAD values were obtained using a method of constant stimuli by physically positioning a real sound source. The same method was used with stationary sound images created by superposition of signals emitted by two loudspeakers. To optimize the measurements, the RMADs were determined for the sound images using two adaptive psychoacoustic procedures known as one-down, one-up and two-down, one-up staircases. The group-average RMADs obtained by the method of constant stimuli for both types of stimuli and by two adaptive procedures were similar, 7% (SD = 2%). The effect of whether subjects were sighted or blindfolded was not significant for measurements of RMAD to sound images. The average measurement times were 373 s (SD = 20 s) for the method of constant stimuli, 85 s (SD = 9 s) for the one-down, one-up, and 124 s (SD = 14 s) for the two-down, one-up procedure. The results are consistent with the previous studies and confirm the validity of the measurements of RMAD using adaptive procedures with stationary sound images as a quick method.
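The two adaptive procedures named in this abstract are standard Levitt staircases; a minimal simulation of the two-down, one-up rule (which converges near the 70.7%-correct point) might look like the following. The logistic observer and all parameter values are illustrative assumptions, not the authors' protocol.

```python
import math
import random

def two_down_one_up(threshold, start=10.0, step=1.0, trials=300, seed=7):
    """Simulate a two-down, one-up adaptive staircase.

    The simulated observer answers correctly with probability given by a
    logistic psychometric function of (level - threshold). Two
    consecutive correct responses make the task harder (level steps
    down); one error makes it easier (level steps up), so the track
    hovers near the ~70.7%-correct level. Returns the mean of the last
    eight reversal levels as the threshold estimate."""
    rng = random.Random(seed)
    level, run, last_dir, reversals = start, 0, 0, []
    for _ in range(trials):
        p_correct = 1.0 / (1.0 + math.exp(-(level - threshold)))
        if rng.random() < p_correct:
            run += 1
            if run == 2:                    # two correct: step down
                run = 0
                if last_dir == +1:
                    reversals.append(level)  # direction change recorded
                level, last_dir = level - step, -1
        else:                               # one error: step up
            run = 0
            if last_dir == -1:
                reversals.append(level)
            level, last_dir = level + step, +1
    tail = reversals[-8:]
    return sum(tail) / len(tail)

estimate = two_down_one_up(threshold=5.0)  # settles near the threshold
```

Each downward step costs two trials, which is consistent with the longer average measurement time reported above for the two-down, one-up procedure (124 s) compared with one-down, one-up (85 s).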
Affiliation(s)
- V M Sitdikov
- Laboratory of Comparative Sensory Physiology, Sechenov Institute of Evolutionary Physiology and Biochemistry of Russian Academy of Sciences, Saint Petersburg, Russia
- A P Gvozdeva
- Laboratory of Comparative Sensory Physiology, Sechenov Institute of Evolutionary Physiology and Biochemistry of Russian Academy of Sciences, Saint Petersburg, Russia
- I G Andreeva
- Laboratory of Comparative Sensory Physiology, Sechenov Institute of Evolutionary Physiology and Biochemistry of Russian Academy of Sciences, Saint Petersburg, Russia.
21
Andren KG, Duffin K, Ryan MT, Riley CA, Tolisano AM. Postoperative optimization of cochlear implantation for single sided deafness and asymmetric hearing loss: a systematic review. Cochlear Implants Int 2023; 24:342-353. [PMID: 37490782] [DOI: 10.1080/14670100.2023.2239512]
Abstract
OBJECTIVE Identify and evaluate the effectiveness of methods for improving postoperative cochlear implant (CI) hearing performance in subjects with single-sided deafness (SSD) and asymmetric hearing loss (AHL). DATA SOURCES Embase, PubMed, Scopus. REVIEW METHODS Systematic review and narrative synthesis. English-language studies of adult CI recipients with SSD and AHL reporting a postoperative intervention and comparative audiometric data pertaining to speech in noise, speech in quiet, and sound localization were included. RESULTS 32 studies met criteria for full-text review and 6 (n = 81) met final inclusion criteria. Interventions were categorized as formal auditory training, programming techniques, or hardware optimization. Formal auditory training (n = 10) produced no objective improvement in hearing outcomes. Experimental CI maps did not improve audiologic outcomes (n = 9). Programmed CI signal delays to improve synchronization demonstrated improved sound localization (n = 12). Hardware optimization, including multidirectional (n = 29) and remote (n = 11) microphones, improved sound localization and speech in noise, respectively. CONCLUSION Few studies meeting inclusion criteria and small sample sizes highlight the need for further study. Formal auditory training did not appear to improve hearing outcomes. Programming techniques, such as CI signal delay, and hardware optimization, such as multidirectional and remote microphones, show promise to improve outcomes for SSD and AHL CI users.
Affiliation(s)
- Kristofer G Andren
- Department of Otolaryngology - Head & Neck Surgery, San Antonio Uniformed Services Health Education Consortium, San Antonio, TX, USA
- Kevin Duffin
- Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- Matthew T Ryan
- Department of Otolaryngology - Head & Neck Surgery, Walter Reed National Military Medical Center, Bethesda, MD, USA
- Charles A Riley
- Department of Otolaryngology - Head & Neck Surgery, Walter Reed National Military Medical Center, Bethesda, MD, USA
- Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- Anthony M Tolisano
- Department of Otolaryngology - Head & Neck Surgery, Walter Reed National Military Medical Center, Bethesda, MD, USA
- Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
22
Berthomieu G, Koehl V, Paquier M. Loudness constancy for noise and speech: How instructions and source information affect loudness of distant sounds. Atten Percept Psychophys 2023; 85:2774-2796. [PMID: 37466907] [DOI: 10.3758/s13414-023-02719-z]
Abstract
The physical properties of a sound evolve as it travels away from its source. For example, the sound pressure level at the listener's ears varies with the source's distance and azimuth. However, several studies have reported that loudness remains constant when the distance between the source and the listener is varied. This loudness constancy has been reported to occur when the listener focuses attention on the sound as emitted by the source (the distal stimulus). Alternatively, the listener can focus on the sound as it reaches the ears (the proximal stimulus). The instructions given to the listener when assessing loudness can drive focus toward the proximal or distal stimulus. However, focusing on the distal stimulus requires sufficient information about the sound source, which can be provided either by the environment or by the stimulus itself. The present study gathers three experiments designed to assess loudness when driving listeners' focus toward the proximal or distal stimuli. Listeners were provided with information about the source that differed in quality and quantity depending on the environment (visible or hidden sources, free field or reverberant rooms) and on the stimulus itself (noise or speech). The results show that listeners reported constant loudness only when asked to focus on the distal stimulus, provided that enough information about the source was available. These results highlight that loudness depends on the way the listener focuses on the stimuli and emphasize the importance of the instructions given in loudness studies.
Affiliation(s)
- Vincent Koehl
- Univ Brest, Lab-STICC, CNRS, UMR 6285, F-29200, Brest, France
- Mathieu Paquier
- Univ Brest, Lab-STICC, CNRS, UMR 6285, F-29200, Brest, France
23
齐 映, 张 珂. [Intervention effects of bone conduction hearing aids in patients with single-sided deafness and asymmetric hearing loss]. Lin Chuang Er Bi Yan Hou Tou Jing Wai Ke Za Zhi 2023; 37:927-933. [PMID: 37905490] [PMCID: PMC10985660] [DOI: 10.13201/j.issn.2096-7993.2023.11.014]
Abstract
The incidence of single-sided deafness (SSD) is increasing year by year. Because of the hearing deficit in one ear, the sound localization ability, speech recognition in noise, and quality of life of patients with single-sided deafness are affected to varying degrees. This article reviews the intervention effects of different types of bone conduction hearing aids in patients with single-sided deafness and asymmetric hearing loss, as well as the differences in intervention effects among bone conduction hearing aids, contralateral routing of signal (CROS) aids, and cochlear implants (CI), to provide a reference for the auditory intervention and clinical treatment of single-sided deafness and asymmetric hearing loss.
Affiliation(s)
- 映婷 齐
- Department of Otolaryngology Head and Neck Surgery, Peking University Third Hospital, Beijing, 100191, China
- 珂 张
- Department of Otolaryngology Head and Neck Surgery, Peking University Third Hospital, Beijing, 100191, China
24
Potier S, Roulin A, Martin GR, Portugal SJ, Bonhomme V, Bouchet T, de Romans R, Meyrier E, Kelber A. Binocular field configuration in owls: the role of foraging ecology. Proc Biol Sci 2023; 290:20230664. [PMID: 37848065] [PMCID: PMC10581762] [DOI: 10.1098/rspb.2023.0664]
Abstract
The binocular field of vision differs widely in birds depending on ecological traits such as foraging. Owls (Strigiformes) have been considered to have a unique binocular field, but whether it is related to foraging has remained unknown. While taking into account allometry and phylogeny, we hypothesized that both daily activity cycle and diet determine the size and shape of the binocular field in owls. Here, we compared the binocular field configuration of 23 species of owls. While we found no effect of allometry and phylogeny, ecological traits strongly influence the binocular field shape and size. Binocular field shape of owls significantly differed from that of diurnal raptors. Among owls, binocular field shape was relatively conserved, but binocular field size differed among species depending on ecological traits, with larger binocular fields in species living in dense habitat and foraging on invertebrates. Our results suggest that (i) binocular field shape is associated with the time of foraging in the daily cycle (owls versus diurnal raptors) and (ii) that binocular field size differs between closely related owl species even though the general shape is conserved, possibly because the field of view is partially restricted by feathers, in a trade-off with auditory localization.
Affiliation(s)
- Simon Potier
- Department of Biology, Lund University, Sölvegatan 35, Lund S-22362, Sweden
- Les Ailes de l'Urga, 72 rue de la vieille route, 27320 Marcilly la Campagne, France
- Alexandre Roulin
- Department of Ecology and Evolution, University of Lausanne, Biophore 1015, Switzerland
- Graham R. Martin
- School of Biosciences, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK
- Steven J. Portugal
- Department of Biological Science, School of Life and Environmental Sciences, Royal Holloway University of London, Egham, Surrey TW20 0EX, UK
- Vincent Bonhomme
- ISEM, Univ Montpellier, CNRS, EPHE, IRD, 34095 Montpellier, France
- Équipe Dynamique de la biodiversité, anthropo-écologie, Place Eugène Bataillon - CC065, 34095 Montpellier Cedex 5, France
- Thierry Bouchet
- Académie de Fauconnerie, SAS Puy du Fou France, 85500 Les Epesses, France
| | - Romuald de Romans
- Espace Rambouillet, Office National des Forêts, route du coin du bois, 78120 Sonchamp, France
| | - Eva Meyrier
- Les Aigles du Léman, Domaine de Guidou, 74140 Sciez sur Léman, France
| | - Almut Kelber
- Department of Biology, Lund University, Sölvegatan 35, Lund S-22362, Sweden
| |
Collapse
|
25
|
Roup CM, Ferguson SD, Lander D. The relationship between extended high-frequency hearing and the binaural spatial advantage in young to middle-aged firefighters. J Acoust Soc Am 2023; 154:2055-2059. [PMID: 37782123 DOI: 10.1121/10.0021172] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/01/2023] [Accepted: 09/13/2023] [Indexed: 10/03/2023]
Abstract
Relationships between extended high-frequency (EHF) thresholds and speech-in-spatialized noise were examined in firefighters with a history of occupational noise and airborne toxin exposure. Speech recognition thresholds were measured for co-located and spatially separated (±90° azimuth) sentences in a competing signal using the Listening in Spatialized Noise-Sentences test. EHF hearing was significantly correlated with the spatial advantage, indicating that firefighters with poorer EHF thresholds experienced less benefit from spatial separation. The correlation between EHF thresholds and spatial hearing remained significant after controlling for age. Deficits in EHF and spatial hearing suggest firefighters may experience compromised speech understanding in job-related complex acoustic environments.
Collapse
Affiliation(s)
- Christina M Roup
- Department of Speech and Hearing Science, The Ohio State University, 1070 Carmack Road, 110 Pressey Hall, Columbus, Ohio 43210, USA
| | - Sarah D Ferguson
- Department of Speech and Hearing Science, The Ohio State University, 1070 Carmack Road, 110 Pressey Hall, Columbus, Ohio 43210, USA
| | - Devan Lander
- Department of Speech and Hearing Science, The Ohio State University, 1070 Carmack Road, 110 Pressey Hall, Columbus, Ohio 43210, USA
| |
Collapse
|
26
|
Öz O, D'Alessandro HD, Batuk MÖ, Sennaroğlu G, Govaerts PJ. Assessment of Binaural Benefits in Hearing and Hearing-Impaired Listeners. J Speech Lang Hear Res 2023; 66:3633-3648. [PMID: 37494143 DOI: 10.1044/2023_jslhr-23-00077] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/28/2023]
Abstract
PURPOSE The purpose of this study was to (a) investigate which speech material is most appropriate as the stimulus in head shadow effect (HSE) and binaural squelch (SQ) tests, (b) obtain normative values for both tests using the material found to be optimal, and (c) explore the results in bilateral cochlear implant (CI) users. METHOD Study participants consisted of 30 normal-hearing (NH) persons and 34 bilateral CI users. The study consisted of three phases. In the first phase, three speech materials, (1) monosyllabic words, (2) spondee words, and (3) sentences, were compared in terms of (a) effect size, (b) test-retest reliability, and (c) interindividual variability. In the second phase, the speech material selected in the first phase was used to test a further 24 NH listeners to obtain normative values for both tests. In the third phase, the tests were administered to a further 23 bilateral CI users, together with a localization test and the Speech, Spatial, and Qualities of Hearing scale. RESULTS The results of the first phase indicated that spondees and sentences were more robust materials than monosyllables. Although effect size and interindividual variability were comparable for spondees and sentences, sentences had higher test-retest reliability in this sample of CI users. With sentences, the mean (± standard deviation) HSE and SQ in the NH group were 58 ± 14% and 22 ± 11%, respectively. In the CI group, the mean HSE and SQ were 49 ± 13% and 13 ± 14%, respectively. There were no statistically significant correlations between the test results and the interval between the implantations, the length of binaural listening experience, or the asymmetry between the ears. CONCLUSIONS Sentences are the preferred stimulus material for the binaural HSE and SQ tests. Normative data are given for HSE and SQ with the LiCoS (linguistically controlled sentences) test. HSE is present in all bilateral CI users, whereas SQ is present in approximately seven out of 10 cases.
Collapse
Affiliation(s)
- Okan Öz
- The Eargroup, Antwerp, Belgium
- Department of Audiology, Faculty of Health Sciences, Hacettepe University, Ankara, Turkey
| | | | - Merve Özbal Batuk
- Department of Audiology, Faculty of Health Sciences, Hacettepe University, Ankara, Turkey
| | - Gonca Sennaroğlu
- Department of Audiology, Faculty of Health Sciences, Hacettepe University, Ankara, Turkey
| | - Paul J Govaerts
- The Eargroup, Antwerp, Belgium
- Faculty of Medicine and Health Sciences, Translational Neurosciences, Otorhinolaryngology & Head and Neck Surgery, University of Antwerp, Belgium
| |
Collapse
|
27
|
Park LR, Dillon MT, Buss E, Brown KD. Two-Year Outcomes of Cochlear Implant Use for Children With Unilateral Hearing Loss: Benefits and Comparison to Children With Normal Hearing. Ear Hear 2023; 44:955-968. [PMID: 36879386 PMCID: PMC10426784 DOI: 10.1097/aud.0000000000001353] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2022] [Accepted: 01/24/2023] [Indexed: 03/08/2023]
Abstract
OBJECTIVES Children with severe-to-profound unilateral hearing loss, including cases of single-sided deafness (SSD), lack access to the binaural cues that support spatial hearing, such as recognizing speech in complex multisource environments and sound source localization. Listening in a monaural condition negatively impacts communication, learning, and quality of life for children with SSD. Cochlear implant (CI) use may restore binaural hearing abilities and improve outcomes as compared to alternative treatments or no treatment. This study investigated performance over 24 months of CI use in young children with SSD as compared to the better hearing ear alone and to children with bilateral normal hearing (NH). DESIGN Eighteen children with SSD who received a CI between the ages of 3.5 and 6.5 years as part of a prospective clinical trial completed assessments of word recognition in quiet, masked sentence recognition, and sound source localization at regular intervals out to 24 months postactivation. Eighteen peers with bilateral NH, matched by age at the group level, completed the same test battery. Performance at 24 months postactivation for the SSD group was compared to the performance of the NH group. RESULTS Children with SSD have significantly poorer speech recognition in quiet, masked sentence recognition, and localization, both with and without the use of the CI, than their peers with NH. The SSD group experienced significant benefits with the CI+NH versus the NH ear alone on measures of isolated word recognition, masked sentence recognition, and localization. These benefits were realized within the first 3 months of use and were maintained through the 24-month postactivation interval. CONCLUSIONS Young children with SSD who use a CI experience significant isolated word recognition and bilateral spatial hearing benefits, although their performance remains poorer than that of their peers with NH.
Collapse
Affiliation(s)
- Lisa R. Park
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, North Carolina, USA
| | - Margaret T. Dillon
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, North Carolina, USA
| | - Emily Buss
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, North Carolina, USA
| | - Kevin D. Brown
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, North Carolina, USA
| |
Collapse
|
28
|
Xie B, Liu L, Jiang J, Zhang C, Zhao T. Auditory vertical localization in the median plane with conflicting dynamic interaural time difference and other elevation cues. J Acoust Soc Am 2023; 154:1770-1786. [PMID: 37721403 DOI: 10.1121/10.0020909] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/27/2023] [Accepted: 08/22/2023] [Indexed: 09/19/2023]
Abstract
Both dynamic variation of the interaural time difference (ITD) and static spectral cues provide information for front-back discrimination and vertical localization. However, the contributions of the two cues are still unclear; the static spectral cue has conventionally been regarded as dominant. In the present work, psychoacoustic experiments were conducted to examine the contributions of dynamic ITD and static spectral cues to vertical localization in the median plane. By modifying the head-related transfer functions used in a dynamic virtual auditory display, binaural signals were created with conflicting dynamic ITD and spectral cues that were either static or dynamically modified according to instantaneous head position. The results indicated that the dynamic ITD and static spectral cues contribute to vertical localization at low and high frequencies, respectively. For a full-bandwidth stimulus, conflicting dynamic ITD and static spectral cues usually result in two separate virtual sources at different elevations, corresponding to the spatial information conveyed by the low- and high-frequency bands, respectively. In most cases, no fused localization occurs at higher levels of cognitive processing. Therefore, dynamic ITD and static spectral cues contribute to vertical localization in different frequency ranges, and neither dominates vertical localization for wideband stimuli.
Collapse
Affiliation(s)
- Bosun Xie
- Acoustic Lab, School of Physics and Optoelectronics, South China University of Technology, Guangzhou, 510641, China
| | - Lulu Liu
- Acoustic Lab, School of Physics and Optoelectronics, South China University of Technology, Guangzhou, 510641, China
| | - Jianliang Jiang
- Acoustic Lab, School of Physics and Optoelectronics, South China University of Technology, Guangzhou, 510641, China
| | - Chengyun Zhang
- School of Mechanical & Electrical Engineering, Guangzhou University, Guangzhou, 510006, China
| | - Tong Zhao
- Acoustic Lab, School of Physics and Optoelectronics, South China University of Technology, Guangzhou, 510641, China
| |
Collapse
|
29
|
Xiong YZ, Addleman DA, Nguyen NA, Nelson P, Legge GE. Dual Sensory Impairment: Impact of Central Vision Loss and Hearing Loss on Visual and Auditory Localization. Invest Ophthalmol Vis Sci 2023; 64:23. [PMID: 37703039 PMCID: PMC10503591 DOI: 10.1167/iovs.64.12.23] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2023] [Accepted: 08/17/2023] [Indexed: 09/14/2023] Open
Abstract
Purpose In the United States, age-related macular degeneration (AMD) is a leading cause of low vision, producing central vision loss, and has a high co-occurrence with hearing loss. The impact of central vision loss on the daily functioning of older individuals cannot be fully addressed without considering their hearing status. We investigated the impact of combined central vision loss and hearing loss on spatial localization, an ability critical for social interactions and navigation. Methods Sixteen older adults with central vision loss primarily due to AMD, with or without co-occurring hearing loss, completed a spatial perimetry task in which they verbally reported the directions of visual or auditory targets. Auditory testing was done with eyes open in a dimly lit room or with a blindfold. Twenty-three normally sighted, age-matched, and hearing-matched control subjects also completed the task. Results Subjects with central vision loss missed visual targets more often, and their visual localization biases deviated increasingly from those of control subjects as scotoma size increased. However, these deficits did not generalize to sound localization. As hearing loss became more severe, sound localization variability increased, and this relationship was not altered by coexisting central vision loss. For both control and central vision loss subjects, sound localization was less reliable when subjects wore blindfolds, possibly due to the absence of visual contextual cues. Conclusions Although central vision loss impairs visual localization, it does not impair sound localization and does not prevent vision from providing useful contextual cues for sound localization.
Collapse
Affiliation(s)
- Ying-Zi Xiong
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota, United States
- Center for Applied and Translational Sensory Science, University of Minnesota, Minneapolis, Minnesota, United States
- Lions Vision Research and Rehabilitation Center, Wilmer Eye Institute, Johns Hopkins University, Baltimore, Maryland, United States
| | - Douglas A. Addleman
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota, United States
- Center for Applied and Translational Sensory Science, University of Minnesota, Minneapolis, Minnesota, United States
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, New Hampshire, United States
| | - Nam Anh Nguyen
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota, United States
| | - Peggy Nelson
- Center for Applied and Translational Sensory Science, University of Minnesota, Minneapolis, Minnesota, United States
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, Minnesota, United States
| | - Gordon E. Legge
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota, United States
- Center for Applied and Translational Sensory Science, University of Minnesota, Minneapolis, Minnesota, United States
| |
Collapse
|
30
|
de Carvalho NG, do Amaral MIR, Colella-Santos MF. AudBility: an online program for central auditory processing screening in school-aged children from 6 to 8 years old. Codas 2023; 35:e20220011. [PMID: 37646741 PMCID: PMC10547135 DOI: 10.1590/2317-1782/20232022011] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2022] [Accepted: 11/30/2022] [Indexed: 09/01/2023] Open
Abstract
PURPOSE To analyze the performance of students aged 6 to 8 years in an auditory skills screening software program, considering the influence of biological determinants and the correlation of the auditory tasks with behavioral central auditory processing (CAP) assessment tests, and to present the cutoff points of the battery. METHODS In the first stage, the sample consisted of 96 students with typical development, who underwent hearing screening at school. A self-perception questionnaire and auditory tasks of sound localization (SL), temporal resolution (TR), temporal ordering of frequency (OT-F) and duration (OT-D), auditory closure (AC), dichotic digits-binaural integration (DD), and figure-ground (FG) were applied. Of these, 66 children participated in the second stage of the study, which included basic audiological assessment and behavioral CAP assessment. RESULTS Gender influenced the DD task in the right ear. Age influenced the outcome of five auditory tasks. The right ear performed better in the DD and OT-F tasks. At ages 6 to 7 years, there was a correlation between screening and diagnosis in the AC, TR, DD, FG, and OT-F tasks. At age 8 years, there was a correlation in the DD and OT-F tasks. The pass/fail criteria varied according to the task and biological determinants. CONCLUSION There was a correlation between screening and diagnosis in a greater number of tasks in the 6-to-7-year age group. The cutoff points for the auditory tasks should be analyzed according to age, sex, and/or ear side.
Collapse
Affiliation(s)
- Nádia Giulian de Carvalho
- Programa de Pós-graduação em Saúde, Interdisciplinaridade e Reabilitação, Departamento de Desenvolvimento Humano e Reabilitação - DDHR, Universidade Estadual de Campinas - UNICAMP - Campinas (SP), Brasil.
| | - Maria Isabel Ramos do Amaral
- Departamento de Desenvolvimento Humano e Reabilitação - DDHR, Faculdade de Ciências Médicas - FCM, Universidade Estadual de Campinas - UNICAMP - Campinas (SP), Brasil.
| | - Maria Francisca Colella-Santos
- Departamento de Desenvolvimento Humano e Reabilitação - DDHR, Faculdade de Ciências Médicas - FCM, Universidade Estadual de Campinas - UNICAMP - Campinas (SP), Brasil.
- Centro de Investigação em Pediatria, Faculdade de Ciências Médicas - FCM, Universidade Estadual de Campinas - UNICAMP - Campinas (SP), Brasil.
| |
Collapse
|
31
|
Fink N, Levitas R, Eisenkraft A, Wagnert-Avraham L, Gertz SD, Fostick L. Perforated Concave Earplug (pCEP): A Proof-of-Concept Earplug to Improve Sound Localization without Compromising Noise Attenuation. Sensors (Basel) 2023; 23:7410. [PMID: 37687865 PMCID: PMC10490414 DOI: 10.3390/s23177410] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/06/2023] [Revised: 08/16/2023] [Accepted: 08/23/2023] [Indexed: 09/10/2023]
Abstract
Combat soldiers are currently faced with using a hearing-protection device (HPD) at the cost of adequately detecting critical signals impacting mission success. The current study tested the performance of the Perforated Concave Earplug (pCEP), a proof-of-concept passive HPD consisting of a concave bowl-like rigid structure attached to a commercial roll-down earplug, designed to improve sound localization with minimal compromise of noise attenuation. Primarily intended for combat/military training settings, our aim was to evaluate localization of relevant sound sources (single/multiple gunfire, continuous noise, spoken word) compared to 3M™ Combat Arms™ 4.1 earplugs in open mode and 3M™ E-A-R™ Classic™ earplugs. Ninety normal-hearing participants, aged 20-35 years, were asked to localize stimuli delivered from monitors evenly distributed around them in no-HPD and with-HPD conditions. The results showed that (1) localization abilities worsened when using HPDs; (2) the spoken word was localized less accurately than the other stimuli; (3) mean root-mean-square errors (RMSEs) were largest for stimuli emanating from the rear monitors; and (4) localization abilities corresponded to HPD attenuation levels (largest attenuation and mean RMSE: 3M™ E-A-R™ Classic™; smallest attenuation and mean RMSE: 3M™ Combat Arms™ 4.1; the pCEP was mid-range on both). These findings suggest that the pCEP may be beneficial in military settings, providing improved sound localization relative to the 3M™ E-A-R™ Classic™ and higher attenuation relative to the 3M™ Combat Arms™ 4.1, recommending its use in noisy environments.
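Several entries in this list summarize localization accuracy as a root-mean-square error (RMSE) over trials. As a minimal sketch (the exact computation used in the study above is not reproduced here, and the function name and example data are illustrative), an azimuth RMSE should wrap each target-response difference into [-180°, 180°) before squaring:

```python
import numpy as np

def localization_rmse(targets_deg, responses_deg):
    """Root-mean-square localization error in degrees, wrapping each
    target-response difference into [-180, 180) so that, e.g., a 350 deg
    response to a 10 deg target counts as a 20 deg error, not 340 deg."""
    t = np.asarray(targets_deg, dtype=float)
    r = np.asarray(responses_deg, dtype=float)
    err = (r - t + 180.0) % 360.0 - 180.0   # signed error, wrapped
    return float(np.sqrt(np.mean(err ** 2)))

# Three illustrative trials; the last one crosses the 0-degree boundary.
print(localization_rmse([0.0, 90.0, 350.0], [10.0, 80.0, 10.0]))  # prints 14.142135623730951
```

Without the wrap, the third trial would contribute a spurious 340° error and dominate the mean.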
Collapse
Affiliation(s)
- Nir Fink
- Department of Communication Disorders, Acoustics and Noise Research Lab in the Name of Laurent Levy, Ariel University, Ariel 40700, Israel
- Israel Defense Forces Medical Corps, Hakirya 6473424, Israel;
| | - Rachel Levitas
- Israel Defense Forces Medical Corps, Hakirya 6473424, Israel;
| | - Arik Eisenkraft
- Institute for Research in Military Medicine (IRMM), Faculty of Medicine of The Hebrew University of Jerusalem and the Israel Defense Forces Medical Corps, Jerusalem 9112102, Israel; (A.E.); (L.W.-A.); (S.D.G.)
| | - Linn Wagnert-Avraham
- Institute for Research in Military Medicine (IRMM), Faculty of Medicine of The Hebrew University of Jerusalem and the Israel Defense Forces Medical Corps, Jerusalem 9112102, Israel; (A.E.); (L.W.-A.); (S.D.G.)
| | - S. David Gertz
- Institute for Research in Military Medicine (IRMM), Faculty of Medicine of The Hebrew University of Jerusalem and the Israel Defense Forces Medical Corps, Jerusalem 9112102, Israel; (A.E.); (L.W.-A.); (S.D.G.)
- The Saul and Joyce Brandman Hub for Cardiovascular Research and the Department of Medical Neurobiology, Institute for Medical Research (IMRIC), Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem 9112102, Israel
| | - Leah Fostick
- Department of Communication Disorders, Auditory Perception Lab in the Name of Laurent Levy, Ariel University, Ariel 40700, Israel;
| |
Collapse
|
32
|
Jünemann P, Schneider A, Waßmuth J. Direction-of-arrival estimation for acoustic signals based on direction-dependent parameter tuning of a bioinspired binaural coupling system. Bioinspir Biomim 2023; 18:056004. [PMID: 37413997 DOI: 10.1088/1748-3190/ace50a] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/05/2023] [Accepted: 07/06/2023] [Indexed: 07/08/2023]
Abstract
Bioinspired methods for sound source localization offer opportunities for resource reduction as well as concurrent performance improvement in contrast to conventional techniques. Usually, sound source localization requires a large number of microphones arranged in irregular geometries, and thus has high resource requirements in terms of space and data processing. Motivated by biology and using digital signal processing methods, an approach is presented that adapts the coupled hearing system of the fly Ormia ochracea to a minimally spaced two-microphone array. Despite its minute interaural separation, the fly is able to overcome physical limitations in localizing low-frequency sound sources. By exploiting the filtering effect of the coupling system, the direction of arrival of the sound is determined with two microphones at a spacing of 0.06 m. For conventional beamforming algorithms, these physical limitations would result in degraded localization performance. In this work, the bioinspired coupling system is analyzed and subsequently parameterized in a direction-sensitive manner for different directions of sound incidence. For the parameterization, an optimization method is presented that can be applied to excitation by plane as well as spherical sound waves. Finally, the method was assessed using simulated and measured data. For 90% of the simulated scenarios, the correct direction of incidence could be determined with an error of less than 1° despite the use of a minimally spaced two-microphone array. The experiments with measured data also resulted in correct determination of the direction of incidence, which qualifies the bioinspired method for practical use in digital hardware systems.
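For orientation, the conventional baseline that such bioinspired systems improve on is a plain interaural-time-difference estimator: with microphone spacing d and speed of sound c, a far-field source at angle θ from broadside produces a channel delay τ = d·sin(θ)/c. The sketch below (illustrative parameters, not the paper's coupled-system method) recovers θ from the cross-correlation lag of two simulated channels at the paper's 0.06 m spacing:

```python
import numpy as np

def doa_from_itd(left, right, fs, d=0.06, c=343.0):
    """Estimate a far-field direction of arrival (degrees from broadside)
    from the inter-channel delay of a two-microphone recording.
    d: microphone spacing (m); c: speed of sound (m/s)."""
    # Lag (in samples) at which the right channel best aligns with the left.
    corr = np.correlate(right, left, mode="full")
    lag = np.argmax(corr) - (len(left) - 1)
    tau = lag / fs                       # right-channel delay relative to left (s)
    # Far-field geometry: tau = d * sin(theta) / c
    return float(np.degrees(np.arcsin(np.clip(tau * c / d, -1.0, 1.0))))

# Simulate a broadband source 30 degrees off broadside, nearer the left mic,
# so the right channel lags by d * sin(30 deg) / c seconds.
fs, d, c = 96_000, 0.06, 343.0
rng = np.random.default_rng(0)
sig = rng.standard_normal(fs // 10)                       # 0.1 s of white noise
shift = int(round(d * np.sin(np.radians(30.0)) / c * fs))  # delay in samples (~8)
left, right = sig, np.roll(sig, shift)
print(round(doa_from_itd(left, right, fs, d, c), 1))
```

At fs = 96 kHz and d = 0.06 m the maximum possible delay is only about 17 samples, so this plain estimator quantizes coarsely (the 30° source above comes back as roughly 28°), which illustrates why the coupled, direction-sensitive filtering studied in the paper is attractive at such small spacings.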
Collapse
Affiliation(s)
- Philipp Jünemann
- Biomechatronics and Embedded Systems Group, Faculty of Engineering and Mathematics, University of Applied Sciences and Arts, Bielefeld, Germany
- Institute of System Dynamics and Mechatronics, University of Applied Sciences and Arts, Bielefeld, Germany
| | - Axel Schneider
- Biomechatronics and Embedded Systems Group, Faculty of Engineering and Mathematics, University of Applied Sciences and Arts, Bielefeld, Germany
- Institute of System Dynamics and Mechatronics, University of Applied Sciences and Arts, Bielefeld, Germany
| | - Joachim Waßmuth
- Biomechatronics and Embedded Systems Group, Faculty of Engineering and Mathematics, University of Applied Sciences and Arts, Bielefeld, Germany
- Institute of System Dynamics and Mechatronics, University of Applied Sciences and Arts, Bielefeld, Germany
| |
Collapse
|
33
|
Yan HQ, Li HT, Li XS, Gong SS. [Effect of age-related hearing loss on cognitive function and sound localization]. Zhonghua Er Bi Yan Hou Tou Jing Wai Ke Za Zhi 2023; 58:812-816. [PMID: 37599247 DOI: 10.3760/cma.j.cn115330-20221013-00608] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [MESH Headings] [Subscribe] [Scholar Register] [Indexed: 08/22/2023]
Affiliation(s)
- H Q Yan
- Department of Otorhinolaryngology, Beijing Friendship Hospital, Capital Medical University, Clinical Center for Hearing Loss, Capital Medical University, Beijing 100050, China
| | - H T Li
- Department of Neurology, Beijing Friendship Hospital, Capital Medical University, Beijing 100050, China
| | - X S Li
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing 100050, China
| | - S S Gong
- Department of Otorhinolaryngology, Beijing Friendship Hospital, Capital Medical University, Clinical Center for Hearing Loss, Capital Medical University, Beijing 100050, China
| |
Collapse
|
34
|
Valzolgher C, Bouzaid S, Grenouillet S, Gatel J, Ratenet L, Murenu F, Verdelet G, Salemme R, Gaveau V, Coudert A, Hermann R, Truy E, Farnè A, Pavani F. Training spatial hearing in unilateral cochlear implant users through reaching to sounds in virtual reality. Eur Arch Otorhinolaryngol 2023; 280:3661-3672. [PMID: 36905419 PMCID: PMC10313844 DOI: 10.1007/s00405-023-07886-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2022] [Accepted: 02/13/2023] [Indexed: 03/12/2023]
Abstract
BACKGROUND AND PURPOSE Use of a unilateral cochlear implant (UCI) is associated with limited spatial hearing skills. Evidence that these abilities can be trained in UCI users remains limited. In this study, we assessed whether a Spatial training based on hand-reaching to sounds performed in virtual reality improves spatial hearing abilities in UCI users. METHODS Using a crossover randomized clinical trial, we compared the effects of a Spatial training protocol with those of a Non-Spatial control training. We tested 17 UCI users in a head-pointing to sound task and in an audio-visual attention-orienting task, before and after each training. The study is registered at clinicaltrials.gov (NCT04183348). RESULTS During the Spatial VR training, sound localization errors in azimuth decreased. Moreover, when comparing head-pointing to sounds before vs. after training, localization errors decreased more after the Spatial training than after the control training. No training effects emerged in the audio-visual attention-orienting task. CONCLUSIONS Our results show that sound localization in UCI users improves during a Spatial training, with benefits that extend to a non-trained sound localization task (generalization). These findings have potential for novel rehabilitation procedures in clinical contexts.
Collapse
Affiliation(s)
- Chiara Valzolgher
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini, 31 Rovereto, Trento, Italy.
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France.
| | - Sabrina Bouzaid
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
| | - Solene Grenouillet
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
| | | | | | - Francesca Murenu
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
| | - Grégoire Verdelet
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Neuroimmersion, Lyon, France
| | - Romeo Salemme
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Neuroimmersion, Lyon, France
| | - Valérie Gaveau
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
| | | | | | - Eric Truy
- Hospices Civils de Lyon, Lyon, France
| | - Alessandro Farnè
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Neuroimmersion, Lyon, France
| | - Francesco Pavani
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini, 31 Rovereto, Trento, Italy
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Centro Interuniversitario di Ricerca "Cognizione, Linguaggio e Sordità" (CIRCLeS), Trento, Italy
| |
Collapse
|
35
|
Brown AD, Hayward T, Portfors CV, Coffin AB. On the value of diverse organisms in auditory research: From fish to flies to humans. Hear Res 2023; 432:108754. [PMID: 37054531 PMCID: PMC10424633 DOI: 10.1016/j.heares.2023.108754] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/16/2022] [Revised: 02/28/2023] [Accepted: 03/27/2023] [Indexed: 03/31/2023]
Abstract
Historically, diverse organisms have contributed to our understanding of auditory function. In recent years, the laboratory mouse has become the prevailing non-human model in auditory research, particularly for biomedical studies. There are many questions in auditory research for which the mouse is the most appropriate (or the only) model system available. But mice cannot provide answers for all auditory problems of basic and applied importance, nor can any single model system provide a synthetic understanding of the diverse solutions that have evolved to facilitate effective detection and use of acoustic information. In this review, spurred by trends in funding and publishing and inspired by parallel observations in other domains of neuroscience, we highlight a few examples of the profound impact and lasting benefits of comparative and basic organismal research in the auditory system. We begin with the serendipitous discovery of hair cell regeneration in non-mammalian vertebrates, a finding that has fueled an ongoing search for pathways to hearing restoration in humans. We then turn to the problem of sound source localization - a fundamental task that most auditory systems have been compelled to solve despite large variation in the magnitudes and kinds of spatial acoustic cues available, begetting varied direction-detecting mechanisms. Finally, we consider the power of work in highly specialized organisms to reveal exceptional solutions to sensory problems - and the diverse returns of deep neuroethological inquiry - via the example of echolocating bats. Throughout, we consider how discoveries made possible by comparative and curiosity-driven organismal research have driven fundamental scientific, biomedical, and technological advances in the auditory field.
Affiliation(s)
- Andrew D Brown: Department of Speech and Hearing Sciences, University of Washington, 1417 NE 42nd St, Seattle, WA 98105, USA; Virginia Merrill Bloedel Hearing Research Center, University of Washington, 1701 NE Columbia Rd, Seattle, WA 98195, USA
- Tamasen Hayward: College of Arts and Sciences, Washington State University, 14204 NE Salmon Creek Ave, Vancouver, WA 98686, USA
- Christine V Portfors: School of Biological Sciences, Washington State University, 14204 NE Salmon Creek Ave, Vancouver, WA 98686, USA
- Allison B Coffin: College of Arts and Sciences, Washington State University, 14204 NE Salmon Creek Ave, Vancouver, WA 98686, USA; School of Biological Sciences, Washington State University, 14204 NE Salmon Creek Ave, Vancouver, WA 98686, USA; Department of Integrative Physiology and Neuroscience, Washington State University, 14204 NE Salmon Creek Ave, Vancouver, WA 98686, USA
|
36
|
Daher GS, Kocharyan A, Dillon MT, Carlson ML. Cochlear Implantation Outcomes in Adults With Single-Sided Deafness: A Systematic Review and Meta-analysis. Otol Neurotol 2023; 44:297-309. [PMID: 36791341 DOI: 10.1097/mao.0000000000003833]
Abstract
OBJECTIVE To assess spatial hearing, tinnitus, and quality-of-life outcomes in adults with single-sided deafness (SSD) who underwent cochlear implantation. DATABASES REVIEWED PubMed, MEDLINE, Embase, Cochrane Central Register of Controlled Trials, Web of Science, and Scopus databases were searched from January 2008 to September 2021 following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. METHODS Studies reporting spatial hearing, tinnitus, and quality-of-life outcomes in adult cochlear implant (CI) recipients (≥18 yr old) with SSD were evaluated. Study characteristics, demographic data, spatial hearing (speech recognition in noise, sound source localization), tinnitus (severity, loudness), and quality-of-life outcomes were collected. RESULTS From an initial search of 1,147 articles, 36 studies that evaluated CI use in 796 unique adults with SSD (51.3 ± 12.4 yr of age at time of implantation) were included. The mean duration of deafness was 6.2 ± 9.6 years. There was evidence of improvement for speech recognition in noise using different target-to-masker spatial configurations, with the largest benefit observed for target-to-masker configurations assessing head shadow (mean, 1.87-6.2 dB signal-to-noise ratio). Sound source localization, quantified as root-mean-squared error, improved with CI use (mean difference [MD], -25.3 degrees; 95% confidence interval [95% CI], -35.9 to -14.6 degrees; p < 0.001). Also, CI users reported a significant reduction in tinnitus severity as measured with the Tinnitus Handicap Inventory (MD, -29.97; 95% CI, -43.9 to -16.1; p < 0.001) and an improvement in spatial hearing abilities as measured with the Speech, Spatial and Qualities of Hearing questionnaire (MD, 2.3; 95% CI, 1.7 to 2.8; p < 0.001). CONCLUSIONS Cochlear implantation and CI use consistently offer improvements in speech recognition in noise, sound source localization, tinnitus, and perceived quality of life in adults with SSD.
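The root-mean-squared error used above as the pooled sound-localization outcome can be computed directly from target/response azimuth pairs. A minimal sketch (the function name and example values are illustrative, not taken from the review):

```python
import math

def rms_localization_error(targets_deg, responses_deg):
    """Root-mean-squared localization error in degrees.

    Each angular error is wrapped to [-180, 180) so that, e.g., a
    350-degree response to a 10-degree target counts as -20 degrees.
    """
    if len(targets_deg) != len(responses_deg):
        raise ValueError("need one response per target")
    sq_sum = 0.0
    for target, response in zip(targets_deg, responses_deg):
        err = (response - target + 180.0) % 360.0 - 180.0  # wrapped error
        sq_sum += err * err
    return math.sqrt(sq_sum / len(targets_deg))

# Responses that scatter around three targets by 10 degrees each:
print(rms_localization_error([-60, 0, 60], [-50, 10, 50]))  # -> 10.0
```

On this scale, the reported mean difference of -25.3 degrees with CI use corresponds to responses landing, on average, 25.3 degrees closer to the targets.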
Affiliation(s)
- Ghazal S Daher: Department of Otolaryngology-Head and Neck Surgery, Mayo Clinic, Rochester, Minnesota
- Armine Kocharyan: Department of Otolaryngology-Head and Neck Surgery, Mayo Clinic, Rochester, Minnesota
- Margaret T Dillon: Department of Otolaryngology-Head and Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina
- Matthew L Carlson: Department of Otolaryngology-Head and Neck Surgery, Mayo Clinic, Rochester, Minnesota
|
37
|
Gordon KA, Alemu R, Papsin BC, Negandhi J, Cushing SL. Effects of Age at Implantation on Outcomes of Cochlear Implantation in Children with Short Durations of Single-Sided Deafness. Otol Neurotol 2023; 44:233-240. [PMID: 36728258 PMCID: PMC9924958 DOI: 10.1097/mao.0000000000003811]
Abstract
OBJECTIVE Children with single-sided deafness (SSD) show reduced language and academic development and report hearing challenges. We aim to improve outcomes in children with SSD by providing bilateral hearing through cochlear implantation of the deaf ear with minimal delay. STUDY DESIGN Prospective cohort study of 57 children with SSD provided with cochlear implant (CI) between May 13, 2013, and June 25, 2021. SETTING Tertiary children's hospital. PARTICIPANTS Children with early onset (n = 40) or later onset of SSD (n = 17) received CIs at ages 2.47 ± 1.58 years (early onset group) and 11.67 ± 3.91 years (late onset group) (mean ± SD). Duration of unilateral deafness was limited (mean ± SD = 1.93 ± 1.56 yr). INTERVENTION Cochlear implantation of the deaf ear. MAIN OUTCOMES/MEASURES Evaluations of device use (data logging) and hearing (speech perception, effects of spatial release from masking on speech detection, localization of stationary and moving sound, self-reported hearing questionnaires). RESULTS Results indicated that daily device use is variable (mean ± SD = 5.60 ± 2.97, range = 0.0-14.7 h/d) with particular challenges during extended COVID-19 lockdowns, including school closures (daily use reduced by a mean of 1.73 h). Speech perception with the CI alone improved (mean ± SD = 65.7 ± 26.4 RAU) but, in the late onset group, remained poorer than in the normal hearing ear. Measures of spatial release from masking also showed asymmetric hearing in the late onset group (t(13) = 5.14, p = 0.001). Localization of both stationary and moving sound was poor (mean ± SD error = 34.6° ± 16.7°) but slightly improved on the deaf side with CI use (F(1,36) = 3.95, p = 0.05). Decreased sound localization significantly correlated with poorer self-reported hearing. CONCLUSIONS AND RELEVANCE Benefits of CI in children with limited durations of SSD may be more restricted for older children/adolescents. Spatial hearing challenges remain. Efforts to increase CI acceptance and consistent use are needed.
Affiliation(s)
- Karen A. Gordon: Department of Otolaryngology–Head and Neck Surgery, University of Toronto; Archie’s Cochlear Implant Laboratory, The Hospital for Sick Children; Department of Communication Disorders, The Hospital for Sick Children
- Robel Alemu: Archie’s Cochlear Implant Laboratory, The Hospital for Sick Children
- Blake C. Papsin: Department of Otolaryngology–Head and Neck Surgery, University of Toronto; Archie’s Cochlear Implant Laboratory, The Hospital for Sick Children; Department of Otolaryngology, The Hospital for Sick Children, Toronto, ON, Canada
- Jaina Negandhi: Archie’s Cochlear Implant Laboratory, The Hospital for Sick Children
- Sharon L. Cushing: Department of Otolaryngology–Head and Neck Surgery, University of Toronto; Archie’s Cochlear Implant Laboratory, The Hospital for Sick Children; Department of Otolaryngology, The Hospital for Sick Children, Toronto, ON, Canada
|
38
|
Mayo PG, Brown AD, Goupell MJ. Wave interference at the contralateral ear helps explain non-monotonic envelope interaural time differences as a function of azimuth. JASA Express Lett 2023; 3:034403. [PMID: 37003716 PMCID: PMC10041410 DOI: 10.1121/10.0017631] [Received: 11/21/2022] [Accepted: 02/22/2023]
Abstract
Interaural time differences (ITDs), an important acoustic cue for perceptual sound-source localization, are conventionally modeled as monotonic functions of azimuth. However, recent literature and publicly available databases from binaural manikins demonstrated ITDs conveyed by the envelopes (ENV-ITDs) of high-frequency (≥2 kHz) signals that were non-monotonic functions of azimuth. This study demonstrates using a simple, time-dependent geometric model of an elliptic head that the back-traveling (longer) sound path around the head, delayed and added to the conventionally treated front-traveling path, can account for non-monotonic ENV-ITDs. These findings have implications for spatial-hearing models in acoustic and electric (cochlear-implant) hearing.
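The two-path account can be sketched with a cruder spherical-head stand-in for the paper's elliptic-head model. The head radius, the back-path extra travel of a·(π − 2θ), and the back-wave gain below are simplifying assumptions chosen only to show the mechanism: adding a delayed, attenuated back-going wave shifts the envelope delay away from the single-path Woodworth value.

```python
import math

C = 343.0    # speed of sound (m/s)
A = 0.0875   # assumed spherical head radius (m)

def envelope_itd(azimuth_deg, f_mod, back_gain=0.5):
    """Envelope ITD (s) at the contralateral ear of a rigid sphere.

    The far-ear signal is modeled as a front-going creeping wave with
    the Woodworth delay plus a back-going wave that travels an extra
    A*(pi - 2*theta) and is attenuated by `back_gain`.  The envelope
    delay of the two-path sum is read off as the phase of the summed
    phasors at the modulation frequency f_mod.
    """
    th = math.radians(azimuth_deg)
    tau_front = A * (th + math.sin(th)) / C   # Woodworth front-path ITD
    extra = A * (math.pi - 2.0 * th) / C      # back path's extra travel
    w = 2.0 * math.pi * f_mod
    re = 1.0 + back_gain * math.cos(w * extra)
    im = -back_gain * math.sin(w * extra)
    return tau_front - math.atan2(im, re) / w

for az in (0, 30, 60, 90):
    two_path = envelope_itd(az, 300.0) * 1e6
    front_only = envelope_itd(az, 300.0, back_gain=0.0) * 1e6
    print(f"{az:2d} deg: front-only {front_only:6.1f} us, two-path {two_path:6.1f} us")
```

With `back_gain=0` the model reduces to the plain Woodworth curve; turning the back wave on warps that curve, illustrating how interference between the two paths reshapes envelope ITDs as a function of azimuth.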
Affiliation(s)
- Paul G Mayo: Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Andrew D Brown: Department of Speech and Hearing Sciences, University of Washington, Seattle, Washington 98105, USA
- Matthew J Goupell: Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
|
39
|
Fay RR, Coombs S, Popper AN. The career and research contributions of Richard R. Fay. J Acoust Soc Am 2023; 153:761. [PMID: 36859129 DOI: 10.1121/10.0017098] [Received: 11/24/2022] [Accepted: 01/12/2023]
Abstract
For over 50 years, Richard R. (Dick) Fay made major contributions to our understanding of vertebrate hearing. Much of Dick's work focused on hearing in fishes and, particularly, goldfish, as well as a few other species, in a substantial body of work on sound localization mechanisms. However, Dick's focus was always on using his studies to try and understand bigger issues of vertebrate hearing and its evolution. This article is slightly adapted from an article that Dick wrote in 2010 on the closure of the Parmly Hearing Institute at Loyola University Chicago. Except for small modifications and minor updates, the words and ideas herein are those of Dick.
Affiliation(s)
- Richard R Fay: Department of Psychology, Loyola University Chicago, Chicago, Illinois 60660, USA
- Sheryl Coombs: Department of Biology, Bowling Green State University, Bowling Green, Ohio 43403, USA
- Arthur N Popper: Department of Biology, University of Maryland, College Park, Maryland 20742, USA
|
40
|
Kim K, Hong Y. Gaussian Process Regression for Single-Channel Sound Source Localization System Based on Homomorphic Deconvolution. Sensors (Basel) 2023; 23:769. [PMID: 36679566 PMCID: PMC9865750 DOI: 10.3390/s23020769] [Received: 11/15/2022] [Revised: 01/05/2023] [Accepted: 01/05/2023]
Abstract
Extracting phase information from multiple receivers makes conventional sound source localization systems complex in both software and hardware. Beyond the algorithmic complexity, the dedicated communication channels and individual analog-to-digital converters limit how far such a system can scale. A previous study proposed and verified a single-channel sound source localization system that aggregates the receivers on a single analog network feeding a single digital converter. This paper proposes an improved algorithm for the single-channel system based on Gaussian process regression with a novel feature extraction method. The proposed system consists of three cascaded stages: homomorphic deconvolution, feature extraction, and Gaussian process regression, which perform time-delay extraction, data arrangement, and machine prediction, respectively. The optimal receiver configuration for the three-receiver structure is derived from a novel similarity-matrix analysis based on the diversity of time-delay patterns. Simulations and experiments show precise predictions given a proper model order and ensemble average length. The nonparametric method with the rational quadratic kernel shows consistent performance on trained angles. The Steiglitz-McBride model with the exponential kernel delivers the best predictions for trained and untrained angles, with low bias and low variance.
|
41
|
Valzolgher C, Alzaher M, Gaveau V, Coudert A, Marx M, Truy E, Barone P, Farnè A, Pavani F. Capturing Visual Attention With Perturbed Auditory Spatial Cues. Trends Hear 2023; 27:23312165231182289. [PMID: 37611181 PMCID: PMC10467228 DOI: 10.1177/23312165231182289] [Received: 09/15/2022] [Revised: 05/25/2023] [Accepted: 05/29/2023]
Abstract
Lateralized sounds can orient visual attention, with benefits for audio-visual processing. Here, we asked to what extent perturbed auditory spatial cues-resulting from cochlear implants (CI) or unilateral hearing loss (uHL)-allow this automatic mechanism of information selection from the audio-visual environment. We used a classic paradigm from experimental psychology (capture of visual attention with sounds) to probe the integrity of audio-visual attentional orienting in 60 adults with hearing loss: bilateral CI users (N = 20), unilateral CI users (N = 20), and individuals with uHL (N = 20). For comparison, we also included a group of normal-hearing (NH, N = 20) participants, tested in binaural and monaural listening conditions (i.e., with one ear plugged). All participants also completed a sound localization task to assess spatial hearing skills. Comparable audio-visual orienting was observed in bilateral CI, uHL, and binaural NH participants. By contrast, audio-visual orienting was, on average, absent in unilateral CI users and reduced in NH listening with one ear plugged. Spatial hearing skills were better in bilateral CI, uHL, and binaural NH participants than in unilateral CI users and monaurally plugged NH listeners. In unilateral CI users, spatial hearing skills correlated with audio-visual-orienting abilities. These novel results show that audio-visual-attention orienting can be preserved in bilateral CI users and in uHL patients to a greater extent than unilateral CI users. This highlights the importance of assessing the impact of hearing loss beyond auditory difficulties alone: to capture to what extent it may enable or impede typical interactions with the multisensory environment.
Affiliation(s)
- Chiara Valzolgher: Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy; Integrative, Multisensory, Perception, Action and Cognition Team, Lyon Neuroscience Research Center, Lyon, France
- Mariam Alzaher: Centre de Recherche Cerveau & Cognition, Toulouse, France; Hospices Civils, Toulouse, France
- Valérie Gaveau: Integrative, Multisensory, Perception, Action and Cognition Team, Lyon Neuroscience Research Center, Lyon, France
- Mathieu Marx: Centre de Recherche Cerveau & Cognition, Toulouse, France; Hospices Civils, Toulouse, France
- Eric Truy: Integrative, Multisensory, Perception, Action and Cognition Team, Lyon Neuroscience Research Center, Lyon, France; Hospices Civils de Lyon, Lyon, France
- Pascal Barone: Centre de Recherche Cerveau & Cognition, Toulouse, France
- Alessandro Farnè: Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy; Integrative, Multisensory, Perception, Action and Cognition Team, Lyon Neuroscience Research Center, Lyon, France; Neuro-immersion, Lyon, France
- Francesco Pavani: Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy; Integrative, Multisensory, Perception, Action and Cognition Team, Lyon Neuroscience Research Center, Lyon, France; Centro Interuniversitario di Ricerca « Cognizione, Linguaggio e Sordità », Rovereto, Italy
|
42
|
Best V, Boyd AD, Sen K. An Effect of Gaze Direction in Cocktail Party Listening. Trends Hear 2023; 27:23312165231152356. [PMID: 36691678 PMCID: PMC9896088 DOI: 10.1177/23312165231152356] [Received: 08/05/2022] [Revised: 11/18/2022] [Accepted: 01/04/2023]
Abstract
It is well established that gaze direction can influence auditory spatial perception, but the implications of this interaction for performance in complex listening tasks is unclear. In the current study, we investigated whether there is a measurable effect of gaze direction on speech intelligibility in a "cocktail party" listening situation. We presented sequences of digits from five loudspeakers positioned at 0°, ± 15°, and ± 30° azimuth, and asked participants to repeat back the digits presented from a designated target loudspeaker. In different blocks of trials, the participant visually fixated on a cue presented at the target location or at a nontarget location. Eye position was tracked continuously to monitor compliance. Performance was best when fixation was on-target (vs. off-target) and the size of this effect depended on the specific configuration. This result demonstrates an influence of gaze direction in multitalker mixtures, even in the absence of visual speech information.
Affiliation(s)
- Virginia Best: Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA, USA
- Alex D. Boyd: Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Kamal Sen: Department of Biomedical Engineering, Boston University, Boston, MA, USA
|
43
|
Salles A, Wohlgemuth MJ, Moss CF. Neural coding of 3D spatial location, orientation, and action selection in echolocating bats. Trends Neurosci 2023; 46:5-7. [PMID: 36280458 PMCID: PMC9976350 DOI: 10.1016/j.tins.2022.09.008] [Received: 09/12/2022] [Accepted: 09/30/2022]
Abstract
Echolocating bats are among the only mammals capable of powered flight, and they rely on active sensing to find food and steer around obstacles in 3D environments. These natural behaviors depend on neural circuits that support 3D auditory localization, audio-motor integration, navigation, and flight control, which are modulated by spatial attention and action selection.
Affiliation(s)
- Angeles Salles: Department of Biological Sciences, University of Illinois at Chicago, Chicago, IL 60607, USA; Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD 21218, USA
- Cynthia F Moss: Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD 21218, USA
|
44
|
Hládek Ľ, Seeber BU. Speech Intelligibility in Reverberation is Reduced During Self-Rotation. Trends Hear 2023; 27:23312165231188619. [PMID: 37475460 PMCID: PMC10363862 DOI: 10.1177/23312165231188619] [Received: 07/06/2021] [Revised: 06/23/2023] [Accepted: 07/02/2023]
Abstract
Speech intelligibility in cocktail party situations has been traditionally studied for stationary sound sources and stationary participants. Here, speech intelligibility and behavior were investigated during active self-rotation of standing participants in a spatialized speech test. We investigated if people would rotate to improve speech intelligibility, and we asked if knowing the target location would be further beneficial. Target sentences randomly appeared at one of four possible locations: 0°, ± 90°, 180° relative to the participant's initial orientation on each trial, while speech-shaped noise was presented from the front (0°). Participants responded naturally with self-rotating motion. Target sentences were presented either without (Audio-only) or with a picture of an avatar (Audio-Visual). In a baseline (Static) condition, people were standing still without visual location cues. Participants' self-orientation undershot the target location and orientations were close to acoustically optimal. Participants oriented more often in an acoustically optimal way, and speech intelligibility was higher in the Audio-Visual than in the Audio-only condition for the lateral targets. The intelligibility of the individual words in Audio-Visual and Audio-only increased during self-rotation towards the rear target, but it was reduced for the lateral targets when compared to Static, which could be mostly, but not fully, attributed to changes in spatial unmasking. Speech intelligibility prediction based on a model of static spatial unmasking considering self-rotations overestimated the participant performance by 1.4 dB. The results suggest that speech intelligibility is reduced during self-rotation, and that visual cues of location help to achieve more optimal self-rotations and better speech intelligibility.
Affiliation(s)
- Ľuboš Hládek: Audio Information Processing, Technical University of Munich, Munich, Germany
- Bernhard U. Seeber: Audio Information Processing, Technical University of Munich, Munich, Germany
|
45
|
Sugarova SB, Kliachko DS, Shcherbakova YL, Kaliapin DD. [The time range in sequential bilateral cochlear implantation]. Vestn Otorinolaringol 2023; 88:19-22. [PMID: 37970765 DOI: 10.17116/otorino20238805119]
Abstract
The article is devoted to the problems of binaural cochlear implantation, especially in patients with a long time interval between surgeries. The purpose of the study was to evaluate the effect of the time range between successive interventions in patients with binaural prosthetics using the CI system. MATERIALS AND METHODS The study included 50 patients aged 10 to 14 years, divided into 3 study groups: patients with unilateral cochlear implantation (group I), patients with bilateral implantation with a less than 1 year range between operations (group II), and patients with bilateral implantation with a more than 5 year range between interventions (group III). Comparative analysis was carried out using speech audiometry in silence and in noise, assessment of sound localization, and questionnaires assessing auditory dynamics and speech development. RESULTS Patients in groups II and III showed comparable results in speech intelligibility in noise and sound localization. At the same time, these indicators were higher than in patients of group I. Patients from all three groups did not show statistically significant differences in speech intelligibility in silence or in the level of speech development. CONCLUSION A long interval (more than 5 years) after the first implantation should not be considered a contraindication to binaural implantation.
Affiliation(s)
- S B Sugarova: St. Petersburg Research Institute of Ear, Throat, Nose and Speech, St. Petersburg, Russia
- D S Kliachko: St. Petersburg Research Institute of Ear, Throat, Nose and Speech, St. Petersburg, Russia
- Ya L Shcherbakova: St. Petersburg Research Institute of Ear, Throat, Nose and Speech, St. Petersburg, Russia
- D D Kaliapin: St. Petersburg Research Institute of Ear, Throat, Nose and Speech, St. Petersburg, Russia
|
46
|
Sheffield SW, Wheeler HJ, Brungart DS, Bernstein JGW. The Effect of Sound Localization on Auditory-Only and Audiovisual Speech Recognition in a Simulated Multitalker Environment. Trends Hear 2023; 27:23312165231186040. [PMID: 37415497 PMCID: PMC10331332 DOI: 10.1177/23312165231186040] [Received: 05/27/2022] [Revised: 06/13/2023] [Accepted: 06/17/2023]
Abstract
Information regarding sound-source spatial location provides several speech-perception benefits, including auditory spatial cues for perceptual talker separation and localization cues to face the talker to obtain visual speech information. These benefits have typically been examined separately. A real-time processing algorithm for sound-localization degradation (LocDeg) was used to investigate how spatial-hearing benefits interact in a multitalker environment. Normal-hearing adults performed auditory-only and auditory-visual sentence recognition with target speech and maskers presented from loudspeakers at -90°, -36°, 36°, or 90° azimuths. For auditory-visual conditions, one target and three masking talker videos (always spatially separated) were rendered virtually in rectangular windows at these locations on a head-mounted display. Auditory-only conditions presented blank windows at these locations. Auditory target speech (always spatially aligned with the target video) was presented in co-located speech-shaped noise (experiment 1) or with three co-located or spatially separated auditory interfering talkers corresponding to the masker videos (experiment 2). In the co-located conditions, the LocDeg algorithm did not affect auditory-only performance but reduced target orientation accuracy, reducing auditory-visual benefit. In the multitalker environment, two spatial-hearing benefits were observed: perceptually separating competing speech based on auditory spatial differences and orienting to the target talker to obtain visual speech cues. These two benefits were additive, and both were diminished by the LocDeg algorithm. Although visual cues always improved performance when the target was accurately localized, there was no strong evidence that they provided additional assistance in perceptually separating co-located competing speech. These results highlight the importance of sound localization in everyday communication.
Affiliation(s)
- Sterling W. Sheffield: Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville, FL, USA
- Harley J. Wheeler: Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, MN, USA
- Douglas S. Brungart: National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD, USA
- Joshua G. W. Bernstein: National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD, USA
|
47
|
Jakobsen Y, Christensen Andersen LA, Schmidt JH. Study protocol for a randomised controlled trial evaluating the benefits from bimodal solution with cochlear implant and hearing aid versus bilateral hearing aids in patients with asymmetric speech identification scores. BMJ Open 2022; 12:e070296. [PMID: 36581413 PMCID: PMC9806092 DOI: 10.1136/bmjopen-2022-070296]
Abstract
INTRODUCTION A cochlear implant (CI) and hearing aid (HA) in a bimodal solution (CI+HA) are compared with bilateral HAs (HA+HA) to test whether the bimodal solution results in better speech intelligibility and self-reported quality of life. METHODS AND ANALYSIS This randomised controlled trial is conducted at Odense University Hospital, Denmark. Sixty adult bilateral HA users referred for CI surgery are enrolled if eligible and undergo audiometry, speech perception in noise (HINT: Hearing in Noise Test), Speech Identification Scores, and video head impulse testing. All participants will receive new replacement HAs. After 1 month they will be randomly assigned (1:1) to the intervention group (CI+HA) or to the delayed intervention control group (HA+HA). The intervention group (CI+HA) will receive a CI on the ear with the poorer speech recognition score and continue using the HA on the other ear. The control group (HA+HA) will receive a CI after a total of 4 months of bilateral HA use. The primary outcome measures are speech intelligibility measured objectively with HINT (sentences in noise) and DANTALE I (words) and subjectively with the Speech, Spatial and Qualities of Hearing scale questionnaire. Secondary outcomes are patient-reported Health-Related Quality of Life scores assessed with the Nijmegen Cochlear Implant Questionnaire, the Tinnitus Handicap Inventory, and the Dizziness Handicap Inventory. The third outcome is listening effort assessed with pupil dilation during HINT. In conclusion, the purpose is to improve clinical decision-making for CI candidacy and optimise bimodal solutions. ETHICS AND DISSEMINATION This study protocol was approved by the Ethics Committee Southern Denmark, project ID S-20200074G. All participants are required to sign an informed consent form. This study will be published on completion in peer-reviewed publications and presented at scientific conferences. TRIAL REGISTRATION NUMBER NCT04919928.
Affiliation(s)
- Yeliz Jakobsen: Department of Oto-Rhino-Laryngology, Odense University Hospital, Odense C, Denmark; Department of Audiology, Odense University Hospital, Odense C, Denmark
- Jesper Hvass Schmidt: Department of Oto-Rhino-Laryngology, Odense University Hospital, Odense C, Denmark; Department of Audiology, Odense University Hospital, Odense C, Denmark
|
48
|
Gulli A, Fontana F, Orzan E, Aruffo A, Muzzi E. Spontaneous head movements support accurate horizontal auditory localization in a virtual visual environment. PLoS One 2022; 17:e0278705. [PMID: 36473012 PMCID: PMC9725155 DOI: 10.1371/journal.pone.0278705] [Received: 07/11/2022] [Accepted: 11/21/2022]
Abstract
This study investigates the relationship between auditory localization accuracy in the horizontal plane and the spontaneous translation and rotation of the head in response to an acoustic stimulus from an invisible sound source. Although a number of studies have suggested that localization ability improves with head movements, most of them measured the perceived source elevation and front-back disambiguation. We investigated the contribution of head movements to auditory localization in the anterior horizontal field in normal hearing subjects. A virtual reality scenario was used to conceal visual cues during the test through a head mounted display. In this condition, we found that an active search of the sound origin using head movements is not strictly necessary, yet sufficient for achieving greater sound source localization accuracy. This result may have important implications in the clinical assessment and training of adults and children affected by hearing and motor impairments.
Collapse
Affiliation(s)
- Andrea Gulli
- HCI Lab, Department of Mathematics, Computer Science and Physics, University of Udine, Udine, Italy
| | - Federico Fontana
- HCI Lab, Department of Mathematics, Computer Science and Physics, University of Udine, Udine, Italy
| | - Eva Orzan
- Otorhinolaryngology and Audiology, Institute for Maternal and Child Health IRCCS “Burlo Garofolo”, Trieste, Italy
| | - Alessandro Aruffo
- Otorhinolaryngology and Audiology, Institute for Maternal and Child Health IRCCS “Burlo Garofolo”, Trieste, Italy
| | - Enrico Muzzi
- Otorhinolaryngology and Audiology, Institute for Maternal and Child Health IRCCS “Burlo Garofolo”, Trieste, Italy
| |
Collapse
|
49
|
Steffens H, Schutte M, Ewert SD. Auditory orientation and distance estimation of sighted humans using virtual echolocation with artificial and self-generated sounds. JASA Express Lett 2022; 2:124403. [PMID: 36586958 DOI: 10.1121/10.0016403] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/17/2023]
Abstract
Active echolocation by sighted humans was investigated using both predefined synthetic sounds and self-emitted sounds of the kind habitually used by blind individuals. Using virtual acoustics, distance estimation and directional localization of a wall in different rooms were assessed. A virtual sound source was attached to either the head or the hand, with realistic or artificially increased source directivity. A control condition was tested with a virtual sound source located at the wall. At the individual level, untrained echolocation performance comparable to performance in the control condition was achieved. On average, however, echolocation performance was considerably lower than in the control condition, although it benefited from increased source directivity.
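The distance estimation task above rests on a simple geometric relation: the echo from a wall arrives after the sound has travelled to the reflector and back, so the distance follows from half the round-trip delay. A minimal sketch of that relation (illustrative only, not code from the paper; the speed-of-sound constant is a standard room-temperature value):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def wall_distance(echo_delay_s: float) -> float:
    """Distance to a reflecting wall from the emission-to-echo delay (seconds),
    d = c * dt / 2, since the sound covers the distance twice."""
    return SPEED_OF_SOUND * echo_delay_s / 2.0

# A 20 ms round-trip delay corresponds to a wall about 3.4 m away.
print(f"{wall_distance(0.020):.2f} m")
```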
Collapse
Affiliation(s)
- Henning Steffens
- Medizinische Physik and Cluster of Excellence Hearing4all, Universität Oldenburg, Oldenburg, 26111, Germany
| | - Michael Schutte
- Medizinische Physik and Cluster of Excellence Hearing4all, Universität Oldenburg, Oldenburg, 26111, Germany
| | - Stephan D Ewert
- Medizinische Physik and Cluster of Excellence Hearing4all, Universität Oldenburg, Oldenburg, 26111, Germany
| |
Collapse
|
50
|
Maldarelli G, Firzlaff U, Luksch H. Azimuthal sound localization in the chicken. PLoS One 2022; 17:e0277190. [PMID: 36413534 PMCID: PMC9681088 DOI: 10.1371/journal.pone.0277190] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2022] [Accepted: 10/21/2022] [Indexed: 11/23/2022] Open
Abstract
Sound localization is crucial for the survival and reproduction of animals, including non-auditory specialists such as the majority of avian species. The chicken (Gallus gallus) is a well-suited representative of a non-auditory specialist bird, and several aspects of its auditory system have been studied in detail over the last decades. We conducted a behavioral experiment in which three roosters performed a sound localization task with broadband noise, using a two-alternative forced-choice paradigm. We determined the minimum audible angle (MAA) as a measure of localization acuity. In general, our results are comparable to previous MAA measurements obtained with hens in Go/NoGo tasks. The chicken has high localization acuity compared with other auditory generalist bird species tested so far. We found that chickens were better at localizing broadband noise of long duration (1 s; MAA = 16°) than of brief duration (0.1 s; MAA = 26°). Moreover, the interaural differences in time of arrival and level (ITD and ILD, respectively) at these MAAs are comparable to those measured in other non-auditory specialist bird species, indicating that they might be large enough to be informative for azimuthal sound localization.
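The ITD available at a given azimuth can be approximated with the classical Woodworth spherical-head model, ITD = (a / c)(θ + sin θ), where a is the head radius and c the speed of sound. The sketch below is illustrative only (not from the paper), and the head radius is a hypothetical, roughly chicken-sized value:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s
HEAD_RADIUS = 0.015     # m; assumed value for illustration, not from the study

def woodworth_itd(azimuth_deg: float) -> float:
    """Interaural time difference (seconds) for a far-field source at the
    given azimuth, using the Woodworth spherical-head approximation."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# ITD at the 16 degree MAA reported for long-duration noise (a few tens of us).
print(f"{woodworth_itd(16.0) * 1e6:.1f} us")
```

Under these assumptions the ITD at the measured MAAs lands in the tens-of-microseconds range, which is the scale against which "sufficiently large to be informative" would be judged.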
Collapse
Affiliation(s)
- Gianmarco Maldarelli
- Chair of Zoology, School of Life Sciences, Technical University of Munich, Freising-Weihenstephan, Germany
| | - Uwe Firzlaff
- Chair of Zoology, School of Life Sciences, Technical University of Munich, Freising-Weihenstephan, Germany
| | - Harald Luksch
- Chair of Zoology, School of Life Sciences, Technical University of Munich, Freising-Weihenstephan, Germany
| |
Collapse
|