1
Tetard S, Guigou C, Sonnet CE, Al Burshaid D, Charlery-Adèle A, Bozorg Grayeli A. Free-Field Hearing Test in Noise with Free Head Rotation for Evaluation of Monaural Hearing. J Clin Med 2023; 12:7143. [PMID: 38002755 PMCID: PMC10672306 DOI: 10.3390/jcm12227143]
Abstract
There is a discrepancy between hearing test results in patients with single-sided deafness (SSD) and their reported outcome measures, probably because everyday situations involve two elements absent from standard tests: noise and head movements. We developed a stereo-audiometric test in noise with free head movements to evaluate head movements and auditory performance in monaural and binaural conditions in normal-hearing volunteers with one occluded ear. Tests were performed in the binaural condition (BIN) and with the left ear (LEO) or the right ear occluded (REO). The signal was emitted by one of seven speakers placed every 30° in a semicircle, and the noise (cocktail party) by all speakers. Subjects turned their head freely to reach the most comfortable listening position, then repeated 10 sentences in this position. In monaural conditions, the sums of rotations (head rotations to an optimal hearing position in degrees, random signal azimuth, 1 to 15 ad lib signal presentations) were higher than in the BIN condition (LEO 255 ± 212°, REO 308 ± 208° versus BIN 74 ± 76°, p < 0.001, ANOVA), and the discrimination score (out of 10) was lower than in the BIN condition (LEO 5 ± 1, REO 7 ± 1 versus BIN 8 ± 1; p < 0.001 and p < 0.05, respectively, ANOVA). In the monaural condition, total rotation and discrimination in noise were negatively correlated with difficulty (Pearson r = -0.68, p < 0.01 and -0.51, p < 0.05, respectively). Subjects differed in how they used head rotation to optimize their hearing in noise. The evaluation of head movements appears to be a useful parameter for predicting the difficulty of monaural hearing in noisy environments.
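The study's "sum of rotations" metric and its correlation with a second per-subject variable can be sketched as follows. This is a minimal illustration with invented numbers, not the authors' analysis code: `total_rotation` sums absolute yaw changes over a trial, and `pearson_r` is a plain Pearson coefficient.

```python
import math

def total_rotation(yaw_deg):
    """Sum of absolute head-yaw changes (degrees) over a trial,
    analogous to the paper's 'sum of rotations' metric."""
    return sum(abs(b - a) for a, b in zip(yaw_deg, yaw_deg[1:]))

def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-subject data (degrees of total rotation vs. a rating):
rotation = [255, 310, 120, 400, 90]
rating = [6, 7, 3, 9, 2]
print(round(pearson_r(rotation, rating), 2))
```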
Affiliation(s)
- Stanley Tetard
- Department of Otolaryngology-Head and Neck Surgery, Dijon University Hospital, 21000 Dijon, France
- Caroline Guigou
- Department of Otolaryngology-Head and Neck Surgery, Dijon University Hospital, 21000 Dijon, France
- ImViA, Laboratory of Imagery and Artificial Vision (EA 7535), Burgundy University, 21078 Dijon, France
- Charles-Edouard Sonnet
- Department of Otolaryngology-Head and Neck Surgery, Dijon University Hospital, 21000 Dijon, France
- Amplifon Hearing Aid Center, 21000 Dijon, France
- Dhari Al Burshaid
- Department of Otolaryngology-Head and Neck Surgery, Dijon University Hospital, 21000 Dijon, France
- Ambre Charlery-Adèle
- Department of Otolaryngology-Head and Neck Surgery, Dijon University Hospital, 21000 Dijon, France
- Alexis Bozorg Grayeli
- Department of Otolaryngology-Head and Neck Surgery, Dijon University Hospital, 21000 Dijon, France
- ImViA, Laboratory of Imagery and Artificial Vision (EA 7535), Burgundy University, 21078 Dijon, France
2
Olszanowski M, Frankowska N, Tołopiło A. "Rear bias" in spatial auditory perception: Attentional and affective vigilance to sounds occurring outside the visual field. Psychophysiology 2023; 60:e14377. [PMID: 37357967 DOI: 10.1111/psyp.14377]
Abstract
The presented studies explored the rear bias phenomenon, that is, an attentional and affective bias toward sounds occurring behind the listener. Physiological and psychological reactions (i.e., fEMG, EDA/SCR, a Simple Reaction Task (SRT), and self-assessments of affect-related states) were measured in response to tones of different frequencies (Study 1) and emotional vocalizations (Study 2) presented in rear and front spatial locations. Results showed that emotional vocalizations, when located behind the listener, facilitate reactions related to attention orientation (i.e., auricularis muscle response and simple reaction times) and evoke higher arousal, both physiological (as measured by SCR) and psychological (self-assessment scale). Importantly, the observed asymmetries were larger for negative, threat-related signals (e.g., anger) than for positive, nonthreatening ones (e.g., achievement). By contrast, there were only small differences for the relatively higher-frequency tones. The observed relationships are discussed in terms of one of the auditory system's postulated functions: monitoring the environment in order to quickly detect potential threats that occur outside the visual field (e.g., behind one's back).
Affiliation(s)
- Michal Olszanowski
- Center for Research on Biological Basis of Social Behavior, SWPS University, Warsaw, Poland
- Natalia Frankowska
- Center for Research on Biological Basis of Social Behavior, SWPS University, Warsaw, Poland
- Aleksandra Tołopiło
- Center for Research on Biological Basis of Social Behavior, SWPS University, Warsaw, Poland
3
Higgins NC, Pupo DA, Ozmeral EJ, Eddins DA. Head movement and its relation to hearing. Front Psychol 2023; 14:1183303. [PMID: 37448716 PMCID: PMC10338176 DOI: 10.3389/fpsyg.2023.1183303]
Abstract
Head position at any point in time plays a fundamental role in shaping the auditory information that reaches a listener, information that continuously changes as the head moves and reorients to different listening situations. The connection between hearing science and the kinesthetics of head movement has gained interest due to technological advances that have increased the feasibility of providing behavioral and biological feedback to assistive listening devices that can interpret movement patterns reflecting listening intent. Increasing evidence also shows that the negative impact of hearing deficits on mobility, gait, and balance may be mitigated by prosthetic hearing device intervention. A better understanding of the relationships between head movement, full-body kinetics, and hearing health should lead to improved signal-processing strategies across a range of assistive and augmented hearing devices. The purpose of this review is to introduce the wider hearing community to the kinesiology of head movement and to place it in the context of hearing and communication, with the goal of expanding the field of ecologically specific listener behavior.
Affiliation(s)
- Nathan C. Higgins
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States
- Daniel A. Pupo
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States
- School of Aging Studies, University of South Florida, Tampa, FL, United States
- Erol J. Ozmeral
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States
- David A. Eddins
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States
4
Hládek Ľ, Seeber BU. Speech Intelligibility in Reverberation is Reduced During Self-Rotation. Trends Hear 2023; 27:23312165231188619. [PMID: 37475460 PMCID: PMC10363862 DOI: 10.1177/23312165231188619]
Abstract
Speech intelligibility in cocktail party situations has been traditionally studied for stationary sound sources and stationary participants. Here, speech intelligibility and behavior were investigated during active self-rotation of standing participants in a spatialized speech test. We investigated if people would rotate to improve speech intelligibility, and we asked if knowing the target location would be further beneficial. Target sentences randomly appeared at one of four possible locations: 0°, ± 90°, 180° relative to the participant's initial orientation on each trial, while speech-shaped noise was presented from the front (0°). Participants responded naturally with self-rotating motion. Target sentences were presented either without (Audio-only) or with a picture of an avatar (Audio-Visual). In a baseline (Static) condition, people were standing still without visual location cues. Participants' self-orientation undershot the target location and orientations were close to acoustically optimal. Participants oriented more often in an acoustically optimal way, and speech intelligibility was higher in the Audio-Visual than in the Audio-only condition for the lateral targets. The intelligibility of the individual words in Audio-Visual and Audio-only increased during self-rotation towards the rear target, but it was reduced for the lateral targets when compared to Static, which could be mostly, but not fully, attributed to changes in spatial unmasking. Speech intelligibility prediction based on a model of static spatial unmasking considering self-rotations overestimated the participant performance by 1.4 dB. The results suggest that speech intelligibility is reduced during self-rotation, and that visual cues of location help to achieve more optimal self-rotations and better speech intelligibility.
Affiliation(s)
- Ľuboš Hládek
- Audio Information Processing, Technical University of Munich, Munich, Germany
- Bernhard U. Seeber
- Audio Information Processing, Technical University of Munich, Munich, Germany
5
Intensive Training of Spatial Hearing Promotes Auditory Abilities of Bilateral Cochlear Implant Adults: A Pilot Study. Ear Hear 2023; 44:61-76. [PMID: 35943235 DOI: 10.1097/aud.0000000000001256]
Abstract
OBJECTIVE The aim of this study was to evaluate the feasibility of a virtual reality-based spatial hearing training protocol in bilateral cochlear implant (CI) users and to provide pilot data on the impact of this training on different qualities of hearing. DESIGN Twelve bilateral CI adults aged between 19 and 69 followed an intensive 10-week rehabilitation program comprising eight virtual reality training sessions (two per week) interspersed with several evaluation sessions (2 weeks before training started, after four and eight training sessions, and 1 month after the end of training). During each 45-minute training session, participants localized a sound source whose position varied in azimuth and/or in elevation. At the start of each trial, CI users received no information about sound location, but after each response, feedback was given to enable error correction. Participants were divided into two groups: a multisensory feedback group (audiovisual spatial cue) and a unisensory group (visual spatial cue) who received feedback only in a wholly intact sensory modality. Training benefits were measured at each evaluation point using three tests: 3D sound localization in virtual reality, the French Matrix test, and the Speech, Spatial and other Qualities of Hearing questionnaire. RESULTS The training was well accepted and all participants attended the whole rehabilitation program. Four training sessions spread across 2 weeks were insufficient to induce significant performance changes, whereas performance on all three tests improved after eight training sessions. Front-back confusions decreased from 32% to 14.1% (p = 0.017); the speech recognition threshold improved from 1.5 dB to -0.7 dB signal-to-noise ratio (p = 0.029), and eight CI users achieved a negative signal-to-noise ratio. One month after the end of structured training, these performance improvements were still present, and quality of life was significantly improved for both self-reports of sound localization (from 5.3 to 6.7, p = 0.015) and speech understanding (from 5.2 to 5.9, p = 0.048). CONCLUSIONS This pilot study shows the feasibility and potential clinical relevance of this type of intervention involving a sensory immersive environment and could pave the way for more systematic rehabilitation programs after cochlear implantation.
6
Sheffield SW, Wheeler HJ, Brungart DS, Bernstein JGW. The Effect of Sound Localization on Auditory-Only and Audiovisual Speech Recognition in a Simulated Multitalker Environment. Trends Hear 2023; 27:23312165231186040. [PMID: 37415497 PMCID: PMC10331332 DOI: 10.1177/23312165231186040]
Abstract
Information regarding sound-source spatial location provides several speech-perception benefits, including auditory spatial cues for perceptual talker separation and localization cues to face the talker to obtain visual speech information. These benefits have typically been examined separately. A real-time processing algorithm for sound-localization degradation (LocDeg) was used to investigate how spatial-hearing benefits interact in a multitalker environment. Normal-hearing adults performed auditory-only and auditory-visual sentence recognition with target speech and maskers presented from loudspeakers at -90°, -36°, 36°, or 90° azimuths. For auditory-visual conditions, one target and three masking talker videos (always spatially separated) were rendered virtually in rectangular windows at these locations on a head-mounted display. Auditory-only conditions presented blank windows at these locations. Auditory target speech (always spatially aligned with the target video) was presented in co-located speech-shaped noise (experiment 1) or with three co-located or spatially separated auditory interfering talkers corresponding to the masker videos (experiment 2). In the co-located conditions, the LocDeg algorithm did not affect auditory-only performance but reduced target orientation accuracy, reducing auditory-visual benefit. In the multitalker environment, two spatial-hearing benefits were observed: perceptually separating competing speech based on auditory spatial differences and orienting to the target talker to obtain visual speech cues. These two benefits were additive, and both were diminished by the LocDeg algorithm. Although visual cues always improved performance when the target was accurately localized, there was no strong evidence that they provided additional assistance in perceptually separating co-located competing speech. These results highlight the importance of sound localization in everyday communication.
Affiliation(s)
- Sterling W. Sheffield
- Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville, FL, USA
- Harley J. Wheeler
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, MN, USA
- Douglas S. Brungart
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD, USA
- Joshua G. W. Bernstein
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD, USA
7
Snapp HA, Millet B, Schaefer-Solle N, Rajguru SM, Ausili SA. The effects of hearing protection devices on spatial awareness in complex listening environments. PLoS One 2023; 18:e0280240. [PMID: 36634110 PMCID: PMC9836314 DOI: 10.1371/journal.pone.0280240]
Abstract
Hearing protection devices (HPDs) remain the first line of defense against hazardous noise exposure and noise-induced hearing loss (NIHL). Despite increased awareness of NIHL as a major occupational health hazard, implementation of effective hearing protection interventions remains challenging in at-risk occupational groups, including public-safety workers who provide fire, emergency medical, or law enforcement services. A reduction of situational awareness has been reported as a primary barrier to including HPDs as routine personal protective equipment. This study examined the effects of hearing protection and simulated NIHL on spatial awareness in ten normal-hearing subjects. In a sound-attenuating booth and using a head-orientation tracker, speech intelligibility and localization accuracy were collected from these subjects under multiple listening conditions. Results demonstrate that the use of HPDs disrupts spatial hearing as expected, specifically localization performance and the monitoring of speech signals. There was a significant interaction between hemifield and signal-to-noise ratio (SNR), with speech intelligibility significantly degraded when signals were presented from behind at reduced SNR. Results also suggest greater spatial-hearing disruption with over-the-ear HPDs than with the removal of high-frequency cues typically associated with NIHL through low-pass filtering. These results are consistent with reduced situational awareness as a self-reported barrier to routine HPD use, evidenced in our study by a decreased ability to make accurate decisions about source location in a controlled dual-task localization experiment.
Affiliation(s)
- Hillary A. Snapp
- Department of Otolaryngology, University of Miami, Miami, FL, United States of America
- Barbara Millet
- Department of Interactive Media, University of Miami, Miami, FL, United States of America
- Suhrud M. Rajguru
- Department of Biomedical Engineering, University of Miami, Miami, FL, United States of America
- Sebastian A. Ausili
- Department of Otolaryngology, University of Miami, Miami, FL, United States of America
8
Gulli A, Fontana F, Orzan E, Aruffo A, Muzzi E. Spontaneous head movements support accurate horizontal auditory localization in a virtual visual environment. PLoS One 2022; 17:e0278705. [PMID: 36473012 PMCID: PMC9725155 DOI: 10.1371/journal.pone.0278705]
Abstract
This study investigates the relationship between auditory localization accuracy in the horizontal plane and the spontaneous translation and rotation of the head in response to an acoustic stimulus from an invisible sound source. Although a number of studies have suggested that localization ability improves with head movements, most of them measured the perceived source elevation and front-back disambiguation. We investigated the contribution of head movements to auditory localization in the anterior horizontal field in normal hearing subjects. A virtual reality scenario was used to conceal visual cues during the test through a head mounted display. In this condition, we found that an active search of the sound origin using head movements is not strictly necessary, yet sufficient for achieving greater sound source localization accuracy. This result may have important implications in the clinical assessment and training of adults and children affected by hearing and motor impairments.
Affiliation(s)
- Andrea Gulli
- HCI Lab, Department of Mathematics, Computer Science and Physics, University of Udine, Udine, Italy
- Federico Fontana
- HCI Lab, Department of Mathematics, Computer Science and Physics, University of Udine, Udine, Italy
- Eva Orzan
- Otorhinolaryngology and Audiology, Institute for Maternal and Child Health IRCCS “Burlo Garofolo”, Trieste, Italy
- Alessandro Aruffo
- Otorhinolaryngology and Audiology, Institute for Maternal and Child Health IRCCS “Burlo Garofolo”, Trieste, Italy
- Enrico Muzzi
- Otorhinolaryngology and Audiology, Institute for Maternal and Child Health IRCCS “Burlo Garofolo”, Trieste, Italy
9
Hendrikse MME, Eichler T, Hohmann V, Grimm G. Self-motion with Hearing Impairment and (Directional) Hearing Aids. Trends Hear 2022; 26:23312165221078707. [PMID: 35341403 PMCID: PMC8966140 DOI: 10.1177/23312165221078707]
Abstract
When listening to a sound source in everyday situations, typical movement behavior is highly individual and may not result in the listener directly facing the sound source. Behavioral differences can affect the performance of directional algorithms in hearing aids, as was shown in previous work by using head movement trajectories of normal-hearing (NH) listeners in acoustic simulations for noise-suppression performance predictions. However, the movement behavior of hearing-impaired (HI) listeners with or without hearing aids may differ, and hearing-aid users might adapt their self-motion to improve the performance of directional algorithms. This work investigates the influence of hearing impairment on self-motion and the interaction of hearing aids with self-motion. To do this, the self-motion of three HI participant groups (aided with an adaptive differential microphone, ADM; aided without ADM; and unaided) was measured and compared to previously measured self-motion data from younger and older NH participants. Self-motion was measured in virtual audiovisual environments (VEs) in the laboratory, and the signal-to-noise ratios (SNRs) and the SNR improvement of the ADM resulting from the head movements of the participants were estimated using acoustic simulations. HI participants made almost all of their orienting movements with the head and used their eyes less than NH participants, which led to a 0.3 dB increase in estimated SNR and to differences in the estimated SNR improvement of the ADM. However, the self-motion of the HI participants aided with the ADM was similar to that of the other HI participants, indicating that the ADM did not cause listeners to adapt their self-motion.
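Why head orientation matters for a directional microphone's benefit can be illustrated with a much-simplified model. The sketch below uses an idealized first-order cardioid as a stand-in for the ADM; the azimuths, the cardioid pattern, and the single-noise-source geometry are assumptions for illustration only, not the study's acoustic simulation.

```python
import math

def cardioid_gain(theta_deg):
    """Power gain of an idealized first-order cardioid microphone
    (a stand-in for the adaptive differential microphone, ADM)."""
    theta = math.radians(theta_deg)
    return ((1.0 + math.cos(theta)) / 2.0) ** 2

def snr_benefit_db(head_yaw_deg, target_az_deg, noise_az_deg):
    """Directional SNR benefit for a given head orientation.
    Target and noise azimuths are room-relative; gains are evaluated
    relative to the look (head) direction."""
    gt = cardioid_gain(target_az_deg - head_yaw_deg)
    gn = cardioid_gain(noise_az_deg - head_yaw_deg)
    return 10.0 * math.log10(gt / max(gn, 1e-12))

# Facing the target (0 deg) with noise at 120 deg gives a large benefit;
# looking 60 deg off-target erodes it.
print(snr_benefit_db(0, 0, 120), snr_benefit_db(60, 0, 120))
```

The point of the sketch is only that the estimated benefit depends on self-motion: the same device yields different SNR improvements for different head trajectories, which is why behavioral differences feed into performance predictions.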
Affiliation(s)
- Maartje M E Hendrikse
- Auditory Signal Processing and Cluster of Excellence "Hearing4all", Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
- Theda Eichler
- Auditory Signal Processing and Cluster of Excellence "Hearing4all", Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
- Volker Hohmann
- Auditory Signal Processing and Cluster of Excellence "Hearing4all", Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
- Giso Grimm
- Auditory Signal Processing and Cluster of Excellence "Hearing4all", Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
10
Lu H, Brimijoin WO. Sound Source Selection Based on Head Movements in Natural Group Conversation. Trends Hear 2022; 26:23312165221097789. [PMID: 35477340 PMCID: PMC9058564 DOI: 10.1177/23312165221097789]
Abstract
To optimally improve signal-to-noise ratio in noisy environments, a hearing assistance device must correctly identify what is signal and what is noise. Many of the biosignal-based approaches to this problem are themselves subject to noise, but head angle is an overt behavior that may be possible to capture in practical devices in the real world. Previous orientation studies have demonstrated that head angle is systematically related to the listening target; our study examined whether this relationship is sufficiently reliable to be used in group conversations where participants may be seated in different layouts and the listener is free to turn their body as well as their head. In addition to this simple method, we developed a source-selection algorithm based on a hidden Markov model (HMM) trained on listeners' head movements. The performance of this model and of the simple head-steering method was evaluated using publicly available behavioral data. Head angle during group conversation was predictive of the active talker, exhibiting an undershoot with a slope consistent with that found in simple orientation studies, but the intercept of the linear relationship differed across talker layouts, suggesting it would be problematic to rely exclusively on this information to predict the locus of auditory attention. However, provided the locations of all target talkers are known, the HMM source-selection model implemented here showed significantly lower error in identifying listeners' auditory attention than the linear head-steering method.
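The HMM idea can be sketched as follows: hidden states are known talker locations, emissions are Gaussian in head yaw around each talker's azimuth, and a "sticky" transition prior discourages implausibly rapid attention switches. This is a toy Viterbi decoder with invented parameters and data, not the authors' trained model (which also accounts for the undershoot relationship).

```python
import math

def viterbi_talker(yaw_track, talker_az, stay=0.95, sigma=15.0):
    """Most likely attended-talker sequence given head-yaw samples,
    via Viterbi decoding of a simple sticky HMM."""
    n = len(talker_az)
    switch = (1.0 - stay) / max(n - 1, 1)

    def log_emit(yaw, az):
        # Gaussian log-likelihood of yaw around talker azimuth (up to a constant)
        return -((yaw - az) ** 2) / (2 * sigma ** 2)

    logp = [log_emit(yaw_track[0], az) for az in talker_az]  # uniform prior
    back = []
    for yaw in yaw_track[1:]:
        prev = logp
        logp, ptr = [], []
        for j, az in enumerate(talker_az):
            # Best predecessor state under the sticky transition prior
            best_i = max(range(n),
                         key=lambda i: prev[i] + math.log(stay if i == j else switch))
            logp.append(prev[best_i]
                        + math.log(stay if best_i == j else switch)
                        + log_emit(yaw, az))
            ptr.append(best_i)
        back.append(ptr)
    # Backtrack from the most likely final state
    state = max(range(n), key=lambda j: logp[j])
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return list(reversed(path))

# Yaw drifts from a talker at -30 deg toward a talker at +45 deg
talkers = [-30.0, 45.0]
yaws = [-28.0, -25.0, -20.0, 30.0, 40.0, 44.0]
print(viterbi_talker(yaws, talkers))
```

The sticky prior is the design choice that distinguishes this from simple nearest-talker selection: brief yaw excursions toward another talker are absorbed rather than decoded as attention switches.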
Affiliation(s)
- Hao Lu
- Department of Psychology, University of Minnesota, Minneapolis, MN, USA
11
Abstract
OBJECTIVES Current hearing aids have a limited bandwidth, which limits the intelligibility and quality of their output, and inhibits their uptake. Recent advances in signal processing, as well as novel methods of transduction, allow for a greater useable frequency range. Previous studies have shown a benefit for this extended bandwidth in consonant recognition, talker-sex identification, and separating sound sources. To explore whether there would be any direct spatial benefits to extending bandwidth, we used a dynamic localization method in a realistic situation. DESIGN Twenty-eight adult participants with minimal hearing loss reoriented themselves as quickly and accurately as comfortable to a new, off-axis near-field talker continuing a story in a background of far-field talkers of the same overall level in a simulated large room with common building materials. All stimuli were low-pass filtered at either 5 or 10 kHz on each trial. To further simulate current hearing aids, participants wore microphones above the pinnae and insert earphones adjusted to provide a linear, zero-gain response. RESULTS Each individual trajectory was recorded with infra-red motion-tracking and analyzed for accuracy, duration, start time, peak velocity, peak velocity time, complexity, reversals, and misorientations. Results across listeners showed a significant increase in peak velocity and significant decrease in start and peak velocity time with greater (10 kHz) bandwidth. CONCLUSIONS These earlier, swifter orientations demonstrate spatial benefits beyond static localization accuracy in plausible conditions; extended bandwidth without pinna cues provided more salient cues in a realistic mixture of talkers.
12
Single-Sided Deafness Cochlear Implant Sound-Localization Behavior With Multiple Concurrent Sources. Ear Hear 2021; 43:206-219. [PMID: 34320529 DOI: 10.1097/aud.0000000000001089]
Abstract
OBJECTIVES For listeners with one deaf ear and the other ear with normal/near-normal hearing (single-sided deafness [SSD]) or moderate hearing loss (asymmetric hearing loss), cochlear implants (CIs) can improve speech understanding in noise and sound-source localization. Previous SSD-CI localization studies have used a single source with artificial sounds such as clicks or random noise. While this approach provides insights regarding the auditory cues that facilitate localization, it does not capture the complex nature of localization behavior in real-world environments. This study examined SSD-CI sound localization in a complex scenario where a target sound was added to or removed from a mixture of other environmental sounds, while tracking head movements to assess behavioral strategy. DESIGN Eleven CI users with normal hearing or moderate hearing loss in the contralateral ear completed a sound-localization task in monaural (CI-OFF) and bilateral (CI-ON) configurations. Ten of the listeners were also tested before CI activation to examine longitudinal effects. Two-second environmental sound samples, looped to create 4- or 10-sec trials, were presented in a spherical array of 26 loudspeakers encompassing ±144° azimuth and ±30° elevation at a 1-m radius. The target sound was presented alone (localize task) or concurrently with one or three additional sources presented to different loudspeakers, with the target cued by being added to (Add) or removed from (Rem) the mixture after 6 sec. A head-mounted tracker recorded movements in six dimensions (three for location, three for orientation). Mixed-model regression was used to examine target sound-identification accuracy, localization accuracy, and head movement. Angular and translational head movements were analyzed both before and after the target was switched on or off. 
RESULTS Listeners showed improved localization accuracy in the CI-ON configuration, but there was no interaction with test condition and no effect of the CI on sound-identification performance. Although high-frequency hearing loss in the unimplanted ear reduced localization accuracy and sound-identification performance, the magnitude of the CI localization benefit was independent of hearing loss. The CI reduced the magnitude of gross head movements used during the task in the azimuthal rotation and translational dimensions, both while the target sound was present (in all conditions) and during the anticipatory period before the target was switched on (in the Add condition). There was no change in pre- versus post-activation CI-OFF performance. CONCLUSIONS These results extend previous findings, demonstrating a CI localization benefit in a complex listening scenario that includes environmental and behavioral elements encountered in everyday listening conditions. The CI also reduced the magnitude of gross head movements used to perform the task. This was the case even before the target sound was added to the mixture. This suggests that a CI can reduce the need for physical movement both in anticipation of an upcoming sound event and while actively localizing the target sound. Overall, these results show that for SSD listeners, a CI can improve localization in a complex sound environment and reduce the amount of physical movement used.
13
Coudert A, Gaveau V, Gatel J, Verdelet G, Salemme R, Farne A, Pavani F, Truy E. Spatial Hearing Difficulties in Reaching Space in Bilateral Cochlear Implant Children Improve With Head Movements. Ear Hear 2021; 43:192-205. [PMID: 34225320 PMCID: PMC8694251 DOI: 10.1097/aud.0000000000001090]
Abstract
The aim of this study was to assess three-dimensional (3D) spatial hearing abilities in reaching space of children and adolescents fitted with bilateral cochlear implants (BCI). The study also investigated the impact of spontaneous head movements on sound localization abilities.
Affiliation(s)
- Aurélie Coudert
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, Lyon, France
- Department of Pediatric Otolaryngology-Head & Neck Surgery, Femme Mere Enfant Hospital, Hospices Civils de Lyon, Lyon, France
- Department of Otolaryngology-Head & Neck Surgery, Edouard Herriot Hospital, Hospices Civils de Lyon, Lyon, France
- University of Lyon 1, Lyon, France
- Hospices Civils de Lyon, Neuro-immersion Platform, Lyon, France
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Rovereto, Italy
- Department of Psychology and Cognitive Sciences, University of Trento, Rovereto, Italy
14
Macaulay EJ, Hartmann WM. Localization of tones in a room by moving listeners. J Acoust Soc Am 2021; 149:4159. [PMID: 34241422 DOI: 10.1121/10.0005045]
Abstract
It is difficult to localize the source of a tone in a room because standing waves lead to complicated interaural differences that become uninterpretable localization cues. This paper tests the conjecture that localization improves if the listener can move to explore the complicated sound field over space and time. Listener head and torso movements were free and uninstructed. Experiments at low and high frequencies with eight human listeners in a relatively dry room indicated some modest improvement when listeners were allowed to move, especially at high frequencies. The experiments sought to understand listener dynamic localization strategies in detail. Head position and orientation were tracked electronically, and ear-canal signals were recorded throughout the 9 s of each moving localization trial. The availability of complete physical information enabled the testing of two model strategies: (1) relative null strategy, using instantaneous zeros of the listener-related source angle; and (2) inferred source strategy, using a continuum of apparent source locations implied by the listener's instantaneous forward direction and listener-related source angle. The predicted sources were given weights determined by the listener motion. Both models were statistically successful in coping with a great variety of listener motions and temporally evolving cues.
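The relative null strategy can be illustrated numerically: whenever the listener-related source angle passes through zero, the listener's forward direction momentarily points at the source, so the head orientation at those instants estimates the source azimuth. The sketch below is our own illustration (the trajectory, sampling, and names are assumptions, not the authors' implementation):

```python
import numpy as np

# Hypothetical 9-s head-yaw trajectory (degrees): the listener sweeps
# back and forth past the true source direction.
t = np.linspace(0.0, 9.0, 901)
head_yaw = 40.0 * np.sin(2.0 * np.pi * 0.3 * t)

source_az = 25.0  # true source azimuth, unknown to the listener

# Listener-related source angle: source direction relative to the nose,
# wrapped to [-180, 180).
rel_angle = (source_az - head_yaw + 180.0) % 360.0 - 180.0

# Relative null strategy: instants where the listener-related angle
# crosses zero are moments the forward direction points at the source.
crossing = np.diff(np.sign(rel_angle)) != 0
null_yaws = head_yaw[:-1][crossing]

# Averaging the head orientation at the nulls recovers the source azimuth
# (to within one sample step of head motion).
estimate = null_yaws.mean()
```

With the oscillating trajectory above, the head crosses the 25° source direction several times per trial, and the pooled null orientations cluster tightly around the true azimuth.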
Affiliation(s)
- Eric J Macaulay
- Department of Physics and Astronomy Michigan State University, 567 Wilson Rd., East Lansing, Michigan, 48824, USA
- William M Hartmann
- Department of Physics and Astronomy Michigan State University, 567 Wilson Rd., East Lansing, Michigan, 48824, USA
15
Valzolgher C, Verdelet G, Salemme R, Lombardi L, Gaveau V, Farné A, Pavani F. Reaching to sounds in virtual reality: A multisensory-motor approach to promote adaptation to altered auditory cues. Neuropsychologia 2020; 149:107665. [PMID: 33130161 DOI: 10.1016/j.neuropsychologia.2020.107665]
Abstract
When localising sounds in space the brain relies on internal models that specify the correspondence between the auditory input reaching the ears, initial head position, and coordinates in external space. These models can be updated throughout life, setting the basis for re-learning spatial hearing abilities in adulthood. In addition, strategic behavioural adjustments allow people to quickly adapt to atypical listening situations. Until recently, the potential role of dynamic listening, involving head movements or reaching to sounds, has remained largely overlooked. Here, we exploited visual virtual reality (VR) and real-time kinematic tracking to study the role of active multisensory-motor interactions when hearing individuals adapt to altered binaural cues (one ear plugged and muffed). Participants were immersed in a VR scenario showing 17 virtual speakers at ear level. In each trial, they heard a sound delivered from a real speaker aligned with one of the virtual ones and were instructed either to reach and touch the perceived sound source (Reaching group) or to read the label associated with the speaker (Naming group). Participants were free to move their heads during the task and received audio-visual feedback on their performance. Most importantly, they performed the task under binaural or monaural listening. Results show that both groups adapted rapidly to monaural listening, improving sound localisation performance across trials and changing their head-movement behaviour. Reaching the sounds induced faster and larger sound localisation improvements compared to just naming their positions. This benefit was linked to progressively wider head movements to explore auditory space, selectively in the Reaching group. In conclusion, reaching to sounds in an immersive visual VR context proved most effective for adapting to altered binaural listening. Head movements played an important role in adaptation, pointing to the importance of dynamic listening when implementing training protocols for improving spatial hearing.
Affiliation(s)
- Chiara Valzolgher
- IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), France; Centre for Mind/Brain Sciences (CIMeC), University of Trento, Italy.
- Romeo Salemme
- IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), France; Neuro-immersion, Centre de Recherche en Neuroscience Lyon (CRNL), France
- Luigi Lombardi
- Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Italy
- Valerie Gaveau
- IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), France
- Alessandro Farné
- IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), France; Centre for Mind/Brain Sciences (CIMeC), University of Trento, Italy; Neuro-immersion, Centre de Recherche en Neuroscience Lyon (CRNL), France
- Francesco Pavani
- IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), France; Centre for Mind/Brain Sciences (CIMeC), University of Trento, Italy; Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Italy
16
17
Bell L, Scharke W, Reindl V, Fels J, Neuschaefer-Rube C, Konrad K. Auditory and Visual Response Inhibition in Children with Bilateral Hearing Aids and Children with ADHD. Brain Sci 2020; 10:E307. [PMID: 32443468 PMCID: PMC7287647 DOI: 10.3390/brainsci10050307]
Abstract
Children fitted with hearing aids (HAs) and children with attention deficit/hyperactivity disorder (ADHD) often have marked difficulties concentrating in noisy environments. However, little is known about the underlying neural mechanisms of auditory and visual attention deficits in a direct comparison of both groups. The current functional near-infrared spectroscopy (fNIRS) study was the first to investigate the behavioral performance and neural activation during an auditory and a visual go/nogo paradigm in children fitted with bilateral HAs, children with ADHD, and typically developing children (TDC). All children reacted faster, but less accurately, to visual than auditory stimuli, indicating a sensory-specific response inhibition efficiency. Independent of modality, children with ADHD and children with HAs reacted faster and tended to show more false alarms than TDC. On a neural level, however, children with ADHD showed supra-modal neural alterations, particularly in frontal regions. In contrast, children with HAs exhibited modality-dependent alterations in the right temporopolar cortex, with higher activation observed in the auditory than in the visual condition. Thus, while children with ADHD and children with HAs showed similar behavioral alterations, different neural mechanisms might underlie these behavioral changes. Future studies are warranted to confirm the current findings with larger samples. To this end, fNIRS provides a promising tool to differentiate the neural mechanisms underlying response inhibition deficits between groups and modalities.
Affiliation(s)
- Laura Bell
- Child Neuropsychology Section, Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, Medical Faculty, RWTH Aachen University, D-52074 Aachen, Germany
- Wolfgang Scharke
- Child Neuropsychology Section, Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, Medical Faculty, RWTH Aachen University, D-52074 Aachen, Germany
- Institute of Cognitive and Experimental Psychology, RWTH Aachen University, D-52074 Aachen, Germany
- Vanessa Reindl
- Child Neuropsychology Section, Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, Medical Faculty, RWTH Aachen University, D-52074 Aachen, Germany
- JARA-Brain Institute II, Molecular Neuroscience and Neuroimaging (INM-11), RWTH Aachen & Research Centre Juelich, D-52428 Juelich, Germany
- Janina Fels
- Teaching and Research Area of Medical Acoustics, Institute of Technical Acoustics, RWTH Aachen University, D-52074 Aachen, Germany
- Christiane Neuschaefer-Rube
- Clinic of Phoniatrics, Pedaudiology, and Communication Disorders, Medical Faculty, RWTH Aachen University, D-52074 Aachen, Germany
- Kerstin Konrad
- Child Neuropsychology Section, Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, Medical Faculty, RWTH Aachen University, D-52074 Aachen, Germany
- JARA-Brain Institute II, Molecular Neuroscience and Neuroimaging (INM-11), RWTH Aachen & Research Centre Juelich, D-52428 Juelich, Germany
18
Bermejo F, Di Paolo EA, Gilberto LG, Lunati V, Barrios MV. Learning to find spatially reversed sounds. Sci Rep 2020; 10:4562. [PMID: 32165690 PMCID: PMC7067813 DOI: 10.1038/s41598-020-61332-4]
Abstract
Adaptation to systematic visual distortions is well-documented but there is little evidence of similar adaptation to radical changes in audition. We use a pseudophone to transpose the sound streams arriving at the left and right ears, evaluating the perceptual effects it provokes and the possibility of learning to locate sounds in the reversed condition. Blindfolded participants remain seated at the center of a semicircular arrangement of 7 speakers and are asked to orient their head towards a sound source. We postulate that a key factor underlying adaptation is the self-generated activity that allows participants to learn new sensorimotor schemes. We investigate passive listening conditions (very short duration stimulus not permitting active exploration) and dynamic conditions (continuous stimulus allowing participants time to freely move their heads or remain still). We analyze head movement kinematics, localization errors, and qualitative reports. Results show movement-induced perceptual disruptions in the dynamic condition with static sound sources displaying apparent movement. This effect is reduced after a short training period and participants learn to find sounds in a left-right reversed field for all but the extreme lateral positions where motor patterns are more restricted. Strategies become less exploratory and more direct with training. Results support the hypothesis that self-generated movements underlie adaptation to radical sensorimotor distortions.
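The pseudophone's core manipulation is a left-right transposition of the two ear signals. The study used a hardware device; as a minimal software illustration of the same channel swap (the `pseudophone` function and `mix` parameter are our own assumptions, not the authors' apparatus):

```python
import numpy as np

def pseudophone(stereo, mix=1.0):
    """Left-right transpose a stereo signal (software stand-in for a
    hardware pseudophone). stereo: (n_samples, 2) array; mix=1.0 fully
    swaps the channels, mix=0.0 leaves them untouched."""
    swapped = stereo[:, ::-1]  # reverse the channel axis
    return (1.0 - mix) * stereo + mix * swapped

# A source lateralized to the left (most energy in the left channel) ...
left_source = np.column_stack([np.ones(4), 0.2 * np.ones(4)])
reversed_field = pseudophone(left_source)

# ... ends up lateralized to the right after the transposition.
assert np.allclose(reversed_field[:, 1], 1.0)
assert np.allclose(reversed_field[:, 0], 0.2)
```

Applying the swap twice restores the original field, which is why a short training period can re-pair motor exploration with the reversed cues without destroying the underlying signal content.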
Affiliation(s)
- Fernando Bermejo
- Centro de Investigación y Transferencia en Acústica, Universidad Tecnológica Nacional - Facultad Regional Córdoba, CONICET, CP 5016, Córdoba, Argentina.
- Facultad de Psicología, Universidad Nacional de Córdoba, CP 5016, Córdoba, Argentina.
- Ezequiel A Di Paolo
- Ikerbasque, Basque Foundation for Science, Bilbao, Spain
- IAS-Research Center for Life, Mind, and Society, University of the Basque Country, San Sebastián, Spain
- Centre for Computational Neuroscience and Robotics, University of Sussex, Brighton, UK
- L Guillermo Gilberto
- Centro de Investigación y Transferencia en Acústica, Universidad Tecnológica Nacional - Facultad Regional Córdoba, CONICET, CP 5016, Córdoba, Argentina
- Consejo Nacional de Investigaciones Científicas y Tecnológicas (CONICET), Ciudad Autónoma de Buenos Aires, Argentina
- Valentín Lunati
- Centro de Investigación y Transferencia en Acústica, Universidad Tecnológica Nacional - Facultad Regional Córdoba, CONICET, CP 5016, Córdoba, Argentina
- Consejo Nacional de Investigaciones Científicas y Tecnológicas (CONICET), Ciudad Autónoma de Buenos Aires, Argentina
- M Virginia Barrios
- Centro de Investigación y Transferencia en Acústica, Universidad Tecnológica Nacional - Facultad Regional Córdoba, CONICET, CP 5016, Córdoba, Argentina
- Facultad de Psicología, Universidad Nacional de Córdoba, CP 5016, Córdoba, Argentina
19
Hendrikse MME, Grimm G, Hohmann V. Evaluation of the Influence of Head Movement on Hearing Aid Algorithm Performance Using Acoustic Simulations. Trends Hear 2020; 24:2331216520916682. [PMID: 32270755 PMCID: PMC7153187 DOI: 10.1177/2331216520916682]
Abstract
Head movements can improve sound localization performance and speech intelligibility in acoustic environments with spatially distributed sources. However, they can affect the performance of hearing aid algorithms, when adaptive algorithms have to adjust to changes in the acoustic scene caused by head movement (the so-called maladaptation effect) or when directional algorithms are not facing in the optimal direction because the head has moved away (the so-called misalignment effect). In this article, we investigated the mechanisms behind these maladaptation and misalignment effects for a set of six standard hearing aid algorithms using acoustic simulations based on premade databases; this was done so we could study the effects as carefully as possible. Experiment 1 investigated the maladaptation effect by analyzing hearing aid benefit after simulated rotational head movement in simple anechoic noise scenarios. The effects of movement parameters (start angle and peak velocity), noise scenario complexity, and adaptation time were studied, as well as the recovery time of the algorithms. However, a significant maladaptation effect was only found in the most unrealistic anechoic scenario with one noise source. Experiment 2 investigated the effects of maladaptation and misalignment using previously recorded natural head movements in acoustic scenes resembling everyday life situations. In line with the results of Experiment 1, no effect of maladaptation was found in these more realistic acoustic scenes. However, a significant effect of misalignment on the performance of directional algorithms was found. This demonstrates the need to take head movement into account in the evaluation of directional hearing aid algorithms.
Affiliation(s)
- Maartje M. E. Hendrikse
- Auditory Signal Processing and Cluster of Excellence "Hearing4all", Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
- Giso Grimm
- Auditory Signal Processing and Cluster of Excellence "Hearing4all", Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
- Volker Hohmann
- Auditory Signal Processing and Cluster of Excellence "Hearing4all", Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
20
Hendrikse MME, Llorach G, Hohmann V, Grimm G. Movement and Gaze Behavior in Virtual Audiovisual Listening Environments Resembling Everyday Life. Trends Hear 2019; 23:2331216519872362. [PMID: 32516060 PMCID: PMC6732870 DOI: 10.1177/2331216519872362]
Abstract
Recent achievements in hearing aid development, such as visually guided hearing aids, make it increasingly important to study movement behavior in everyday situations in order to develop test methods and evaluate hearing aid performance. In this work, audiovisual virtual environments (VEs) were designed for communication conditions in a living room, a lecture hall, a cafeteria, a train station, and a street environment. Movement behavior (head movement, gaze direction, and torso rotation) and electroencephalography signals were measured in these VEs in the laboratory for 22 younger normal-hearing participants and 19 older normal-hearing participants. These data establish a reference for future studies that will investigate the movement behavior of hearing-impaired listeners and hearing aid users for comparison. Questionnaires were used to evaluate the subjective experience in the VEs. A test-retest comparison showed that the measured movement behavior is reproducible and that the measures of movement behavior used in this study are reliable. Moreover, evaluation of the questionnaires indicated that the VEs are sufficiently realistic. The participants rated the experienced acoustic realism of the VEs positively, and although the rating of the experienced visual realism was lower, the participants felt to some extent present and involved in the VEs. Analysis of the movement data showed that movement behavior depends on the VE and the age of the subject and is predictable in multitalker conversations and for moving distractors. The VEs and a database of the collected data are publicly available.
Affiliation(s)
- Gerard Llorach
- Medizinische Physik and Cluster of Excellence "Hearing4all", Universität Oldenburg, Germany
- Hörzentrum Oldenburg GmbH, Germany
- Volker Hohmann
- Medizinische Physik and Cluster of Excellence "Hearing4all", Universität Oldenburg, Germany
- Hörzentrum Oldenburg GmbH, Germany
- Giso Grimm
- Medizinische Physik and Cluster of Excellence "Hearing4all", Universität Oldenburg, Germany
21
22
Brimijoin WO, Akeroyd MA. The Effects of Hearing Impairment, Age, and Hearing Aids on the Use of Self-Motion for Determining Front/Back Location. J Am Acad Audiol 2018; 27:588-600. [PMID: 27406664 DOI: 10.3766/jaaa.15101]
Abstract
BACKGROUND: There are two cues that listeners use to disambiguate the front/back location of a sound source: high-frequency spectral cues associated with the head and pinnae, and self-motion-related binaural cues. The use of these cues can be compromised in listeners with hearing impairment and users of hearing aids.
PURPOSE: To determine how age, hearing impairment, and the use of hearing aids affect a listener's ability to determine front from back based on both self-motion and spectral cues.
RESEARCH DESIGN: We used a previously published front/back illusion: signals whose physical source location is rotated around the head at twice the angular rate of the listener's head movements are perceptually located in the opposite hemifield from where they physically are. In normal-hearing listeners, the strength of this illusion decreases as a function of low-pass filter cutoff frequency; this is the result of a conflict between spectral cues and dynamic binaural cues for sound source location. The illusion was used as an assay of self-motion processing in listeners with hearing impairment and users of hearing aids.
STUDY SAMPLE: We recruited 40 hearing-impaired participants, with an average age of 62 yr. The data for three listeners were discarded because they did not move their heads enough during the experiment.
DATA COLLECTION AND ANALYSIS: Listeners sat at the center of a ring of 24 loudspeakers, turned their heads back and forth, and used a wireless keypad to report the front/back location of statically presented signals and of dynamically moving signals with illusory locations. Front/back accuracy for static signals, the strength of front/back illusions, and minimum audible movement angle were measured for each listener in each condition. All measurements were made in each listener both aided and unaided.
RESULTS: Hearing-impaired listeners were less accurate at front/back discrimination for both static and illusory conditions. Neither static nor illusory conditions were affected by high-frequency content. Hearing aids had heterogeneous effects from listener to listener, but independent of other factors, on average, listeners wearing aids exhibited a spectrally dependent increase in "front" responses: the more high-frequency energy in the signal, the more likely they were to report it as coming from the front.
CONCLUSIONS: Hearing impairment was associated with a decrease in the accuracy of self-motion processing for both static and moving signals. Hearing aids may not always reproduce dynamic self-motion-related cues with sufficient fidelity to allow reliable front/back discrimination.
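The front/back illusion rests on a simple geometric fact: a source rotating at twice the head's angular rate produces the same first-order lateral binaural cue as a static source mirrored about the interaural axis. A small numerical check of that identity (our own sketch under far-field, azimuth-only assumptions; all names are ours):

```python
import numpy as np

def relative_angle(source_az, head_yaw):
    # Source azimuth relative to the listener's nose, wrapped to [-180, 180).
    return (source_az - head_yaw + 180.0) % 360.0 - 180.0

def lateral_cue(angle_deg):
    # First-order interaural cues vary as sin(angle), so an angle and its
    # front/back mirror (180 - angle) produce identical values.
    return np.sin(np.deg2rad(angle_deg))

phys_start = 30.0            # physical source starts at 30 deg (front-right)
mirror = 180.0 - phys_start  # front/back mirror location: 150 deg (back-right)

head_yaw = np.linspace(-40.0, 40.0, 81)     # head turned back and forth
rotating_src = phys_start + 2.0 * head_yaw  # source moves at 2x head rate

cue_rotating = lateral_cue(relative_angle(rotating_src, head_yaw))
cue_mirror_static = lateral_cue(relative_angle(mirror, head_yaw))

# The 2x-rotating source is indistinguishable (in this cue) from a static
# source at the mirrored, opposite-hemifield location for every head pose.
assert np.allclose(cue_rotating, cue_mirror_static)
```

Because the dynamic cue agrees with the mirrored location at every instant, only spectral cues can flag the true hemifield, which is why illusion strength trades off against low-pass filtering in normal-hearing listeners.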
Affiliation(s)
- W Owen Brimijoin
- MRC/CSO Institute of Hearing Research (Scottish Section), Glasgow Royal Infirmary, Glasgow, UK
23
Archer-Boyd AW, Holman JA, Brimijoin WO. The minimum monitoring signal-to-noise ratio for off-axis signals and its implications for directional hearing aids. Hear Res 2017; 357:64-72. [PMID: 29223929 PMCID: PMC5759949 DOI: 10.1016/j.heares.2017.11.011]
Abstract
The signal-to-noise ratio (SNR) benefit of hearing aid directional microphones is dependent on the angle of the listener relative to the target, something that can change drastically and dynamically in a typical group conversation. When a new target signal is significantly off-axis, directional microphones lead to slower target orientation, more complex movements, and more reversals. This raises the question of whether there is an optimal design for directional microphones. In principle an ideal microphone would provide the user with sufficient directionality to help with speech understanding, but not attenuate off-axis signals so strongly that orienting to new signals was difficult or impossible. We investigated the latter part of this question. In order to measure the minimal monitoring SNR for reliable orientation to off-axis signals, we measured head-orienting behaviour towards targets of varying SNRs and locations for listeners with mild to moderate bilateral symmetrical hearing loss. Listeners were required to turn and face a female talker in background noise and movements were tracked using a head-mounted crown and infrared system that recorded yaw in a ring of loudspeakers. The target appeared randomly at ± 45, 90 or 135° from the start point. The results showed that as the target SNR decreased from 0 dB to −18 dB, first movement duration and initial misorientation count increased, then fixation error, and finally reversals increased. Increasing the target angle increased movement duration at all SNRs, decreased reversals (above −12 dB target SNR), and had little to no effect on initial misorientations. These results suggest that listeners experience some difficulty orienting towards sources as the target SNR drops below −6 dB, and that if one intends to make a directional microphone that is usable in a moving conversation, then off-axis attenuation should be no more than 12 dB. 
Highlights:
- Investigated the minimum signal-to-noise ratio (SNR) required to localize a target.
- Head movement to targets at varying SNRs and locations was measured.
- Orienting towards a new off-axis target became difficult below −6 dB SNR.
- An ideal directional microphone should not attenuate off-axis sources by more than 12 dB.
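The 12 dB ceiling can be made concrete with a standard first-order directional pattern, g(θ) = a + (1 − a)·cos θ. The sketch below is our own illustration (not the authors' analysis): it shows that a cardioid far exceeds the ceiling, and solves for the most directional first-order pattern whose worst-case off-axis attenuation stays within 12 dB.

```python
import numpy as np

def pattern_gain(theta_deg, a):
    # First-order directional microphone: g(theta) = a + (1 - a) * cos(theta).
    # a = 1.0 is omnidirectional, a = 0.5 is a cardioid.
    return a + (1.0 - a) * np.cos(np.deg2rad(theta_deg))

def worst_off_axis_attenuation_db(a):
    # Worst-case attenuation relative to on-axis gain, over 0..180 deg.
    theta = np.linspace(0.0, 180.0, 1801)
    g = np.abs(pattern_gain(theta, a))
    g_min = max(g.min(), 1e-9)  # guard against a perfect null
    return -20.0 * np.log10(g_min / pattern_gain(0.0, a))

# A cardioid has a rear null: far more off-axis attenuation than the
# roughly 12 dB ceiling suggested by the orienting results.
assert worst_off_axis_attenuation_db(0.5) > 40.0

# Solve |g(180)| = 2a - 1 for the 12 dB limit to get the most
# directional first-order pattern that still respects it.
limit = 10.0 ** (-12.0 / 20.0)  # ~0.251 in linear gain
a_bound = (1.0 + limit) / 2.0   # ~0.626
assert worst_off_axis_attenuation_db(a_bound) <= 12.01
```

Under this (simplified, free-field) model, any first-order pattern with a ≥ ~0.63 keeps off-axis sources audible enough for reorientation, at the cost of reduced on-axis directivity.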
Affiliation(s)
- Alan W Archer-Boyd
- MRC/CSO Institute of Hearing Research (Scottish Section), Glasgow Royal Infirmary, 10-16 Alexandra Parade, Glasgow, G31 2ER, UK; MRC Cognition & Brain Sciences Unit, 15 Chaucer Road, Cambridge, CB2 7EF, UK.
- Jack A Holman
- MRC/CSO Institute of Hearing Research (Scottish Section), Glasgow Royal Infirmary, 10-16 Alexandra Parade, Glasgow, G31 2ER, UK
- W Owen Brimijoin
- MRC/CSO Institute of Hearing Research (Scottish Section), Glasgow Royal Infirmary, 10-16 Alexandra Parade, Glasgow, G31 2ER, UK
24
Brungart DS, Cohen JI, Zion D, Romigh G. The localization of non-individualized virtual sounds by hearing impaired listeners. J Acoust Soc Am 2017; 141:2870. [PMID: 28464685 DOI: 10.1121/1.4979462]
Abstract
Although many studies have evaluated the performance of virtual audio displays with normal hearing listeners, very little information is available on the effect that hearing loss has on the localization of virtual sounds. In this study, normal hearing (NH) and hearing impaired (HI) listeners were asked to localize noise stimuli with short (250 ms), medium (1000 ms), and long (4000 ms) durations both in the free field and with a non-individualized head-tracked virtual audio display. The results show that the HI listeners localized sounds less accurately than the NH listeners, and that both groups consistently localized virtual sounds less accurately than free-field sounds. These results indicate that HI listeners are sensitive to individual differences in head related transfer functions (HRTFs), which means that they might have difficulty using auditory display systems that rely on generic HRTFs to control the apparent locations of virtual sounds. However, the results also reveal a high correlation between free-field and virtual localization performance in the HI listeners. This suggests that it may be feasible to use non-individualized virtual audio display systems to predict the auditory localization performance of HI listeners in clinical environments where free-field speaker arrays are not available.
Affiliation(s)
- Douglas S Brungart
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, 8901 Wisconsin Avenue, Bethesda, Maryland 20889, USA
- Julie I Cohen
- The Henry M. Jackson Foundation for the Advancement of Military Medicine, 6720A Rockledge Drive, Bethesda, Maryland 20817, USA
- Danielle Zion
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, 8901 Wisconsin Avenue, Bethesda, Maryland 20889, USA
- Griffin Romigh
- Air Force Research Laboratory, 2610 Seventh Street, Wright Patterson Air Force Base, Ohio 45433, USA
25
Pavani F, Venturini M, Baruffaldi F, Artesini L, Bonfioli F, Frau GN, van Zoest W. Spatial and non-spatial multisensory cueing in unilateral cochlear implant users. Hear Res 2017; 344:24-37. [DOI: 10.1016/j.heares.2016.10.025]
26
Leung MY, Lo J, Leung YY. Accuracy of Different Modalities to Record Natural Head Position in 3 Dimensions: A Systematic Review. J Oral Maxillofac Surg 2016; 74:2261-2284. [PMID: 27235181 DOI: 10.1016/j.joms.2016.04.022]
Abstract
PURPOSE: Three-dimensional (3D) images are taken with positioning devices to ensure a patient's stability; these devices, however, place the patient's head into a random orientation. Reorientation of images to the natural head position (NHP) is necessary for appropriate assessment of dentofacial deformities before any surgical planning. The aim of this study was to review the literature systematically to identify and evaluate the various modalities available to record the NHP in 3 dimensions and to compare their accuracy.
MATERIALS AND METHODS: A systematic literature search of the PubMed, Cochrane Library, and Embase databases, with no limitations on publication time or language, was performed in July 2015. The search and evaluations of articles were performed in 4 rounds. The methodologies, accuracies, advantages, and limitations of various modalities to record NHP were examined.
RESULTS: Eight articles were included in the final review. Six modalities to record NHP were identified, namely 1) stereophotogrammetry, 2) facial markings along laser lines, 3) clinical photographs and the pose from orthography and scaling with iterations (POSIT) algorithm, 4) digital orientation sensing, 5) a handheld 3D camera measuring system, and 6) laser scanning. Digital orientation sensing had good accuracy, with mean angular differences from the reference within 1° (0.07 ± 0.49° and 0.12 ± 0.54°, respectively). Laser scanning was shown to be comparable to digital orientation sensing. The method involving clinical photographs and the POSIT algorithm was reported to have good accuracy, with mean angular differences for pitch, roll, and yaw within 1° (-0.17 ± 0.50°). Stereophotogrammetry was reported to have the highest reliability, with mean angular deviations in pitch, roll, and yaw for active and passive stereophotogrammetric devices within 0.1° (0.004771 ± 0.045645° and 0.007572 ± 0.079088°, respectively).
CONCLUSIONS: This systematic review showed that recording the NHP in 3 dimensions with a digital orientation sensor has good accuracy. Laser scanning was found to have comparable accuracy to digital orientation sensing, but routine clinical use was limited by its high cost and low portability. Stereophotogrammetry and the method using a single clinical photograph and the POSIT algorithm were potential alternatives. Nevertheless, clinical trials are needed to verify their applications in patients. Preferably, a digital orientation sensor should be used as a reference for comparison with new proposed methods of recording the NHP in future research.
Affiliation(s)
- Ming Yin Leung
- Resident, Discipline of Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong, China
- John Lo
- Honorary Associate Professor, Discipline of Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong, China
- Yiu Yan Leung
- Clinical Assistant Professor, Discipline of Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong, China.
|
27
|
Abstract
Objectives: Although directional microphones on a hearing aid provide a signal-to-noise ratio benefit in a noisy background, the amount of benefit is dependent on how close the signal of interest is to the front of the user. It is assumed that when the signal of interest is off-axis, users can reorient themselves to the signal to make use of the directional microphones to improve signal-to-noise ratio. The present study tested this assumption by measuring the head-orienting behavior of bilaterally fit hearing-impaired individuals with their microphones set to omnidirectional and directional modes. The authors hypothesized that listeners using directional microphones would have greater difficulty in rapidly and accurately orienting to off-axis signals than they would when using omnidirectional microphones. Design: The authors instructed hearing-impaired individuals to turn and face a female talker in simultaneous surrounding male-talker babble. Participants pressed a button when they felt they were accurately oriented in the direction of the female talker. Participants completed three blocks of trials with their hearing aids in omnidirectional mode and three blocks in directional mode, with mode order randomized. Using a Vicon motion tracking system, the authors measured head position and computed fixation error, fixation latency, trajectory complexity, and proportion of misorientations. Results: Results showed that for larger off-axis target angles, listeners using directional microphones took longer to reach their targets than they did when using omnidirectional microphones, although they were just as accurate. They also used more complex movements and frequently made initial turns in the wrong direction. For smaller off-axis target angles, this pattern was reversed, and listeners using directional microphones oriented more quickly and smoothly to the targets than when using omnidirectional microphones. 
Conclusions: The authors argue that an increase in movement complexity indicates a switch from a simple orienting movement to a search behavior. For the most off-axis target angles, listeners using directional microphones appear to not know which direction to turn, so they pick a direction at random and simply rotate their heads until the signal becomes more audible. The changes in fixation latency and head orientation trajectories suggest that the decrease in off-axis audibility is a primary concern in the use of directional microphones, and listeners could experience a loss of initial target speech while turning toward a new signal of interest. If hearing-aid users are to receive maximum directional benefit in noisy environments, both adaptive directionality in hearing aids and clinical advice on using directional microphones should take head movement and orientation behavior into account.
|
28
|
Brimijoin WO, Akeroyd MA. The moving minimum audible angle is smaller during self motion than during source motion. Front Neurosci 2014; 8:273. [PMID: 25228856 PMCID: PMC4151253 DOI: 10.3389/fnins.2014.00273] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2014] [Accepted: 08/12/2014] [Indexed: 11/17/2022] Open
Abstract
We are rarely perfectly still: our heads rotate in three axes and move in three dimensions, constantly varying the spectral and binaural cues at the ear drums. In spite of this motion, static sound sources in the world are typically perceived as stable objects. This argues that the auditory system—in a manner not unlike the vestibulo-ocular reflex—works to compensate for self motion and stabilize our sensory representation of the world. We tested a prediction arising from this postulate: that self motion should be processed more accurately than source motion. We used an infrared motion tracking system to measure head angle, and real-time interpolation of head related impulse responses to create “head-stabilized” signals that appeared to remain fixed in space as the head turned. After being presented with pairs of simultaneous signals consisting of a man and a woman speaking a snippet of speech, normal and hearing impaired listeners were asked to report whether the female voice was to the left or the right of the male voice. In this way we measured the moving minimum audible angle (MMAA). This measurement was made while listeners were asked to turn their heads back and forth between ± 15° and the signals were stabilized in space. After this “self-motion” condition we measured MMAA in a second “source-motion” condition when listeners remained still and the virtual locations of the signals were moved using the trajectories from the first condition. For both normal and hearing impaired listeners, we found that the MMAA for signals moving relative to the head was ~1–2° smaller when the movement was the result of self motion than when it was the result of source motion, even though the motion with respect to the head was identical. These results as well as the results of past experiments suggest that spatial processing involves an ongoing and highly accurate comparison of spatial acoustic cues with self-motion cues.
Affiliation(s)
- W Owen Brimijoin
- Scottish Section, Institute of Hearing Research, Medical Research Council/Chief Scientist Office Glasgow, UK
- Michael A Akeroyd
- Scottish Section, Institute of Hearing Research, Medical Research Council/Chief Scientist Office Glasgow, UK
|
29
|
Mueller MF, Meisenbacher K, Lai WK, Dillier N. Sound localization with bilateral cochlear implants in noise: how much do head movements contribute to localization? Cochlear Implants Int 2013; 15:36-42. [PMID: 23684420 DOI: 10.1179/1754762813y.0000000040] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/31/2022]
Abstract
Bilateral cochlear implant (CI) users encounter difficulties in localizing sound sources in everyday environments, especially in the presence of background noise and reverberation. They tend to show large directional errors and front-back confusions compared to normal hearing (NH) subjects in the same conditions. In this study, the ability of bilateral CI users to use head movements to improve sound source localization was evaluated. Speech sentences of 0.5, 2, and 4.5 seconds were presented in noise to the listeners in conditions with and without head movements. The results show that for middle and long signal durations, the CI users could significantly reduce the number of front-back confusions. The angular accuracy, however, did not improve. Analysis of head trajectories showed that the CI users had great difficulties in moving their head towards the position of the source, whereas the NH listeners targeted the source loudspeaker correctly.
|
30
|
Ibrahim I, Parsa V, Macpherson E, Cheesman M. Evaluation of Speech Intelligibility and Sound Localization Abilities with Hearing Aids Using Binaural Wireless Technology. Audiol Res 2012; 3:e1. [PMID: 26557339 PMCID: PMC4627128 DOI: 10.4081/audiores.2013.e1] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2012] [Revised: 10/15/2012] [Accepted: 11/19/2012] [Indexed: 11/23/2022] Open
Abstract
Wireless synchronization of the digital signal processing (DSP) features between two hearing aids in a bilateral hearing aid fitting is a fairly new technology. This technology is expected to preserve the differences in time and intensity between the two ears by co-ordinating the bilateral DSP features such as multichannel compression, noise reduction, and adaptive directionality. The purpose of this study was to evaluate the benefits of wireless communication as implemented in two commercially available hearing aids. More specifically, this study measured speech intelligibility and sound localization abilities of normal hearing and hearing impaired listeners using bilateral hearing aids with wireless synchronization of multichannel Wide Dynamic Range Compression (WDRC). Twenty subjects participated; 8 had normal hearing and 12 had bilaterally symmetrical sensorineural hearing loss. Each individual completed the Hearing in Noise Test (HINT) and a sound localization test with two types of stimuli. No specific benefit from wireless WDRC synchronization was observed for the HINT; however, hearing impaired listeners had better localization with the wireless synchronization. Binaural wireless technology in hearing aids may improve localization abilities although the possible effect appears to be small at the initial fitting. With adaptation, the hearing aids with synchronized signal processing may lead to an improvement in localization and speech intelligibility. Further research is required to demonstrate the effect of adaptation to the hearing aids with synchronized signal processing on different aspects of auditory performance.
Affiliation(s)
- Iman Ibrahim
- Faculty of Health Sciences, Western University , London, Canada
- Vijay Parsa
- National Centre for Audiology, Western University , London, Canada
- Ewan Macpherson
- National Centre for Audiology, Western University , London, Canada
|
31
|
Brimijoin WO, Akeroyd MA. The role of head movements and signal spectrum in an auditory front/back illusion. Iperception 2012; 3:179-82. [PMID: 23145279 PMCID: PMC3485843 DOI: 10.1068/i7173sas] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2011] [Revised: 03/07/2012] [Indexed: 12/02/2022] Open
Abstract
We used a dynamic auditory spatial illusion to investigate the role of self-motion and acoustics in shaping our spatial percept of the environment. Using motion capture, we smoothly moved a sound source around listeners as a function of their own head movements. A lowpass filtered sound behind a listener that moved in the direction it would have moved if it had been located in the front was perceived as statically located in front. The converse effect occurred if the sound was in front but moved as if it were behind. The illusion was strongest for sounds lowpass filtered at 500 Hz and weakened as a function of increasing lowpass cut-off frequency. The signals with the most high-frequency energy were often associated with an unstable location percept that flickered from front to back as self-motion cues and spectral cues for location came into conflict with one another.
Affiliation(s)
- W Owen Brimijoin
- MRC Institute of Hearing Research (Scottish Section), Glasgow Royal Infirmary, 16 Alexandra Parade, Glasgow G31 2ER, UK.
|
32
|
Abstracts of the British Society of Audiology annual conference (incorporating the Experimental and Clinical Short papers meetings). Int J Audiol 2012. [DOI: 10.3109/14992027.2012.653103] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
|
33
|
Brimijoin WO, McShefferty D, Akeroyd MA. Undirected head movements of listeners with asymmetrical hearing impairment during a speech-in-noise task. Hear Res 2011; 283:162-8. [PMID: 22079774 PMCID: PMC3315013 DOI: 10.1016/j.heares.2011.10.009] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/06/2011] [Revised: 10/27/2011] [Accepted: 10/28/2011] [Indexed: 10/29/2022]
Abstract
It has long been understood that the level of a sound at the ear is dependent on head orientation, but the way in which listeners move their heads during listening has remained largely unstudied. Given the task of understanding a speech signal in the presence of a simultaneous noise, listeners could potentially use head orientation to either maximize the level of the signal in their better ear, or to maximize the signal-to-noise ratio in their better ear. To establish what head orientation strategy listeners use in a speech comprehension task, we used an infrared motion-tracking system to measure the head movements of 36 listeners with large (>16 dB) differences in hearing threshold between their left and right ears. We engaged listeners in a difficult task of understanding sentences presented at the same time as a spatially separated background noise. We found that they tended to orient their heads so as to maximize the level of the target sentence in their better ear, irrespective of the position of the background noise. This is not ideal orientation behavior from the perspective of maximizing the signal-to-noise ratio (SNR) at the ear, but is a simple, easily implemented strategy that is often effective in an environment where the spatial position of multiple noise sources may be difficult or impossible to determine.
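The two candidate strategies contrasted in this abstract (maximize the target level at the better ear versus maximize the SNR at that ear) can be illustrated with a toy model. Everything here is a hypothetical sketch, not the authors' method: the ±6 dB cosine head-shadow approximation and the source positions are invented purely to show that the two strategies generally prefer different head orientations.

```python
import numpy as np

# Toy head-shadow model (hypothetical, for illustration only): the gain at an
# ear is highest when the source faces that ear and lowest on the far side.
def ear_gain_db(source_az_deg, head_az_deg, ear="right"):
    # Angle of the source relative to the ear's axis (+90 deg = right-ear side)
    ear_axis = 90.0 if ear == "right" else -90.0
    rel = np.deg2rad(source_az_deg - head_az_deg - ear_axis)
    return 6.0 * np.cos(rel)  # crude +/-6 dB shadow approximation

signal_az, noise_az = 0.0, 120.0      # fixed room positions, in degrees
orientations = np.arange(-180, 180)   # candidate head orientations

sig = ear_gain_db(signal_az, orientations, ear="right")    # "better" ear
noise = ear_gain_db(noise_az, orientations, ear="right")

best_level = orientations[np.argmax(sig)]          # maximize signal level
best_snr = orientations[np.argmax(sig - noise)]    # maximize SNR

print(best_level, best_snr)  # the two strategies disagree in general
```

In this toy geometry the level-maximizing orientation points the better ear straight at the talker, while the SNR-maximizing orientation trades a little signal level for extra noise shadowing, which mirrors the distinction the study draws between the simple strategy listeners actually used and the SNR-optimal one.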
Affiliation(s)
- W Owen Brimijoin
- MRC Institute of Hearing Research (Scottish Section), Glasgow Royal Infirmary, 16 Alexandra Parade, Glasgow G31 2ER, UK.
|
34
|
Song W, Ellermeier W, Hald J. Psychoacoustic evaluation of multichannel reproduced sounds using binaural synthesis and spherical beamforming. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2011; 130:2063-2075. [PMID: 21973361 DOI: 10.1121/1.3628323] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
The binaural auralization of a 3D sound field using spherical-harmonics beamforming (SHB) techniques was investigated and compared with the traditional method using a head-and-torso simulator (HATS). The new procedure was verified by comparing simulated room impulse responses with measured ones binaurally. The objective comparisons show that there is good agreement in the frequency range between 0.1 and 6.4 kHz. A listening experiment was performed to validate the SHB method subjectively and to compare it to the HATS method. Two musical excerpts, pop and classical, were used. Subjective responses were collected in two head rotation conditions (fixed and rotating) and six spatial reproduction modes, including phantom mono, stereo, and surround sound. The results show that subjective scales of width, spaciousness, and preference based on the SHB method were similar to those obtained for the HATS method, although the width and spaciousness of the stimuli processed by the SHB method were judged slightly higher than the ones using the HATS method in general. Thus, binaural synthesis using SHB may be a useful tool to reproduce a 3D sound field binaurally, while saving considerably on measurement time because head rotation can be simulated based on a single recording.
Affiliation(s)
- Wookeun Song
- Brüel & Kjær Sound & Vibration Measurement A/S, Skodsborgvej 307, 2850 Nærum, Denmark.
|
35
|
Akeroyd MA, Guy FH. The effect of hearing impairment on localization dominance for single-word stimuli. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2011; 130:312-23. [PMID: 21786901 PMCID: PMC3515009 DOI: 10.1121/1.3598466] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
Localization dominance (one of the phenomena of the "precedence effect") was measured in a large number of normal-hearing and hearing-impaired individuals and related to self-reported difficulties in everyday listening. The stimuli (single words) were made up of a "lead" followed 4 ms later by an equal-level "lag" from a different direction. The stimuli were presented from a circular ring of loudspeakers, either in quiet or in a background of spatially diffuse babble. Listeners were required to identify the loudspeaker from which they heard the sound. Localization dominance was quantified by the weighting factor c [B.G. Shinn-Cunningham et al., J. Acoust. Soc. Am. 93, 2923-2932 (1993)]. The results demonstrated large individual differences: Some listeners showed near-perfect localization dominance (c near 1) but many showed a much reduced effect. Two-thirds (64/93) of the listeners gave a value of c of at least 0.75. There was a significant correlation with hearing loss, such that better-hearing listeners showed better localization dominance. One of the items of the self-report questionnaire ("Do you have the impression of sounds being exactly where you would expect them to be?") showed a significant correlation with the experimental results. This suggests that reductions in localization dominance may affect everyday auditory perception.
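The weighting factor c cited above models the perceived direction as a weighted average of the lead and lag directions, with c = 1 meaning the lead fully dominates. A minimal sketch of that model, with illustrative angles only (the actual fitting procedure in the cited work is more involved):

```python
# Lead/lag weighting model after Shinn-Cunningham et al. (1993):
# perceived azimuth = c * lead + (1 - c) * lag.
def perceived_azimuth(lead_az, lag_az, c):
    """Weighted combination of lead and lag directions; c = 1 -> lead dominates."""
    return c * lead_az + (1.0 - c) * lag_az

def estimate_c(lead_az, lag_az, response_az):
    """Recover the weighting factor from a single pointing response."""
    return (response_az - lag_az) / (lead_az - lag_az)

# A listener with near-perfect localization dominance responds close to the lead:
print(estimate_c(lead_az=-30.0, lag_az=30.0, response_az=-24.0))
```

Averaging such single-trial estimates over many lead/lag configurations gives the per-listener c values the abstract summarizes (c near 1 for near-perfect dominance, c >= 0.75 for two-thirds of listeners).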
Affiliation(s)
- Michael A Akeroyd
- MRC Institute of Hearing Research (Scottish Section), Glasgow Royal Infirmary, Alexandra Parade, Glasgow G31 2ER, United Kingdom.
|