1
Pastore MT, Pulling KR, Chen C, Yost WA, Dorman MF. Synchronizing Automatic Gain Control in Bilateral Cochlear Implants Mitigates Dynamic Localization Deficits Introduced by Independent Bilateral Compression. Ear Hear 2024. [PMID: 38472134] [DOI: 10.1097/aud.0000000000001492]
Abstract
OBJECTIVES The independence of left and right automatic gain controls (AGCs) used in cochlear implants can distort interaural level differences and thereby compromise dynamic sound source localization. We assessed the degree to which synchronizing left and right AGCs mitigates those difficulties, as indicated by listeners' ability to use the changes in interaural level differences that come with head movements to avoid front-back reversals (FBRs). DESIGN Broadband noise stimuli were presented from one of six equally spaced loudspeakers surrounding the listener. Sound source identification was tested for stimuli presented at 70 dBA (above AGC threshold) for 10 bilateral cochlear implant patients, under conditions where (1) patients remained stationary and (2) free head movements within ±30° were encouraged. These conditions were repeated for both synchronized and independent AGCs. The same conditions were run at 50 dBA, below the AGC threshold, to assess listeners' baseline performance when AGCs were not engaged. In this way, the expected high variability in listener performance could be separated from the effects of independent AGCs to reveal the degree to which synchronizing AGCs could restore localization performance to what it was without AGC compression. RESULTS The mean rate of FBRs was higher for sound stimuli presented at 70 dBA with independent AGCs, both with and without head movements, than at 50 dBA, suggesting that when AGCs were independently engaged they contributed to poorer front-back localization. When listeners remained stationary, synchronizing AGCs did not significantly reduce the rate of FBRs. When AGCs were independent at 70 dBA, head movements did not have a significant effect on the rate of FBRs. Head movements did have a significant group effect on the rate of FBRs at 50 dBA when AGCs were not engaged and at 70 dBA when AGCs were synchronized. Synchronization of AGCs, together with head movements, reduced the rate of FBRs to approximately what it was in the 50-dBA baseline condition. Synchronizing AGCs also had a significant group effect on listeners' overall percent correct localization. CONCLUSIONS Synchronizing AGCs allowed listeners to mitigate front-back confusions introduced by unsynchronized AGCs when head motion was permitted, returning individual listener performance to roughly what it was in the 50-dBA baseline condition when AGCs were not engaged. Synchronization of AGCs did not overcome localization deficiencies that were observed when AGCs were not engaged and that are therefore unrelated to AGC compression.
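The ILD-distortion mechanism at issue can be illustrated with a minimal numerical sketch (ours, not the authors' or any manufacturer's processing chain): model each ear's AGC as a static compressor with an assumed knee and ratio, and compare the ILD that survives independent versus synchronized gain control.

```python
# Minimal sketch of why independent AGCs compress ILDs while synchronized
# AGCs preserve them. The 63-dB knee and 3:1 ratio are illustrative
# assumptions, not the parameters of any clinical processor.

def agc_gain_db(level_db, knee_db=63.0, ratio=3.0):
    """Gain (dB) of a static compressor: input above the knee is reduced
    so output level grows at 1/ratio of the input rate."""
    over = max(level_db - knee_db, 0.0)
    return -over * (1.0 - 1.0 / ratio)

# Source off to the right: the left ear is quieter by a 10-dB head-shadow ILD.
right_db, left_db = 70.0, 60.0

# Independent AGCs: each ear applies a gain based only on its own level.
ild_indep = (right_db + agc_gain_db(right_db)) - (left_db + agc_gain_db(left_db))

# Synchronized AGCs: both ears share the gain computed at the louder ear.
shared = agc_gain_db(max(right_db, left_db))
ild_sync = (right_db + shared) - (left_db + shared)

print(f"ILD without AGC:        {right_db - left_db:.1f} dB")   # 10.0
print(f"ILD, independent AGCs:  {ild_indep:.1f} dB")            # 5.3 (compressed)
print(f"ILD, synchronized AGCs: {ild_sync:.1f} dB")             # 10.0 (preserved)
```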
Affiliation(s)
- M Torben Pastore
- College of Health Solutions, Arizona State University, Tempe, Arizona, USA
- Kathryn R Pulling
- College of Health Solutions, Arizona State University, Tempe, Arizona, USA
- Chen Chen
- Advanced Bionics, Valencia, California, USA
- William A Yost
- College of Health Solutions, Arizona State University, Tempe, Arizona, USA
- Michael F Dorman
- College of Health Solutions, Arizona State University, Tempe, Arizona, USA
2
Valzolgher C, Capra S, Sum K, Finos L, Pavani F, Picinali L. Spatial hearing training in virtual reality with simulated asymmetric hearing loss. Sci Rep 2024; 14:2469. [PMID: 38291126] [PMCID: PMC10827792] [DOI: 10.1038/s41598-024-51892-0]
Abstract
Sound localization is essential to perceive the surrounding world and to interact with objects. This ability can be learned over time, and multisensory and motor cues play a crucial role in the learning process. A recent study demonstrated that, when training localization skills, reaching to the sound source to determine its position reduced localization errors faster and to a greater extent than just naming the sources' positions, even though in both tasks participants received the same feedback about the correct position of the sound source after a wrong response. However, it remains to be established which features made reaching to sounds more effective than naming. In the present study, we introduced a further condition in which the hand is the effector providing the response, but without reaching toward the space occupied by the target source: the pointing condition. We tested three groups of participants (naming, pointing, and reaching groups), each performing a sound localization task in normal and altered listening situations (i.e., mild-moderate unilateral hearing loss) simulated through auditory virtual reality technology. The experiment comprised four blocks: during the first and last blocks, participants were tested in the normal listening condition, and during the second and third, in the altered listening condition. We measured their performance, their subjective judgments (e.g., effort), and their head-related behavior (through kinematic tracking). First, participants' performance decreased when exposed to asymmetrical mild-moderate hearing impairment, more specifically on the ipsilateral side and for the pointing group. Second, all groups decreased their localization errors across the altered listening blocks, but the extent of this reduction was greater for the reaching and pointing groups than for the naming group. Crucially, the reaching group showed a greater error reduction on the side where the listening alteration was applied. Furthermore, across blocks, the reaching and pointing groups increased their head motor behavior during the task (i.e., they increased approaching head movements toward the space of the sound) more than the naming group. Third, while performance in the unaltered blocks (first and last) was comparable, only the reaching group continued to exhibit head behavior similar to that developed during the altered blocks (second and third), corroborating the previously observed relationship between the reaching-to-sounds task and head movements. In conclusion, this study further demonstrates the effectiveness of reaching to sounds, as compared to pointing and naming, for the learning process. This effect could be related both to the process of implementing goal-directed motor actions and to the role of reaching actions in fostering the implementation of head-related motor strategies.
Affiliation(s)
- Chiara Valzolgher
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068 Rovereto, TN, Italy
- Sara Capra
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068 Rovereto, TN, Italy
- Kevin Sum
- Audio Experience Design (www.axdesign.co.uk), Imperial College London, London, UK
- Livio Finos
- Department of Statistical Sciences, University of Padova, Padova, Italy
- Francesco Pavani
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068 Rovereto, TN, Italy
- Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Rovereto, Italy
- Centro Interuniversitario di Ricerca "Cognizione, Linguaggio e Sordità" (CIRCLeS), Rovereto, Italy
- Lorenzo Picinali
- Audio Experience Design (www.axdesign.co.uk), Imperial College London, London, UK
3
Mai J, Gargiullo R, Zheng M, Esho V, Hussein OE, Pollay E, Bowe C, Williamson LM, McElroy AF, Goolsby WN, Brooks KA, Rodgers CC. Sound-seeking before and after hearing loss in mice. bioRxiv 2024:2024.01.08.574475. [PMID: 38260458] [PMCID: PMC10802496] [DOI: 10.1101/2024.01.08.574475]
Abstract
How we move our bodies affects how we perceive sound. For instance, we can explore an environment to seek out the source of a sound and we can use head movements to compensate for hearing loss. How we do this is not well understood because many auditory experiments are designed to limit head and body movements. To study the role of movement in hearing, we developed a behavioral task called sound-seeking that rewarded mice for tracking down an ongoing sound source. Over the course of learning, mice more efficiently navigated to the sound. We then asked how auditory behavior was affected by hearing loss induced by surgical removal of the malleus from the middle ear. An innate behavior, the auditory startle response, was abolished by bilateral hearing loss and unaffected by unilateral hearing loss. Similarly, performance on the sound-seeking task drastically declined after bilateral hearing loss and did not recover. In striking contrast, mice with unilateral hearing loss were only transiently impaired on sound-seeking; over a recovery period of about a week, they regained high levels of performance, increasingly reliant on a different spatial sampling strategy. Thus, even in the face of permanent unilateral damage to the peripheral auditory system, mice recover their ability to perform a naturalistic sound-seeking task. This paradigm provides an opportunity to examine how body movement enables better hearing and resilient adaptation to sensory deprivation.
Affiliation(s)
- Jessica Mai
- Department of Neurosurgery, Emory University School of Medicine, Atlanta, GA 30322
- Rowan Gargiullo
- Department of Neurosurgery, Emory University School of Medicine, Atlanta, GA 30322
- Megan Zheng
- Department of Neurosurgery, Emory University School of Medicine, Atlanta, GA 30322
- Valentina Esho
- Department of Neurosurgery, Emory University School of Medicine, Atlanta, GA 30322
- Osama E Hussein
- Department of Neurosurgery, Emory University School of Medicine, Atlanta, GA 30322
- Eliana Pollay
- Department of Neurosurgery, Emory University School of Medicine, Atlanta, GA 30322
- Cedric Bowe
- Neuroscience Graduate Program, Emory University, Atlanta, GA 30322
- William N Goolsby
- Department of Cell Biology, Emory University School of Medicine, Atlanta, GA 30322
- Kaitlyn A Brooks
- Department of Otolaryngology - Head and Neck Surgery, Emory University School of Medicine, Atlanta, GA 30308
- Chris C Rodgers
- Department of Neurosurgery, Emory University School of Medicine, Atlanta, GA 30322
- Department of Cell Biology, Emory University School of Medicine, Atlanta, GA 30322
- Department of Biomedical Engineering, Georgia Tech and Emory University School of Medicine, Atlanta, GA 30322
- Department of Biology, Emory College of Arts and Sciences, Atlanta, GA 30322
4
Miura T, Okochi N, Suzuki J, Ifukube T. Binaural Listening with Head Rotation Helps Persons with Blindness Perceive Narrow Obstacles. Int J Environ Res Public Health 2023; 20:5573. [PMID: 37107855] [PMCID: PMC10138724] [DOI: 10.3390/ijerph20085573]
Abstract
Orientation and mobility (O&M) are important abilities that people with visual impairments use in their independent performance of daily activities. In orientation, people with total blindness pinpoint both nonsounding and sounding objects. The ability to perceive nonsounding objects is called obstacle sense, whereby people with blindness recognize the various characteristics of an obstacle using acoustic cues. Although body movement and listening style may enhance the sensing of obstacles, experimental studies on this topic are lacking. Elucidating their contributions to obstacle sense may lead to further systematization of O&M training techniques. This study sheds light on the contribution of head rotation and binaural hearing to obstacle sense among people with blindness. We conducted an experiment on the perceived presence and distance of nonsounding obstacles, which varied in width and distance, for participants with blindness under conditions of binaural or monaural hearing, with or without head rotation. The results indicated that head rotation and binaural listening can enhance the localization of nonsounding obstacles. Further, when people with blindness are unable to perform head rotation or use binaural hearing, their judgment can become biased in favor of the presence of an obstacle due to risk avoidance.
Affiliation(s)
- Takahiro Miura
- National Institute of Advanced Industrial Science and Technology (AIST), Kashiwa 277-0882, Japan
- Naoyuki Okochi
- Research Center for Advanced Science and Technology (RCAST), The University of Tokyo, Tokyo 153-8904, Japan
- Tohru Ifukube
- Research Center for Advanced Science and Technology (RCAST), The University of Tokyo, Tokyo 153-8904, Japan
5
Alzaher M, Valzolgher C, Verdelet G, Pavani F, Farnè A, Barone P, Marx M. Audiovisual Training in Virtual Reality Improves Auditory Spatial Adaptation in Unilateral Hearing Loss Patients. J Clin Med 2023; 12:2357. [PMID: 36983357] [PMCID: PMC10058351] [DOI: 10.3390/jcm12062357]
Abstract
Unilateral hearing loss (UHL) leads to an alteration of binaural cues, resulting in a significant increase in spatial errors in the horizontal plane. In this study, nineteen patients with UHL were recruited and randomized in a cross-over design into two groups: the first group (n = 9) received spatial audiovisual training in the first session and non-spatial audiovisual training in the second session (2 to 4 weeks after the first); the second group (n = 10) received the two trainings in the opposite order (non-spatial, then spatial). A sound localization test using head-pointing (LOCATEST) was completed before and after each training session. The results showed a significant decrease in head-pointing localization errors after spatial training for group 1 (24.85° ± 15.8° vs. 16.17° ± 11.28°; p < 0.001). The number of head movements during the spatial training for the 19 participants did not change (p = 0.79); nonetheless, hand-pointing errors and reaction times significantly decreased by the end of the spatial training (p < 0.001). This study suggests that audiovisual spatial training can induce spatial adaptation to a monaural deficit through the optimization of effective head movements. Virtual reality systems are relevant tools that can be used in clinics to develop training programs for patients with hearing impairments.
6
McLachlan G, Majdak P, Reijniers J, Mihocic M, Peremans H. Dynamic spectral cues do not affect human sound localization during small head movements. Front Neurosci 2023; 17:1027827. [PMID: 36816108] [PMCID: PMC9936143] [DOI: 10.3389/fnins.2023.1027827]
Abstract
Natural listening involves a constant deployment of small head movements. Spatial listening is facilitated by head movements, especially when resolving front-back confusions, an otherwise common issue during sound localization under head-still conditions. The present study investigated which acoustic cues human listeners utilize to localize sounds using small head movements (below ±10° around the center). Seven normal-hearing subjects participated in a sound localization experiment in a virtual reality environment. Four acoustic cue stimulus conditions were presented (full spectrum, flattened spectrum, frozen spectrum, free field) under three movement conditions (no movement, head rotations over the yaw axis, and head rotations over the pitch axis). Localization performance was assessed using three metrics: lateral precision error, polar precision error, and front-back confusion rate. Analysis with mixed-effects models showed that even small yaw rotations provide a remarkable decrease in front-back confusion rate, whereas pitch rotations did not show much of an effect. Furthermore, monaural spectral shape (MSS) cues improved localization performance even in the presence of dynamic interaural time difference (dITD) cues. However, performance was similar between stimuli with and without dynamic MSS (dMSS) cues. This indicates that human listeners utilize MSS cues before the head moves, but do not rely on dMSS cues to localize sounds when utilizing small head movements.
Affiliation(s)
- Glen McLachlan
- Department of Engineering Management, University of Antwerp, Antwerp, Belgium
- Piotr Majdak
- Acoustics Research Institute, Austrian Academy of Sciences, Vienna, Austria
- Jonas Reijniers
- Department of Engineering Management, University of Antwerp, Antwerp, Belgium
- Michael Mihocic
- Acoustics Research Institute, Austrian Academy of Sciences, Vienna, Austria
- Herbert Peremans
- Department of Engineering Management, University of Antwerp, Antwerp, Belgium
7
Valzolgher C, Gatel J, Bouzaid S, Grenouillet S, Todeschini M, Verdelet G, Salemme R, Gaveau V, Truy E, Farnè A, Pavani F. Reaching to Sounds Improves Spatial Hearing in Bilateral Cochlear Implant Users. Ear Hear 2023; 44:189-198. [PMID: 35982520] [DOI: 10.1097/AUD.0000000000001267]
Abstract
OBJECTIVES We assessed whether spatial hearing training improves sound localization in bilateral cochlear implant (BCI) users and whether its benefits generalize to untrained sound localization tasks. DESIGN In 20 BCI users, we assessed the effects of two training procedures (spatial versus nonspatial control training) on two different tasks performed before and after training (head-pointing to sound and audiovisual attention orienting). In the spatial training, participants identified sound position by reaching toward the sound sources with their hand. In the nonspatial training, comparable reaching movements served to identify sound amplitude modulations. A crossover randomized design allowed comparison of training procedures within the same participants. Spontaneous head movements while listening to the sounds were allowed and tracked to correlate them with localization performance. RESULTS During spatial training, BCI users reduced their sound localization errors in azimuth and adapted their spontaneous head movements as a function of sound eccentricity. These effects generalized to the head-pointing sound localization task, as revealed by a greater reduction of sound localization error in azimuth and a more accurate first head-orienting response, as compared to the control nonspatial training. BCI users benefited from auditory spatial cues for orienting visual attention, but the spatial training did not enhance this multisensory attention ability. CONCLUSIONS Sound localization in BCI users improves with spatial reaching-to-sound training, with benefits to a nontrained sound localization task. These findings pave the way for novel rehabilitation procedures in clinical contexts.
8
Snapp HA, Millet B, Schaefer-Solle N, Rajguru SM, Ausili SA. The effects of hearing protection devices on spatial awareness in complex listening environments. PLoS One 2023; 18:e0280240. [PMID: 36634110] [PMCID: PMC9836314] [DOI: 10.1371/journal.pone.0280240]
Abstract
Hearing protection devices (HPDs) remain the first line of defense against hazardous noise exposure and noise-induced hearing loss (NIHL). Despite increased awareness of NIHL as a major occupational health hazard, implementation of effective hearing protection interventions remains challenging in at-risk occupational groups, including those in public safety that provide fire, emergency medical, or law enforcement services. A reduction in situational awareness has been reported as a primary barrier to including HPDs as routine personal protective equipment. This study examined the effects of hearing protection and simulated NIHL on spatial awareness in ten normal-hearing subjects. In a sound-attenuating booth, and using a head-orientation tracker, speech intelligibility and localization accuracy were collected from these subjects under multiple listening conditions. Results demonstrate that the use of HPDs disrupts spatial hearing as expected, specifically localization performance and the monitoring of speech signals. There was a significant interaction between hemifield and signal-to-noise ratio (SNR), with speech intelligibility significantly affected when signals were presented from behind at reduced SNRs. Results also suggest greater spatial-hearing disruption with over-the-ear HPDs than with the removal of high-frequency cues typically associated with NIHL through low-pass filtering. These results are consistent with reduced situational awareness as a self-reported barrier to routine HPD use, which was evidenced in our study by a decreased ability to make accurate decisions about source location in a controlled dual-task localization experiment.
Affiliation(s)
- Hillary A. Snapp
- Department of Otolaryngology, University of Miami, Miami, FL, United States of America
- Barbara Millet
- Department of Interactive Media, University of Miami, Miami, FL, United States of America
- Suhrud M. Rajguru
- Department of Biomedical Engineering, University of Miami, Miami, FL, United States of America
- Sebastian A. Ausili
- Department of Otolaryngology, University of Miami, Miami, FL, United States of America
9
Hamada N, Kunimura H, Matsuoka M, Oda H, Hiraoka K. Advanced cueing of auditory stimulus to the head induces body sway in the direction opposite to the stimulus site during quiet stance in male participants. Front Hum Neurosci 2022; 16:1028700. [PMID: 36569476] [PMCID: PMC9775284] [DOI: 10.3389/fnhum.2022.1028700]
Abstract
Under certain conditions, a tactile stimulus to the head induces movement of the head away from the stimulus, and this is thought to be caused by a defense mechanism. In this study, we tested the hypothesis that predicting the stimulus site on the head during quiet stance activates this defense mechanism, causing the body to sway to keep the head away from the stimulus. Fourteen healthy male participants aged 31.2 ± 6.8 years participated in this study. A visual cue predicting the forthcoming stimulus site (forehead, left side of the head, right side of the head, or back of the head) was given. Four seconds after this cue, an auditory or electrical tactile stimulus was delivered at the site predicted by the cue. The cue predicting the tactile stimulus site did not induce body sway. The cue predicting an auditory stimulus to the back of the head induced forward body sway, and the cue predicting a stimulus to the forehead induced backward body sway. The cue predicting an auditory stimulus to the left side of the head induced rightward body sway, and the cue predicting a stimulus to the right side of the head induced leftward body sway. These findings support the hypothesis that predicting the site of an auditory stimulus to the head induces body sway during quiet stance that keeps the head away from the stimulus. The right gastrocnemius muscle contributes to the control of body sway along the anterior-posterior axis related to this defense mechanism.
Affiliation(s)
- Naoki Hamada
- Department of Rehabilitation Science, School of Medicine, Osaka Metropolitan University, Habikino, Japan
- Hiroshi Kunimura
- Department of Rehabilitation Science, School of Medicine, Osaka Metropolitan University, Habikino, Japan
- Masakazu Matsuoka
- Department of Rehabilitation Science, School of Medicine, Osaka Metropolitan University, Habikino, Japan
- Hitoshi Oda
- Graduate School of Comprehensive Rehabilitation, Osaka Prefecture University, Habikino, Japan
- Koichi Hiraoka
- Department of Rehabilitation Science, School of Medicine, Osaka Metropolitan University, Habikino, Japan
10
Gessa E, Giovanelli E, Spinella D, Verdelet G, Farnè A, Frau GN, Pavani F, Valzolgher C. Spontaneous head-movements improve sound localization in aging adults with hearing loss. Front Hum Neurosci 2022; 16:1026056. [PMID: 36310849] [PMCID: PMC9609159] [DOI: 10.3389/fnhum.2022.1026056]
Abstract
Moving the head while a sound is playing improves its localization in human listeners, in children and adults, with or without hearing problems. It remains to be ascertained whether this benefit also extends to aging adults with hearing loss, a population in which spatial hearing difficulties are often documented and intervention solutions are scant. Here we examined the performance of elderly adults (61-82 years old) with symmetrical or asymmetrical age-related hearing loss while they localized sounds with their head fixed or free to move. Using motion tracking in combination with free-field sound delivery in visual virtual reality, we tested participants in two auditory spatial tasks: front-back discrimination and 3D sound localization in front space. Front-back discrimination was easier for participants with symmetrical compared to asymmetrical hearing loss, yet both groups reduced their front-back errors when head movements were allowed. In 3D sound localization, free head movements reduced errors in the horizontal dimension and in a composite measure that computed errors in 3D space. Errors in 3D space decreased for participants with asymmetrical hearing impairment when the head was free to move. These preliminary findings extend to aging adults with hearing loss the literature on the advantage of head movements for sound localization, and suggest that the disparity of auditory cues at the two ears can modulate this benefit. These results point to the possibility of taking advantage of self-regulation strategies and active behavior when promoting spatial hearing skills.
Affiliation(s)
- Elena Gessa
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy
- Elena Giovanelli
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy
- Grégoire Verdelet
- Integrative, Multisensory, Perception, Action and Cognition Team-IMPACT, Centre de Recherche en Neuroscience de Lyon, University Lyon 1, Lyon, France
- Neuro-immersion, Centre de Recherche en Neuroscience de Lyon, Lyon, France
- Alessandro Farnè
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy
- Integrative, Multisensory, Perception, Action and Cognition Team-IMPACT, Centre de Recherche en Neuroscience de Lyon, University Lyon 1, Lyon, France
- Neuro-immersion, Centre de Recherche en Neuroscience de Lyon, Lyon, France
- Francesco Pavani
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy
- Integrative, Multisensory, Perception, Action and Cognition Team-IMPACT, Centre de Recherche en Neuroscience de Lyon, University Lyon 1, Lyon, France
- Chiara Valzolgher
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy
- Integrative, Multisensory, Perception, Action and Cognition Team-IMPACT, Centre de Recherche en Neuroscience de Lyon, University Lyon 1, Lyon, France
11
Gaveau V, Coudert A, Salemme R, Koun E, Desoche C, Truy E, Farnè A, Pavani F. Benefits of active listening during 3D sound localization. Exp Brain Res 2022; 240:2817-2833. [PMID: 36071210] [PMCID: PMC9587935] [DOI: 10.1007/s00221-022-06456-x]
Abstract
In everyday life, sound localization entails more than just the extraction and processing of auditory cues. When determining sound position in three dimensions, the brain also considers the available visual information (e.g., visual cues to sound position) and resolves perceptual ambiguities through active listening behavior (e.g., spontaneous head movements while listening). Here, we examined to what extent spontaneous head movements improve sound localization in 3D (azimuth, elevation, and depth) by comparing static versus active listening postures. To this end, we developed a novel approach to sound localization based on sounds delivered in the environment, brought into alignment with a VR system. Our system proved effective for delivering sounds at predetermined and repeatable positions in 3D space, without imposing a physically constrained posture and with minimal training. In addition, it allowed measuring participant behavior (hand, head, and eye position) in real time. We report that active listening improved 3D sound localization, primarily by ameliorating the accuracy and variability of responses in azimuth and elevation. The more participants made spontaneous head movements, the better their 3D sound localization performance. Thus, we provide proof of concept of a novel approach to the study of spatial hearing, with potential for clinical and industrial applications.
Affiliation(s)
- V Gaveau
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, 16 Av. Doyen Lépine, 69500 Bron cedex, France
- University of Lyon 1, Lyon, France
- A Coudert
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, 16 Av. Doyen Lépine, 69500 Bron cedex, France
- University of Lyon 1, Lyon, France
- ENT Departments, Hôpital Femme-Mère-Enfant and Edouard Herriot University Hospitals, Lyon, France
- R Salemme
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, 16 Av. Doyen Lépine, 69500 Bron cedex, France
- University of Lyon 1, Lyon, France
- Neuro-immersion, Lyon, France
- E Koun
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, 16 Av. Doyen Lépine, 69500 Bron cedex, France
- University of Lyon 1, Lyon, France
- C Desoche
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, 16 Av. Doyen Lépine, 69500 Bron cedex, France
- University of Lyon 1, Lyon, France
- Neuro-immersion, Lyon, France
- E Truy
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, 16 Av. Doyen Lépine, 69500 Bron cedex, France
- University of Lyon 1, Lyon, France
- ENT Departments, Hôpital Femme-Mère-Enfant and Edouard Herriot University Hospitals, Lyon, France
- A Farnè
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, 16 Av. Doyen Lépine, 69500 Bron cedex, France
- University of Lyon 1, Lyon, France
- Neuro-immersion, Lyon, France
- F Pavani
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, 16 Av. Doyen Lépine, 69500 Bron cedex, France
- University of Lyon 1, Lyon, France
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy
12
Russell MK. Age and Auditory Spatial Perception in Humans: Review of Behavioral Findings and Suggestions for Future Research. Front Psychol 2022; 13:831670. [PMID: 35250777] [PMCID: PMC8888835] [DOI: 10.3389/fpsyg.2022.831670]
Abstract
It has been well documented, and is fairly well known, that an increase in chronological age is accompanied by a corresponding increase in sensory impairment. As most people realize, our hearing suffers as we get older; hence the increased need for hearing aids. The first portion of the present paper addresses how age apparently affects auditory judgments of sound source position, summarizing the literature on changes in the perception of sound source location and sound source motion as a function of chronological age. The review is limited to empirical studies with behavioral findings involving humans. It is the view of the author that we have an immensely limited understanding of how chronological age affects the perception of space based on sound. In the latter part of the paper, discussion turns to how auditory spatial perception research is traditionally conducted in the laboratory. There are sound theoretical reasons for conducting research in the manner it has been. Nonetheless, from an ecological perspective, the vast majority of previous research can be considered unnatural and greatly lacking in ecological validity. Suggestions for an alternative and more ecologically valid approach to the investigation of auditory spatial perception are proposed. It is believed that an ecological approach to auditory spatial perception will enhance our understanding of the extent to which individuals perceive sound source location and how those perceptual judgments change with increasing chronological age.
13
Abstract
For many years, clinicians have understood the advantages of listening with two ears compared with one. In addition to improved speech intelligibility in quiet, noisy, and reverberant environments, binaural versus monaural listening improves perceived sound quality and decreases the effort listeners must expend to understand a target voice of interest or to monitor a multitude of potential target voices. For most individuals with bilateral hearing impairment, the body of evidence collected across decades of research has also shown that providing two hearing aids rather than one yields significant benefit for the user. This article briefly summarizes the major advantages of binaural compared with monaural hearing, followed by a detailed description of the related technological advances in modern hearing aids. Aspects related to the communication and exchange of data between the left and right hearing aids are discussed, together with typical algorithmic approaches implemented in modern hearing aids.
14
Bernstein JGW, Phatak SA, Schuchman GI, Stakhovskaya OA, Rivera AL, Brungart DS. Single-Sided Deafness Cochlear Implant Sound-Localization Behavior With Multiple Concurrent Sources. Ear Hear 2021. [PMID: 34320529] [DOI: 10.1097/AUD.0000000000001089]
Abstract
OBJECTIVES For listeners with one deaf ear and the other ear with normal/near-normal hearing (single-sided deafness [SSD]) or moderate hearing loss (asymmetric hearing loss), cochlear implants (CIs) can improve speech understanding in noise and sound-source localization. Previous SSD-CI localization studies have used a single source with artificial sounds such as clicks or random noise. While this approach provides insights regarding the auditory cues that facilitate localization, it does not capture the complex nature of localization behavior in real-world environments. This study examined SSD-CI sound localization in a complex scenario where a target sound was added to or removed from a mixture of other environmental sounds, while tracking head movements to assess behavioral strategy. DESIGN Eleven CI users with normal hearing or moderate hearing loss in the contralateral ear completed a sound-localization task in monaural (CI-OFF) and bilateral (CI-ON) configurations. Ten of the listeners were also tested before CI activation to examine longitudinal effects. Two-second environmental sound samples, looped to create 4- or 10-sec trials, were presented in a spherical array of 26 loudspeakers encompassing ±144° azimuth and ±30° elevation at a 1-m radius. The target sound was presented alone (localize task) or concurrently with one or three additional sources presented to different loudspeakers, with the target cued by being added to (Add) or removed from (Rem) the mixture after 6 sec. A head-mounted tracker recorded movements in six dimensions (three for location, three for orientation). Mixed-model regression was used to examine target sound-identification accuracy, localization accuracy, and head movement. Angular and translational head movements were analyzed both before and after the target was switched on or off. RESULTS Listeners showed improved localization accuracy in the CI-ON configuration, but there was no interaction with test condition and no effect of the CI on sound-identification performance. Although high-frequency hearing loss in the unimplanted ear reduced localization accuracy and sound-identification performance, the magnitude of the CI localization benefit was independent of hearing loss. The CI reduced the magnitude of gross head movements used during the task in the azimuthal rotation and translational dimensions, both while the target sound was present (in all conditions) and during the anticipatory period before the target was switched on (in the Add condition). There was no change in pre- versus post-activation CI-OFF performance. CONCLUSIONS These results extend previous findings, demonstrating a CI localization benefit in a complex listening scenario that includes environmental and behavioral elements encountered in everyday listening conditions. The CI also reduced the magnitude of gross head movements used to perform the task. This was the case even before the target sound was added to the mixture. This suggests that a CI can reduce the need for physical movement both in anticipation of an upcoming sound event and while actively localizing the target sound. Overall, these results show that for SSD listeners, a CI can improve localization in a complex sound environment and reduce the amount of physical movement used.
15
Macaulay EJ, Hartmann WM. Localization of tones in a room by moving listeners. J Acoust Soc Am 2021; 149:4159. [PMID: 34241422] [DOI: 10.1121/10.0005045]
Abstract
It is difficult to localize the source of a tone in a room because standing waves lead to complicated interaural differences that become uninterpretable localization cues. This paper tests the conjecture that localization improves if the listener can move to explore the complicated sound field over space and time. Listener head and torso movements were free and uninstructed. Experiments at low and high frequencies with eight human listeners in a relatively dry room indicated some modest improvement when listeners were allowed to move, especially at high frequencies. The experiments sought to understand listener dynamic localization strategies in detail. Head position and orientation were tracked electronically, and ear-canal signals were recorded throughout the 9 s of each moving localization trial. The availability of complete physical information enabled the testing of two model strategies: (1) relative null strategy, using instantaneous zeros of the listener-related source angle; and (2) inferred source strategy, using a continuum of apparent source locations implied by the listener's instantaneous forward direction and listener-related source angle. The predicted sources were given weights determined by the listener motion. Both models were statistically successful in coping with a great variety of listener motions and temporally evolving cues.
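The second model strategy lends itself to a compact simulation. The sketch below is our simplified reading of it, not the authors' implementation: it uses uniform weights where the paper weighted predicted sources by listener motion, and it idealizes the cue as a front-back-ambiguous egocentric angle. Every sample of head yaw casts votes for both mirror candidates in room coordinates; the accumulated histogram peaks at the true source angle because the ghost candidate drifts as the head moves.

```python
import numpy as np

def inferred_source(head_yaw_deg, ego_angle_deg, n_bins=72):
    """Accumulate apparent room-frame source angles over a head trajectory.

    Each listener-related (egocentric) angle is front-back ambiguous, so
    both mirror candidates receive a vote. The true source stays in one
    bin while its ghost smears across bins as the head turns.
    """
    bin_width = 360.0 / n_bins
    votes = np.zeros(n_bins)
    for yaw, ego in zip(head_yaw_deg, ego_angle_deg):
        for cand in (yaw + ego, yaw + 180.0 - ego):  # front / back-mirrored
            votes[int(round((cand % 360.0) / bin_width)) % n_bins] += 1.0
    return np.argmax(votes) * bin_width

# Source fixed at 40 deg in the room; the head sweeps from -30 to +30 deg.
yaw = np.linspace(-30.0, 30.0, 61)
ego = 40.0 - yaw                  # noiseless listener-related source angle
print(inferred_source(yaw, ego))  # -> 40.0
```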
Affiliation(s)
- Eric J Macaulay
- Department of Physics and Astronomy, Michigan State University, 567 Wilson Rd., East Lansing, Michigan 48824, USA
- William M Hartmann
- Department of Physics and Astronomy, Michigan State University, 567 Wilson Rd., East Lansing, Michigan 48824, USA
16
Mieda T, Kokubu M. Blind footballers direct their head towards an approaching ball during ball trapping. Sci Rep 2020; 10:20246. [PMID: 33219244] [PMCID: PMC7679380] [DOI: 10.1038/s41598-020-77049-3]
Abstract
In blind football, players predict the location of the ball from its sound, which underpins successful ball trapping. It is currently unknown whether blind footballers use head movements as a strategy for trapping a moving ball. This study investigated the characteristics of head rotations in blind footballers during ball trapping compared to sighted nonathletes. Participants trapped an approaching ball using their right foot. Head and trunk rotation angles in the sagittal plane, and head rotation angles in the horizontal plane, were measured during ball trapping. The blind footballers showed a larger downward head rotation angle, as well as higher performance at the time of ball trapping, than the sighted nonathletes. However, no significant differences between the groups were found for the horizontal head rotation angle or the downward trunk rotation angle. The blind footballers consistently showed a larger relative angle of downward head rotation from an early time point after ball launching to the moment of ball trapping. These results suggest that blind footballers couple downward head rotation with the movement of an approaching ball, ensuring that the ball is kept in a consistent egocentric direction relative to the head throughout ball trapping.
Affiliation(s)
- Takumi Mieda
- Graduate School of Comprehensive Human Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8574, Japan
- Masahiro Kokubu
- Faculty of Health and Sport Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8574, Japan
17
Valzolgher C, Verdelet G, Salemme R, Lombardi L, Gaveau V, Farnè A, Pavani F. Reaching to sounds in virtual reality: A multisensory-motor approach to promote adaptation to altered auditory cues. Neuropsychologia 2020; 149:107665. [PMID: 33130161] [DOI: 10.1016/j.neuropsychologia.2020.107665]
Abstract
When localising sounds in space, the brain relies on internal models that specify the correspondence between the auditory input reaching the ears, initial head position, and coordinates in external space. These models can be updated throughout life, setting the basis for re-learning spatial hearing abilities in adulthood. In addition, strategic behavioural adjustments allow people to quickly adapt to atypical listening situations. Until recently, the potential role of dynamic listening, involving head movements or reaching to sounds, has remained largely overlooked. Here, we exploited visual virtual reality (VR) and real-time kinematic tracking to study the role of active multisensory-motor interactions when hearing individuals adapt to altered binaural cues (one ear plugged and muffed). Participants were immersed in a VR scenario showing 17 virtual speakers at ear level. In each trial, they heard a sound delivered from a real speaker aligned with one of the virtual ones and were instructed either to reach-to-touch the perceived sound source (Reaching group) or to read the label associated with the speaker (Naming group). Participants were free to move their heads during the task and received audio-visual feedback on their performance. Most importantly, they performed the task under binaural or monaural listening. Results show that both groups adapted rapidly to monaural listening, improving sound localisation performance across trials and changing their head-movement behaviour. Reaching the sounds induced faster and larger sound localisation improvements compared to just naming their positions. This benefit was linked to progressively wider head movements to explore auditory space, selectively in the Reaching group. In conclusion, reaching to sounds in an immersive visual VR context proved most effective for adapting to altered binaural listening. Head movements played an important role in adaptation, pointing to the importance of dynamic listening when implementing training protocols for improving spatial hearing.
Affiliation(s)
- Chiara Valzolgher
- IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), France
- Centre for Mind/Brain Sciences (CIMeC), University of Trento, Italy
- Romeo Salemme
- IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), France
- Neuro-immersion, Centre de Recherche en Neuroscience Lyon (CRNL), France
- Luigi Lombardi
- Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Italy
- Valerie Gaveau
- IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), France
- Alessandro Farnè
- IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), France
- Centre for Mind/Brain Sciences (CIMeC), University of Trento, Italy
- Neuro-immersion, Centre de Recherche en Neuroscience Lyon (CRNL), France
- Francesco Pavani
- IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), France
- Centre for Mind/Brain Sciences (CIMeC), University of Trento, Italy
- Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Italy
18
Fu D, Weber C, Yang G, Kerzel M, Nan W, Barros P, Wu H, Liu X, Wermter S. What Can Computational Models Learn From Human Selective Attention? A Review From an Audiovisual Unimodal and Crossmodal Perspective. Front Integr Neurosci 2020; 14:10. [PMID: 32174816] [PMCID: PMC7056875] [DOI: 10.3389/fnint.2020.00010]
Abstract
Selective attention plays an essential role in information acquisition and utilization from the environment. In the past 50 years, research on selective attention has been a central topic in cognitive science. Compared with unimodal studies, crossmodal studies are more complex but necessary to solve real-world challenges in both human experiments and computational modeling. Although an increasing number of findings on crossmodal selective attention have shed light on humans' behavioral patterns and neural underpinnings, a much better understanding is still necessary to yield the same benefit for intelligent computational agents. This article reviews studies of selective attention in unimodal visual and auditory and crossmodal audiovisual setups from the multidisciplinary perspectives of psychology and cognitive neuroscience, and evaluates different ways to simulate analogous mechanisms in computational models and robotics. We discuss the gaps between these fields in this interdisciplinary review and provide insights about how to use psychological findings and theories in artificial intelligence from different perspectives.
Affiliation(s)
- Di Fu
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Department of Informatics, University of Hamburg, Hamburg, Germany
- Cornelius Weber
- Department of Informatics, University of Hamburg, Hamburg, Germany
- Guochun Yang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Matthias Kerzel
- Department of Informatics, University of Hamburg, Hamburg, Germany
- Weizhi Nan
- Department of Psychology, Center for Brain and Cognitive Sciences, School of Education, Guangzhou University, Guangzhou, China
- Pablo Barros
- Department of Informatics, University of Hamburg, Hamburg, Germany
- Haiyan Wu
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Xun Liu
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Stefan Wermter
- Department of Informatics, University of Hamburg, Hamburg, Germany
19
Abstract
Earlier studies have demonstrated that blind footballers are more accurate in identifying sound direction, with less front-back confusion, than sighted and blind non-football-playing individuals. However, it is unknown whether blind footballers are faster than sighted footballers and nonathletes in identifying sound direction using auditory cues. The present study investigated the auditory reaction times (RTs) and response accuracy of blind footballers during auditory RT tasks, including the identification of sound direction. Participants executed goal-directed stepping towards the loudspeaker as quickly and accurately as possible after identifying the sound direction. Simple, two-choice, and four-choice auditory RT tasks were completed. The results revealed that blind footballers had shorter RTs than sighted footballers in the choice RT tasks, but not in the simple RT task. These findings suggest that blind footballers are faster at identifying sound direction based on auditory cues, an essential perceptual-cognitive skill specific to blind football.
Affiliation(s)
- Takumi Mieda
- Graduate School of Comprehensive Human Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8574, Japan
- Masahiro Kokubu
- Faculty of Health and Sport Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8574, Japan
- Mayumi Saito
- Faculty of Health and Sport Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8574, Japan
20
Bonne N, Hanson J, Gauvrit F, Risoud M, Vincent C. Long-term evaluation of sound localisation in single-sided deaf adults fitted with a BAHA device. Clin Otolaryngol 2019; 44:898-904. [DOI: 10.1111/coa.13381]
Affiliation(s)
- Fanny Gauvrit
- Service d'Otologie et d'Otoneurologie, CHU de Lille, Lille, France
- Michaël Risoud
- Service d'Otologie et d'Otoneurologie, CHU de Lille, Lille, France
21
Yost WA, Pastore MT. Individual listener differences in azimuthal front-back reversals. J Acoust Soc Am 2019; 146:2709. [PMID: 31671982] [PMCID: PMC6814437] [DOI: 10.1121/1.5129555]
Abstract
Thirty-two listeners participated in experiments involving five filtered noises, with eyes open or closed, with stimuli of short or long duration, and with stimuli presented at random locations or in a largely rotational procession. Individual differences in the proportion of front-back reversals (FBRs) were measured. There were strong positive correlations between the proportions of FBRs within any one filtered noise, but not when FBRs were compared across different filtered-noise conditions. The results suggest that, for each individual listener, the rate of FBRs is stable for any one filtered noise, but not across filtered noises.
Affiliation(s)
- William A Yost
- Spatial Hearing Laboratory, College of Health Solutions, Arizona State University, P.O. Box 870102, Tempe, Arizona 85287, USA
- M Torben Pastore
- Spatial Hearing Laboratory, College of Health Solutions, Arizona State University, P.O. Box 870102, Tempe, Arizona 85287, USA
22
Abstract
This study developed a wearable hearing-assist system that identifies the direction of a sound source using short-term interaural time differences (ITDs) of sound pressure and conveys the sound source direction to a hearing-impaired person via vibrators attached to his or her shoulders. The system, which is equipped with two microphones, could dynamically detect and convey the direction of front, side, and even rear sound sources. A male subject wearing the developed system was able to turn his head toward continuous or intermittent sound sources within approximately 2.8 s. The sound source direction is probably overestimated when the interval between the two ears is smaller. When the subject can utilize vision, this may help in tracking the location of the target sound source, especially if the target comes into view, and it may shorten the tracking period.
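The abstract does not give the system's direction-estimation algorithm, so the following is a generic short-term ITD front end of the kind a two-microphone device might use, sketched under our own assumptions: cross-correlate the microphone signals over a frame and map the best lag to an angle with the far-field relation azimuth = arcsin(c * tau / d). The 0.18-m spacing is an assumed, roughly head-width value.

```python
import numpy as np

def azimuth_from_itd(x_left, x_right, fs, spacing_m=0.18, c=343.0):
    """Estimate source azimuth (deg, positive to the right) from the lag
    that maximizes the cross-correlation of two microphone signals."""
    xc = np.correlate(x_left, x_right, mode="full")
    lag = np.argmax(xc) - (len(x_right) - 1)   # >0: left lags, source on right
    tau = np.clip(lag / fs, -spacing_m / c, spacing_m / c)
    return np.degrees(np.arcsin(tau * c / spacing_m))

# Demo: white noise arriving 4 samples earlier at the right microphone.
fs = 16_000
s = np.random.default_rng(0).standard_normal(fs)
x_left, x_right = s[:-4], s[4:]
print(azimuth_from_itd(x_left, x_right, fs))   # ~ +28 deg
```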
23
Abstract
Making small head movements facilitates spatial hearing by resolving front-back confusions, otherwise common in free field sound source localization. The changes in interaural time difference (ITD) in response to head rotation provide a robust front-back cue, but whether interaural level difference (ILD) can be used as a dynamic cue is not clear. Therefore, the purpose of the present study was to assess the usefulness of dynamic ILD as a localization cue. The results show that human listeners were capable of correctly indicating the front-back dimension of high-frequency sinusoids based on level dynamics in free field conditions, but only if a wide movement range was allowed (±40°). When the free field conditions were replaced by simplistic headphone stimulation, front-back responses were in agreement with the simulated source directions even with relatively small movement ranges (±5°), whenever monaural sound level and ILD changed monotonically in response to head rotation. In conclusion, human listeners can use level dynamics as a front-back localization cue when the dynamics are monotonic. However, in free field conditions and particularly for narrowband target signals, this is often not the case. Therefore, the primary limiting factor in the use of dynamic level cues resides in the acoustic domain behavior of the cue itself, rather than in potential processing limitations or strategies of the human auditory system.
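As a toy illustration of the dynamic-cue logic described above, the following sketch (not the authors' procedure; the sign conventions are assumptions spelled out in the comments) classifies front versus back from the co-variation of head yaw and ILD:

    import numpy as np

    def classify_front_back(yaw_deg, ild_db):
        """Classify front vs. back from how ILD co-varies with head yaw.
        Conventions (assumed): yaw and azimuth increase clockwise (to the
        right); ILD > 0 means the right ear is more intense. For a frontal
        source, a rightward head turn shifts the source leftward relative
        to the head, so ILD falls; for a rear source the relative shift is
        rightward, so ILD rises. Valid only while ILD varies monotonically
        with head-relative angle, which is the caveat the abstract raises."""
        dyaw = np.diff(yaw_deg)
        dild = np.diff(ild_db)
        cov = float(np.mean(dyaw * dild))
        return "front" if cov < 0 else "back"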
Affiliation(s)
- Henri Pöntynen
- Aalto Acoustics Lab, Department of Signal Processing and Acoustics, School of Electrical Engineering, Aalto University, FI-02150, Espoo, Finland
- Nelli H Salminen
- Aalto Acoustics Lab, Department of Signal Processing and Acoustics, School of Electrical Engineering, Aalto University, FI-02150, Espoo, Finland
24
Abstract
OBJECTIVES We report on the ability of patients fit with bilateral cochlear implants (CIs) to distinguish the front-back location of sound sources both with and without head movements. At issue was (i) whether CI patients are more prone to front-back confusions than normal hearing listeners for wideband, high-frequency stimuli; and (ii) whether CI patients can utilize dynamic binaural difference cues, in tandem with their own head rotation, to resolve these front-back confusions. Front-back confusions offer a binary metric to gain insight into CI patients' ability to localize sound sources under dynamic conditions not generally measured in laboratory settings, where both the sound source and patient are static. DESIGN Three-second duration Gaussian noise samples were bandpass filtered to 2 to 8 kHz and presented from one of six loudspeaker locations, 60° apart, surrounding the listener. Perceived sound source localization was tested for seven listeners bilaterally implanted with CIs, under conditions where the patient faced forward without moving their head and under conditions where they were encouraged to moderately rotate their head. The same conditions were repeated for five of the patients with one implant turned off (the implant at the better ear remained on). A control group of normal hearing listeners was also tested as a baseline for comparison. RESULTS All seven CI patients demonstrated a high rate of front-back confusions when their head was stationary (41.9%). The proportion of front-back confusions was reduced to 6.7% when these patients were allowed to rotate their head within a range of approximately ±30°. When only one implant was turned on, listeners' localization acuity suffered greatly. In these conditions, head movement or the lack thereof made little difference to listeners' performance. CONCLUSIONS Bilateral implantation can offer CI listeners the ability to track dynamic auditory spatial difference cues and compare these changes to changes in their own head position, resulting in a reduced rate of front-back confusions. This suggests that, for these patients, estimates of auditory acuity based solely on static laboratory settings may underestimate their real-world localization abilities.
25
Brimijoin WO, Akeroyd MA. The Effects of Hearing Impairment, Age, and Hearing Aids on the Use of Self-Motion for Determining Front/Back Location. J Am Acad Audiol 2018; 27:588-600. [PMID: 27406664] [DOI: 10.3766/jaaa.15101]
Abstract
BACKGROUND There are two cues that listeners use to disambiguate the front/back location of a sound source: high-frequency spectral cues associated with the head and pinnae, and self-motion-related binaural cues. The use of these cues can be compromised in listeners with hearing impairment and users of hearing aids. PURPOSE To determine how age, hearing impairment, and the use of hearing aids affect a listener's ability to determine front from back based on both self-motion and spectral cues. RESEARCH DESIGN We used a previously published front/back illusion: signals whose physical source location is rotated around the head at twice the angular rate of the listener's head movements are perceptually located in the opposite hemifield from where they physically are. In normal-hearing listeners, the strength of this illusion decreases as a function of low-pass filter cutoff frequency; this is the result of a conflict between spectral cues and dynamic binaural cues for sound source location. The illusion was used as an assay of self-motion processing in listeners with hearing impairment and users of hearing aids. STUDY SAMPLE We recruited 40 hearing-impaired participants, with an average age of 62 yr. The data for three listeners were discarded because they did not move their heads enough during the experiment. DATA COLLECTION AND ANALYSIS Listeners sat at the center of a ring of 24 loudspeakers, turned their heads back and forth, and used a wireless keypad to report the front/back location of statically presented signals and of dynamically moving signals with illusory locations. Front/back accuracy for static signals, the strength of front/back illusions, and minimum audible movement angle were measured for each listener in each condition. All measurements were made in each listener both aided and unaided. RESULTS Hearing-impaired listeners were less accurate at front/back discrimination for both static and illusory conditions. Neither static nor illusory conditions were affected by high-frequency content. Hearing aids had heterogeneous effects from listener to listener, but independent of other factors, listeners wearing aids exhibited, on average, a spectrally dependent increase in "front" responses: the more high-frequency energy in the signal, the more likely they were to report it as coming from the front. CONCLUSIONS Hearing impairment was associated with a decrease in the accuracy of self-motion processing for both static and moving signals. Hearing aids may not always reproduce dynamic self-motion-related cues with sufficient fidelity to allow reliable front/back discrimination.
Affiliation(s)
- W Owen Brimijoin
- MRC/CSO Institute of Hearing Research (Scottish Section), Glasgow Royal Infirmary, Glasgow, UK
26
Archer-Boyd AW, Holman JA, Brimijoin WO. The minimum monitoring signal-to-noise ratio for off-axis signals and its implications for directional hearing aids. Hear Res 2017; 357:64-72. [PMID: 29223929] [PMCID: PMC5759949] [DOI: 10.1016/j.heares.2017.11.011]
Abstract
The signal-to-noise ratio (SNR) benefit of hearing aid directional microphones is dependent on the angle of the listener relative to the target, something that can change drastically and dynamically in a typical group conversation. When a new target signal is significantly off-axis, directional microphones lead to slower target orientation, more complex movements, and more reversals. This raises the question of whether there is an optimal design for directional microphones. In principle, an ideal microphone would provide the user with sufficient directionality to help with speech understanding, but not attenuate off-axis signals so strongly that orienting to new signals becomes difficult or impossible. We investigated the latter part of this question. To measure the minimum monitoring SNR for reliable orientation to off-axis signals, we measured head-orienting behaviour towards targets of varying SNRs and locations for listeners with mild to moderate bilateral symmetrical hearing loss. Listeners, seated within a ring of loudspeakers, were required to turn and face a female talker in background noise; movements were tracked using a head-mounted crown and an infrared system that recorded yaw. The target appeared randomly at ±45°, ±90°, or ±135° from the start point. The results showed that as the target SNR decreased from 0 dB to −18 dB, first movement duration and initial misorientation count increased, then fixation error, and finally reversals increased. Increasing the target angle increased movement duration at all SNRs, decreased reversals (above −12 dB target SNR), and had little to no effect on initial misorientations. These results suggest that listeners experience some difficulty orienting towards sources as the target SNR drops below −6 dB, and that if one intends to make a directional microphone that is usable in a moving conversation, then off-axis attenuation should be no more than 12 dB.
Affiliation(s)
- Alan W Archer-Boyd
- MRC/CSO Institute of Hearing Research (Scottish Section), Glasgow Royal Infirmary, 10-16 Alexandra Parade, Glasgow, G31 2ER, UK; MRC Cognition & Brain Sciences Unit, 15 Chaucer Road, Cambridge, CB2 7EF, UK
- Jack A Holman
- MRC/CSO Institute of Hearing Research (Scottish Section), Glasgow Royal Infirmary, 10-16 Alexandra Parade, Glasgow, G31 2ER, UK
- W Owen Brimijoin
- MRC/CSO Institute of Hearing Research (Scottish Section), Glasgow Royal Infirmary, 10-16 Alexandra Parade, Glasgow, G31 2ER, UK
27
Spence C, Lee J, Van der Stoep N. Responding to sounds from unseen locations: crossmodal attentional orienting in response to sounds presented from the rear. Eur J Neurosci 2017; 51:1137-1150. [PMID: 28973789] [DOI: 10.1111/ejn.13733]
Abstract
To date, most of the research on spatial attention has focused on probing people's responses to stimuli presented in frontal space. That is, few researchers have attempted to assess what happens in the space that is currently unseen (essentially rear space). In a sense, then, 'out of sight' is, very much, 'out of mind'. In this review, we highlight what is presently known about the perception and processing of sensory stimuli (focusing on sounds) whose source is not currently visible. We briefly summarize known differences in the localizability of sounds presented from different locations in 3D space, and discuss the consequences for the crossmodal attentional and multisensory perceptual interactions taking place in various regions of space. The latest research now clearly shows that the kinds of crossmodal interactions that take place in rear space are very often different in kind from those that have been documented in frontal space. Developing a better understanding of how people respond to unseen sound sources in naturalistic environments by integrating findings emerging from multiple fields of research will likely lead to the design of better warning signals in the future. This review highlights the need for neuroscientists interested in spatial attention to spend more time researching what happens (in terms of the covert and overt crossmodal orienting of attention) in rear space.
Affiliation(s)
- Charles Spence
- Crossmodal Research Laboratory, Department of Experimental Psychology, Oxford University, Oxford, OX1 3UD, UK
- Jae Lee
- Crossmodal Research Laboratory, Department of Experimental Psychology, Oxford University, Oxford, OX1 3UD, UK
- Nathan Van der Stoep
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
28
Deshpande N, Braasch J. Blind localization and segregation of two sources including a binaural head movement model. J Acoust Soc Am 2017; 142:EL113. [PMID: 28764424] [DOI: 10.1121/1.4986800]
Abstract
This study investigates how virtual head rotations can improve a binaural model's ability to segregate speech signals. The model takes two mixed speech sources spatialized to unique azimuth positions and localizes them. The model virtually rotates its head to orient itself for the maximum signal-to-noise ratio for extracting the target. An equalization-cancellation approach is used to generate a binary mask for the target based on localization cues. The mask is then overlaid onto the mixed signal's spectrogram to extract the target from the mixture. The improvement in signal-to-noise ratio from head rotation can exceed 30 dB.
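For readers unfamiliar with the equalization-cancellation idea the model builds on, the following is a minimal, illustrative sketch of EC-style binary masking, not the authors' implementation; the frame size, mask threshold, and steering convention are assumptions.

    import numpy as np
    from scipy.signal import stft, istft

    FS = 16000  # Hz, assumed sampling rate

    def ec_binary_mask(mix_left, mix_right, target_itd, fs=FS, nperseg=512):
        """EC-style target extraction: equalize the right channel onto the
        target's ITD, cancel, and keep time-frequency bins where cancellation
        removed most of the energy (i.e., target-dominated bins).
        Convention: target_itd > 0 means the right ear lags the left."""
        f, _, L = stft(mix_left, fs, nperseg=nperseg)
        _, _, R = stft(mix_right, fs, nperseg=nperseg)
        steer = np.exp(2j * np.pi * f[:, None] * target_itd)  # undo target delay
        residual = L - R * steer       # target energy largely cancels here
        # Bins where cancellation was effective are target-dominated: keep them.
        mask = np.abs(residual) ** 2 < 0.5 * (np.abs(L) ** 2 + 1e-12)
        _, target_est = istft(L * mask, fs, nperseg=nperseg)
        return target_est, mask

In the paper's model the target ITD comes from the localization stage, and the virtual head rotation changes the mixture itself; here target_itd is simply passed in.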
Affiliation(s)
- Nikhil Deshpande
- School of Architecture, Rensselaer Polytechnic Institute, 110 8th Street, Troy, New York 12180, USA
- Jonas Braasch
- School of Architecture, Rensselaer Polytechnic Institute, 110 8th Street, Troy, New York 12180, USA
29
Abstract
A key function of the brain is to provide a stable representation of an object's location in the world. In hearing, sound azimuth and elevation are encoded by neurons throughout the auditory system, and auditory cortex is necessary for sound localization. However, the coordinate frame in which neurons represent sound space remains undefined: classical spatial receptive fields in head-fixed subjects can be explained either by sensitivity to sound source location relative to the head (egocentric) or relative to the world (allocentric encoding). This coordinate frame ambiguity can be resolved by studying freely moving subjects; here we recorded spatial receptive fields in the auditory cortex of freely moving ferrets. We found that most spatially tuned neurons represented sound source location relative to the head across changes in head position and direction. In addition, we also recorded a small number of neurons in which sound location was represented in a world-centered coordinate frame. We used measurements of spatial tuning across changes in head position and direction to explore the influence of sound source distance and speed of head movement on auditory cortical activity and spatial tuning. Modulation depth of spatial tuning increased with distance for egocentric but not allocentric units, whereas, for both populations, modulation was stronger at faster movement speeds. Our findings suggest that early auditory cortex primarily represents sound source location relative to ourselves but that a minority of cells can represent sound location in the world independent of our own position.
Affiliation(s)
- Stephen M. Town
- Ear Institute, University College London, London, United Kingdom
- W. Owen Brimijoin
- MRC/CSO Institute of Hearing Research – Scottish Section, Glasgow, United Kingdom
30
Hendrickx E, Stitt P, Messonnier JC, Lyzwa JM, Katz BF, de Boishéraud C. Influence of head tracking on the externalization of speech stimuli for non-individualized binaural synthesis. J Acoust Soc Am 2017; 141:2011. [PMID: 28372109] [DOI: 10.1121/1.4978612]
Abstract
Binaural reproduction aims at recreating a realistic audio scene at the ears of the listener using headphones. In the real acoustic world, sound sources tend to be externalized (that is, perceived to be emanating from a source out in the world) rather than internalized (that is, perceived to be emanating from inside the head). Unfortunately, several studies report a collapse of externalization, especially with frontal and rear virtual sources, when listening to binaural content using non-individualized Head-Related Transfer Functions (HRTFs). The present study examines whether or not head movements coupled with a head tracking device can compensate for this collapse. For each presentation, a speech stimulus was presented over headphones at different azimuths, using several intermixed sets of non-individualized HRTFs for the binaural rendering. The head tracker could either be active or inactive, and the subjects could either be asked to rotate their heads or to keep them as stationary as possible. After each presentation, subjects reported to what extent the stimulus had been externalized. In contrast to several previous studies, results showed that head movements can substantially enhance externalization, especially for frontal and rear sources, and that externalization can persist once the subject has stopped moving his/her head.
Affiliation(s)
- Etienne Hendrickx
- Conservatoire National Supérieur de Musique et de Danse de Paris, 209, Avenue Jean-Jaurès, 75019 Paris, France
- Peter Stitt
- Audio Acoustics Group, Laboratoire d'Informatique pour la Mécanique et les Sciences de l'Ingénieur, CNRS, Université Paris-Saclay, 91405 Orsay, France
- Jean-Christophe Messonnier
- Conservatoire National Supérieur de Musique et de Danse de Paris, 209, Avenue Jean-Jaurès, 75019 Paris, France
- Jean-Marc Lyzwa
- Conservatoire National Supérieur de Musique et de Danse de Paris, 209, Avenue Jean-Jaurès, 75019 Paris, France
- Brian FG Katz
- Sorbonne Universités, Université Pierre et Marie Curie Univ Paris 06, CNRS, Institut d'Alembert, 75005 Paris, France
- Catherine de Boishéraud
- Conservatoire National Supérieur de Musique et de Danse de Paris, 209, Avenue Jean-Jaurès, 75019 Paris, France
31
Freeman TCA, Culling JF, Akeroyd MA, Brimijoin WO. Auditory compensation for head rotation is incomplete. J Exp Psychol Hum Percept Perform 2017; 43:371-380. [PMID: 27841453] [PMCID: PMC5289217] [DOI: 10.1037/xhp0000321]
Abstract
Hearing is confronted by a similar problem to vision when the observer moves. The image motion that is created remains ambiguous until the observer knows the velocity of eye and/or head. One way the visual system solves this problem is to use motor commands, proprioception, and vestibular information. These "extraretinal signals" compensate for self-movement, converting image motion into head-centered coordinates, although not always perfectly. We investigated whether the auditory system also transforms coordinates by examining the degree of compensation for head rotation when judging a moving sound. Real-time recordings of head motion were used to change the "movement gain" relating head movement to source movement across a loudspeaker array. We then determined psychophysically the gain that corresponded to a perceptually stationary source. Experiment 1 showed that the gain was small and positive for a wide range of trained head speeds. Hence, listeners perceived a stationary source as moving slightly opposite to the head rotation, in much the same way that observers see stationary visual objects move against a smooth pursuit eye movement. Experiment 2 showed the degree of compensation remained the same for sounds presented at different azimuths, although the precision of performance declined when the sound was eccentric. We discuss two possible explanations for incomplete compensation, one based on differences in the accuracy of signals encoding image motion and self-movement and one concerning statistical optimization that sacrifices accuracy for precision. We then consider the degree to which such explanations can be applied to auditory motion perception in moving listeners.
Affiliation(s)
- Michael A Akeroyd
- Medical Research Council Institute of Hearing Research, University of Nottingham
- W Owen Brimijoin
- Medical Research Council/Chief Scientist Office Institute of Hearing Research-Scottish Section, Glasgow Royal Infirmary
32
Abstract
Movement detection for a virtual sound source was measured during the listener's horizontal head rotation. Listeners were instructed to rotate their heads at a given speed. A trial consisted of two intervals. During each interval, a virtual sound source was presented 60° to the right or left of the listener, who was instructed to rotate the head to face the sound image position. In one of the two intervals, the sound position was moved slightly in the middle of the rotation. Listeners were asked to judge in which interval of a trial the sound stimulus moved. Results suggest that detection thresholds are higher when listeners rotate their heads. Moreover, this effect was found to be independent of the rotation velocity.
Affiliation(s)
- Akio Honda
- Yamanashi Eiwa College, Yamanashi, Japan
33
Abstract
Under natural conditions, animals encounter a barrage of sensory information from which they must select and interpret biologically relevant signals. Active sensing can facilitate this process by engaging motor systems in the sampling of sensory information. The echolocating bat serves as an excellent model to investigate the coupling between action and sensing because it adaptively controls both the acoustic signals used to probe the environment and movements to receive echoes at the auditory periphery. We report here that the echolocating bat controls the features of its sonar vocalizations in tandem with the positioning of the outer ears to maximize acoustic cues for target detection and localization. The bat's adaptive control of sonar vocalizations and ear positioning occurs on a millisecond timescale to capture spatial information from arriving echoes, as well as on a longer timescale to track target movement. Our results demonstrate that purposeful control over sonar sound production and reception can serve to improve acoustic cues for localization tasks. This finding also highlights the general importance of movement to sensory processing across animal species. Finally, our discoveries point to important parallels between spatial perception by echolocation and vision. In summary: as animals operate in the natural environment, they must detect and process relevant sensory information embedded in complex and noisy signals. One strategy to overcome this challenge is to use active sensing, or behavioral adjustments, to extract sensory information from a selected region of the environment. We studied one of nature's champions in auditory active sensing, the echolocating bat, to understand how this animal extracts task-relevant acoustic cues to detect and track a moving target. The bat produces high-frequency vocalizations and processes information carried by returning echoes to navigate and catch prey. This animal serves as an excellent model of active sensing because both sonar signal transmission and echo reception are under the animal's active control. We used high-speed stereo video images of the bat's head and ear movements, along with synchronized audio recordings, to study how the bat coordinates adaptive motor behaviors when detecting and tracking moving prey. We found that the bat synchronizes changes in sonar vocal production with changes in the movements of the head and ears to enhance acoustic cues for target detection and localization.
Affiliation(s)
- Melville J. Wohlgemuth
- Department of Psychology and Institute for Systems Research, Program in Neuroscience and Cognitive Science, University of Maryland, College Park, Maryland, United States of America
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, Maryland, United States of America
- Ninad B. Kothari
- Department of Psychology and Institute for Systems Research, Program in Neuroscience and Cognitive Science, University of Maryland, College Park, Maryland, United States of America
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, Maryland, United States of America
- Cynthia F. Moss
- Department of Psychology and Institute for Systems Research, Program in Neuroscience and Cognitive Science, University of Maryland, College Park, Maryland, United States of America
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, Maryland, United States of America
34
Trapeau R, Aubrais V, Schönwiesner M. Fast and persistent adaptation to new spectral cues for sound localization suggests a many-to-one mapping mechanism. J Acoust Soc Am 2016; 140:879. [PMID: 27586720] [DOI: 10.1121/1.4960568]
Abstract
The adult human auditory system can adapt to changes in spectral cues for sound localization. This plasticity was demonstrated by changing the shape of the pinna with earmolds. Previous results indicate that participants regain localization accuracy after several weeks of adaptation and that the adapted state is retained for at least one week without earmolds. No aftereffect was observed after mold removal, but any aftereffect may be too short to be observed when responses are averaged over many trials. This work investigated the lack of aftereffect by analyzing single-trial responses and modifying visual, auditory, and tactile information during the localization task. Results showed that participants localized accurately immediately after mold removal, even at the first stimulus presentation. Knowledge of the stimulus spectrum, tactile information about the absence of the earmolds, and visual feedback were not necessary to localize accurately after adaptation. Part of the adaptation persisted for one month without molds. The results are consistent with the hypothesis of a many-to-one mapping of the spectral cues, in which several spectral profiles are simultaneously associated with one sound location. Additionally, participants with acoustically more informative spectral cues localized sounds more accurately, and larger acoustical disturbances by the molds reduced adaptation success.
Affiliation(s)
- Régis Trapeau
- International Laboratory for Brain, Music and Sound Research (BRAMS), Department of Psychology, Université de Montréal, Pavillon 1420 Boulevard Mont-Royal, Outremont, Quebec, H2V 4P3, Canada
- Valérie Aubrais
- International Laboratory for Brain, Music and Sound Research (BRAMS), Department of Psychology, Université de Montréal, Pavillon 1420 Boulevard Mont-Royal, Outremont, Quebec, H2V 4P3, Canada
- Marc Schönwiesner
- International Laboratory for Brain, Music and Sound Research (BRAMS), Department of Psychology, Université de Montréal, Pavillon 1420 Boulevard Mont-Royal, Outremont, Quebec, H2V 4P3, Canada
35
Abstract
Sound localization in cats and humans relies on head-centered acoustic cues. Studies have shown that humans are able to localize sounds during rapid head movements that are directed toward the target or other objects of interest. We studied whether cats are able to utilize similar dynamic acoustic cues to localize acoustic targets delivered during rapid eye-head gaze shifts. We trained cats with visual-auditory two-step tasks in which we presented a brief sound burst during saccadic eye-head gaze shifts toward a prior visual target. No consistent or significant differences in accuracy or precision were found between this dynamic task (2-step saccade) and the comparable static task (single saccade when the head is stable) in either horizontal or vertical direction. Cats appear to be able to process dynamic auditory cues and execute complex motor adjustments to accurately localize auditory targets during rapid eye-head gaze shifts.
Affiliation(s)
- Janet L Ruhland
- Department of Neuroscience and Neuroscience Training Program, University of Wisconsin, Madison, Wisconsin
- Amy E Jones
- Department of Neuroscience and Neuroscience Training Program, University of Wisconsin, Madison, Wisconsin
- Tom C T Yin
- Department of Neuroscience and Neuroscience Training Program, University of Wisconsin, Madison, Wisconsin
36
Abstract
Objectives: Although directional microphones on a hearing aid provide a signal-to-noise ratio benefit in a noisy background, the amount of benefit is dependent on how close the signal of interest is to the front of the user. It is assumed that when the signal of interest is off-axis, users can reorient themselves to the signal to make use of the directional microphones to improve signal-to-noise ratio. The present study tested this assumption by measuring the head-orienting behavior of bilaterally fit hearing-impaired individuals with their microphones set to omnidirectional and directional modes. The authors hypothesized that listeners using directional microphones would have greater difficulty in rapidly and accurately orienting to off-axis signals than they would when using omnidirectional microphones. Design: The authors instructed hearing-impaired individuals to turn and face a female talker in simultaneous surrounding male-talker babble. Participants pressed a button when they felt they were accurately oriented in the direction of the female talker. Participants completed three blocks of trials with their hearing aids in omnidirectional mode and three blocks in directional mode, with mode order randomized. Using a Vicon motion tracking system, the authors measured head position and computed fixation error, fixation latency, trajectory complexity, and proportion of misorientations. Results: Results showed that for larger off-axis target angles, listeners using directional microphones took longer to reach their targets than they did when using omnidirectional microphones, although they were just as accurate. They also used more complex movements and frequently made initial turns in the wrong direction. For smaller off-axis target angles, this pattern was reversed, and listeners using directional microphones oriented more quickly and smoothly to the targets than when using omnidirectional microphones. Conclusions: The authors argue that an increase in movement complexity indicates a switch from a simple orienting movement to a search behavior. For the most off-axis target angles, listeners using directional microphones appear to not know which direction to turn, so they pick a direction at random and simply rotate their heads until the signal becomes more audible. The changes in fixation latency and head orientation trajectories suggest that the decrease in off-axis audibility is a primary concern in the use of directional microphones, and listeners could experience a loss of initial target speech while turning toward a new signal of interest. If hearing-aid users are to receive maximum directional benefit in noisy environments, both adaptive directionality in hearing aids and clinical advice on using directional microphones should take head movement and orientation behavior into account.
37
Wallmeier L, Wiegrebe L. Self-motion facilitates echo-acoustic orientation in humans. R Soc Open Sci 2014; 1:140185. [PMID: 26064556] [PMCID: PMC4448837] [DOI: 10.1098/rsos.140185]
Abstract
The ability of blind humans to navigate complex environments through echolocation has received rapidly increasing scientific interest. However, technical limitations have precluded a formal quantification of the interplay between echolocation and self-motion. Here, we use a novel virtual echo-acoustic space technique to formally quantify the influence of self-motion on echo-acoustic orientation. We show that both the vestibular and proprioceptive components of self-motion contribute significantly to successful echo-acoustic orientation in humans: specifically, our results show that vestibular input induced by whole-body self-motion resolves orientation-dependent biases in echo-acoustic cues. Fast head motions, relative to the body, provide additional proprioceptive cues which allow subjects to effectively assess echo-acoustic space referenced against the body orientation. These psychophysical findings clearly demonstrate that human echolocation is well suited to drive precise locomotor adjustments. Our data shed new light on the sensory-motor interactions, and on possible optimization strategies underlying echolocation in humans.
Affiliation(s)
- Ludwig Wallmeier
- Division of Neurobiology, Department Biologie II, Ludwig-Maximilians-Universität München, Großhadernerstr. 2, 82152 Planegg-Martinsried, Germany
- Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München, Großhadernerstr. 2, 82152 Planegg-Martinsried, Germany
- Lutz Wiegrebe
- Division of Neurobiology, Department Biologie II, Ludwig-Maximilians-Universität München, Großhadernerstr. 2, 82152 Planegg-Martinsried, Germany
- Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München, Großhadernerstr. 2, 82152 Planegg-Martinsried, Germany
38
Brimijoin WO, Akeroyd MA. The moving minimum audible angle is smaller during self motion than during source motion. Front Neurosci 2014; 8:273. [PMID: 25228856] [PMCID: PMC4151253] [DOI: 10.3389/fnins.2014.00273]
Abstract
We are rarely perfectly still: our heads rotate in three axes and move in three dimensions, constantly varying the spectral and binaural cues at the ear drums. In spite of this motion, static sound sources in the world are typically perceived as stable objects. This argues that the auditory system—in a manner not unlike the vestibulo-ocular reflex—works to compensate for self motion and stabilize our sensory representation of the world. We tested a prediction arising from this postulate: that self motion should be processed more accurately than source motion. We used an infrared motion tracking system to measure head angle, and real-time interpolation of head related impulse responses to create “head-stabilized” signals that appeared to remain fixed in space as the head turned. After being presented with pairs of simultaneous signals consisting of a man and a woman speaking a snippet of speech, normal and hearing impaired listeners were asked to report whether the female voice was to the left or the right of the male voice. In this way we measured the moving minimum audible angle (MMAA). This measurement was made while listeners were asked to turn their heads back and forth between ± 15° and the signals were stabilized in space. After this “self-motion” condition we measured MMAA in a second “source-motion” condition when listeners remained still and the virtual locations of the signals were moved using the trajectories from the first condition. For both normal and hearing impaired listeners, we found that the MMAA for signals moving relative to the head was ~1–2° smaller when the movement was the result of self motion than when it was the result of source motion, even though the motion with respect to the head was identical. These results as well as the results of past experiments suggest that spatial processing involves an ongoing and highly accurate comparison of spatial acoustic cues with self-motion cues.
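The "head-stabilized" presentation described above amounts to re-rendering the source at a head-relative angle that counteracts the tracked yaw. A minimal sketch of that update loop follows; the block size, the nearest-neighbour HRIR lookup (hrir_db.nearest), and the simple truncated block convolution are placeholders, not details of the real-time system the authors used.

    import numpy as np

    def render_world_fixed(signal, source_az_world, yaw_per_block, hrir_db, block=256):
        """Re-filter each audio block with the HRIR for the head-relative
        source angle, so the virtual source stays fixed in the world as the
        head turns. hrir_db.nearest(az) -> (h_left, h_right) is an assumed
        lookup; a real renderer would interpolate HRIRs and use overlap-add
        rather than discarding convolution tails."""
        out_l, out_r = [], []
        for i, start in enumerate(range(0, len(signal) - block + 1, block)):
            yaw = yaw_per_block[i]                               # tracked head yaw, deg
            az_rel = (source_az_world - yaw + 180) % 360 - 180   # wrap to [-180, 180)
            h_l, h_r = hrir_db.nearest(az_rel)
            x = signal[start:start + block]
            out_l.append(np.convolve(x, h_l)[:block])   # truncated: tails dropped
            out_r.append(np.convolve(x, h_r)[:block])
        return np.concatenate(out_l), np.concatenate(out_r)

The source-motion condition of the experiment then corresponds to replaying the recorded az_rel trajectory while the head stays still.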
Affiliation(s)
- W Owen Brimijoin
- Scottish Section, Institute of Hearing Research, Medical Research Council/Chief Scientist Office, Glasgow, UK
- Michael A Akeroyd
- Scottish Section, Institute of Hearing Research, Medical Research Council/Chief Scientist Office, Glasgow, UK
39
Abstract
Previous studies have shown that the accuracy of sound localization is improved if listeners are allowed to move their heads during signal presentation. This study describes the function relating localization accuracy to the extent of head movement in azimuth. Sounds that are difficult to localize were presented in the free field from sources at a wide range of azimuths and elevations. Sounds remained active until the participants' heads had rotated through windows 2, 4, 8, 16, 32, or 64° of azimuth in width. Error in determining sound-source elevation and the rate of front/back confusion were found to decrease as azimuth window width increased. Error in determining sound-source lateral angle was not found to vary with azimuth window width. Implications for 3-D audio displays: the utility of a 3-D audio display for imparting spatial information is likely to be improved if operators are able to move their heads during signal presentation. Head movement may compensate in part for a paucity of spectral cues to sound-source location resulting from limitations in either the audio signals presented or the directional filters (i.e., head-related transfer functions) used to generate a display. However, head movements of a moderate size (i.e., through around 32° of azimuth) may be required to ensure that spatial information is conveyed with high accuracy.
Affiliation(s)
- Ken I McAnally
- Aerospace Division, Defence Science and Technology Organisation, Melbourne, VIC, Australia
- Russell L Martin
- Aerospace Division, Defence Science and Technology Organisation, Melbourne, VIC, Australia
40
Brimijoin WO, Boyd AW, Akeroyd MA. The contribution of head movement to the externalization and internalization of sounds. PLoS One 2013; 8:e83068. [PMID: 24312677] [DOI: 10.1371/journal.pone.0083068]
Abstract
BACKGROUND When stimuli are presented over headphones, they are typically perceived as internalized; i.e., they appear to emanate from inside the head. Sounds presented in the free field tend to be externalized, i.e., perceived to be emanating from a source in the world. This phenomenon is frequently attributed to reverberation and to the spectral characteristics of the sounds: those sounds whose spectrum and reverberation match those of free-field signals arriving at the ear canal tend to be more frequently externalized. Another factor, however, is that the virtual location of signals presented over headphones moves in perfect concert with any movements of the head, whereas the location of free-field signals moves in opposition to head movements. The effects of head movement have not been systematically disentangled from reverberation and/or spectral cues, so we measured the degree to which movements contribute to externalization. METHODOLOGY/PRINCIPAL FINDINGS We performed two experiments: (1) using motion tracking and free-field loudspeaker presentation, we presented signals that moved in their spatial location to match listeners' head movements; (2) using motion tracking and binaural room impulse responses, we presented filtered signals over headphones that appeared to remain static relative to the world. The results from experiment 1 showed that free-field signals from the front that move with the head are less likely to be externalized (23%) than those that remain fixed (63%). Experiment 2 showed that virtual signals whose position was fixed relative to the world are more likely to be externalized (65%) than those fixed relative to the head (20%), regardless of the fidelity of the individual impulse responses. CONCLUSIONS/SIGNIFICANCE Head movements play a significant role in the externalization of sound sources. These findings imply tight integration between binaural cues and self-motion cues and underscore the importance of self motion for spatial auditory perception.
41
Mueller MF, Meisenbacher K, Lai WK, Dillier N. Sound localization with bilateral cochlear implants in noise: how much do head movements contribute to localization? Cochlear Implants Int 2013; 15:36-42. [PMID: 23684420] [DOI: 10.1179/1754762813y.0000000040]
Abstract
Bilateral cochlear implant (CI) users encounter difficulties in localizing sound sources in everyday environments, especially in the presence of background noise and reverberation. They tend to show large directional errors and front-back confusions compared to normal hearing (NH) subjects in the same conditions. In this study, the ability of bilateral CI users to use head movements to improve sound source localization was evaluated. Speech sentences of 0.5, 2, and 4.5 seconds were presented in noise to the listeners in conditions with and without head movements. The results show that for the medium and long signal durations, the CI users could significantly reduce the number of front-back confusions. The angular accuracy, however, did not improve. Analysis of head trajectories showed that the CI users had great difficulty in moving their heads towards the position of the source, whereas the NH listeners targeted the source loudspeaker correctly.
42
Feinkohl A, Borzeszkowski KM, Klump GM. Effect of head turns on the localization accuracy of sounds in the European starling (Sturnus vulgaris). Behav Brain Res 2013; 256:669-76. [PMID: 24035879] [DOI: 10.1016/j.bbr.2013.08.038]
Abstract
Long signal durations that represent closed-loop conditions permit responses based on the sensory feedback during the presentation of the stimulus, while short stimulus durations that represent open-loop conditions do not allow for directed head turns during signal presentation. A previous study showed that for broadband noise stimuli, the minimum audible angle (MAA) of the European starling (Sturnus vulgaris) is smaller under closed-loop compared to open-loop conditions (Feinkohl & Klump, 2013). Head turns represent a possible strategy to improve sound localization cues under closed-loop conditions. In this study, we analyze the influence of head turns on the starling MAA for broadband noise and 2 kHz tones under closed-loop and open-loop conditions. The starlings made more head turns under closed-loop conditions compared to open-loop conditions. Under closed-loop conditions, their sensitivity for discriminating sound source positions was best if they turned their head once or more per stimulus presentation. We discuss potential cues generated from head turns under closed-loop conditions.
Affiliation(s)
- Arne Feinkohl
- Cluster of Excellence Hearing4all, Animal Physiology and Behaviour Group, Department of Neuroscience, School of Medicine and Health Sciences, University of Oldenburg, D-26111 Oldenburg, Germany
43
Honda A, Shibata H, Hidaka S, Gyoba J, Iwaya Y, Suzuki Y. Effects of head movement and proprioceptive feedback in training of sound localization. Iperception 2013; 4:253-64. [PMID: 24349686] [PMCID: PMC3859569] [DOI: 10.1068/i0522]
Abstract
We investigated the effects of listeners' head movements and proprioceptive feedback during sound localization practice on the subsequent accuracy of sound localization performance. The effects were examined under both restricted and unrestricted head movement conditions in the practice stage. In both cases, the participants were divided into two groups: a feedback group performed a sound localization drill with accurate proprioceptive feedback; a control group conducted it without the feedback. Results showed that (1) sound localization practice, while allowing for free head movement, led to improvement in sound localization performance and decreased actual angular errors along the horizontal plane, and that (2) proprioceptive feedback during practice decreased actual angular errors in the vertical plane. Our findings suggest that unrestricted head movement and proprioceptive feedback during sound localization training enhance perceptual motor learning by enabling listeners to use variable auditory cues and proprioceptive information.
Affiliation(s)
- Akio Honda
- Research Institute of Electrical Communication, Tohoku University, 2-1-1, Katahira, Aoba-ku, Sendai, Miyagi 980-8577, Japan; currently at: Department of Welfare Psychology, Tohoku Fukushi University, 1-8-1, Kunimi, Aoba-ku, Sendai, Miyagi 981-8522, Japan
- Hiroshi Shibata
- Research Institute of Electrical Communication, Tohoku University, 2-1-1, Katahira, Aoba-ku, Sendai, Miyagi 980-8577, Japan; Department of Psychology, Graduate School of Arts and Letters, Tohoku University, 27-1, Kawauchi, Aoba-ku, Sendai, Miyagi 980-8576, Japan; currently at: Faculty of Medical Science and Welfare, Tohoku Bunka Gakuen University, 6-45-1, Kunimi, Aoba-ku, Sendai, Miyagi 981-0943, Japan
- Souta Hidaka
- Department of Psychology, Rikkyo University, 1-2-26, Kitano, Niiza-shi, Saitama 352-8558, Japan
- Jiro Gyoba
- Department of Psychology, Graduate School of Arts and Letters, Tohoku University, 27-1, Kawauchi, Aoba-ku, Sendai, Miyagi 980-8576, Japan
- Yukio Iwaya
- Research Institute of Electrical Communication, Tohoku University, 2-1-1, Katahira, Aoba-ku, Sendai, Miyagi 980-8577, Japan; currently at: Faculty of Engineering, Tohoku Gakuin University, 1-13-1, Chuo, Tagajo, Miyagi 985-8537, Japan
- Yôiti Suzuki
- Research Institute of Electrical Communication, Tohoku University, 2-1-1, Katahira, Aoba-ku, Sendai, Miyagi 980-8577, Japan
44
Abstract
BACKGROUND Spatial inputs from the auditory periphery can change with movements of the head or whole body relative to the sound source. Nevertheless, humans perceive a stable auditory environment and react appropriately to a sound source. This suggests that the inputs are reinterpreted in the brain, while being integrated with information on the movements. Little is known, however, about how these movements modulate auditory perceptual processing. Here, we investigate the effect of linear acceleration on auditory space representation. METHODOLOGY/PRINCIPAL FINDINGS Participants were passively transported forward/backward at constant accelerations using a robotic wheelchair. An array of loudspeakers was aligned parallel to the motion direction along a wall to the right of the listener. A short noise burst was presented during the self-motion from one of the loudspeakers when the listener's physical coronal plane reached the location of one of the speakers (the null point). In Experiments 1 and 2, the participants indicated in which direction the sound had been presented, forward or backward relative to their subjective coronal plane. The results showed that the sound position aligned with the subjective coronal plane was displaced ahead of the null point only during forward self-motion, and that the magnitude of the displacement increased with increasing acceleration. Experiment 3 investigated the structure of the auditory space in the traveling direction during forward self-motion. The sounds were presented at various distances from the null point. The participants indicated the perceived sound location by pointing with a rod. All the sounds that were actually located in the traveling direction were perceived as being biased towards the null point. CONCLUSIONS/SIGNIFICANCE These results suggest a distortion of the auditory space in the direction of movement during forward self-motion. The underlying mechanism might involve anticipatory spatial shifts in auditory receptive field locations driven by afferent signals from the vestibular system.
Affiliation(s)
- Wataru Teramoto
- Research Institute of Electrical Communication, Tohoku University, Sendai, Japan
45
Abstract
Auditory scene analysis requires the listener to parse the incoming flow of acoustic information into perceptual "streams," such as sentences from a single talker in the midst of background noise. Behavioral and neural data show that the formation of streams is not instantaneous; rather, streaming builds up over time and can be reset by sudden changes in the acoustics of the scene. Here, we investigated the effect of changes induced by voluntary head motion on streaming. We used a telepresence robot in a virtual reality setup to disentangle all potential consequences of head motion: changes in acoustic cues at the ears, changes in apparent source location, and changes in motor or attentional processes. The results showed that self-motion influenced streaming in at least two ways. Right after the onset of movement, self-motion always induced some resetting of perceptual organization to one stream, even when the acoustic scene itself had not changed. Then, after the motion, the prevalent organization was rapidly biased by the binaural cues discovered through motion. Auditory scene analysis thus appears to be a dynamic process that is affected by the active sensing of the environment.
46
Abstract
For individuals with autism spectrum disorder (ASD), accurately processing and interpreting auditory information is often difficult. Here we review behavioural, neurophysiological and imaging literature pertaining to this field with the aim of providing a comprehensive account of auditory processing in ASD, and thus an effective tool to aid further research. Literature was sourced from peer-reviewed journals published over the last two decades which best represent research conducted in these areas. Findings show substantial evidence for atypical processing of auditory information in ASD at behavioural and neural levels. Abnormalities are diverse, ranging from atypical perception of various low-level perceptual features (i.e. pitch, loudness) to processing of more complex auditory information such as prosody. Trends across studies suggest that auditory processing impairments in ASD are most likely to present during processing of complex auditory information and are more severe for speech than for non-speech stimuli. The interpretation of these findings with respect to various cognitive accounts of ASD is discussed, and suggestions are offered for further research.
Affiliation(s)
- K O'Connor
- Department of Communication Disorders, University of Canterbury, Christchurch 8140, New Zealand
47
Irving S, Moore DR. Training sound localization in normal hearing listeners with and without a unilateral ear plug. Hear Res 2011; 280:100-8. [PMID: 21640176] [DOI: 10.1016/j.heares.2011.04.020]
Abstract
Surprisingly little is known about the ability of adult human listeners to learn to localize sounds in the free field. In this study, we presented broadband noise bursts at 24 equally spaced locations in a 360° horizontal plane, both in normal-hearing conditions and when listeners were fitted with a unilateral earplug. Localization improved over the initial four training sessions, prior to plug insertion, which produced an immediate and profound impairment in localization, particularly on the side of the plug. Subsequent training with the plug in place over the next 5 days showed continually improving performance (learning) up to the 4th day. Following plug removal, localization immediately returned to pre-plug levels. These results show that task-specific training can improve localization ability in normal-hearing conditions and that training also improves performance during a unilateral conductive hearing loss. It has been suggested that the process of learning is due to a gradual reweighting of the available cues to develop a new location map. The return to pre-plug performance suggests that the original location map is preserved despite the formation of a new map, in agreement with other reported findings.
Affiliation(s)
- Samuel Irving
- MRC Institute of Hearing Research, University Park, Nottingham NG7 2RD, United Kingdom
48
Van den Bogaert T, Carette E, Wouters J. Sound source localization using hearing aids with microphones placed behind-the-ear, in-the-canal, and in-the-pinna. Int J Audiol 2011; 50:164-76. [PMID: 21208034] [DOI: 10.3109/14992027.2010.537376]
Abstract
OBJECTIVE The effect of different commercial hearing aids on the ability to resolve front-back confusions and on sound localization in the frontal horizontal and vertical plane was studied. DESIGN Commercial hearing aids with a microphone placed in-the-ear-canal (ITC), behind-the-ear (BTE), and in-the-pinna (ITP) were evaluated in the frontal and full horizontal plane, and in the frontal vertical plane. STUDY SAMPLE A group of 13 hearing-impaired subjects evaluated the hearing aids. Nine normal-hearing listeners were used as a reference group. RESULTS AND CONCLUSIONS Differences in sound localization in the front-back dimension were found for different hearing aids. A large inter-subject variability was found during the front-back and elevation experiments. With ITP or ITC microphones, almost all natural spectral information was preserved. One of the BTE hearing aids, which is equipped with a directional microphone configuration, generated a sufficient amount of spectral cues to allow front-back discrimination. No significant effect of hearing aids on elevation performance in the frontal vertical plane was observed. Hearing-impaired subjects reached the same performance with and without the different hearing aids. In the unaided condition, a frequency-specific audibility correction was applied. Some of the hearing-impaired listeners reached normal hearing performance with this correction.
49
Brimijoin WO, McShefferty D, Akeroyd MA. Auditory and visual orienting responses in listeners with and without hearing-impairment. J Acoust Soc Am 2010; 127:3678-88. [PMID: 20550266] [PMCID: PMC4338612] [DOI: 10.1121/1.3409488] [Citation(s) in RCA: 26] [Impact Index Per Article: 1.9]
Abstract
Head movements are intimately involved in sound localization and may provide information that could aid an impaired auditory system. Using an infrared camera system, head position and orientation were measured for 17 normal-hearing and 14 hearing-impaired listeners seated at the center of a ring of loudspeakers. Listeners were asked to orient their heads as quickly as was comfortable toward a sequence of visual targets, or were blindfolded and asked to orient toward a sequence of loudspeakers playing a short sentence. To elicit natural orienting responses, listeners were not asked to reorient their heads to the 0° loudspeaker between trials. The results demonstrate that hearing impairment is associated with several changes in orienting responses. Hearing-impaired listeners showed a larger difference between auditory and visual fixation positions and a substantial increase in initial and fixation latency for auditory targets. Peak velocity reached roughly 140°/s in both groups, corresponding to a rate of change of approximately 1 μs of interaural time difference per millisecond. Most notably, hearing impairment was associated with a large change in the complexity of the movement: trajectories changed from smooth sigmoids to ones characterized by abruptly changing velocities, directional reversals, and frequent corrections of fixation angle.
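The quoted conversion from head velocity to ITD change rate can be checked with a spherical-head approximation. The numbers below assume Woodworth's formula and a nominal head radius, which may differ slightly from the authors' own assumptions; the result lands in the same order of magnitude as the figure quoted above:

    import math

    # Back-of-envelope check using Woodworth's spherical-head model:
    # ITD(theta) = (a / c) * (theta + sin(theta)), theta in radians.
    a = 0.0875  # assumed head radius, metres
    c = 343.0   # speed of sound, m/s

    # Near the midline (theta = 0) the slope is d(ITD)/d(theta) = 2a / c.
    slope_us_per_deg = (2 * a / c) * 1e6 * math.pi / 180.0  # ~8.9 us per degree

    head_velocity_deg_per_s = 140.0
    itd_rate_us_per_ms = slope_us_per_deg * head_velocity_deg_per_s / 1000.0
    print(f"{itd_rate_us_per_ms:.2f} us of ITD per ms")  # ~1.2, same order as quoted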
Affiliation(s)
- W Owen Brimijoin
- MRC Institute of Hearing Research, Scottish Section, Glasgow Royal Infirmary, Glasgow G31 2ER, United Kingdom.
50
Majdak P, Goupell MJ, Laback B. 3-D localization of virtual sound sources: effects of visual environment, pointing method, and training. Atten Percept Psychophys 2010; 72:454-69. [PMID: 20139459] [DOI: 10.3758/APP.72.2.454] [Citation(s) in RCA: 72] [Impact Index Per Article: 5.1]
Abstract
The ability to localize sound sources in three-dimensional space was tested in humans. In Experiment 1, naive subjects listened to noises filtered with subject-specific head-related transfer functions. The tested conditions included the pointing method (head or manual pointing) and the visual environment (VE; darkness or a virtual VE). Localization performance did not differ significantly between the two pointing methods. The virtual VE significantly improved horizontal precision and reduced the number of front-back confusions, showing the benefit of using a virtual VE in sound localization tasks. In Experiment 2, subjects were given sound localization training. Over the course of training, performance improved for all subjects, with the largest improvements occurring during the first 400 trials; improvements beyond that point were smaller. After training there was still no significant effect of pointing method, indicating that the choice of head- or manual-pointing plays a minor role in sound localization performance. The results of Experiment 2 reinforce the importance of perceptual training for at least 400 trials in sound localization studies.
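A front-back confusion is conventionally scored by reflecting the target azimuth about the interaural axis and asking whether the response lies closer to that mirror image than to the target itself. The sketch below is a generic illustration of that rule (hypothetical code, not the authors' analysis; azimuth convention assumed: 0° front, angles increasing around the circle):

    def angular_distance_deg(a, b):
        """Unsigned circular distance between two azimuths, in degrees."""
        return abs((a - b + 180.0) % 360.0 - 180.0)

    def is_front_back_confusion(target_az, response_az):
        """True if the response is closer to the target's front-back mirror image."""
        mirror_az = (180.0 - target_az) % 360.0  # reflection about the interaural axis
        return (angular_distance_deg(response_az, mirror_az)
                < angular_distance_deg(response_az, target_az))

    # Target front-right at 30°; a response at 140° (back-right) is a confusion.
    assert is_front_back_confusion(30.0, 140.0)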