1. Pastore MT, Pulling KR, Chen C, Yost WA, Dorman MF. Synchronizing Automatic Gain Control in Bilateral Cochlear Implants Mitigates Dynamic Localization Deficits Introduced by Independent Bilateral Compression. Ear Hear 2024:00003446-990000000-00262. PMID: 38472134. DOI: 10.1097/aud.0000000000001492.
Abstract
OBJECTIVES The independence of left and right automatic gain controls (AGCs) used in cochlear implants can distort interaural level differences and thereby compromise dynamic sound source localization. We assessed the degree to which synchronizing left and right AGCs mitigates those difficulties as indicated by listeners' ability to use the changes in interaural level differences that come with head movements to avoid front-back reversals (FBRs). DESIGN Broadband noise stimuli were presented from one of six equally spaced loudspeakers surrounding the listener. Sound source identification was tested for stimuli presented at 70 dBA (above AGC threshold) for 10 bilateral cochlear implant patients, under conditions where (1) patients remained stationary and (2) free head movements within ±30° were encouraged. These conditions were repeated for both synchronized and independent AGCs. The same conditions were run at 50 dBA, below the AGC threshold, to assess listeners' baseline performance when AGCs were not engaged. In this way, the expected high variability in listener performance could be separated from effects of independent AGCs to reveal the degree to which synchronizing AGCs could restore localization performance to what it was without AGC compression. RESULTS The mean rate of FBRs was higher for sound stimuli presented at 70 dBA with independent AGCs, both with and without head movements, than at 50 dBA, suggesting that when AGCs were independently engaged they contributed to poorer front-back localization. When listeners remained stationary, synchronizing AGCs did not significantly reduce the rate of FBRs. When AGCs were independent at 70 dBA, head movements did not have a significant effect on the rate of FBRs. Head movements did have a significant group effect on the rate of FBRs at 50 dBA when AGCs were not engaged and at 70 dBA when AGCs were synchronized. 
Synchronization of AGCs, together with head movements, reduced the rate of FBRs to approximately what it was in the 50-dBA baseline condition. Synchronizing AGCs also had a significant group effect on listeners' overall percent correct localization. CONCLUSIONS Synchronizing AGCs allowed listeners to mitigate front-back confusions introduced by unsynchronized AGCs when head motion was permitted, returning individual listener performance to roughly what it was in the 50-dBA baseline condition when AGCs were not engaged. Synchronization of AGCs did not overcome localization deficiencies that were observed when AGCs were not engaged, and which are therefore unrelated to AGC compression.
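The ILD-distortion mechanism this abstract describes can be illustrated with a toy calculation. This is a hedged sketch, not the authors' processing chain: the compression threshold and ratio are invented example values, and real CI AGCs are far more elaborate.

```python
# Illustrative sketch: independent per-ear AGC compression shrinks the
# interaural level difference (ILD), while a synchronized AGC (one shared
# gain, driven by the louder ear) preserves it. Threshold and ratio are
# hypothetical example values, not device parameters.

def agc_gain(level_db, threshold_db=60.0, ratio=3.0):
    """Gain (dB) of a simple static compressor: above threshold, output
    level rises only 1 dB per `ratio` dB of input level."""
    if level_db <= threshold_db:
        return 0.0
    return -(level_db - threshold_db) * (1.0 - 1.0 / ratio)

left_db, right_db = 75.0, 65.0  # source off to the left: 10 dB input ILD

# Independent AGCs: each ear is compressed according to its own level.
ind_left = left_db + agc_gain(left_db)
ind_right = right_db + agc_gain(right_db)

# Synchronized AGCs: both ears receive the gain of the louder ear.
shared = agc_gain(max(left_db, right_db))
sync_left, sync_right = left_db + shared, right_db + shared

print(f"input ILD:        {left_db - right_db:.1f} dB")      # 10.0 dB
print(f"independent AGCs: {ind_left - ind_right:.1f} dB")    # ILD shrinks
print(f"synchronized:     {sync_left - sync_right:.1f} dB")  # ILD preserved
```

With these example values the 10 dB input ILD collapses to about 3.3 dB under independent compression but survives intact when the gain is shared, which is the distortion the synchronized-AGC condition is designed to avoid.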
Affiliations
- M Torben Pastore
- College of Health Solutions, Arizona State University, Tempe, Arizona, USA
- Kathryn R Pulling
- College of Health Solutions, Arizona State University, Tempe, Arizona, USA
- Chen Chen
- Advanced Bionics, Valencia, California, USA
- William A Yost
- College of Health Solutions, Arizona State University, Tempe, Arizona, USA
- Michael F Dorman
- College of Health Solutions, Arizona State University, Tempe, Arizona, USA
2. Mai J, Gargiullo R, Zheng M, Esho V, Hussein OE, Pollay E, Bowe C, Williamson LM, McElroy AF, Goolsby WN, Brooks KA, Rodgers CC. Sound-seeking before and after hearing loss in mice. bioRxiv [Preprint] 2024:2024.01.08.574475. PMID: 38260458. PMCID: PMC10802496. DOI: 10.1101/2024.01.08.574475.
Abstract
How we move our bodies affects how we perceive sound. For instance, we can explore an environment to seek out the source of a sound and we can use head movements to compensate for hearing loss. How we do this is not well understood because many auditory experiments are designed to limit head and body movements. To study the role of movement in hearing, we developed a behavioral task called sound-seeking that rewarded mice for tracking down an ongoing sound source. Over the course of learning, mice more efficiently navigated to the sound. We then asked how auditory behavior was affected by hearing loss induced by surgical removal of the malleus from the middle ear. An innate behavior, the auditory startle response, was abolished by bilateral hearing loss and unaffected by unilateral hearing loss. Similarly, performance on the sound-seeking task drastically declined after bilateral hearing loss and did not recover. In striking contrast, mice with unilateral hearing loss were only transiently impaired on sound-seeking; over a recovery period of about a week, they regained high levels of performance, increasingly reliant on a different spatial sampling strategy. Thus, even in the face of permanent unilateral damage to the peripheral auditory system, mice recover their ability to perform a naturalistic sound-seeking task. This paradigm provides an opportunity to examine how body movement enables better hearing and resilient adaptation to sensory deprivation.
Affiliations
- Jessica Mai
- Department of Neurosurgery, Emory University School of Medicine, Atlanta GA 30322
- Rowan Gargiullo
- Department of Neurosurgery, Emory University School of Medicine, Atlanta GA 30322
- Megan Zheng
- Department of Neurosurgery, Emory University School of Medicine, Atlanta GA 30322
- Valentina Esho
- Department of Neurosurgery, Emory University School of Medicine, Atlanta GA 30322
- Osama E Hussein
- Department of Neurosurgery, Emory University School of Medicine, Atlanta GA 30322
- Eliana Pollay
- Department of Neurosurgery, Emory University School of Medicine, Atlanta GA 30322
- Cedric Bowe
- Neuroscience Graduate Program, Emory University, Atlanta GA 30322
- William N Goolsby
- Department of Cell Biology, Emory University School of Medicine, Atlanta GA 30322
- Kaitlyn A Brooks
- Department of Otolaryngology - Head and Neck Surgery, Emory University School of Medicine, Atlanta GA 30308
- Chris C Rodgers
- Department of Neurosurgery, Emory University School of Medicine, Atlanta GA 30322
- Department of Cell Biology, Emory University School of Medicine, Atlanta GA 30322
- Department of Biomedical Engineering, Georgia Tech and Emory University School of Medicine, Atlanta GA 30322
- Department of Biology, Emory College of Arts and Sciences, Atlanta GA 30322
3. Valzolgher C, Capra S, Gessa E, Rosi T, Giovanelli E, Pavani F. Sound localization in noisy contexts: performance, metacognitive evaluations and head movements. Cogn Res Princ Implic 2024; 9:4. PMID: 38191869. PMCID: PMC10774233. DOI: 10.1186/s41235-023-00530-w.
Abstract
Localizing sounds in noisy environments can be challenging. Here, we reproduce real-life soundscapes to investigate the effects of environmental noise on the sound localization experience. We evaluated participants' performance and metacognitive assessments, including measures of sound localization effort and confidence, while also tracking their spontaneous head movements. Normal-hearing participants (N = 30) performed a speech-localization task in three common soundscapes that progressively increased in complexity: nature, traffic, and a cocktail party setting. To control visual information and measure behavior, we used visual virtual reality technology. The results revealed that the complexity of the soundscape affected both performance errors and metacognitive evaluations: participants reported increased effort and reduced confidence when localizing sounds in more complex noise environments. In contrast, the level of soundscape complexity did not influence the use of spontaneous exploratory head behaviors. We also observed that, irrespective of the noise condition, participants who made more head rotations and explored a wider extent of space by rotating their heads made smaller localization errors. Interestingly, we found preliminary evidence that an increase in spontaneous head movements, specifically the extent of head rotation, leads to a decrease in perceived effort and an increase in confidence at the single-trial level. These findings expand previous observations on sound localization in noisy environments by broadening the perspective to include metacognitive evaluations, exploratory behaviors, and their interactions.
Affiliations
- Chiara Valzolgher
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068, Rovereto, TN, Italy
- Sara Capra
- Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Trento, Italy
- Elena Gessa
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068, Rovereto, TN, Italy
- Tommaso Rosi
- Department of Physics, University of Trento, Trento, Italy
- Elena Giovanelli
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068, Rovereto, TN, Italy
- Francesco Pavani
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068, Rovereto, TN, Italy
- Centro Interuniversitario di Ricerca "Cognizione, Linguaggio e Sordità" (CIRCLeS), Trento, Italy
4. Tetard S, Guigou C, Sonnet CE, Al Burshaid D, Charlery-Adèle A, Bozorg Grayeli A. Free-Field Hearing Test in Noise with Free Head Rotation for Evaluation of Monaural Hearing. J Clin Med 2023; 12:7143. PMID: 38002755. PMCID: PMC10672306. DOI: 10.3390/jcm12227143.
Abstract
There is a discrepancy between the hearing test results of patients with single-sided deafness (SSD) and their reported outcome measures, probably because everyday situations involve two elements absent from standard tests: noise and head movements. We developed a stereo-audiometric test in noise with free head movements to evaluate movements and auditory performance in monaural and binaural conditions in normal-hearing volunteers with one occluded ear. Tests were performed in the binaural condition (BIN), with the left ear occluded (LEO), or with the right ear occluded (REO). The signal was emitted by one of seven speakers placed every 30° in a semicircle, and the noise (cocktail party) by all speakers. Subjects turned their head freely to obtain the most comfortable listening position, then repeated 10 sentences in this position. In monaural conditions, the sums of rotations (head rotations toward an optimal hearing position, in degrees, over 1 to 15 ad lib presentations of the signal at random azimuth) were higher than in the BIN condition (LEO 255 ± 212°, REO 308 ± 208° versus BIN 74 ± 76°, p < 0.001, ANOVA), and the discrimination score (out of 10) was lower than in the BIN condition (LEO 5 ± 1, REO 7 ± 1 versus BIN 8 ± 1; p < 0.001 and p < 0.05, respectively, ANOVA). In the monaural condition, total rotation and discrimination in noise were negatively correlated with difficulty (Pearson r = -0.68, p < 0.01 and r = -0.51, p < 0.05, respectively). Subjects differed in how they used head rotation to optimize their hearing in noise. The evaluation of head movements appears to be a useful parameter for predicting the difficulty of monaural hearing in noisy environments.
Affiliations
- Stanley Tetard
- Department of Otolaryngology-Head and Neck Surgery, Dijon University Hospital, 21000 Dijon, France
- Caroline Guigou
- Department of Otolaryngology-Head and Neck Surgery, Dijon University Hospital, 21000 Dijon, France
- ImViA, Laboratory of Imagery and Artificial Vision (EA 7535), Burgundy University, 21078 Dijon, France
- Charles-Edouard Sonnet
- Department of Otolaryngology-Head and Neck Surgery, Dijon University Hospital, 21000 Dijon, France
- Amplifon Hearing Aid Center, 21000 Dijon, France
- Dhari Al Burshaid
- Department of Otolaryngology-Head and Neck Surgery, Dijon University Hospital, 21000 Dijon, France
- Ambre Charlery-Adèle
- Department of Otolaryngology-Head and Neck Surgery, Dijon University Hospital, 21000 Dijon, France
- Alexis Bozorg Grayeli
- Department of Otolaryngology-Head and Neck Surgery, Dijon University Hospital, 21000 Dijon, France
- ImViA, Laboratory of Imagery and Artificial Vision (EA 7535), Burgundy University, 21078 Dijon, France
5. Yost WA. Randomizing spectral cues used to resolve front-back reversals in sound-source localization. J Acoust Soc Am 2023; 154:661-670. PMID: 37540095. PMCID: PMC10404140. DOI: 10.1121/10.0020563.
Abstract
Front-back reversals (FBRs) in sound-source localization tasks, due to cone-of-confusion errors on the azimuth plane, occur with some regularity, and their occurrence is listener-dependent. There are fewer FBRs for wideband, high-frequency sounds than for low-frequency sounds, presumably because the sources of low-frequency sounds are localized on the basis of interaural differences (interaural time and level differences), which can lead to ambiguous responses. Spectral cues can aid in determining sound-source locations for wideband, high-frequency sounds, and such spectral cues do not lead to ambiguous responses. However, the extent to which spectral features aid sound-source localization is still not known. This paper explores conditions in which the spectral profile of two-octave-wide noise bands, whose sources were localized on the azimuth plane, was randomly varied. The experiment demonstrated that such spectral profile randomization increased FBRs for high-frequency noise bands, presumably because whatever spectral features are used for sound-source localization were no longer as useful for resolving FBRs, and listeners relied on interaural differences for sound-source localization, which led to response ambiguities. Additionally, head rotation decreased FBRs in all cases, even when FBRs increased due to spectral profile randomization. In all cases, the occurrence of FBRs was listener-dependent.
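The cone-of-confusion geometry behind FBRs, and why head rotation resolves them, can be sketched numerically. This is a simplified spherical-head illustration (interaural cue taken as proportional to the sine of source azimuth relative to the head), not the stimulus model used in the study:

```python
# Hedged sketch: a front source and its mirror-image back source produce
# the same interaural cue when the head is still, but a head rotation
# changes the cue in opposite directions for the two, disambiguating them.
import math

def itd_cue(source_az_deg, head_az_deg):
    """Lateral cue proportional to the ITD for a source at source_az_deg
    when the head faces head_az_deg (simplified spherical-head model)."""
    return math.sin(math.radians(source_az_deg - head_az_deg))

front, back = 30.0, 150.0  # mirror positions across the interaural axis

# With the head still, the two positions are indistinguishable from ITD alone.
assert math.isclose(itd_cue(front, 0.0), itd_cue(back, 0.0))

# Rotate the head 20° toward the source side and compare the cue change.
d_front = itd_cue(front, 20.0) - itd_cue(front, 0.0)  # cue shrinks: front
d_back = itd_cue(back, 20.0) - itd_cue(back, 0.0)     # cue grows: behind
print(d_front < 0 < d_back)  # True: sign of the change resolves front vs. back
```

The opposite-signed cue changes are the head-movement information that, per the abstract, reduced FBRs even when spectral-profile randomization made spectral cues unusable.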
Affiliations
- William A Yost
- Spatial Hearing Lab, College of Health Solutions, Arizona State University, Tempe, Arizona 85004, USA
6. Valzolgher C, Bouzaid S, Grenouillet S, Gatel J, Ratenet L, Murenu F, Verdelet G, Salemme R, Gaveau V, Coudert A, Hermann R, Truy E, Farnè A, Pavani F. Training spatial hearing in unilateral cochlear implant users through reaching to sounds in virtual reality. Eur Arch Otorhinolaryngol 2023; 280:3661-3672. PMID: 36905419. PMCID: PMC10313844. DOI: 10.1007/s00405-023-07886-1.
Abstract
BACKGROUND AND PURPOSE Use of a unilateral cochlear implant (UCI) is associated with limited spatial hearing skills. Evidence that training these abilities in UCI users is possible remains limited. In this study, we assessed whether a Spatial training based on hand-reaching to sounds performed in virtual reality improves spatial hearing abilities in UCI users. METHODS Using a crossover randomized clinical trial, we compared the effects of a Spatial training protocol with those of a Non-Spatial control training. We tested 17 UCI users in a head-pointing to sound task and in an audio-visual attention orienting task, before and after each training. The study is registered at clinicaltrials.gov (NCT04183348). RESULTS During the Spatial VR training, sound localization errors in azimuth decreased. Moreover, when comparing head-pointing to sounds before vs. after training, localization errors decreased more after the Spatial training than after the control training. No training effects emerged in the audio-visual attention orienting task. CONCLUSIONS Our results showed that sound localization in UCI users improves during a Spatial training, with benefits that extend to a non-trained sound localization task (generalization). These findings have potential for novel rehabilitation procedures in clinical contexts.
Affiliations
- Chiara Valzolgher
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini, 31 Rovereto, Trento, Italy
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Sabrina Bouzaid
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Solene Grenouillet
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Francesca Murenu
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Grégoire Verdelet
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Neuroimmersion, Lyon, France
- Romeo Salemme
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Neuroimmersion, Lyon, France
- Valérie Gaveau
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Eric Truy
- Hospices Civils de Lyon, Lyon, France
- Alessandro Farnè
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Neuroimmersion, Lyon, France
- Francesco Pavani
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini, 31 Rovereto, Trento, Italy
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Centro Interuniversitario di Ricerca "Cognizione, Linguaggio e Sordità" (CIRCLeS), Trento, Italy
7. Intensive Training of Spatial Hearing Promotes Auditory Abilities of Bilateral Cochlear Implant Adults: A Pilot Study. Ear Hear 2023; 44:61-76. PMID: 35943235. DOI: 10.1097/aud.0000000000001256.
Abstract
OBJECTIVE The aim of this study was to evaluate the feasibility of a virtual reality-based spatial hearing training protocol in bilateral cochlear implant (CI) users and to provide pilot data on the impact of this training on different qualities of hearing. DESIGN Twelve bilateral CI adults aged between 19 and 69 followed an intensive 10-week rehabilitation program comprising eight virtual reality training sessions (two per week) interspersed with several evaluation sessions (2 weeks before training started, after four and eight training sessions, and 1 month after the end of training). During each 45-minute training session, participants localized a sound source whose position varied in azimuth and/or in elevation. At the start of each trial, CI users received no information about sound location, but after each response, feedback was given to enable error correction. Participants were divided into two groups: a multisensory feedback group (audiovisual spatial cue) and a unisensory group (visual spatial cue) that received feedback only in a wholly intact sensory modality. Training benefits were measured at each evaluation point using three tests: 3D sound localization in virtual reality, the French Matrix test, and the Speech, Spatial and other Qualities of Hearing questionnaire. RESULTS The training was well accepted and all participants attended the whole rehabilitation program. Four training sessions spread across 2 weeks were insufficient to induce significant performance changes, whereas performance on all three tests improved after eight training sessions. Front-back confusions decreased from 32% to 14.1% (p = 0.017); the speech recognition threshold improved from 1.5 dB to -0.7 dB signal-to-noise ratio (p = 0.029), and eight CI users achieved a negative signal-to-noise ratio. One month after the end of structured training, these performance improvements were still present, and quality of life was significantly improved for both self-reports of sound localization (from 5.3 to 6.7, p = 0.015) and speech understanding (from 5.2 to 5.9, p = 0.048). CONCLUSIONS This pilot study shows the feasibility and potential clinical relevance of this type of intervention involving an immersive sensory environment and could pave the way for more systematic rehabilitation programs after cochlear implantation.
8.
Abstract
OBJECTIVES We assessed whether spatial hearing training improves sound localization in bilateral cochlear implant (BCI) users and whether its benefits can generalize to untrained sound localization tasks. DESIGN In 20 BCI users, we assessed the effects of two training procedures (spatial versus nonspatial control training) on two different tasks performed before and after training (head-pointing to sound and audiovisual attention orienting). In the spatial training, participants identified sound position by reaching toward the sound sources with their hand. In the nonspatial training, comparable reaching movements served to identify sound amplitude modulations. A crossover randomized design allowed comparison of training procedures within the same participants. Spontaneous head movements while listening to the sounds were allowed and tracked to correlate them with localization performance. RESULTS During spatial training, BCI users reduced their sound localization errors in azimuth and adapted their spontaneous head movements as a function of sound eccentricity. These effects generalized to the head-pointing sound localization task, as revealed by a greater reduction of sound localization error in azimuth and a more accurate first head-orienting response, as compared to the control nonspatial training. BCI users benefited from auditory spatial cues for orienting visual attention, but the spatial training did not enhance this multisensory attention ability. CONCLUSIONS Sound localization in BCI users improves with spatial reaching-to-sound training, with benefits to a nontrained sound localization task. These findings pave the way for novel rehabilitation procedures in clinical contexts.
9. Gessa E, Giovanelli E, Spinella D, Verdelet G, Farnè A, Frau GN, Pavani F, Valzolgher C. Spontaneous head-movements improve sound localization in aging adults with hearing loss. Front Hum Neurosci 2022; 16:1026056. PMID: 36310849. PMCID: PMC9609159. DOI: 10.3389/fnhum.2022.1026056.
Abstract
Moving the head while a sound is playing improves its localization in human listeners, in children and adults, with or without hearing problems. It remains to be ascertained if this benefit can also extend to aging adults with hearing-loss, a population in which spatial hearing difficulties are often documented and intervention solutions are scant. Here we examined performance of elderly adults (61-82 years old) with symmetrical or asymmetrical age-related hearing-loss, while they localized sounds with their head fixed or free to move. Using motion-tracking in combination with free-field sound delivery in visual virtual reality, we tested participants in two auditory spatial tasks: front-back discrimination and 3D sound localization in front space. Front-back discrimination was easier for participants with symmetrical compared to asymmetrical hearing-loss, yet both groups reduced their front-back errors when head-movements were allowed. In 3D sound localization, free head-movements reduced errors in the horizontal dimension and in a composite measure that computed errors in 3D space. Errors in 3D space improved for participants with asymmetrical hearing-impairment when the head was free to move. These preliminary findings extend to aging adults with hearing-loss the literature on the advantage of head-movements on sound localization, and suggest that the disparity of auditory cues at the two ears can modulate this benefit. These results point to the possibility of taking advantage of self-regulation strategies and active behavior when promoting spatial hearing skills.
Affiliations
- Elena Gessa
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy
- Elena Giovanelli
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy
- Grégoire Verdelet
- Integrative, Multisensory, Perception, Action and Cognition Team-IMPACT, Centre de Recherche en Neuroscience de Lyon, University Lyon 1, Lyon, France
- Neuro-immersion, Centre de Recherche en Neuroscience de Lyon, Lyon, France
- Alessandro Farnè
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy
- Integrative, Multisensory, Perception, Action and Cognition Team-IMPACT, Centre de Recherche en Neuroscience de Lyon, University Lyon 1, Lyon, France
- Neuro-immersion, Centre de Recherche en Neuroscience de Lyon, Lyon, France
- Francesco Pavani
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy
- Integrative, Multisensory, Perception, Action and Cognition Team-IMPACT, Centre de Recherche en Neuroscience de Lyon, University Lyon 1, Lyon, France
- Chiara Valzolgher
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy
- Integrative, Multisensory, Perception, Action and Cognition Team-IMPACT, Centre de Recherche en Neuroscience de Lyon, University Lyon 1, Lyon, France
10. Gaveau V, Coudert A, Salemme R, Koun E, Desoche C, Truy E, Farnè A, Pavani F. Benefits of active listening during 3D sound localization. Exp Brain Res 2022; 240:2817-2833. PMID: 36071210. PMCID: PMC9587935. DOI: 10.1007/s00221-022-06456-x.
Abstract
In everyday life, sound localization entails more than just the extraction and processing of auditory cues. When determining sound position in three dimensions, the brain also considers the available visual information (e.g., visual cues to sound position) and resolves perceptual ambiguities through active listening behavior (e.g., spontaneous head movements while listening). Here, we examined to what extent spontaneous head movements improve sound localization in 3D (azimuth, elevation, and depth) by comparing static vs. active listening postures. To this aim, we developed a novel approach to sound localization based on sounds delivered in the environment, brought into alignment thanks to a VR system. Our system proved effective for delivering sounds at predetermined and repeatable positions in 3D space, without imposing a physically constrained posture, and with minimal training. In addition, it allowed measuring participant behavior (hand, head, and eye position) in real time. We report that active listening improved 3D sound localization, primarily by improving the accuracy and reducing the variability of responses in azimuth and elevation. The more participants made spontaneous head movements, the better their 3D sound localization performance. Thus, we provide proof of concept of a novel approach to the study of spatial hearing, with potential for clinical and industrial applications.
Affiliations
- V Gaveau
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, 16 Av. Doyen Lépine, BRON cedex, 69500, Lyon, France
- University of Lyon 1, Lyon, France
- A Coudert
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, 16 Av. Doyen Lépine, BRON cedex, 69500, Lyon, France
- University of Lyon 1, Lyon, France
- ENT Departments, Hôpital Femme-Mère-Enfant and Edouard Herriot University Hospitals, Lyon, France
- R Salemme
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, 16 Av. Doyen Lépine, BRON cedex, 69500, Lyon, France
- University of Lyon 1, Lyon, France
- Neuro-immersion, Lyon, France
- E Koun
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, 16 Av. Doyen Lépine, BRON cedex, 69500, Lyon, France
- University of Lyon 1, Lyon, France
- C Desoche
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, 16 Av. Doyen Lépine, BRON cedex, 69500, Lyon, France
- University of Lyon 1, Lyon, France
- Neuro-immersion, Lyon, France
- E Truy
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, 16 Av. Doyen Lépine, BRON cedex, 69500, Lyon, France
- University of Lyon 1, Lyon, France
- ENT Departments, Hôpital Femme-Mère-Enfant and Edouard Herriot University Hospitals, Lyon, France
- A Farnè
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, 16 Av. Doyen Lépine, BRON cedex, 69500, Lyon, France
- University of Lyon 1, Lyon, France
- Neuro-immersion, Lyon, France
- F Pavani
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, 16 Av. Doyen Lépine, BRON cedex, 69500, Lyon, France
- University of Lyon 1, Lyon, France
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy
11. Valzolgher C, Todeschini M, Verdelet G, Gatel J, Salemme R, Gaveau V, Truy E, Farnè A, Pavani F. Adapting to altered auditory cues: Generalization from manual reaching to head pointing. PLoS One 2022; 17:e0263509. PMID: 35421095. PMCID: PMC9009652. DOI: 10.1371/journal.pone.0263509.
Abstract
Localising sounds means having the ability to process auditory cues deriving from the interplay among sound waves, the head and the ears. When auditory cues change because of temporary or permanent hearing loss, sound localization becomes difficult and uncertain. The brain can adapt to altered auditory cues throughout life, and multisensory training can promote the relearning of spatial hearing skills. Here, we study the training potential of sound-oriented motor behaviour to test whether a training based on manual actions toward sounds can produce learning effects that generalize to different auditory spatial tasks. We assessed spatial hearing relearning in normal-hearing adults with a plugged ear by using visual virtual reality and body motion tracking. Participants performed two auditory tasks that entail explicit and implicit processing of sound position (head-pointing sound localization and audio-visual attention cueing, respectively), before and after receiving a spatial training session in which they identified sound position by reaching to auditory sources nearby. Using a crossover design, the effects of this spatial training were compared to a control condition involving the same physical stimuli but different task demands (i.e., a non-spatial discrimination of amplitude modulations in the sound). According to our findings, spatial hearing in one-ear-plugged participants improved more after the reaching-to-sounds training than after the control condition. Training by reaching also modified head-movement behaviour during listening. Crucially, the improvements observed during training generalized to a different sound localization task, possibly as a consequence of newly acquired head-movement strategies.
Affiliation(s)
- Chiara Valzolgher
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- Center for Mind/Brain Sciences—CIMeC, University of Trento, Trento, Italy
- Michela Todeschini
- Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Trento, Italy
- Gregoire Verdelet
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- Neuroimmersion, Lyon Neuroscience Research Center, Lyon, France
- Romeo Salemme
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- Neuroimmersion, Lyon Neuroscience Research Center, Lyon, France
- Valerie Gaveau
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- University of Lyon 1, Villeurbanne, France
- Eric Truy
- Hospices Civils de Lyon, Lyon, France
- Alessandro Farnè
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- Center for Mind/Brain Sciences—CIMeC, University of Trento, Trento, Italy
- Neuroimmersion, Lyon Neuroscience Research Center, Lyon, France
- Francesco Pavani
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- Center for Mind/Brain Sciences—CIMeC, University of Trento, Trento, Italy
12
Pastore MT, Natale SJ, Clayton C, Dorman MF, Yost WA, Zhou Y. Effects of Head Movements on Sound-Source Localization in Single-Sided Deaf Patients With Their Cochlear Implant On Versus Off. Ear Hear 2021; 41:1660-1674. [PMID: 33136640 PMCID: PMC7772279 DOI: 10.1097/aud.0000000000000882]
Abstract
OBJECTIVES We investigated the ability of single-sided deaf listeners implanted with a cochlear implant (SSD-CI) to (1) determine the front-back and left-right location of sound sources presented from loudspeakers surrounding the listener and (2) use small head rotations to further improve their localization performance. The resulting behavioral data were used for further analyses investigating the value of so-called "monaural" spectral shape cues for front-back sound source localization. DESIGN Eight SSD-CI patients were tested with their cochlear implant (CI) on and off. Eight normal-hearing (NH) listeners, with one ear plugged during the experiment, and another group of eight NH listeners, with neither ear plugged, were also tested. Gaussian noises of 3-sec duration were band-pass filtered to 2-8 kHz and presented from 1 of 6 loudspeakers surrounding the listener, spaced 60° apart. Perceived sound source localization was tested under conditions where the patients faced forward with the head stationary, and under conditions where they rotated their heads within a range specified in the full-text article. RESULTS (1) Under stationary listener conditions, unilaterally-plugged NH listeners and SSD-CI listeners (with their CIs both on and off) were nearly at chance in determining the front-back location of high-frequency sound sources. (2) Allowing rotational head movements improved performance in both the front-back and left-right dimensions for all listeners. (3) For SSD-CI patients with their CI turned off, head rotations substantially reduced front-back reversals, and the combination of turning on the CI with head rotations led to near-perfect resolution of front-back sound source location. (4) Turning on the CI also improved left-right localization performance. (5) As expected, NH listeners with both ears unplugged localized to the correct front-back and left-right hemifields both with and without head movements.
CONCLUSIONS Although SSD-CI listeners demonstrate a relatively poor ability to distinguish the front-back location of sound sources when their head is stationary, their performance is substantially improved with head movements. Most of this improvement occurs when the CI is off, suggesting that the NH ear does most of the "work" in this regard, though some additional gain is introduced with turning the CI on. During head turns, these listeners appear to primarily rely on comparing changes in head position to changes in monaural level cues produced by the direction-dependent attenuation of high-frequency sounds that result from acoustic head shadowing. In this way, SSD-CI listeners overcome limitations to the reliability of monaural spectral and level cues under stationary conditions. SSD-CI listeners may have learned, through chronic monaural experience before CI implantation, or with the relatively impoverished spatial cues provided by their CI-implanted ear, to exploit the monaural level cue. Unilaterally-plugged NH listeners were also able to use this cue during the experiment to realize approximately the same magnitude of benefit from head turns just minutes after plugging, though their performance was less accurate than that of the SSD-CI listeners, both with and without their CI turned on.
Affiliation(s)
- M Torben Pastore
- College of Health Solutions, Arizona State University, Tempe, Arizona, USA
13
Single-Sided Deafness Cochlear Implant Sound-Localization Behavior With Multiple Concurrent Sources. Ear Hear 2021; 43:206-219. [PMID: 34320529 DOI: 10.1097/aud.0000000000001089]
Abstract
OBJECTIVES For listeners with one deaf ear and the other ear with normal/near-normal hearing (single-sided deafness [SSD]) or moderate hearing loss (asymmetric hearing loss), cochlear implants (CIs) can improve speech understanding in noise and sound-source localization. Previous SSD-CI localization studies have used a single source with artificial sounds such as clicks or random noise. While this approach provides insights regarding the auditory cues that facilitate localization, it does not capture the complex nature of localization behavior in real-world environments. This study examined SSD-CI sound localization in a complex scenario where a target sound was added to or removed from a mixture of other environmental sounds, while tracking head movements to assess behavioral strategy. DESIGN Eleven CI users with normal hearing or moderate hearing loss in the contralateral ear completed a sound-localization task in monaural (CI-OFF) and bilateral (CI-ON) configurations. Ten of the listeners were also tested before CI activation to examine longitudinal effects. Two-second environmental sound samples, looped to create 4- or 10-sec trials, were presented in a spherical array of 26 loudspeakers encompassing ±144° azimuth and ±30° elevation at a 1-m radius. The target sound was presented alone (localize task) or concurrently with one or three additional sources presented to different loudspeakers, with the target cued by being added to (Add) or removed from (Rem) the mixture after 6 sec. A head-mounted tracker recorded movements in six dimensions (three for location, three for orientation). Mixed-model regression was used to examine target sound-identification accuracy, localization accuracy, and head movement. Angular and translational head movements were analyzed both before and after the target was switched on or off. 
RESULTS Listeners showed improved localization accuracy in the CI-ON configuration, but there was no interaction with test condition and no effect of the CI on sound-identification performance. Although high-frequency hearing loss in the unimplanted ear reduced localization accuracy and sound-identification performance, the magnitude of the CI localization benefit was independent of hearing loss. The CI reduced the magnitude of gross head movements used during the task in the azimuthal rotation and translational dimensions, both while the target sound was present (in all conditions) and during the anticipatory period before the target was switched on (in the Add condition). There was no change in pre- versus post-activation CI-OFF performance. CONCLUSIONS These results extend previous findings, demonstrating a CI localization benefit in a complex listening scenario that includes environmental and behavioral elements encountered in everyday listening conditions. The CI also reduced the magnitude of gross head movements used to perform the task. This was the case even before the target sound was added to the mixture. This suggests that a CI can reduce the need for physical movement both in anticipation of an upcoming sound event and while actively localizing the target sound. Overall, these results show that for SSD listeners, a CI can improve localization in a complex sound environment and reduce the amount of physical movement used.
14
Pastore MT, Pulling KR, Chen C, Yost WA, Dorman MF. Effects of Bilateral Automatic Gain Control Synchronization in Cochlear Implants With and Without Head Movements: Sound Source Localization in the Frontal Hemifield. J Speech Lang Hear Res 2021; 64:2811-2824. [PMID: 34100627 PMCID: PMC8632503 DOI: 10.1044/2021_jslhr-20-00493]
Abstract
Purpose For bilaterally implanted patients, the automatic gain control (AGC) in both left and right cochlear implant (CI) processors is usually neither linked nor synchronized. At high AGC compression ratios, this lack of coordination between the two processors can distort interaural level differences, the only useful interaural difference cue available to CI patients. This study assessed the improvement, if any, in the utility of interaural level differences for sound source localization in the frontal hemifield when AGCs were synchronized versus independent and when listeners were stationary versus allowed to move their heads. Method Sound source identification of broadband noise stimuli was tested for seven bilateral CI patients using 13 loudspeakers in the frontal hemifield, under conditions where AGCs were linked and unlinked. For half the conditions, patients remained stationary; in the other half, they were encouraged to rotate or reorient their heads within a range of approximately ± 30° during sound presentation. Results In general, those listeners who already localized reasonably well with independent AGCs gained the least from AGC synchronization, perhaps because there was less room for improvement. Those listeners who performed worst with independent AGCs gained the most from synchronization. All listeners performed as well or better with synchronization than without; however, intersubject variability was high. Head movements had little impact on the effectiveness of synchronization of AGCs. Conclusion Synchronization of AGCs offers one promising strategy for improving localization performance in the frontal hemifield for bilaterally implanted CI patients. Supplemental Material https://doi.org/10.23641/asha.14681412.
15
Coudert A, Gaveau V, Gatel J, Verdelet G, Salemme R, Farnè A, Pavani F, Truy E. Spatial Hearing Difficulties in Reaching Space in Bilateral Cochlear Implant Children Improve With Head Movements. Ear Hear 2021; 43:192-205. [PMID: 34225320 PMCID: PMC8694251 DOI: 10.1097/aud.0000000000001090]
Abstract
The aim of this study was to assess three-dimensional (3D) spatial hearing abilities in reaching space of children and adolescents fitted with bilateral cochlear implants (BCI). The study also investigated the impact of spontaneous head movements on sound localization abilities.
Affiliation(s)
- Aurélie Coudert
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, Lyon, France
- Department of Pediatric Otolaryngology-Head & Neck Surgery, Femme Mere Enfant Hospital, Hospices Civils de Lyon, Lyon, France
- Department of Otolaryngology-Head & Neck Surgery, Edouard Herriot Hospital, Hospices Civils de Lyon, Lyon, France
- University of Lyon 1, Lyon, France
- Hospices Civils de Lyon, Neuro-immersion Platform, Lyon, France
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Rovereto, Italy
- Department of Psychology and Cognitive Sciences, University of Trento, Rovereto, Italy
16
|
Nishimura T, Hosoi H, Saito O, Shimokura R, Yamanaka T, Kitahara T. Sound localisation ability using cartilage conduction hearing aids in bilateral aural atresia. Int J Audiol 2020; 59:891-896. [DOI: 10.1080/14992027.2020.1802671]
Affiliation(s)
- Tadashi Nishimura
- Department of Otolaryngology-Head and Neck Surgery, Nara Medical University, Kashihara, Nara, Japan
- Hiroshi Hosoi
- President’s Office, Nara Medical University, Kashihara, Nara, Japan
- Osamu Saito
- Department of Otolaryngology-Head and Neck Surgery, Nara Medical University, Kashihara, Nara, Japan
- Ryota Shimokura
- Graduate School of Engineering Science, Osaka University, Toyonaka, Osaka, Japan
- Toshiaki Yamanaka
- Department of Otolaryngology-Head and Neck Surgery, Nara Medical University, Kashihara, Nara, Japan
- Tadashi Kitahara
- Department of Otolaryngology-Head and Neck Surgery, Nara Medical University, Kashihara, Nara, Japan
17
Pinna-Imitating Microphone Directionality Improves Sound Localization and Discrimination in Bilateral Cochlear Implant Users. Ear Hear 2020; 42:214-222. [PMID: 32701730 PMCID: PMC7757747 DOI: 10.1097/aud.0000000000000912]
Abstract
OBJECTIVES To compare the sound-source localization, discrimination, and tracking performance of bilateral cochlear implant users with omnidirectional (OMNI) and pinna-imitating (PI) microphone directionality modes. DESIGN Twelve experienced bilateral cochlear implant users participated in the study. Their audio processors were fitted with two different programs featuring either the OMNI or PI mode. Each subject performed static and dynamic sound field spatial hearing tests in the horizontal plane. The static tests consisted of an absolute sound localization test and a minimum audible angle test, which was measured at eight azimuth directions. Dynamic sound tracking ability was evaluated by the subject correctly indicating the direction of a moving stimulus along two circular paths around the subject. RESULTS PI mode led to statistically significant sound localization and discrimination improvements. For static sound localization, the greatest benefit was a reduction in the number of front-back confusions. The front-back confusion rate was reduced from 47% with OMNI mode to 35% with PI mode (p = 0.03). The ability to discriminate sound sources straight to the sides (90° and 270° angle) was only possible with PI mode. The averaged minimum audible angle value for the 90° and 270° angle positions decreased from a 75.5° to a 37.7° angle when PI mode was used (p < 0.001). Furthermore, a non-significant trend towards an improvement in the ability to track moving sound sources was observed for both trajectories tested (p = 0.34 and p = 0.27). CONCLUSIONS Our results demonstrate that PI mode can lead to improved spatial hearing performance in bilateral cochlear implant users, mainly as a consequence of improved front-back discrimination with PI mode.
18
Bermejo F, Di Paolo EA, Gilberto LG, Lunati V, Barrios MV. Learning to find spatially reversed sounds. Sci Rep 2020; 10:4562. [PMID: 32165690 PMCID: PMC7067813 DOI: 10.1038/s41598-020-61332-4]
Abstract
Adaptation to systematic visual distortions is well-documented but there is little evidence of similar adaptation to radical changes in audition. We use a pseudophone to transpose the sound streams arriving at the left and right ears, evaluating the perceptual effects it provokes and the possibility of learning to locate sounds in the reversed condition. Blindfolded participants remain seated at the center of a semicircular arrangement of 7 speakers and are asked to orient their head towards a sound source. We postulate that a key factor underlying adaptation is the self-generated activity that allows participants to learn new sensorimotor schemes. We investigate passive listening conditions (very short duration stimulus not permitting active exploration) and dynamic conditions (continuous stimulus allowing participants time to freely move their heads or remain still). We analyze head movement kinematics, localization errors, and qualitative reports. Results show movement-induced perceptual disruptions in the dynamic condition with static sound sources displaying apparent movement. This effect is reduced after a short training period and participants learn to find sounds in a left-right reversed field for all but the extreme lateral positions where motor patterns are more restricted. Strategies become less exploratory and more direct with training. Results support the hypothesis that self-generated movements underlie adaptation to radical sensorimotor distortions.
Affiliation(s)
- Fernando Bermejo
- Centro de Investigación y Transferencia en Acústica, Universidad Tecnológica Nacional - Facultad Regional Córdoba, CONICET, CP 5016, Córdoba, Argentina
- Facultad de Psicología, Universidad Nacional de Córdoba, CP 5016, Córdoba, Argentina
- Ezequiel A Di Paolo
- Ikerbasque, Basque Foundation for Science, Bilbao, Spain
- IAS-Research Center for Life, Mind, and Society, University of the Basque Country, San Sebastián, Spain
- Centre for Computational Neuroscience and Robotics, University of Sussex, Brighton, UK
- L Guillermo Gilberto
- Centro de Investigación y Transferencia en Acústica, Universidad Tecnológica Nacional - Facultad Regional Córdoba, CONICET, CP 5016, Córdoba, Argentina
- Consejo Nacional de Investigaciones Científicas y Tecnológicas (CONICET), Ciudad Autónoma de Buenos Aires, Argentina
- Valentín Lunati
- Centro de Investigación y Transferencia en Acústica, Universidad Tecnológica Nacional - Facultad Regional Córdoba, CONICET, CP 5016, Córdoba, Argentina
- Consejo Nacional de Investigaciones Científicas y Tecnológicas (CONICET), Ciudad Autónoma de Buenos Aires, Argentina
- M Virginia Barrios
- Centro de Investigación y Transferencia en Acústica, Universidad Tecnológica Nacional - Facultad Regional Córdoba, CONICET, CP 5016, Córdoba, Argentina
- Facultad de Psicología, Universidad Nacional de Córdoba, CP 5016, Córdoba, Argentina
19
Yost WA, Pastore MT, Dorman MF. Sound source localization is a multisystem process. Acoust Sci Technol 2020; 41:113-120. [PMID: 34305431 PMCID: PMC8297655 DOI: 10.1250/ast.41.113]
Abstract
This paper reviews data published or presented by the authors from two populations of subjects (normal-hearing listeners and patients fit with cochlear implants, CIs) in research on sound source localization when listeners move. The overall theme of the review is that sound source localization requires an integration of auditory-spatial and head-position cues and is, therefore, a multisystem process. Research with normal-hearing listeners includes work related to the Wallach Azimuth Illusion and additional aspects of sound source localization perception when listeners and sound sources rotate. Research with CI patients involves investigations of sound source localization performance by patients fit with a single CI, bilateral CIs, a CI and a hearing aid (bimodal patients), and single-sided deaf patients with one normally functioning ear and the other ear fit with a CI. Past research involving stationary CI patients and more recent data on CI patients' use of head rotation to localize sound sources are summarized.
Affiliation(s)
- William A. Yost
- Spatial Hearing Laboratory, Speech and Hearing Science, Arizona State University, PO Box 870102, Tempe, Arizona, 85287, USA
- M. Torben Pastore
- Spatial Hearing Laboratory, Speech and Hearing Science, Arizona State University, PO Box 870102, Tempe, Arizona, 85287, USA
- Michael F. Dorman
- Cochlear Implant Laboratory, Speech and Hearing Science, Arizona State University, PO Box 870102, Tempe, Arizona, 85287, USA
20
Archer-Boyd AW, Carlyon RP. Simulations of the effect of unlinked cochlear-implant automatic gain control and head movement on interaural level differences. J Acoust Soc Am 2019; 145:1389. [PMID: 31067937 PMCID: PMC6711771 DOI: 10.1121/1.5093623]
Abstract
This study simulated the effect of unlinked automatic gain control (AGC) and head movement on the output levels and resulting interaural level differences (ILDs) produced by bilateral cochlear implant (CI) processors. The angular extent and velocity of the head movements were varied in order to observe the interaction between unlinked AGC and head movement. Static, broadband input ILDs were greatly reduced by the high-ratio, slow-time-constant AGC used. The size of head-movement-induced dynamic ILDs depended more on the velocity and angular extent of the head movement than on the angular position of the source. The profiles of the dynamic, broadband output ILDs were very different from the dynamic, broadband input ILD profiles. Short-duration, high-velocity head movements resulted in dynamic output ILDs that continued to change after head movement had stopped. Analysis of narrowband, single-channel ILDs showed that static output ILDs were reduced across all frequencies, producing low-frequency ILDs of the opposite sign to the high-frequency ILDs. During head movements, low- and high-frequency ILDs also changed with opposite sign. The results showed that the ILDs presented to bilateral CI listeners during head turns were highly distorted by the interaction of the bilateral, unlinked AGC and the level changes induced by head movement.
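The mechanism this abstract describes, slow high-ratio compression applied independently at each ear, can be illustrated with a toy model. The sketch below is not the authors' simulation: the threshold, compression ratio, and smoothing constant are illustrative assumptions, and levels are treated as broadband dB values rather than processed audio.

```python
# Toy model of per-ear vs. linked AGC acting on broadband levels (dB).
# All parameter values are illustrative assumptions, not from the paper.

THRESHOLD_DB = 50.0   # assumed AGC knee
RATIO = 3.0           # assumed compression ratio
ALPHA = 0.01          # assumed per-sample one-pole smoothing coefficient

def agc_gains(levels_db):
    """Slowly smoothed AGC gain (dB) for one ear's level trajectory."""
    gains, smoothed = [], levels_db[0]
    for lvl in levels_db:
        smoothed += ALPHA * (lvl - smoothed)       # slow level estimate
        over = max(0.0, smoothed - THRESHOLD_DB)
        gains.append(-over * (1.0 - 1.0 / RATIO))  # attenuate above the knee
    return gains

def output_ild(left_db, right_db, linked=False):
    """Output ILD (left minus right, dB) after per-ear or linked AGC."""
    if linked:
        # linked AGCs share one gain, driven here by the louder ear
        shared = agc_gains([max(l, r) for l, r in zip(left_db, right_db)])
        gl = gr = shared
    else:
        gl, gr = agc_gains(left_db), agc_gains(right_db)
    return [(l + a) - (r + b) for l, r, a, b in zip(left_db, right_db, gl, gr)]

# Static source off to the left: +10 dB input ILD at 70 dB input level
n = 2000
left, right = [70.0] * n, [60.0] * n
print(output_ild(left, right, linked=False)[-1])  # compressed: ~3.3 dB
print(output_ild(left, right, linked=True)[-1])   # preserved: 10 dB
```

With these assumed settings, independent compression squeezes a static +10 dB input ILD to roughly a third of its size, while linking the gains preserves it; feeding in a time-varying level trajectory (e.g., a simulated head turn) would additionally show output ILDs lagging the input because of the slow gain smoothing.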
Affiliation(s)
- Alan W Archer-Boyd
- Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge, CB2 7EF, United Kingdom
- Robert P Carlyon
- Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge, CB2 7EF, United Kingdom