1
Schleich P, Wirtz C, Schatzer R, Nopp P. Similar performance in sound localisation with unsynchronised and synchronised automatic gain controls in bilateral cochlear implant recipients. Int J Audiol 2025; 64:411-417. PMID: 39075948. DOI: 10.1080/14992027.2024.2383700.
Abstract
OBJECTIVE: One proposed method to improve sound localisation for bilateral cochlear implant (BiCI) users is to synchronise the automatic gain control (AGC) of both audio processors. In this study we tested whether AGC synchronisation in a dual-loop front-end processing scheme with a 3:1 compression ratio improves sound localisation acuity. DESIGN: Source identification in the frontal hemifield was tested in an anechoic chamber as a function of (roving) presentation level. Three different methods of AGC synchronisation were compared with the standard unsynchronised approach. Both root mean square error (RMSE) and signed bias were calculated to evaluate sound localisation in the horizontal plane. STUDY SAMPLE: Six BiCI users. RESULTS: None of the three AGC synchronisation methods yielded significant improvements in localisation error or bias, either across presentation levels or at individual presentation levels. For synchronised AGC, the pooled mean (standard deviation) localisation error across the three synchronisation methods was 24.7 (5.8) degrees RMSE; for unsynchronised AGC it was 27.4 (7.5) degrees. The localisation bias was 5.1 (5.5) degrees for synchronised AGC and 5.0 (3.8) degrees for unsynchronised AGC. CONCLUSIONS: These findings do not support the hypothesis that the tested AGC synchronisation configurations improve localisation acuity in bilateral users of MED-EL cochlear implants.
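For readers who want to reproduce the two outcome measures reported here, RMSE and signed bias are straightforward to compute from paired target and response azimuths. The sketch below uses hypothetical angles and a hypothetical helper name (localisation_metrics); it is not the authors' analysis code.

```python
import numpy as np

def localisation_metrics(target_az_deg, response_az_deg):
    """Return RMSE and signed bias (both in degrees) of azimuth responses.

    target_az_deg, response_az_deg: per-trial loudspeaker and response
    azimuths, matched by trial. Signed bias is the mean of (response - target).
    """
    target = np.asarray(target_az_deg, dtype=float)
    response = np.asarray(response_az_deg, dtype=float)
    error = response - target
    rmse = np.sqrt(np.mean(error ** 2))
    bias = np.mean(error)
    return rmse, bias

# Hypothetical responses for five loudspeakers in the frontal hemifield
targets = [-60, -30, 0, 30, 60]
responses = [-45, -30, 10, 45, 50]
rmse, bias = localisation_metrics(targets, responses)
print(f"RMSE = {rmse:.1f} deg, bias = {bias:+.1f} deg")
```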
Affiliation(s)
- Peter Nopp
- MED-EL Medical Electronics, Innsbruck, Austria
2
Richardson BN, Kainerstorfer JM, Shinn-Cunningham BG, Brown CA. Magnified interaural level differences enhance binaural unmasking in bilateral cochlear implant users. J Acoust Soc Am 2025; 157:1045-1056. PMID: 39932277. PMCID: PMC11817532. DOI: 10.1121/10.0034869.
Abstract
Bilateral cochlear implant (BiCI) usage makes binaural benefits a possibility for implant users. Yet for BiCI users, limited access to interaural time difference (ITD) cues and reduced saliency of interaural level difference (ILD) cues restrict the perceptual benefits of spatially separating a target from masker sounds. The present study explored whether magnifying ILD cues improves intelligibility of masked speech for BiCI listeners in a "symmetrical-masker" configuration, which ensures that neither ear benefits from a long-term positive target-to-masker ratio (TMR) due to naturally occurring ILD cues. ILD magnification estimates moment-to-moment ITDs in octave-wide frequency bands and applies corresponding ILDs to the target-masker mixtures reaching the two ears in each time frame and frequency band. ILD magnification significantly improved intelligibility in two experiments: one with normal-hearing (NH) listeners using vocoded stimuli and one with BiCI users. BiCI listeners showed no benefit of spatial separation between target and maskers with natural ILDs, even for the largest target-masker separation. Because ILD magnification relies on and manipulates only the mixed signals at each ear, the strategy never alters the monaural TMR in either ear at any time. Thus, the observed improvements to masked speech intelligibility come from binaural effects, likely from increased perceptual separation of the competing sources.
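The ILD-magnification idea described above can be sketched roughly as follows: estimate a short-time ITD in each octave-wide band from the two ear mixtures (here via cross-correlation), convert it to an ILD through some mapping rule, and apply complementary gains to the two ears. The band centres, frame length, ITD-to-ILD slope (db_per_ms), and helper names below are illustrative assumptions, not the parameters or code of the published strategy.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def octave_bands(centre_freqs, fs):
    """Octave-wide band-pass filters (second-order sections) around each centre."""
    return [butter(4, [fc / np.sqrt(2), fc * np.sqrt(2)],
                   btype="bandpass", fs=fs, output="sos") for fc in centre_freqs]

def frame_itd(left, right, fs, max_itd_s=0.001):
    """Cross-correlation ITD estimate for one frame; positive means left leads."""
    max_lag = int(max_itd_s * fs)
    lags = list(range(-max_lag, max_lag + 1))
    xcorr = [np.dot(left[max(0, -l):len(left) - max(0, l)],
                    right[max(0, l):len(right) - max(0, -l)]) for l in lags]
    return lags[int(np.argmax(xcorr))] / fs

def magnify_ilds(left, right, fs, centres=(500, 1000, 2000, 4000),
                 frame_len=0.02, db_per_ms=20.0):
    """Illustrative ILD magnification: per band and frame, convert the estimated
    ITD into an ILD (db_per_ms is an assumed mapping) and apply opposite gains
    to the two ear mixtures. Bands are summed to form the output signals."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    n = int(frame_len * fs)
    out_l, out_r = np.zeros(len(left)), np.zeros(len(right))
    for sos in octave_bands(centres, fs):
        band_l, band_r = sosfilt(sos, left), sosfilt(sos, right)
        for start in range(0, len(left) - n + 1, n):
            seg = slice(start, start + n)
            itd = frame_itd(band_l[seg], band_r[seg], fs)
            ild_db = db_per_ms * itd * 1000.0      # assumed ITD-to-ILD rule
            g = 10 ** (ild_db / 40.0)              # split the gain across ears
            out_l[seg] += band_l[seg] * g
            out_r[seg] += band_r[seg] / g
    return out_l, out_r
```

Because each gain scales an ear's whole mixture, the monaural TMR at that ear is unchanged, which matches the point made in the abstract that any benefit must be binaural.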
Affiliation(s)
- Benjamin N Richardson
- Neuroscience Institute, Carnegie Mellon University, 4400 Fifth Avenue, Pittsburgh, Pennsylvania 15213, USA
- Jana M Kainerstorfer
- Neuroscience Institute, Carnegie Mellon University, 4400 Fifth Avenue, Pittsburgh, Pennsylvania 15213, USA
- Department of Biomedical Engineering, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, Pennsylvania 15213, USA
- Barbara G Shinn-Cunningham
- Neuroscience Institute, Carnegie Mellon University, 4400 Fifth Avenue, Pittsburgh, Pennsylvania 15213, USA
- Christopher A Brown
- Department of Communication Science and Disorders, University of Pittsburgh, 4028 Forbes Tower, Pittsburgh, Pennsylvania 15260, USA
3
Richardson BN, Kainerstorfer JM, Shinn-Cunningham BG, Brown CA. Magnified interaural level differences enhance binaural unmasking in bilateral cochlear implant users. bioRxiv [Preprint] 2024:2024.06.03.597254. PMID: 39314381. PMCID: PMC11418960. DOI: 10.1101/2024.06.03.597254.
Abstract
Bilateral cochlear implant (BiCI) usage makes binaural benefits a possibility for implant users. Yet for BiCI users, limited access to interaural time difference (ITD) cues and reduced saliency of interaural level difference (ILD) cues restrict the perceptual benefits of spatially separating a target from masker sounds. The present study explored whether magnifying ILD cues improves intelligibility of masked speech for BiCI listeners in a "symmetrical-masker" configuration, which ensures that neither ear benefits from a long-term positive target-to-masker ratio (TMR) due to naturally occurring ILD cues. ILD magnification estimates moment-to-moment ITDs in octave-wide frequency bands and applies corresponding ILDs to the target-masker mixtures reaching the two ears in each time frame and frequency band. ILD magnification significantly improved intelligibility in two experiments: one with normal-hearing (NH) listeners using vocoded stimuli and one with BiCI users. BiCI listeners showed no benefit of spatial separation between target and maskers with natural ILDs, even for the largest target-masker separation. Because ILD magnification relies on and manipulates only the mixed signals at each ear, the strategy never alters the monaural TMR in either ear at any time. Thus, the observed improvements to masked speech intelligibility come from binaural effects, likely from increased perceptual separation of the competing sources.
Affiliation(s)
- Jana M Kainerstorfer
- Neuroscience Institute, Carnegie Mellon University
- Biomedical Engineering, Carnegie Mellon University
4
Valzolgher C, Bouzaid S, Grenouillet S, Gatel J, Ratenet L, Murenu F, Verdelet G, Salemme R, Gaveau V, Coudert A, Hermann R, Truy E, Farnè A, Pavani F. Training spatial hearing in unilateral cochlear implant users through reaching to sounds in virtual reality. Eur Arch Otorhinolaryngol 2023; 280:3661-3672. PMID: 36905419. PMCID: PMC10313844. DOI: 10.1007/s00405-023-07886-1.
Abstract
BACKGROUND AND PURPOSE: Use of a unilateral cochlear implant (UCI) is associated with limited spatial hearing skills. Evidence that these abilities can be trained in UCI users remains limited. In this study, we assessed whether a Spatial training based on hand-reaching to sounds performed in virtual reality improves spatial hearing abilities in UCI users. METHODS: Using a crossover randomized clinical trial, we compared the effects of a Spatial training protocol with those of a Non-Spatial control training. We tested 17 UCI users in a head-pointing to sound task and in an audio-visual attention orienting task, before and after each training. The study is registered at clinicaltrials.gov (NCT04183348). RESULTS: During the Spatial VR training, sound localization errors in azimuth decreased. Moreover, when comparing head-pointing to sounds before vs. after training, localization errors decreased more after the Spatial training than after the control training. No training effects emerged in the audio-visual attention orienting task. CONCLUSIONS: Our results showed that sound localization in UCI users improves during Spatial training, with benefits that also extend to a non-trained sound localization task (generalization). These findings have potential for novel rehabilitation procedures in clinical contexts.
Affiliation(s)
- Chiara Valzolgher
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini, 31 Rovereto, Trento, Italy.
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France.
- Sabrina Bouzaid
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Solene Grenouillet
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Francesca Murenu
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Grégoire Verdelet
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Neuroimmersion, Lyon, France
- Romeo Salemme
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Neuroimmersion, Lyon, France
- Valérie Gaveau
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Eric Truy
- Hospices Civils de Lyon, Lyon, France
- Alessandro Farnè
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Neuroimmersion, Lyon, France
- Francesco Pavani
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini, 31 Rovereto, Trento, Italy
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Centro Interuniversitario di Ricerca "Cognizione, Linguaggio e Sordità" (CIRCLeS), Trento, Italy
5
Abstract
OBJECTIVES: We assessed whether spatial hearing training improves sound localization in bilateral cochlear implant (BCI) users and whether its benefits can generalize to untrained sound localization tasks. DESIGN: In 20 BCI users, we assessed the effects of two training procedures (spatial versus nonspatial control training) on two different tasks performed before and after training (head-pointing to sound and audiovisual attention orienting). In the spatial training, participants identified sound position by reaching toward the sound sources with their hand. In the nonspatial training, comparable reaching movements served to identify sound amplitude modulations. A crossover randomized design allowed comparison of training procedures within the same participants. Spontaneous head movements while listening to the sounds were allowed and tracked to correlate them with localization performance. RESULTS: During spatial training, BCI users reduced their sound localization errors in azimuth and adapted their spontaneous head movements as a function of sound eccentricity. These effects generalized to the head-pointing sound localization task, as revealed by greater reduction of sound localization error in azimuth and more accurate first head-orienting response, as compared to the control nonspatial training. BCI users benefited from auditory spatial cues for orienting visual attention, but the spatial training did not enhance this multisensory attention ability. CONCLUSIONS: Sound localization in BCI users improves with spatial reaching-to-sound training, with benefits to a nontrained sound localization task. These findings pave the way for novel rehabilitation procedures in clinical contexts.
6
Gibbs BE, Bernstein JGW, Brungart DS, Goupell MJ. Effects of better-ear glimpsing, binaural unmasking, and spectral resolution on spatial release from masking in cochlear-implant users. J Acoust Soc Am 2022; 152:1230. PMID: 36050186. PMCID: PMC9420049. DOI: 10.1121/10.0013746.
Abstract
Bilateral cochlear-implant (BICI) listeners obtain less spatial release from masking (SRM; speech-recognition improvement for spatially separated vs co-located conditions) than normal-hearing (NH) listeners, especially for symmetrically placed maskers that produce similar long-term target-to-masker ratios at the two ears. Two experiments examined possible causes of this deficit, including limited better-ear glimpsing (using speech information from the more advantageous ear in each time-frequency unit), limited binaural unmasking (using interaural differences to improve signal-in-noise detection), or limited spectral resolution. Listeners had NH (presented with unprocessed or vocoded stimuli) or BICIs. Experiment 1 compared natural symmetric maskers, idealized monaural better-ear masker (IMBM) stimuli that automatically performed better-ear glimpsing, and hybrid stimuli that added worse-ear information, potentially restoring binaural cues. BICI and NH-vocoded SRM was comparable to NH-unprocessed SRM for idealized stimuli but was 14%-22% lower for symmetric stimuli, suggesting limited better-ear glimpsing ability. Hybrid stimuli improved SRM for NH-unprocessed listeners but degraded SRM for BICI and NH-vocoded listeners, suggesting they experienced across-ear interference instead of binaural unmasking. In experiment 2, increasing the number of vocoder channels did not change NH-vocoded SRM. BICI SRM deficits likely reflect a combination of across-ear interference, limited better-ear glimpsing, and poorer binaural unmasking that stems from cochlear-implant-processing limitations other than reduced spectral resolution.
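One common way to construct an idealised monaural better-ear masker (IMBM) stimulus of the kind described above is sketched below: in each short-time Fourier transform unit, keep whichever ear's mixture has the higher target-to-masker ratio, then resynthesise a single signal. The STFT settings, the helper name imbm, and the assumption that clean target and masker signals are available to compute the per-unit TMR are illustrative; the study's stimulus generation may differ in detail.

```python
import numpy as np
from scipy.signal import stft, istft

def imbm(target_l, target_r, masker_l, masker_r, fs, nperseg=512):
    """Illustrative IMBM construction.

    For every short-time Fourier transform unit, keep the mixture (target plus
    masker) of whichever ear has the higher target-to-masker ratio, then
    resynthesise one signal that performs better-ear glimpsing automatically.
    """
    eps = 1e-12
    _, _, TL = stft(target_l, fs, nperseg=nperseg)
    _, _, TR = stft(target_r, fs, nperseg=nperseg)
    _, _, ML = stft(masker_l, fs, nperseg=nperseg)
    _, _, MR = stft(masker_r, fs, nperseg=nperseg)
    tmr_l = np.abs(TL) ** 2 / (np.abs(ML) ** 2 + eps)    # per-unit TMR, left ear
    tmr_r = np.abs(TR) ** 2 / (np.abs(MR) ** 2 + eps)    # per-unit TMR, right ear
    mix_l, mix_r = TL + ML, TR + MR                      # the two ear mixtures
    better_ear = np.where(tmr_l >= tmr_r, mix_l, mix_r)  # better-ear glimpsing
    _, out = istft(better_ear, fs, nperseg=nperseg)
    return out
```

Because this construction needs the separate target and masker signals, it is an idealised (oracle) benchmark for better-ear glimpsing rather than a wearable processing strategy.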
Affiliation(s)
- Bobby E Gibbs
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Joshua G W Bernstein
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889, USA
- Douglas S Brungart
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889, USA
- Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
7
Moore BCJ. Listening to Music Through Hearing Aids: Potential Lessons for Cochlear Implants. Trends Hear 2022; 26:23312165211072969. PMID: 35179052. PMCID: PMC8859663. DOI: 10.1177/23312165211072969.
Abstract
Some of the problems experienced by users of hearing aids (HAs) when listening to music are relevant to cochlear implants (CIs). One problem is related to the high peak levels (up to 120 dB SPL) that occur in live music. Some HAs and CIs overload at such levels, because of the limited dynamic range of the microphones and analogue-to-digital converters (ADCs), leading to perceived distortion. Potential solutions are to use 24-bit ADCs or to include an adjustable gain between the microphones and the ADCs. A related problem is how to squeeze the wide dynamic range of music into the limited dynamic range of the user, which can be only 6-20 dB for CI users. In HAs, this is usually done via multi-channel amplitude compression (automatic gain control, AGC). In CIs, a single-channel front-end AGC is applied to the broadband input signal or a control signal derived from a running average of the broadband signal level is used to control the mapping of the channel envelope magnitude to an electrical signal. This introduces several problems: (1) an intense narrowband signal (e.g. a strong bass sound) reduces the level for all frequency components, making some parts of the music harder to hear; (2) the AGC introduces cross-modulation effects that can make a steady sound (e.g. sustained strings or a sung note) appear to fluctuate in level. Potential solutions are to use several frequency channels to create slowly varying gain-control signals and to use slow-acting (or dual time-constant) AGC rather than fast-acting AGC.
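The "dual time-constant" AGC mentioned at the end of this abstract can be sketched as a slow gain loop that sets the everyday level plus a fast branch that intervenes only for sudden intense peaks. All thresholds, time constants, the broadband (single-channel) structure, and the function name below are illustrative assumptions, not the behaviour of any specific HA or CI product.

```python
import numpy as np

def dual_time_constant_agc(x, fs, target_db=-20.0, limit_db=-6.0,
                           slow_attack=0.3, slow_release=1.0,
                           fast_attack=0.001, fast_release=0.1):
    """Illustrative dual time-constant broadband AGC.

    A slow envelope follower sets the everyday gain toward target_db, so
    steady music is not pumped by brief transients; a fast follower takes
    over only when a sudden peak would exceed limit_db.
    """
    x = np.asarray(x, dtype=float)

    def coeff(t_sec):                    # one-pole smoothing coefficient
        return np.exp(-1.0 / (t_sec * fs))

    a_sa, a_sr = coeff(slow_attack), coeff(slow_release)
    a_fa, a_fr = coeff(fast_attack), coeff(fast_release)
    slow = fast = 1e-6                   # envelope states (linear amplitude)
    y = np.zeros_like(x)
    for n, sample in enumerate(x):
        mag = abs(sample)
        a = a_sa if mag > slow else a_sr
        slow = a * slow + (1 - a) * mag
        a = a_fa if mag > fast else a_fr
        fast = a * fast + (1 - a) * mag
        gain_db = target_db - 20 * np.log10(slow + 1e-12)               # slow loop
        gain_db = min(gain_db, limit_db - 20 * np.log10(fast + 1e-12))  # fast limiter
        y[n] = sample * 10 ** (gain_db / 20)
    return y
```

In practice the gain would also be capped and, as the article suggests, derived per frequency channel so that an intense bass component does not pull down the level of the whole spectrum.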
Affiliation(s)
- Brian C J Moore
- Cambridge Hearing Group, Department of Psychology, University of Cambridge, Cambridge, England