1
Pastore MT, Pulling KR, Chen C, Yost WA, Dorman MF. Synchronizing Automatic Gain Control in Bilateral Cochlear Implants Mitigates Dynamic Localization Deficits Introduced by Independent Bilateral Compression. Ear Hear 2024:00003446-990000000-00262. PMID: 38472134. DOI: 10.1097/aud.0000000000001492.
Abstract
OBJECTIVES The independence of left and right automatic gain controls (AGCs) used in cochlear implants can distort interaural level differences and thereby compromise dynamic sound source localization. We assessed the degree to which synchronizing left and right AGCs mitigates those difficulties as indicated by listeners' ability to use the changes in interaural level differences that come with head movements to avoid front-back reversals (FBRs). DESIGN Broadband noise stimuli were presented from one of six equally spaced loudspeakers surrounding the listener. Sound source identification was tested for stimuli presented at 70 dBA (above AGC threshold) for 10 bilateral cochlear implant patients, under conditions where (1) patients remained stationary and (2) free head movements within ±30° were encouraged. These conditions were repeated for both synchronized and independent AGCs. The same conditions were run at 50 dBA, below the AGC threshold, to assess listeners' baseline performance when AGCs were not engaged. In this way, the expected high variability in listener performance could be separated from effects of independent AGCs to reveal the degree to which synchronizing AGCs could restore localization performance to what it was without AGC compression. RESULTS The mean rate of FBRs was higher for sound stimuli presented at 70 dBA with independent AGCs, both with and without head movements, than at 50 dBA, suggesting that when AGCs were independently engaged they contributed to poorer front-back localization. When listeners remained stationary, synchronizing AGCs did not significantly reduce the rate of FBRs. When AGCs were independent at 70 dBA, head movements did not have a significant effect on the rate of FBRs. Head movements did have a significant group effect on the rate of FBRs at 50 dBA when AGCs were not engaged and at 70 dBA when AGCs were synchronized. 
Synchronization of AGCs, together with head movements, reduced the rate of FBRs to approximately what it was in the 50-dBA baseline condition. Synchronizing AGCs also had a significant group effect on listeners' overall percent correct localization. CONCLUSIONS Synchronizing AGCs allowed listeners to mitigate front-back confusions introduced by unsynchronized AGCs when head motion was permitted, returning individual listener performance to roughly what it was in the 50-dBA baseline condition when AGCs were not engaged. Synchronization of AGCs did not overcome localization deficits that were observed when AGCs were not engaged, and which are therefore unrelated to AGC compression.
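The mechanism at issue above, independent per-ear compression distorting interaural level differences (ILDs), can be illustrated with a minimal static-compressor sketch. The threshold, ratio, and levels below are illustrative values only, not the parameters of any actual implant processor; the point is that independent AGCs shrink the ILD of a supra-threshold sound, while a synchronized pair driven by the louder ear preserves it:

```python
def agc_gain_db(level_db, threshold_db=60.0, ratio=3.0):
    """Static compressor gain (dB): above threshold, output grows at 1/ratio."""
    if level_db <= threshold_db:
        return 0.0
    return -(level_db - threshold_db) * (1.0 - 1.0 / ratio)

def output_ild(left_db, right_db, synchronized):
    """ILD (left minus right, dB) after per-ear AGC.

    Independent AGCs compress each ear by its own level; a synchronized
    pair applies the gain computed from the louder ear to both sides,
    which preserves the interaural level difference.
    """
    if synchronized:
        g = agc_gain_db(max(left_db, right_db))
        gain_left = gain_right = g
    else:
        gain_left = agc_gain_db(left_db)
        gain_right = agc_gain_db(right_db)
    return (left_db + gain_left) - (right_db + gain_right)

# Source to the left: 70 dB at the left ear, 64 dB at the right (6 dB ILD).
print(output_ild(70.0, 64.0, synchronized=False))  # independent AGCs shrink the ILD
print(output_ild(70.0, 64.0, synchronized=True))   # synchronized AGCs preserve it
```

With a 3:1 ratio, the independent configuration passes only about 2 dB of the 6-dB input ILD, while the synchronized one passes all 6 dB; below the compression threshold (as in the 50-dBA baseline condition) both leave the ILD intact.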
Affiliation(s)
- M Torben Pastore
- College of Health Solutions, Arizona State University, Tempe, Arizona, USA
- Kathryn R Pulling
- College of Health Solutions, Arizona State University, Tempe, Arizona, USA
- Chen Chen
- Advanced Bionics, Valencia, California, USA
- William A Yost
- College of Health Solutions, Arizona State University, Tempe, Arizona, USA
- Michael F Dorman
- College of Health Solutions, Arizona State University, Tempe, Arizona, USA
2
Anderson SR, Burg E, Suveg L, Litovsky RY. Review of Binaural Processing With Asymmetrical Hearing Outcomes in Patients With Bilateral Cochlear Implants. Trends Hear 2024; 28:23312165241229880. PMID: 38545645. PMCID: PMC10976506. DOI: 10.1177/23312165241229880.
Abstract
Bilateral cochlear implants (BiCIs) result in several benefits, including improvements in speech understanding in noise and sound source localization. However, the benefit varies considerably across recipients. Here we consider one of the reasons for this variability: differences in hearing function between the two ears, that is, interaural asymmetry. Thus far, investigations of interaural asymmetry have been highly specialized within various research areas. The goal of this review is to integrate these studies in one place, motivating future research in the area of interaural asymmetry. We first consider bottom-up processing, where binaural cues are represented using excitation-inhibition of signals from the left and right ears, varying with the location of the sound in space, as represented in the lateral superior olive of the auditory brainstem. We then consider top-down processing via predictive coding, which assumes that perception stems from expectations based on context and prior sensory experience, represented by a cascading series of cortical circuits; an internal, perceptual model is maintained and updated in light of incoming sensory input. Together, we hope that this amalgamation of physiological, behavioral, and modeling studies will help bridge gaps in the field of binaural hearing and promote a clearer understanding of the implications of interaural asymmetry for future research on optimal patient interventions.
Affiliation(s)
- Sean R. Anderson
- Waisman Center, University of Wisconsin-Madison, Madison, WI, USA
- Department of Physiology and Biophysics, University of Colorado Anschutz Medical School, Aurora, CO, USA
- Emily Burg
- Waisman Center, University of Wisconsin-Madison, Madison, WI, USA
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Lukas Suveg
- Waisman Center, University of Wisconsin-Madison, Madison, WI, USA
- Ruth Y. Litovsky
- Waisman Center, University of Wisconsin-Madison, Madison, WI, USA
- Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, WI, USA
- Department of Surgery, Division of Otolaryngology, University of Wisconsin-Madison, Madison, WI, USA
3
Valzolgher C, Bouzaid S, Grenouillet S, Gatel J, Ratenet L, Murenu F, Verdelet G, Salemme R, Gaveau V, Coudert A, Hermann R, Truy E, Farnè A, Pavani F. Training spatial hearing in unilateral cochlear implant users through reaching to sounds in virtual reality. Eur Arch Otorhinolaryngol 2023; 280:3661-3672. PMID: 36905419. PMCID: PMC10313844. DOI: 10.1007/s00405-023-07886-1.
Abstract
BACKGROUND AND PURPOSE Use of a unilateral cochlear implant (UCI) is associated with limited spatial hearing skills. Evidence that training these abilities in UCI users is possible remains limited. In this study, we assessed whether a spatial training based on hand-reaching to sounds performed in virtual reality improves spatial hearing abilities in UCI users. METHODS Using a crossover randomized clinical trial, we compared the effects of a spatial training protocol with those of a non-spatial control training. We tested 17 UCI users in a head-pointing to sound task and in an audio-visual attention orienting task, before and after each training. The study is registered at clinicaltrials.gov (NCT04183348). RESULTS During the spatial virtual reality training, sound localization errors in azimuth decreased. Moreover, when comparing head-pointing to sounds before versus after training, localization errors decreased more after the spatial training than after the control training. No training effects emerged in the audio-visual attention orienting task. CONCLUSIONS Our results showed that sound localization in UCI users improves during a spatial training, with benefits that also extend to an untrained sound localization task (generalization). These findings have potential for novel rehabilitation procedures in clinical contexts.
Affiliation(s)
- Chiara Valzolgher
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini, 31 Rovereto, Trento, Italy.
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France.
- Sabrina Bouzaid
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Solene Grenouillet
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Francesca Murenu
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Grégoire Verdelet
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Neuroimmersion, Lyon, France
- Romeo Salemme
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Neuroimmersion, Lyon, France
- Valérie Gaveau
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Eric Truy
- Hospices Civils de Lyon, Lyon, France
- Alessandro Farnè
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Neuroimmersion, Lyon, France
- Francesco Pavani
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini, 31 Rovereto, Trento, Italy
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Centro Interuniversitario di Ricerca "Cognizione, Linguaggio e Sordità" (CIRCLeS), Trento, Italy
4
Abstract
OBJECTIVES We assessed whether spatial hearing training improves sound localization in bilateral cochlear implant (BCI) users and whether its benefits can generalize to untrained sound localization tasks. DESIGN In 20 BCI users, we assessed the effects of two training procedures (spatial versus nonspatial control training) on two different tasks performed before and after training (head-pointing to sound and audiovisual attention orienting). In the spatial training, participants identified sound position by reaching toward the sound sources with their hand. In the nonspatial training, comparable reaching movements served to identify sound amplitude modulations. A crossover randomized design allowed comparison of training procedures within the same participants. Spontaneous head movements while listening to the sounds were allowed and tracked to correlate them with localization performance. RESULTS During spatial training, BCI users reduced their sound localization errors in azimuth and adapted their spontaneous head movements as a function of sound eccentricity. These effects generalized to the head-pointing sound localization task, as revealed by greater reduction of sound localization error in azimuth and more accurate first head-orienting response, as compared to the control nonspatial training. BCI users benefited from auditory spatial cues for orienting visual attention, but the spatial training did not enhance this multisensory attention ability. CONCLUSIONS Sound localization in BCI users improves with spatial reaching-to-sound training, with benefits to a nontrained sound localization task. These findings pave the way for novel rehabilitation procedures in clinical contexts.
5
Intensive Training of Spatial Hearing Promotes Auditory Abilities of Bilateral Cochlear Implant Adults: A Pilot Study. Ear Hear 2023; 44:61-76. PMID: 35943235. DOI: 10.1097/aud.0000000000001256.
Abstract
OBJECTIVE The aim of this study was to evaluate the feasibility of a virtual reality-based spatial hearing training protocol in bilateral cochlear implant (CI) users and to provide pilot data on the impact of this training on different qualities of hearing. DESIGN Twelve bilateral CI adults aged 19 to 69 years followed an intensive 10-week rehabilitation program comprising eight virtual reality training sessions (two per week) interspersed with several evaluation sessions (2 weeks before training started, after four and eight training sessions, and 1 month after the end of training). During each 45-minute training session, participants localized a sound source whose position varied in azimuth and/or in elevation. At the start of each trial, CI users received no information about sound location, but after each response, feedback was given to enable error correction. Participants were divided into two groups: a multisensory feedback group (audiovisual spatial cue) and a unisensory group (visual spatial cue) who received feedback only in a wholly intact sensory modality. Training benefits were measured at each evaluation point using three tests: 3D sound localization in virtual reality, the French Matrix test, and the Speech, Spatial and other Qualities of Hearing questionnaire. RESULTS The training was well accepted and all participants attended the whole rehabilitation program. Four training sessions spread across 2 weeks were insufficient to induce significant performance changes, whereas performance on all three tests improved after eight training sessions. Front-back confusions decreased from 32% to 14.1% (p = 0.017); the speech recognition threshold improved from 1.5 dB to -0.7 dB signal-to-noise ratio (p = 0.029), and eight CI users achieved a negative signal-to-noise ratio. One month after the end of structured training, these performance improvements were still present, and quality of life was significantly improved for both self-reports of sound localization (from 5.3 to 6.7, p = 0.015) and speech understanding (from 5.2 to 5.9, p = 0.048). CONCLUSIONS This pilot study shows the feasibility and potential clinical relevance of this type of intervention involving an immersive sensory environment and could pave the way for more systematic rehabilitation programs after cochlear implantation.
6
Archer-Boyd AW, Harland A, Goehring T, Carlyon RP. An online implementation of a measure of spectro-temporal processing by cochlear-implant listeners. JASA Express Lett 2023; 3:014402. PMID: 36725534. DOI: 10.1121/10.0016838.
Abstract
The spectro-temporal ripple for investigating processor effectiveness (STRIPES) test is a psychophysical measure of spectro-temporal resolution in cochlear-implant (CI) listeners. It has been validated using direct-line input and loudspeaker presentation with listeners of the Advanced Bionics CI. This article investigates the suitability of an online application using wireless streaming (webSTRIPES) as a remote test. It reports a strong across-listener correlation between STRIPES thresholds obtained using laboratory testing with loudspeaker presentation vs remote testing with streaming presentation, with no significant difference in STRIPES thresholds between the two measures. WebSTRIPES also produced comparable and robust thresholds with users of the Cochlear CI.
Affiliation(s)
- Alan W Archer-Boyd
- Cambridge Hearing Group, MRC Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge CB2 7EF, United Kingdom
- Andrew Harland
- Cambridge Hearing Group, MRC Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge CB2 7EF, United Kingdom
- Tobias Goehring
- Cambridge Hearing Group, MRC Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge CB2 7EF, United Kingdom
- Robert P Carlyon
- Cambridge Hearing Group, MRC Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge CB2 7EF, United Kingdom
7
Gibbs BE, Bernstein JGW, Brungart DS, Goupell MJ. Effects of better-ear glimpsing, binaural unmasking, and spectral resolution on spatial release from masking in cochlear-implant users. J Acoust Soc Am 2022; 152:1230. PMID: 36050186. PMCID: PMC9420049. DOI: 10.1121/10.0013746.
Abstract
Bilateral cochlear-implant (BICI) listeners obtain less spatial release from masking (SRM; speech-recognition improvement for spatially separated vs co-located conditions) than normal-hearing (NH) listeners, especially for symmetrically placed maskers that produce similar long-term target-to-masker ratios at the two ears. Two experiments examined possible causes of this deficit, including limited better-ear glimpsing (using speech information from the more advantageous ear in each time-frequency unit), limited binaural unmasking (using interaural differences to improve signal-in-noise detection), or limited spectral resolution. Listeners had NH (presented with unprocessed or vocoded stimuli) or BICIs. Experiment 1 compared natural symmetric maskers, idealized monaural better-ear masker (IMBM) stimuli that automatically performed better-ear glimpsing, and hybrid stimuli that added worse-ear information, potentially restoring binaural cues. BICI and NH-vocoded SRM was comparable to NH-unprocessed SRM for idealized stimuli but was 14%-22% lower for symmetric stimuli, suggesting limited better-ear glimpsing ability. Hybrid stimuli improved SRM for NH-unprocessed listeners but degraded SRM for BICI and NH-vocoded listeners, suggesting they experienced across-ear interference instead of binaural unmasking. In experiment 2, increasing the number of vocoder channels did not change NH-vocoded SRM. BICI SRM deficits likely reflect a combination of across-ear interference, limited better-ear glimpsing, and poorer binaural unmasking that stems from cochlear-implant-processing limitations other than reduced spectral resolution.
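The better-ear glimpsing idea underlying the IMBM stimuli, selecting, for each time-frequency unit, whichever ear has the more advantageous target-to-masker ratio (TMR), can be sketched as follows. This is a toy illustration with invented TMR values; the study's actual IMBM processing operates on speech mixtures, not precomputed TMR grids:

```python
def imbm_select(tmr_left, tmr_right):
    """Idealized better-ear selection per time-frequency unit.

    Inputs are matrices of TMRs in dB, indexed [time][frequency].
    Returns, for each unit, which ear ('L' or 'R') offers the better
    glimpse of the target; ties go to the left ear.
    """
    better = []
    for row_l, row_r in zip(tmr_left, tmr_right):
        better.append(['L' if l >= r else 'R' for l, r in zip(row_l, row_r)])
    return better

# Two time frames x three frequency channels of TMRs (dB):
left  = [[ 6.0, -3.0, 1.0],
         [-2.0,  4.0, 0.0]]
right = [[ 2.0,  5.0, 1.0],
         [ 3.0, -1.0, 2.0]]
print(imbm_select(left, right))  # [['L', 'R', 'L'], ['R', 'L', 'R']]
```

The point of the idealization is that the better ear switches from unit to unit with symmetric maskers; a listener (or processor) must track those switches to realize the full glimpsing benefit.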
Affiliation(s)
- Bobby E Gibbs
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Joshua G W Bernstein
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889, USA
- Douglas S Brungart
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889, USA
- Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
8
Moore BCJ. Listening to Music Through Hearing Aids: Potential Lessons for Cochlear Implants. Trends Hear 2022; 26:23312165211072969. PMID: 35179052. PMCID: PMC8859663. DOI: 10.1177/23312165211072969.
Abstract
Some of the problems experienced by users of hearing aids (HAs) when listening to music are relevant to cochlear implants (CIs). One problem is related to the high peak levels (up to 120 dB SPL) that occur in live music. Some HAs and CIs overload at such levels because of the limited dynamic range of the microphones and analogue-to-digital converters (ADCs), leading to perceived distortion. Potential solutions are to use 24-bit ADCs or to include an adjustable gain between the microphones and the ADCs. A related problem is how to squeeze the wide dynamic range of music into the limited dynamic range of the user, which can be only 6-20 dB for CI users. In HAs, this is usually done via multi-channel amplitude compression (automatic gain control, AGC). In CIs, either a single-channel front-end AGC is applied to the broadband input signal, or a control signal derived from a running average of the broadband signal level is used to control the mapping of the channel envelope magnitude to an electrical signal. This introduces several problems: (1) an intense narrowband signal (e.g., a strong bass sound) reduces the level of all frequency components, making some parts of the music harder to hear; (2) the AGC introduces cross-modulation effects that can make a steady sound (e.g., sustained strings or a sung note) appear to fluctuate in level. Potential solutions are to use several frequency channels to create slowly varying gain-control signals and to use slow-acting (or dual time-constant) AGC rather than fast-acting AGC.
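Problem (1) above, one intense component pulling down the level of everything else, can be sketched numerically. This is a hedged toy model (a single static gain driven by a power-summed broadband level; the threshold and ratio are arbitrary), not the compression function of any specific HA or CI:

```python
import math

def frontend_agc(band_levels_db, threshold_db=65.0, ratio=2.0):
    """Single-channel (broadband) AGC as sketched for a CI front end.

    One gain, computed from the broadband level (power sum of the band
    levels), is applied to every frequency band, so an intense component
    in one band attenuates the whole spectrum.
    """
    broadband = 10.0 * math.log10(sum(10 ** (l / 10.0) for l in band_levels_db))
    gain = 0.0
    if broadband > threshold_db:
        gain = -(broadband - threshold_db) * (1.0 - 1.0 / ratio)
    return [l + gain for l in band_levels_db]

quiet_melody = [55.0, 52.0]        # mid/high bands alone: below threshold
with_bass    = [80.0, 55.0, 52.0]  # the same melody plus a strong bass band

print(frontend_agc(quiet_melody))   # unchanged: broadband level is below threshold
print(frontend_agc(with_bass)[1:])  # same melody bands, now attenuated by the bass
```

Note that the melody bands keep their relative levels but lose absolute level, which is exactly why some parts of the music become harder to hear when a bass note sounds.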
Affiliation(s)
- Brian C J Moore
- Cambridge Hearing Group, Department of Psychology, University of Cambridge, Cambridge, England
9
Anderson SR, Jocewicz R, Kan A, Zhu J, Tzeng S, Litovsky RY. Sound source localization patterns and bilateral cochlear implants: Age at onset of deafness effects. PLoS One 2022; 17:e0263516. PMID: 35134072. PMCID: PMC8824335. DOI: 10.1371/journal.pone.0263516.
Abstract
The ability to determine a sound’s location is critical in everyday life. However, sound source localization is severely compromised for patients with hearing loss who receive bilateral cochlear implants (BiCIs). Several patient factors relate to poorer performance in listeners with BiCIs, associated with auditory deprivation, experience, and age. Critically, characteristic errors are made by patients with BiCIs (e.g., medial responses at lateral target locations), and the relationship between patient factors and the type of errors made by patients has seldom been investigated across individuals. In the present study, several different types of analysis were used to understand localization errors and their relationship with patient-dependent factors (selected based on their robustness of prediction). Binaural hearing experience is required for developing accurate localization skills, auditory deprivation is associated with degradation of the auditory periphery, and aging leads to poorer temporal resolution. Therefore, it was hypothesized that earlier onsets of deafness would be associated with poorer localization acuity and longer periods without BiCI stimulation or older age would lead to greater amounts of variability in localization responses. A novel machine learning approach was introduced to characterize the types of errors made by listeners with BiCIs, making them simple to interpret and generalizable to everyday experience. Sound localization performance was measured in 48 listeners with BiCIs using pink noise trains presented in free-field. Our results suggest that older age at testing and earlier onset of deafness are associated with greater average error, particularly for sound sources near the center of the head, consistent with previous research. The machine learning analysis revealed that variability of localization responses tended to be greater for individuals with earlier compared to later onsets of deafness. 
These results suggest that early bilateral hearing is essential for best sound source localization outcomes in listeners with BiCIs.
Affiliation(s)
- Sean R. Anderson
- Waisman Center, University of Wisconsin-Madison, Madison, Wisconsin, United States of America
- Rachael Jocewicz
- Department of Audiology, Stanford University, Stanford, California, United States of America
- Alan Kan
- School of Engineering, Macquarie University, New South Wales, Australia
- Jun Zhu
- Department of Statistics, University of Wisconsin-Madison, Madison, Wisconsin, United States of America
- ShengLi Tzeng
- Department of Mathematics, National Sun Yat-sen University, Kaohsiung, Taiwan
- Ruth Y. Litovsky
- Waisman Center, University of Wisconsin-Madison, Madison, Wisconsin, United States of America
10
The Impact of Synchronized Cochlear Implant Sampling and Stimulation on Free-Field Spatial Hearing Outcomes: Comparing the ciPDA Research Processor to Clinical Processors. Ear Hear 2022; 43:1262-1272. PMID: 34882619. PMCID: PMC9174346. DOI: 10.1097/aud.0000000000001179.
Abstract
OBJECTIVES Bilateral cochlear implant (BiCI) listeners use independent processors in each ear. This independence and lack of shared hardware prevents control of the timing of sampling and stimulation across ears, which precludes the development of bilaterally-coordinated signal processing strategies. As a result, these devices potentially reduce access to binaural cues and introduce disruptive artifacts. For example, measurements from two clinical processors demonstrate that independently-running processors introduce interaural incoherence. These issues are typically avoided in the laboratory by using research processors with bilaterally-synchronized hardware. However, these research processors do not typically run in real-time and are difficult to take out into the real-world due to their benchtop nature. Hence, the question of whether just applying hardware synchronization to reduce bilateral stimulation artifacts (and thereby potentially improve functional spatial hearing performance) has been difficult to answer. The CI personal digital assistant (ciPDA) research processor, which uses one clock to drive two processors, presented an opportunity to examine whether synchronization of hardware can have an impact on spatial hearing performance. DESIGN Free-field sound localization and spatial release from masking (SRM) were assessed in 10 BiCI listeners using both their clinical processors and the synchronized ciPDA processor. For sound localization, localization accuracy was compared within-subject for the two processor types. For SRM, speech reception thresholds were compared for spatially separated and co-located configurations, and the amount of unmasking was compared for synchronized and unsynchronized hardware. There were no deliberate changes of the sound processing strategy on the ciPDA to restore or improve binaural cues. RESULTS There was no significant difference in localization accuracy between unsynchronized and synchronized hardware (p = 0.62). 
Speech reception thresholds were higher with the ciPDA. In addition, although five of eight participants demonstrated improved SRM with synchronized hardware, there was no significant difference in the amount of unmasking due to spatial separation between synchronized and unsynchronized hardware (p = 0.21). CONCLUSIONS Using processors with synchronized hardware did not yield an improvement in sound localization or SRM for all individuals, suggesting that mere synchronization of hardware is not sufficient for improving spatial hearing outcomes. Further work is needed to improve sound coding strategies to facilitate access to spatial hearing cues. This study provides a benchmark for spatial hearing performance with real-time, bilaterally-synchronized research processors.
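The clock-synchronization issue these processors address can be illustrated with simple arithmetic: two processors that start in phase but run from separate oscillators accumulate a drifting interaural pulse-timing offset. The pulse rates and clock mismatch below are illustrative, not measurements from the ciPDA or any clinical processor:

```python
def pulse_itd_us(pulse_index, rate_left_hz, rate_right_hz):
    """Timing offset (microseconds) between the nth stimulation pulses
    of two processors that started together but run on separate clocks.

    With independent hardware, the nominal pulse rates differ by a small
    clock tolerance, so the interaural offset grows over time; a shared
    clock (as in the ciPDA) keeps it at zero.
    """
    t_left = pulse_index / rate_left_hz
    t_right = pulse_index / rate_right_hz
    return (t_left - t_right) * 1e6

# Nominal 1000 pulses per second, with a 20 ppm mismatch between clocks:
print(round(pulse_itd_us(1000, 1000.0, 1000.02), 1))   # after ~1 s of stimulation
print(round(pulse_itd_us(10000, 1000.0, 1000.02), 1))  # after ~10 s
print(pulse_itd_us(5000, 1000.0, 1000.0))              # shared clock: no drift
```

Even a parts-per-million mismatch sweeps the pulse-timing offset through the entire physiological ITD range (roughly ±700 µs) within seconds, which is one motivation for hardware synchronization even before any binaural sound-coding strategy is added.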
11
Novel Approaches to Measure Spatial Release From Masking in Children With Bilateral Cochlear Implants. Ear Hear 2022; 43:101-114. PMID: 34133400. PMCID: PMC8671563. DOI: 10.1097/aud.0000000000001080.
Abstract
OBJECTIVES To investigate the role of auditory cues for spatial release from masking (SRM) in children with bilateral cochlear implants (BiCIs) and compare their performance with children with normal hearing (NH). To quantify the contribution to speech intelligibility benefits from individual auditory cues: head shadow, binaural redundancy, and interaural differences; as well as from multiple cues: SRM and binaural squelch. To assess SRM using a novel approach of adaptive target-masker angular separation, which provides a more functionally relevant assessment in realistic complex auditory environments. DESIGN Children fitted with BiCIs (N = 11) and with NH (N = 18) were tested in virtual acoustic space that was simulated using head-related transfer functions measured from individual children with BiCIs behind the ear and from a standard head and torso simulator for all NH children. In experiment I, by comparing speech reception thresholds across 4 test conditions that varied in target-masker spatial separation (colocated versus separated at 180°) and listening conditions (monaural versus binaural/bilateral listening), intelligibility benefits were derived for individual auditory cues for SRM. In experiment II, SRM was quantified using a novel measure to find the minimum angular separation (MAS) between the target and masker to achieve a fixed 20% intelligibility improvement. Target speech was fixed at either +90 or -90° azimuth on the side closer to the better ear (+90° for all NH children) and masker locations were adaptively varied. RESULTS In experiment I, children with BiCIs as a group had smaller intelligibility benefits from head shadow than NH children. No group difference was observed in benefits from binaural redundancy or interaural difference cues. In both groups of children, individuals who gained a larger benefit from interaural differences relied less on monaural head shadow, and vice versa. 
In experiment II, all children with BiCIs demonstrated measurable MAS thresholds <180° and on average larger than that from NH children. Eight of 11 children with BiCIs and all NH children had a MAS threshold <90°, requiring interaural differences only to gain the target intelligibility benefit; whereas the other 3 children with BiCIs had a MAS between 120° and 137°, requiring monaural head shadow for SRM. CONCLUSIONS When target and maskers were separated at 180° on opposing hemifields, children with BiCIs demonstrated greater intelligibility benefits from head shadow and interaural differences than previous literature showed with a smaller separation. Children with BiCIs demonstrated individual differences in using auditory cues for SRM. From the MAS thresholds, more than half of the children with BiCIs demonstrated robust access to interaural differences without needing additional monaural head shadow for SRM. Both experiments led to the conclusion that individualized fitting strategies in the bilateral devices may be warranted to maximize spatial hearing for children with BiCIs in complex auditory environments.
Collapse
12
Gray WO, Mayo PG, Goupell MJ, Brown AD. Transmission of Binaural Cues by Bilateral Cochlear Implants: Examining the Impacts of Bilaterally Independent Spectral Peak-Picking, Pulse Timing, and Compression. Trends Hear 2021;25:23312165211030411. [PMID: 34293981] [PMCID: PMC8785329] [DOI: 10.1177/23312165211030411]
Abstract
Acoustic hearing listeners use binaural cues—interaural time differences (ITDs) and interaural level differences (ILDs)—for localization and segregation of sound sources in the horizontal plane. Cochlear implant users now often receive two implants (bilateral cochlear implants [BiCIs]) rather than one, with the goal to provide access to these cues. However, BiCI listeners often experience difficulty with binaural tasks. Most BiCIs use independent sound processors at each ear; it has often been suggested that such independence may degrade the transmission of binaural cues, particularly ITDs. Here, we report empirical measurements of binaural cue transmission via BiCIs implementing a common “n-of-m” spectral peak-picking stimulation strategy. Measurements were completed for speech and nonspeech stimuli presented to an acoustic manikin “fitted” with BiCI sound processors. Electric outputs from the BiCIs and acoustic outputs from the manikin’s in-ear microphones were recorded simultaneously, enabling comparison of electric and acoustic binaural cues. For source locations away from the midline, BiCI binaural cues, particularly envelope ITD cues, were found to be degraded by asymmetric spectral peak-picking. In addition, pulse amplitude saturation due to nonlinear level mapping yielded smaller ILDs at higher presentation levels. Finally, while individual pulses conveyed a spurious “drifting” ITD, consistent with independent left and right processor clocks, such variation was not evident in transmitted envelope ITDs. Results point to avenues for improvement of BiCI technology and may prove useful in the interpretation of BiCI spatial hearing outcomes reported in prior and future studies.
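The "n-of-m" spectral peak-picking examined in this study can be illustrated with a minimal sketch: in each stimulation frame, only the n highest-energy channels out of m are stimulated. The function name and envelope values below are hypothetical, chosen only to show how independent left/right selection can pick different channel subsets.

```python
import numpy as np

def n_of_m_select(channel_envelopes, n):
    """For one stimulation frame, keep the n highest-energy channels
    out of m and zero the rest (unselected channels are not stimulated).
    Illustrative sketch; real strategies operate frame by frame."""
    keep = np.argsort(channel_envelopes)[-n:]           # n largest envelopes
    mask = np.zeros(len(channel_envelopes), dtype=bool)
    mask[keep] = True
    return np.where(mask, channel_envelopes, 0.0)

# With independent left/right processors, the two ears can select
# different channel subsets for the same source, which is one of the
# asymmetries the study examines.
left = n_of_m_select(np.array([0.1, 0.8, 0.3, 0.7, 0.2, 0.5]), 3)
right = n_of_m_select(np.array([0.2, 0.6, 0.9, 0.1, 0.4, 0.3]), 3)
```

Because a source away from the midline drives the two ears with different spectra, the selected subsets can diverge, degrading the binaural cues carried by channels stimulated in only one ear.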
Affiliation(s)
- William O Gray
- Department of Speech and Hearing Sciences, University of Washington, Seattle, United States
- Paul G Mayo
- Department of Hearing and Speech Sciences, University of Maryland, College Park, United States
- Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, United States
- Andrew D Brown
- Department of Speech and Hearing Sciences, University of Washington, Seattle, United States; Virginia Merrill Bloedel Hearing Research Center, University of Washington, Seattle, United States
13
Fischer T, Schmid C, Kompis M, Mantokoudis G, Caversaccio M, Wimmer W. Effects of temporal fine structure preservation on spatial hearing in bilateral cochlear implant users. J Acoust Soc Am 2021;150:673. [PMID: 34470279] [DOI: 10.1121/10.0005732]
Abstract
Typically, the coding strategies of cochlear implant audio processors discard acoustic temporal fine structure information (TFS), which may be related to the poor perception of interaural time differences (ITDs) and the resulting reduced spatial hearing capabilities compared to normal-hearing individuals. This study aimed to investigate to what extent bilateral cochlear implant (BiCI) recipients can exploit ITD cues provided by a TFS-preserving coding strategy (FS4) in a series of sound field spatial hearing tests. As a baseline, we assessed the sensitivity of 12 BiCI subjects to ITDs and binaural beats with a coding strategy disregarding fine structure (HDCIS) and with the FS4 strategy. For 250 Hz pure-tone stimuli, but not for broadband noise, the BiCI users had significantly improved ITD discrimination using the FS4 strategy. In the binaural beat detection task and the broadband sound localization, spatial discrimination, and tracking tasks, no significant differences between the two tested coding strategies were observed. These results indicate that the improved ITD sensitivity did not generalize to broadband stimuli or sound field spatial hearing tests, suggesting that it would be of limited use for real-world listening.
Affiliation(s)
- T Fischer
- Department of ENT, Head and Neck Surgery, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse, 3010 Bern, Switzerland
- C Schmid
- Department of ENT, Head and Neck Surgery, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse, 3010 Bern, Switzerland
- M Kompis
- Department of ENT, Head and Neck Surgery, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse, 3010 Bern, Switzerland
- G Mantokoudis
- Department of ENT, Head and Neck Surgery, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse, 3010 Bern, Switzerland
- M Caversaccio
- Department of ENT, Head and Neck Surgery, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse, 3010 Bern, Switzerland
- W Wimmer
- Department of ENT, Head and Neck Surgery, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse, 3010 Bern, Switzerland
14
Pastore MT, Natale SJ, Clayton C, Dorman MF, Yost WA, Zhou Y. Effects of Head Movements on Sound-Source Localization in Single-Sided Deaf Patients With Their Cochlear Implant On Versus Off. Ear Hear 2021;41:1660-1674. [PMID: 33136640] [PMCID: PMC7772279] [DOI: 10.1097/aud.0000000000000882]
Abstract
OBJECTIVES We investigated the ability of single-sided deaf listeners implanted with a cochlear implant (SSD-CI) to (1) determine the front-back and left-right location of sound sources presented from loudspeakers surrounding the listener and (2) use small head rotations to further improve their localization performance. The resulting behavioral data were used for further analyses investigating the value of so-called "monaural" spectral shape cues for front-back sound source localization. DESIGN Eight SSD-CI patients were tested with their cochlear implant (CI) on and off. Eight normal-hearing (NH) listeners, with one ear plugged during the experiment, and another group of eight NH listeners, with neither ear plugged, were also tested. Gaussian noises of 3-sec duration were band-pass filtered to 2-8 kHz and presented from 1 of 6 loudspeakers surrounding the listener, spaced 60° apart. Perceived sound source localization was tested under conditions where the patients faced forward with the head stationary, and under conditions where they rotated their heads within a range of approximately ±30°. RESULTS (1) Under stationary listener conditions, unilaterally-plugged NH listeners and SSD-CI listeners (with their CIs both on and off) were nearly at chance in determining the front-back location of high-frequency sound sources. (2) Allowing rotational head movements improved performance in both the front-back and left-right dimensions for all listeners. (3) For SSD-CI patients with their CI turned off, head rotations substantially reduced front-back reversals, and the combination of turning on the CI with head rotations led to near-perfect resolution of front-back sound source location. (4) Turning on the CI also improved left-right localization performance. (5) As expected, NH listeners with both ears unplugged localized to the correct front-back and left-right hemifields both with and without head movements.
CONCLUSIONS Although SSD-CI listeners demonstrate a relatively poor ability to distinguish the front-back location of sound sources when their head is stationary, their performance is substantially improved with head movements. Most of this improvement occurs when the CI is off, suggesting that the NH ear does most of the "work" in this regard, though some additional gain is introduced with turning the CI on. During head turns, these listeners appear to primarily rely on comparing changes in head position to changes in monaural level cues produced by the direction-dependent attenuation of high-frequency sounds that result from acoustic head shadowing. In this way, SSD-CI listeners overcome limitations to the reliability of monaural spectral and level cues under stationary conditions. SSD-CI listeners may have learned, through chronic monaural experience before CI implantation, or with the relatively impoverished spatial cues provided by their CI-implanted ear, to exploit the monaural level cue. Unilaterally-plugged NH listeners were also able to use this cue during the experiment to realize approximately the same magnitude of benefit from head turns just minutes after plugging, though their performance was less accurate than that of the SSD-CI listeners, both with and without their CI turned on.
Affiliation(s)
- M Torben Pastore
- College of Health Solutions, Arizona State University, Tempe, Arizona, USA
15
Fumero MJ, Eustaquio-Martín A, Gorospe JM, Polo López R, Gutiérrez Revilla MA, Lassaletta L, Schatzer R, Nopp P, Stohl JS, Lopez-Poveda EA. A state-of-the-art implementation of a binaural cochlear-implant sound coding strategy inspired by the medial olivocochlear reflex. Hear Res 2021;409:108320. [PMID: 34348202] [DOI: 10.1016/j.heares.2021.108320]
Abstract
Cochlear implant (CI) users find it hard and effortful to understand speech in noise with current devices. Binaural CI sound processing inspired by the contralateral medial olivocochlear (MOC) reflex (an approach termed the 'MOC strategy') can improve speech-in-noise recognition for CI users. All reported evaluations of this strategy, however, disregarded automatic gain control (AGC) and fine-structure (FS) processing, two standard features in some current CI devices. To better assess the potential of implementing the MOC strategy in contemporary CIs, here, we compare intelligibility with and without MOC processing in combination with linked AGC and FS processing. Speech reception thresholds (SRTs) were compared for an FS and a MOC-FS strategy for sentences in steady and fluctuating noises, for various speech levels, in bilateral and unilateral listening modes, and for multiple spatial configurations of the speech and noise sources. Word recall scores and verbal response times in a word recognition test (two proxies for listening effort) were also compared for the two strategies in quiet and in steady noise at 5 dB signal-to-noise ratio (SNR) and the individual SRT. In steady noise, mean SRTs were always equal or better with the MOC-FS than with the standard FS strategy, both in bilateral (the mean and largest improvement across spatial configurations and speech levels were 0.8 and 2.2 dB, respectively) and unilateral listening (mean and largest improvement of 1.7 and 2.1 dB, respectively). In fluctuating noise and in bilateral listening, SRTs were equal for the two strategies. Word recall scores and verbal response times were not significantly affected by the test SNR or the processing strategy. Results show that MOC processing can be combined with linked AGC and FS processing. Compared to using FS processing alone, combined MOC-FS processing can improve speech intelligibility in noise without affecting word recall scores or verbal response times.
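As a rough caricature of the contralateral MOC-like control described above, each ear's channel gain can be reduced as the corresponding contralateral channel's output grows. The linear drive mapping and the 6 dB maximum attenuation below are illustrative assumptions, not the published MOC model.

```python
import numpy as np

def moc_linked_gains(left_env, right_env, max_atten_db=6.0):
    """Sketch of MOC-like contralateral inhibition: attenuate each
    ear's channel envelopes in proportion to the normalized energy of
    the corresponding contralateral channels. Illustrative only; the
    drive mapping and attenuation limit are assumptions."""
    def gain(contra_env):
        drive = contra_env / (np.max(contra_env) + 1e-12)  # 0..1 per channel
        return 10.0 ** (-max_atten_db * drive / 20.0)
    return left_env * gain(right_env), right_env * gain(left_env)

# A channel that is strong contralaterally is attenuated ipsilaterally,
# which tends to increase the effective interaural level contrast.
l_out, r_out = moc_linked_gains(np.array([1.0, 1.0]), np.array([0.0, 1.0]))
```

The design intuition is that mutual contralateral inhibition, like the acoustic MOC reflex, sharpens interaural contrast and can improve the signal-to-noise ratio at the ear farther from a noise source.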
Affiliation(s)
- Milagros J Fumero
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Calle Pintor Fernando Gallego 1, Salamanca 37007, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca 37007, Spain
- Almudena Eustaquio-Martín
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Calle Pintor Fernando Gallego 1, Salamanca 37007, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca 37007, Spain
- José M Gorospe
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Calle Pintor Fernando Gallego 1, Salamanca 37007, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca 37007, Spain; Servicio de Otorrinolaringología, Hospital Universitario de Salamanca, Salamanca 37007, Spain
- Rubén Polo López
- Servicio de Otorrinolaringología, Hospital Universitario Ramón y Cajal, Madrid 28034, Spain
- Luis Lassaletta
- Servicio de Otorrinolaringología, Hospital Universitario La Paz, Madrid 28046, Spain; IdiPAZ Research Institute, Madrid, Spain; Biomedical Research Networking Centre on Rare Diseases (CIBERER-U761), Institute of Health Carlos III, Madrid, Spain
- Joshua S Stohl
- North American Research Laboratory, MED-EL Corporation, Durham, NC, USA
- Enrique A Lopez-Poveda
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Calle Pintor Fernando Gallego 1, Salamanca 37007, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca 37007, Spain; Departamento de Cirugía, Facultad de Medicina, Universidad de Salamanca, Salamanca 37007, Spain
16
Gajecki T, Nogueira W. Enhancement of interaural level differences for bilateral cochlear implant users. Hear Res 2021;409:108313. [PMID: 34340023] [DOI: 10.1016/j.heares.2021.108313]
Abstract
Bilateral cochlear implant (BiCI) users do not localize sounds as well as normal hearing (NH) listeners do. NH listeners rely on two binaural cues to localize sounds in the horizontal plane, namely interaural level differences (ILDs) and interaural time differences. BiCI systems, however, convey these cues poorly. In this work, we investigated two methods to improve the coding of ILDs in BiCIs. The first method enhances ILDs by applying an artificial current-versus-angle function to the clinical levels delivered by the basal electrodes of the CI contralateral to the target sound. The second method enhances ILDs by using bilaterally linked N-of-M band selection. Results indicate that the participants were able to discriminate the location of the sound more accurately at narrow azimuths when the ILD enhancement was applied than when they were using natural ILDs. The results also show that linking the band selection had a positive effect on left/right discrimination accuracy at larger azimuths for three out of the 10 tested participants, compared to unlinked band selection. Based on these results, we conclude that ILD enhancement, in addition to linked N-of-M band selection, can help some BiCI participants discriminate sound sources in the frontal horizontal plane.
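The second method, bilaterally linked N-of-M band selection, can be sketched as both ears stimulating one shared channel subset. The shared criterion used below, the per-channel maximum across ears, is an illustrative assumption rather than the authors' exact rule.

```python
import numpy as np

def linked_n_of_m(left_env, right_env, n):
    """Sketch of bilaterally linked N-of-M band selection: both ears
    stimulate the same channel subset, chosen here from the per-channel
    maximum across ears (an assumed criterion), so the per-channel ILDs
    of the selected bands survive at both ears."""
    shared = np.maximum(left_env, right_env)
    keep = np.argsort(shared)[-n:]
    mask = np.zeros(len(shared), dtype=bool)
    mask[keep] = True
    return np.where(mask, left_env, 0.0), np.where(mask, right_env, 0.0)

# Unlinked selection could drop channel 0 at the right ear (it is weak
# there), destroying that channel's ILD; linked selection keeps it.
l_out, r_out = linked_n_of_m(np.array([0.9, 0.1, 0.5, 0.05]),
                             np.array([0.2, 0.1, 0.6, 0.7]), n=2)
```

With unlinked selection, a channel carrying a large ILD may be selected at only one ear and zeroed at the other, which is the distortion linking is meant to avoid.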
Affiliation(s)
- Tom Gajecki
- Department of Otolaryngology, Medical University Hannover and Cluster of Excellence Hearing4all, Hannover, 30625, Germany
- Waldo Nogueira
- Department of Otolaryngology, Medical University Hannover and Cluster of Excellence Hearing4all, Hannover, 30625, Germany
17
Pastore MT, Pulling KR, Chen C, Yost WA, Dorman MF. Effects of Bilateral Automatic Gain Control Synchronization in Cochlear Implants With and Without Head Movements: Sound Source Localization in the Frontal Hemifield. J Speech Lang Hear Res 2021;64:2811-2824. [PMID: 34100627] [PMCID: PMC8632503] [DOI: 10.1044/2021_jslhr-20-00493]
Abstract
Purpose For bilaterally implanted patients, the automatic gain control (AGC) in both left and right cochlear implant (CI) processors is usually neither linked nor synchronized. At high AGC compression ratios, this lack of coordination between the two processors can distort interaural level differences, the only useful interaural difference cue available to CI patients. This study assessed the improvement, if any, in the utility of interaural level differences for sound source localization in the frontal hemifield when AGCs were synchronized versus independent and when listeners were stationary versus allowed to move their heads. Method Sound source identification of broadband noise stimuli was tested for seven bilateral CI patients using 13 loudspeakers in the frontal hemifield, under conditions where AGCs were linked and unlinked. For half the conditions, patients remained stationary; in the other half, they were encouraged to rotate or reorient their heads within a range of approximately ± 30° during sound presentation. Results In general, those listeners who already localized reasonably well with independent AGCs gained the least from AGC synchronization, perhaps because there was less room for improvement. Those listeners who performed worst with independent AGCs gained the most from synchronization. All listeners performed as well or better with synchronization than without; however, intersubject variability was high. Head movements had little impact on the effectiveness of synchronization of AGCs. Conclusion Synchronization of AGCs offers one promising strategy for improving localization performance in the frontal hemifield for bilaterally implanted CI patients. Supplemental Material https://doi.org/10.23641/asha.14681412.
18
Bakal TA, Milvae KD, Chen C, Goupell MJ. Head Shadow, Summation, and Squelch in Bilateral Cochlear-Implant Users With Linked Automatic Gain Controls. Trends Hear 2021;25:23312165211018147. [PMID: 34057387] [PMCID: PMC8182628] [DOI: 10.1177/23312165211018147]
Abstract
Speech understanding in noise is poorer in bilateral cochlear-implant (BICI) users compared to normal-hearing counterparts. Independent automatic gain controls (AGCs) may contribute to this because adjusting processor gain independently can reduce interaural level differences that BICI listeners rely on for bilateral benefits. Bilaterally linked AGCs may improve bilateral benefits by increasing the magnitude of interaural level differences. The effects of linked AGCs on bilateral benefits (summation, head shadow, and squelch) were measured in nine BICI users. Speech understanding for a target talker at 0° masked by a single talker at 0°, 90°, or −90° azimuth was assessed under headphones with sentences at five target-to-masker ratios. Research processors were used to manipulate AGC type (independent or linked) and test ear (left, right, or both). Sentence recall was measured in quiet to quantify individual interaural asymmetry in functional performance. The results showed that AGC type did not significantly change performance or bilateral benefits. Interaural functional asymmetries, however, interacted with ear such that greater summation and squelch benefit occurred when there was larger functional asymmetry, and interacted with interferer location such that smaller head shadow benefit occurred when there was larger functional asymmetry. The larger benefits for those with larger asymmetry were driven by improvements from adding a better-performing ear, rather than a true binaural-hearing benefit. In summary, linked AGCs did not significantly change bilateral benefits in cases of speech-on-speech masking with a single-talker masker, but there was also no strong detriment across a range of target-to-masker ratios, within a small and diverse BICI listener population.
Affiliation(s)
- Taylor A Bakal
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland, United States
- Kristina DeRoy Milvae
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland, United States
- Chen Chen
- Advanced Bionics LLC, Research and Technology, Valencia, California, United States
- Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland, United States
19
Archer-Boyd AW, Carlyon RP. Further simulations of the effect of cochlear-implant pre-processing and head movement on interaural level differences. J Acoust Soc Am 2021;150:506. [PMID: 34340491] [PMCID: PMC7613192] [DOI: 10.1121/10.0005647]
Abstract
We simulated the effect of several automatic gain control (AGC) and AGC-like systems and head movement on the output levels and resulting interaural level differences (ILDs) produced by bilateral cochlear-implant (CI) processors. The simulated AGC systems included unlinked AGCs with a range of parameter settings, linked AGCs, and two proprietary multichannel systems used in contemporary CIs. The results show that, over the range of values used clinically, the parameters that most strongly affect dynamic ILDs are the release time and compression ratio. Linking AGCs preserves ILDs at the expense of monaural level changes and, possibly, comfortable listening level. Multichannel AGCs can whiten output spectra and/or distort the dynamic changes in ILD that occur during and after head movement. We propose that an unlinked compressor with a ratio of approximately 3:1 and a release time of 300-500 ms can preserve the shape of dynamic ILDs without causing large spectral distortions or sacrificing listening comfort.
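The kind of unlinked compressor discussed above can be sketched as an envelope follower with separate attack/release time constants driving a static gain curve. The 3:1 ratio and 400 ms release follow the abstract's recommended range; the threshold, attack time, and feed-forward topology are generic textbook choices, not the simulated systems.

```python
import numpy as np

def simple_agc(x, fs, thresh_db=-20.0, ratio=3.0,
               attack_ms=5.0, release_ms=400.0):
    """Single-channel feed-forward AGC sketch: a one-pole envelope
    follower with separate attack/release time constants drives a
    static compression curve that reduces gain above threshold by
    (1 - 1/ratio) dB per dB. Illustrative, not the authors' model."""
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    gains = np.empty_like(x)
    for i, s in enumerate(np.abs(x)):
        a = a_att if s > env else a_rel           # fast attack, slow release
        env = a * env + (1.0 - a) * s
        level_db = 20.0 * np.log10(max(env, 1e-9))
        over_db = max(level_db - thresh_db, 0.0)  # dB above threshold
        gains[i] = 10.0 ** (-over_db * (1.0 - 1.0 / ratio) / 20.0)
    return x * gains

# Two independent instances (one per ear) compute different gains when
# the ear signals differ, which is what distorts dynamic ILDs; a linked
# variant would drive both ears' gains from a shared envelope.
```

A slow release in this topology means the gain changes little over the course of a head movement, which is why the abstract finds long release times preserve the shape of dynamic ILDs.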
20
Coudert A, Gaveau V, Gatel J, Verdelet G, Salemme R, Farne A, Pavani F, Truy E. Spatial Hearing Difficulties in Reaching Space in Bilateral Cochlear Implant Children Improve With Head Movements. Ear Hear 2021;43:192-205. [PMID: 34225320] [PMCID: PMC8694251] [DOI: 10.1097/aud.0000000000001090]
Abstract
The aim of this study was to assess three-dimensional (3D) spatial hearing abilities in reaching space of children and adolescents fitted with bilateral cochlear implants (BCI). The study also investigated the impact of spontaneous head movements on sound localization abilities.
Affiliation(s)
- Aurélie Coudert
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, Lyon, France; Department of Pediatric Otolaryngology-Head & Neck Surgery, Femme Mere Enfant Hospital, Hospices Civils de Lyon, Lyon, France; Department of Otolaryngology-Head & Neck Surgery, Edouard Herriot Hospital, Hospices Civils de Lyon, Lyon, France; University of Lyon 1, Lyon, France; Hospices Civils de Lyon, Neuro-immersion Platform, Lyon, France; Center for Mind/Brain Sciences (CIMeC), University of Trento, Rovereto, Italy; Department of Psychology and Cognitive Sciences, University of Trento, Rovereto, Italy
21
Pinna-Imitating Microphone Directionality Improves Sound Localization and Discrimination in Bilateral Cochlear Implant Users. Ear Hear 2020;42:214-222. [PMID: 32701730] [PMCID: PMC7757747] [DOI: 10.1097/aud.0000000000000912]
Abstract
OBJECTIVES To compare the sound-source localization, discrimination, and tracking performance of bilateral cochlear implant users with omnidirectional (OMNI) and pinna-imitating (PI) microphone directionality modes. DESIGN Twelve experienced bilateral cochlear implant users participated in the study. Their audio processors were fitted with two different programs featuring either the OMNI or PI mode. Each subject performed static and dynamic sound field spatial hearing tests in the horizontal plane. The static tests consisted of an absolute sound localization test and a minimum audible angle test, which was measured at eight azimuth directions. Dynamic sound tracking ability was evaluated by the subject correctly indicating the direction of a moving stimulus along two circular paths around the subject. RESULTS PI mode led to statistically significant sound localization and discrimination improvements. For static sound localization, the greatest benefit was a reduction in the number of front-back confusions. The front-back confusion rate was reduced from 47% with OMNI mode to 35% with PI mode (p = 0.03). The ability to discriminate sound sources straight to the sides (90° and 270° angle) was only possible with PI mode. The averaged minimum audible angle value for the 90° and 270° angle positions decreased from a 75.5° to a 37.7° angle when PI mode was used (p < 0.001). Furthermore, a non-significant trend towards an improvement in the ability to track moving sound sources was observed for both trajectories tested (p = 0.34 and p = 0.27). CONCLUSIONS Our results demonstrate that PI mode can lead to improved spatial hearing performance in bilateral cochlear implant users, mainly as a consequence of improved front-back discrimination with PI mode.
22
Anderson SR, Easter K, Goupell MJ. Effects of rate and age in processing interaural time and level differences in normal-hearing and bilateral cochlear-implant listeners. J Acoust Soc Am 2019;146:3232. [PMID: 31795662] [PMCID: PMC6948219] [DOI: 10.1121/1.5130384]
Abstract
Bilateral cochlear implants (BICIs) provide improved sound localization and speech understanding in noise compared to unilateral CIs. However, normal-hearing (NH) listeners demonstrate superior binaural processing abilities compared to BICI listeners. This investigation sought to understand differences between NH and BICI listeners' processing of interaural time differences (ITDs) and interaural level differences (ILDs) as a function of fine-structure and envelope rate using an intracranial lateralization task. The NH listeners were presented band-limited acoustical pulse trains and sinusoidally amplitude-modulated tones using headphones, and the BICI listeners were presented single-electrode electrical pulse trains using direct stimulation. Lateralization range increased as fine-structure rate increased for ILDs in BICI listeners. Lateralization range decreased for rates above 100 Hz for fine-structure ITDs, but decreased for rates lower or higher than 100 Hz for envelope ITDs in both groups. Lateralization ranges for ITDs were smaller for BICI listeners on average. After controlling for age, older listeners showed smaller lateralization ranges and BICI listeners had a more rapid decline for ITD sensitivity at 300 pulses per second. This work suggests that age confounds comparisons between NH and BICI listeners in temporal processing tasks and that some NH-BICI binaural processing differences persist even when age differences are adequately addressed.
Affiliation(s)
- Sean R Anderson
- Waisman Center, University of Wisconsin-Madison, Madison, Wisconsin 53705, USA
- Kyle Easter
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA