1
Buss E, Richter ME, Sloop AD, Dillon MT. Estimating Cochlear Implant Users' Sound Localization Abilities With Two Loudspeakers. Trends Hear 2025; 29:23312165251340864. PMID: 40368405; PMCID: PMC12078988; DOI: 10.1177/23312165251340864.
Abstract
The ability to tell where sound sources are in space is ecologically important for spatial awareness and communication in multisource environments. While hearing aids and cochlear implants (CIs) can support spatial hearing for some users, this ability is not routinely assessed clinically. The present study compared sound source localization for a 200-ms speech-shaped noise presented using real sources at 18° intervals from -54° to +54° azimuth and virtual sources simulated by amplitude panning between sources at -54° and +54°. Participants were 34 adult CI or electric-acoustic stimulation users, including individuals with single-sided deafness or aided acoustic hearing. The pattern of localization errors by participant was broadly similar for real and virtual sources, with some modest differences. For example, the root mean square (RMS) error for the two conditions was correlated at r = .89 (p < .001), with RMS error elevated by a mean of 3.9° for virtual sources. These results suggest that sound source localization with two-speaker amplitude panning may provide clinically useful information when testing with real sources is infeasible.
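The two-speaker amplitude panning and the RMS error metric described in this abstract can be sketched as follows. This is a minimal illustration assuming the stereophonic tangent panning law with power normalization; the function names and the normalization choice are ours, not details from the study.

```python
import math

def panning_gains(target_az, spk_az=54.0):
    """Tangent-law amplitude panning between loudspeakers at -spk_az/+spk_az
    degrees (positive azimuth = right). Returns (left_gain, right_gain),
    normalized so the two gains have unit total power."""
    t = math.tan(math.radians(target_az)) / math.tan(math.radians(spk_az))
    # Solve (g_right - g_left) / (g_right + g_left) = t, then normalize.
    g_left, g_right = 1.0 - t, 1.0 + t
    norm = math.hypot(g_left, g_right)
    return g_left / norm, g_right / norm

def rms_error(responses, targets):
    """Root mean square localization error in degrees across trials."""
    n = len(responses)
    return math.sqrt(sum((r - t) ** 2 for r, t in zip(responses, targets)) / n)
```

With this mapping, a target at 0° gives equal gains (phantom source centered between the ±54° loudspeakers), and a target at +54° routes all energy to the right speaker.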
Affiliation(s)
- Emily Buss
- Department of Otolaryngology/Head & Neck Surgery, University of North Carolina School of Medicine, Chapel Hill, NC, USA
- Margaret E. Richter
- Department of Otolaryngology/Head & Neck Surgery, University of North Carolina School of Medicine, Chapel Hill, NC, USA
- Amanda D. Sloop
- Department of Otolaryngology/Head & Neck Surgery, University of North Carolina School of Medicine, Chapel Hill, NC, USA
- Margaret T. Dillon
- Department of Otolaryngology/Head & Neck Surgery, University of North Carolina School of Medicine, Chapel Hill, NC, USA
2
Snir A, Cieśla K, Vekslar R, Amedi A. Highly compromised auditory spatial perception in aided congenitally hearing-impaired and rapid improvement with tactile technology. iScience 2024; 27:110808. PMID: 39290844; PMCID: PMC11407022; DOI: 10.1016/j.isci.2024.110808.
Abstract
Spatial understanding is a multisensory construct, yet hearing is the only natural sense that enables simultaneous perception of the entire 3D space. To test whether such spatial understanding depends on auditory experience, we study congenitally hearing-impaired users of assistive devices. We apply an in-house technology which, inspired by the auditory system, performs intensity weighting to represent external spatial positions and motion on the fingertips. We observe highly impaired auditory spatial capabilities for tracking moving sources, which, consistent with the "critical periods" theory, emphasizes the role of nature in sensory development. Meanwhile, for tactile and audio-tactile spatial motion perception, the hearing-impaired show performance similar to typically hearing individuals. The immediate availability of a 360° representation of external space through touch, despite the lack of such experience during the lifetime, points to the significant role of nurture in spatial perception development, and to its amodal character. The findings show promise for advancing multisensory solutions for rehabilitation.
Affiliation(s)
- Adi Snir
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8 Herzliya 461010, Israel
- Katarzyna Cieśla
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8 Herzliya 461010, Israel
- World Hearing Centre, Institute of Physiology and Pathology of Hearing, Mokra 17, 05-830 Kajetany, Nadarzyn, Poland
- Rotem Vekslar
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8 Herzliya 461010, Israel
- Amir Amedi
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8 Herzliya 461010, Israel
3
Pastore MT, Pulling KR, Chen C, Yost WA, Dorman MF. Synchronizing Automatic Gain Control in Bilateral Cochlear Implants Mitigates Dynamic Localization Deficits Introduced by Independent Bilateral Compression. Ear Hear 2024; 45:969-984. PMID: 38472134; DOI: 10.1097/aud.0000000000001492.
Abstract
OBJECTIVES The independence of left and right automatic gain controls (AGCs) used in cochlear implants can distort interaural level differences and thereby compromise dynamic sound source localization. We assessed the degree to which synchronizing left and right AGCs mitigates those difficulties, as indicated by listeners' ability to use the changes in interaural level differences that come with head movements to avoid front-back reversals (FBRs). DESIGN Broadband noise stimuli were presented from one of six equally spaced loudspeakers surrounding the listener. Sound source identification was tested for stimuli presented at 70 dBA (above AGC threshold) for 10 bilateral cochlear implant patients, under conditions where (1) patients remained stationary and (2) free head movements within ±30° were encouraged. These conditions were repeated for both synchronized and independent AGCs. The same conditions were run at 50 dBA, below the AGC threshold, to assess listeners' baseline performance when AGCs were not engaged. In this way, the expected high variability in listener performance could be separated from the effects of independent AGCs to reveal the degree to which synchronizing AGCs could restore localization performance to what it was without AGC compression. RESULTS The mean rate of FBRs was higher for stimuli presented at 70 dBA with independent AGCs, both with and without head movements, than at 50 dBA, suggesting that independently engaged AGCs contributed to poorer front-back localization. When listeners remained stationary, synchronizing AGCs did not significantly reduce the rate of FBRs. When AGCs were independent at 70 dBA, head movements did not have a significant effect on the rate of FBRs. Head movements did have a significant group effect on the rate of FBRs at 50 dBA when AGCs were not engaged and at 70 dBA when AGCs were synchronized. Synchronization of AGCs, together with head movements, reduced the rate of FBRs to approximately the 50-dBA baseline level. Synchronizing AGCs also had a significant group effect on listeners' overall percent correct localization. CONCLUSIONS Synchronizing AGCs allowed listeners to mitigate front-back confusions introduced by unsynchronized AGCs when head motion was permitted, returning individual performance to roughly the 50-dBA baseline when AGCs were not engaged. Synchronization of AGCs did not overcome localization deficiencies that were observed when AGCs were not engaged, and that are therefore unrelated to AGC compression.
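Why independent compression shrinks interaural level differences (ILDs), and why linking the gains preserves them, can be illustrated with a toy static compressor. This is a sketch, not the implants' actual AGC: the threshold, ratio, and louder-ear linking rule are illustrative assumptions.

```python
def compressor_gain_db(level_db, threshold_db=60.0, ratio=3.0):
    """Static compression: above threshold, output level grows by only
    1/ratio dB per input dB, so the applied gain (in dB) is negative."""
    if level_db <= threshold_db:
        return 0.0
    return -(level_db - threshold_db) * (1.0 - 1.0 / ratio)

def output_ild(left_db, right_db, synchronized):
    """ILD after compression, for independent vs. linked (synchronized) AGCs."""
    if synchronized:
        # Linked AGC: one shared gain, here driven by the louder ear.
        g = compressor_gain_db(max(left_db, right_db))
        g_left = g_right = g
    else:
        # Independent AGCs: each ear compresses its own level, so the
        # louder ear is attenuated more and the ILD is reduced.
        g_left = compressor_gain_db(left_db)
        g_right = compressor_gain_db(right_db)
    return (left_db + g_left) - (right_db + g_right)
```

With a 10 dB input ILD above threshold (75 vs. 65 dB), independent compression shrinks the output ILD while the linked gain preserves the full 10 dB; below threshold (the 50-dBA baseline condition), neither configuration alters the ILD.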
Affiliation(s)
- M Torben Pastore
- College of Health Solutions, Arizona State University, Tempe, Arizona, USA
- Kathryn R Pulling
- College of Health Solutions, Arizona State University, Tempe, Arizona, USA
- Chen Chen
- Advanced Bionics, Valencia, California, USA
- William A Yost
- College of Health Solutions, Arizona State University, Tempe, Arizona, USA
- Michael F Dorman
- College of Health Solutions, Arizona State University, Tempe, Arizona, USA
4
Alemu RZ, Papsin BC, Harrison RV, Blakeman A, Gordon KA. Head and Eye Movements Reveal Compensatory Strategies for Acute Binaural Deficits During Sound Localization. Trends Hear 2024; 28:23312165231217910. PMID: 38297817; PMCID: PMC10832417; DOI: 10.1177/23312165231217910.
Abstract
The present study aimed to define use of head and eye movements during sound localization in children and adults to: (1) assess effects of stationary versus moving sound and (2) define effects of binaural cues degraded through acute monaural ear plugging. Thirty-three youth (mean age = 12.9 years) and seventeen adults (mean age = 24.6 years) with typical hearing were recruited and asked to localize white noise anywhere within a horizontal arc from -60° (left) to +60° (right) azimuth in two conditions (typical binaural and right ear plugged). In each trial, sound was presented at an initial stationary position (L1) and then while moving at ∼4°/s until reaching a second position (L2). Sound moved in five conditions (±40°, ±20°, or 0°). Participants adjusted a laser pointer to indicate L1 and L2 positions. Unrestricted head and eye movements were collected with gyroscopic sensors on the head and eye-tracking glasses, respectively. Results confirmed that accurate sound localization of both stationary and moving sound is disrupted by acute monaural ear plugging. Eye movements preceded head movements for sound localization in normal binaural listening and head movements were larger than eye movements during monaural plugging. Head movements favored the unplugged left ear when stationary sounds were presented in the right hemifield and during sound motion in both hemifields regardless of the movement direction. Disrupted binaural cues have greater effects on localization of moving than stationary sound. Head movements reveal preferential use of the better-hearing ear and relatively stable eye positions likely reflect normal vestibular-ocular reflexes.
Affiliation(s)
- Robel Z. Alemu
- Archie's Cochlear Implant Laboratory, The Hospital for Sick Children, Toronto, ON, Canada
- Institute of Medical Science, The University of Toronto, Toronto, ON, Canada
- Blake C. Papsin
- Archie's Cochlear Implant Laboratory, The Hospital for Sick Children, Toronto, ON, Canada
- Institute of Medical Science, The University of Toronto, Toronto, ON, Canada
- Department of Otolaryngology-Head & Neck Surgery, University of Toronto, Toronto, ON, Canada
- Department of Otolaryngology, The Hospital for Sick Children, Toronto, ON, Canada
- Program in Neuroscience and Mental Health, Research Institute, Toronto, ON, Canada
- Robert V. Harrison
- Institute of Medical Science, The University of Toronto, Toronto, ON, Canada
- Department of Otolaryngology-Head & Neck Surgery, University of Toronto, Toronto, ON, Canada
- Program in Neuroscience and Mental Health, Research Institute, Toronto, ON, Canada
- Al Blakeman
- Archie's Cochlear Implant Laboratory, The Hospital for Sick Children, Toronto, ON, Canada
- Karen A. Gordon
- Archie's Cochlear Implant Laboratory, The Hospital for Sick Children, Toronto, ON, Canada
- Institute of Medical Science, The University of Toronto, Toronto, ON, Canada
- Department of Otolaryngology-Head & Neck Surgery, University of Toronto, Toronto, ON, Canada
- Program in Neuroscience and Mental Health, Research Institute, Toronto, ON, Canada
- Department of Communication Disorders, The Hospital for Sick Children, Toronto, ON, Canada
5
Zhang H, Xie J, Tao Q, Xiao Y, Cui G, Fang W, Zhu X, Xu G, Li M, Han C. The effect of motion frequency and sound source frequency on steady-state auditory motion evoked potential. Hear Res 2023; 439:108897. PMID: 37871451; DOI: 10.1016/j.heares.2023.108897.
Abstract
The human ability to perceive moving sound sources is important for responding accurately to the environment. Periodic motion of a sound source can elicit a steady-state motion auditory evoked potential (SSMAEP). The purpose of this study was to investigate the effects of different motion frequencies and sound source frequencies on SSMAEP. Stimulation paradigms simulating periodic motion of sound sources were designed using head-related transfer function (HRTF) techniques. The motion frequencies of the paradigm were set to 1-10 Hz, 15 Hz, 20 Hz, 30 Hz, 40 Hz, 60 Hz, and 80 Hz. In addition, the sound source frequencies were set to 500 Hz, 1000 Hz, 2000 Hz, 3000 Hz, and 4000 Hz at motion frequencies of 6 Hz and 40 Hz. Fourteen subjects with normal hearing were recruited for the study. SSMAEP was elicited by a 500 Hz pure tone at all tested motion frequencies and was strongest at a motion frequency of 6 Hz. Moreover, at the 6 Hz motion frequency, the SSMAEP amplitude was largest for the 500 Hz tone and smallest for the 4000 Hz tone, whereas the SSMAEP elicited by the 4000 Hz pure tone was strongest at a motion frequency of 40 Hz. SSMAEP can thus be elicited by periodic motion of sound sources at motion frequencies up to 80 Hz, with strong responses at low motion frequencies. Low-frequency pure tones enhance SSMAEP for low-frequency source motion, whilst high-frequency pure tones enhance SSMAEP for high-frequency source motion. The study provides new insight into the brain's perception of rhythmic auditory motion.
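Steady-state responses such as SSMAEP are conventionally quantified as the EEG amplitude at the stimulation (motion) frequency. A minimal single-bin discrete Fourier transform sketch of that measurement (our illustration, not the authors' analysis pipeline; sampling rate and window are assumed):

```python
import math

def response_amplitude(samples, fs_hz, target_hz):
    """Amplitude of the steady-state component at target_hz (e.g., the
    6 Hz or 40 Hz motion frequency), computed as a single DFT bin over a
    window spanning an integer number of cycles of target_hz."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * target_hz * k / fs_hz)
             for k, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * target_hz * k / fs_hz)
             for k, s in enumerate(samples))
    # Factor 2/n converts the one-sided bin magnitude to peak amplitude.
    return 2.0 * math.hypot(re, im) / n
```

For a synthetic 6 Hz component of amplitude 3 sampled at 120 Hz for one second, the 6 Hz bin recovers amplitude 3 while the 40 Hz bin is near zero.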
Affiliation(s)
- Huanqing Zhang
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
- Jun Xie
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China; School of Mechanical Engineering, Xinjiang University, Urumqi, China; National Key Laboratory of Human Factors Engineering, China Astronauts Research and Training Center, Beijing, China; State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, China
- Qing Tao
- School of Mechanical Engineering, Xinjiang University, Urumqi, China
- Yi Xiao
- National Key Laboratory of Human Factors Engineering, China Astronauts Research and Training Center, Beijing, China
- Guiling Cui
- National Key Laboratory of Human Factors Engineering, China Astronauts Research and Training Center, Beijing, China
- Wenhu Fang
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
- Xinyu Zhu
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
- Guanghua Xu
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China; State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, China
- Min Li
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China; State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, China
- Chengcheng Han
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China; State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, China
6
Zhang H, Xie J, Xiao Y, Cui G, Xu G, Tao Q, Gebrekidan YY, Yang Y, Ren Z, Li M. Steady-state auditory motion based potentials evoked by intermittent periodic virtual sound source and the effect of auditory noise on EEG enhancement. Hear Res 2023; 428:108670. PMID: 36563411; DOI: 10.1016/j.heares.2022.108670.
Abstract
Hearing is one of the most important forms of human perception, and humans can track the movement of sound in complex environments. Building on this, the study explored whether an intermittently, periodically moving sound source can elicit a steady-state brain response. A novel stimulation paradigm was designed in which virtual sound source positions, generated with head-related transfer functions (HRTFs), changed in discrete yet continuous and orderly steps. Auditory motion stimulation paradigms with different noise levels were then created by varying the signal-to-noise ratio (SNR). The characteristics of the brain response and the effects of different noise levels were studied by analyzing electroencephalogram (EEG) signals evoked by the proposed stimulation. Experimental results showed that the paradigm elicited a novel steady-state auditory evoked potential (AEP), the steady-state motion auditory evoked potential (SSMAEP), and that moderate noise enhanced SSMAEP amplitude and the corresponding brain connectivity. This study enriches the known types of AEPs and provides insight into how the brain processes moving sound sources and how noise affects that processing.
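The two ingredients of the paradigm, an orderly periodic walk through discrete virtual positions and a controlled SNR, can be sketched as follows. This is a simplification: the position list, update rate, and function names are illustrative assumptions, and the actual stimuli were rendered through HRTFs rather than as bare azimuth values.

```python
def azimuth_sequence(positions, motion_freq_hz, duration_s, update_rate_hz):
    """Cycle through a list of discrete source positions in order, completing
    one full sweep of the list per motion period (1 / motion_freq_hz)."""
    n_updates = int(duration_s * update_rate_hz)
    steps_per_cycle = update_rate_hz / motion_freq_hz
    seq = []
    for k in range(n_updates):
        phase = (k / steps_per_cycle) % 1.0          # position within cycle
        idx = min(int(phase * len(positions)), len(positions) - 1)
        seq.append(positions[idx])
    return seq

def noise_scale_for_snr(signal_rms, noise_rms, snr_db):
    """Scale factor for a noise signal so the signal-to-noise power ratio
    matches the target SNR in dB."""
    return (signal_rms / noise_rms) * 10.0 ** (-snr_db / 20.0)
```

For example, a 1 Hz motion frequency over four positions at an 8 Hz update rate dwells two updates at each position per cycle, and a 20 dB SNR scales unit-RMS noise by 0.1.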
Affiliation(s)
- Huanqing Zhang
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
- Jun Xie
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China; National Key Laboratory of Human Factors Engineering, China Astronauts Research and Training Center, Beijing, China; State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, China; School of Mechanical Engineering, Xinjiang University, Urumqi, China
- Yi Xiao
- National Key Laboratory of Human Factors Engineering, China Astronauts Research and Training Center, Beijing, China
- Guiling Cui
- National Key Laboratory of Human Factors Engineering, China Astronauts Research and Training Center, Beijing, China
- Guanghua Xu
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China; State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, China
- Qing Tao
- School of Mechanical Engineering, Xinjiang University, Urumqi, China
- Yuzhe Yang
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
- Zhiyuan Ren
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
- Min Li
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China; State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, China
7
Abstract
OBJECTIVES We assessed whether spatial hearing training improves sound localization in bilateral cochlear implant (BCI) users and whether its benefits can generalize to untrained sound localization tasks. DESIGN In 20 BCI users, we assessed the effects of two training procedures (spatial versus nonspatial control training) on two different tasks performed before and after training (head-pointing to sound and audiovisual attention orienting). In the spatial training, participants identified sound position by reaching toward the sound sources with their hand. In the nonspatial training, comparable reaching movements served to identify sound amplitude modulations. A crossover randomized design allowed comparison of the training procedures within the same participants. Spontaneous head movements while listening to the sounds were allowed and tracked to correlate them with localization performance. RESULTS During spatial training, BCI users reduced their sound localization errors in azimuth and adapted their spontaneous head movements as a function of sound eccentricity. These effects generalized to the head-pointing sound localization task, as revealed by a greater reduction of sound localization error in azimuth and a more accurate first head-orienting response, compared to the control nonspatial training. BCI users benefited from auditory spatial cues for orienting visual attention, but the spatial training did not enhance this multisensory attention ability. CONCLUSIONS Sound localization in BCI users improves with spatial reaching-to-sound training, with benefits to a nontrained sound localization task. These findings pave the way for novel rehabilitation procedures in clinical contexts.
8
Hilmi Che Hassan MN, Zakaria MN, Wan Mohamad WN. Development and Validation of a Virtual Moving Auditory Localization (vMAL) Test among Healthy Children. International Journal of Statistics in Medical Research 2022; 11:162-168. DOI: 10.6000/1929-6029.2022.11.20.
Abstract
Introduction: The ability to localize sound sources is crucial for humans. Owing to specific hearing disorders, affected individuals may have difficulty accurately locating sound sources, leading to other unwanted consequences. Nevertheless, a simple auditory localization test that employs moving auditory stimuli is currently lacking in clinical settings. The objectives of the present study were to develop a virtual moving auditory localization (vMAL) test suitable for assessing children and to assess its validity and reliability. Materials and Methods: This study consisted of two consecutive phases. In phase 1, the required stimulus and the test setup for the vMAL test were established: two loudspeakers were employed to produce five virtual positions, and eight different moving conditions were constructed. In phase 2, 24 normal-hearing Malaysian children (aged 7-12 years) underwent the vMAL test. The validity and reliability of the test were then assessed using several validation measures, with Fleiss kappa and Spearman correlation analyses used to analyse the obtained data. Results: The vMAL test showed good convergent validity (kappa = 0.64) and good divergent validity (kappa = -0.06). Based on the item-total correlation and Spearman coefficient results, the test showed good internal reliability (rho = 0.36-0.75) and excellent external (test-retest) reliability (rho = 0.99). Conclusions: A new vMAL test was developed and shown to be valid and reliable for its intended applications. The test can be useful in clinical settings since it is simple to administer, cost-effective, requires little space, and can assess auditory localization performance in children. The outcomes of the present study may serve as preliminary normative data as well as guidelines for future auditory localization research.
9
Fernandez J, Sivonen V, Pulkki V. Investigating Bilateral Cochlear Implant Users' Localization of Amplitude- and Time-Panned Stimuli Produced Over a Limited Loudspeaker Arrangement. Am J Audiol 2022; 31:143-154. PMID: 35130033; DOI: 10.1044/2021_aja-21-00083.
Abstract
OBJECTIVE The objective of this study was to investigate the localization ability of bilateral cochlear implant (BiCI) users for virtual sound sources produced over a limited loudspeaker arrangement. DESIGN Ten BiCI users and 10 normal-hearing subjects participated in listening tests in which amplitude- and time-panned virtual sound sources were produced over a limited loudspeaker setup with varying azimuth angles. Three stimuli were utilized: speech, bandpassed pink noise between 20 Hz and 1 kHz, and bandpassed pink noise between 1 kHz and 8 kHz. The data were collected via a two-alternative forced-choice procedure and used to calculate the minimum audible angle (MAA) of each subject, which was subsequently compared to the results of previous studies in which real sound sources were employed. RESULTS The median MAAs of the amplitude-panned speech, low-frequency pink noise, and high-frequency pink noise stimuli for the BiCI group were 20°, 38°, and 12°, respectively. For the time-panned stimuli, the MAAs of the BiCI group for all three stimuli were close to the upper limit of the listening test. CONCLUSIONS The computed MAAs of the BiCI group for amplitude-panned speech were marginally larger than BiCI users' previously reported MAAs for real sound sources, whereas their computed MAAs for the time-panned stimuli were significantly larger. Subsequent statistical analysis indicated a significant difference in the BiCI group's performance between localizing amplitude-panned and time-panned sources. It follows that time panning over limited loudspeaker arrangements may not be a useful clinical tool, whereas amplitude panning with such a setup merits further exploration. Additionally, a comparison with patient demographics indicated correlations between the results and the patients' age at time of diagnosis and the time between diagnosis and implant surgery.
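Unlike amplitude panning (sketched under entry 1 with the tangent law), time panning shifts the phantom source by delaying one loudspeaker relative to the other. A toy linear azimuth-to-delay mapping is shown below; the study's exact panning functions, speaker angles, and delay range are not given here, so these values are assumptions.

```python
def time_panning_delays_ms(target_az_deg, spk_az_deg=45.0, max_delay_ms=1.0):
    """Time panning between two loudspeakers at +/-spk_az_deg: delaying the
    speaker opposite the intended direction shifts the phantom source toward
    the earlier speaker. Returns (left_delay_ms, right_delay_ms)."""
    frac = max(-1.0, min(1.0, target_az_deg / spk_az_deg))
    if frac >= 0.0:
        # Source to the right: delay the left speaker.
        return max_delay_ms * frac, 0.0
    # Source to the left: delay the right speaker.
    return 0.0, max_delay_ms * -frac
```

A centered target applies no delay; a target at the right speaker angle applies the full delay to the left speaker. Inter-speaker delays beyond roughly 1 ms no longer trade against perceived direction, which is one reason time panning over loudspeakers is fragile.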
Affiliation(s)
- Janani Fernandez
- Department of Signal Processing and Acoustics, Aalto University, Espoo, Finland
- Ville Sivonen
- Head and Neck Center, Department of Otorhinolaryngology—Head and Neck Surgery, Helsinki University Hospital and University of Helsinki, Finland
- Ville Pulkki
- Department of Signal Processing and Acoustics, Aalto University, Espoo, Finland
10
Omnidirectional Haptic Guidance for the Hearing Impaired to Track Sound Sources. Signals 2021. DOI: 10.3390/signals2030030.
Abstract
We developed a hearing assistance system that enables hearing-impaired people to track the horizontal movement of a single sound source. The movement of the sound source is presented to the subject by vibrators on both shoulders, driven according to the distance to and direction of the sound source, which are estimated from acoustic signals detected by microphones attached to both ears. Direction is conveyed by the ratio of the two vibrators' intensities, and distance by increasing overall intensity as the source draws closer. The subject can recognize an approaching sound source as a change in vibration intensity by turning their face toward the direction where the intensity of both vibrators is equal. The direction of the moving sound source can be tracked with an error of less than 5° when an analog vibration pattern is added to indicate the direction of the sound source. By presenting source direction with high accuracy, it is possible to convey the approach and departure of a sound source to subjects.
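The intensity-ratio coding described in this abstract can be sketched as a minimal model. The linear mapping, range limits, and function names are illustrative assumptions, not the authors' exact transfer functions.

```python
def vibrator_intensities(direction_deg, max_intensity=1.0):
    """Map a horizontal source direction (-90 = full left, +90 = full right)
    to left/right shoulder vibration intensities whose ratio encodes
    direction; the intensities are equal when the source is straight ahead."""
    w = (direction_deg + 90.0) / 180.0   # 0 (full left) .. 1 (full right)
    return max_intensity * (1.0 - w), max_intensity * w

def distance_gain(distance_m, ref_m=1.0):
    """Overall intensity grows as the source approaches, clamped at 1
    for sources closer than the reference distance."""
    return min(1.0, ref_m / max(distance_m, ref_m * 1e-6))
```

Facing the source (equal intensities) and noting a rising `distance_gain` corresponds to the "approaching source" percept described above.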
11
Fischer T, Schmid C, Kompis M, Mantokoudis G, Caversaccio M, Wimmer W. Effects of temporal fine structure preservation on spatial hearing in bilateral cochlear implant users. The Journal of the Acoustical Society of America 2021; 150:673. PMID: 34470279; DOI: 10.1121/10.0005732.
Abstract
Typically, the coding strategies of cochlear implant audio processors discard acoustic temporal fine structure (TFS) information, which may be related to the poor perception of interaural time differences (ITDs) and the resulting reduced spatial hearing capabilities compared to normal-hearing individuals. This study aimed to investigate to what extent bilateral cochlear implant (BiCI) recipients can exploit ITD cues provided by a TFS-preserving coding strategy (FS4) in a series of sound field spatial hearing tests. As a baseline, we assessed the sensitivity to ITDs and binaural beats of 12 BiCI subjects with a coding strategy disregarding fine structure (HDCIS) and with the FS4 strategy. For 250 Hz pure-tone stimuli, but not for broadband noise, the BiCI users had significantly improved ITD discrimination using the FS4 strategy. In the binaural beat detection task and the broadband sound localization, spatial discrimination, and tracking tasks, no significant differences between the two tested coding strategies were observed. These results indicate that the ITD sensitivity did not generalize to broadband stimuli or sound field spatial hearing tests, and thus may be of limited benefit for real-world listening.
Affiliation(s)
- T Fischer
- Department of ENT, Head and Neck Surgery, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse, 3010 Bern, Switzerland
- C Schmid
- Department of ENT, Head and Neck Surgery, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse, 3010 Bern, Switzerland
- M Kompis
- Department of ENT, Head and Neck Surgery, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse, 3010 Bern, Switzerland
- G Mantokoudis
- Department of ENT, Head and Neck Surgery, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse, 3010 Bern, Switzerland
- M Caversaccio
- Department of ENT, Head and Neck Surgery, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse, 3010 Bern, Switzerland
- W Wimmer
- Department of ENT, Head and Neck Surgery, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse, 3010 Bern, Switzerland
12
Dwyer RT, Chen C, Hehrmann P, Dwyer NC, Gifford RH. Synchronized Automatic Gain Control in Bilateral Cochlear Implant Recipients Yields Significant Benefit in Static and Dynamic Listening Conditions. Trends Hear 2021; 25:23312165211014139. PMID: 34027718; PMCID: PMC8150445; DOI: 10.1177/23312165211014139.
Abstract
Individuals with bilateral cochlear implants (BiCIs) rely mostly on interaural level difference (ILD) cues to localize stationary sounds in the horizontal plane. Independent automatic gain control (AGC) in each device can distort this cue, resulting in poorer localization of stationary sound sources. However, little is known about how BiCI listeners perceive sound in motion. In this study, 12 BiCI listeners' spatial hearing abilities were assessed for both static and dynamic listening conditions when the sound processors were synchronized by applying the same compression gain to both devices as a means to better preserve the original ILD cues. Stimuli consisted of band-pass filtered (100-8000 Hz) Gaussian noise presented at various locations or panned over an array of loudspeakers. In the static listening condition, the distance between two sequentially presented stimuli was adaptively varied to arrive at the minimum audible angle, the smallest spatial separation at which the listener can correctly determine whether the second sound was to the left or right of the first. In the dynamic listening condition, participants identified if a single stimulus moved to the left or to the right. Velocity was held constant and the distance the stimulus traveled was adjusted using an adaptive procedure to determine the minimum audible movement angle. Median minimum audible angle decreased from 17.1° to 15.3° with the AGC synchronized. Median minimum audible movement angle decreased from 100° to 25.5°. These findings were statistically significant and support the hypothesis that synchronizing the AGC better preserves ILD cues and results in improved spatial hearing abilities. However, restoration of the ILD cue alone was not enough to bridge the large performance gap between BiCI listeners and normal-hearing listeners on these static and dynamic spatial hearing measures.
Affiliation(s)
- Robert T. Dwyer: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, United States
- Chen Chen: Research and Technology, Advanced Bionics, LLC, Valencia, California, United States
- Phillipp Hehrmann: Research and Technology, Advanced Bionics, LLC, Valencia, California, United States
- Nichole C. Dwyer: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, United States
- René H. Gifford: Department of Hearing and Speech Sciences and Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, Tennessee, United States
|
13
|
Warnecke M, Peng ZE, Litovsky RY. The impact of temporal fine structure and signal envelope on auditory motion perception. PLoS One 2020; 15:e0238125. [PMID: 32822439 PMCID: PMC7446836 DOI: 10.1371/journal.pone.0238125] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2020] [Accepted: 08/10/2020] [Indexed: 02/02/2023] Open
Abstract
The majority of psychoacoustic research investigating sound localization has utilized stationary sources, yet most naturally occurring sounds are in motion, either because the sound source itself moves, or the listener does. Previous research in normal hearing (NH) listeners showed the extent to which sound duration and velocity impact the ability of listeners to detect sound movement. By contrast, little is known about how listeners with hearing impairments perceive moving sounds; the only study to date comparing the performance of NH and bilateral cochlear implant (BiCI) listeners has demonstrated significantly poorer performance on motion detection tasks in BiCI listeners. Cochlear implants, auditory prostheses offered to profoundly deaf individuals for access to spoken language, retain the signal envelope (ENV), while discarding the temporal fine structure (TFS) of the original acoustic input. As a result, BiCI users do not have access to low-frequency TFS cues, which have previously been shown to be crucial for sound localization in NH listeners. Instead, BiCI listeners seem to rely on ENV cues for sound localization, especially level cues. Given that NH and BiCI listeners differentially utilize ENV and TFS information, the present study aimed to investigate the usefulness of these cues for auditory motion perception. We created acoustic chimaera stimuli, which allowed us to test the relative contributions of ENV and TFS to auditory motion perception. Stimuli were either moving or stationary, presented to NH listeners in free field. The task was to track the perceived sound location. We found that removing low-frequency TFS reduces sensitivity to sound motion, and that fluctuating speech envelopes strongly biased the judgment of sounds to be stationary. Our findings yield a possible explanation as to why BiCI users struggle to identify sound motion, and provide a first account of cues important to the functional aspect of auditory motion perception.
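The chimaera construction that this abstract refers to (pairing the envelope of one sound with the fine structure of another, in the spirit of the Smith, Delgutte, and Oxenham approach) can be sketched in a single frequency band. The published stimuli were built in multiple bands; this one-band version, with an FFT-based analytic signal in place of a library Hilbert transform, only illustrates the ENV/TFS split and is not the study's stimulus code.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (equivalent to a Hilbert transform):
    zero out negative frequencies, double positive ones."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    h[1:(n + 1) // 2] = 2
    if n % 2 == 0:
        h[n // 2] = 1
    return np.fft.ifft(spectrum * h)

def chimaera(env_source, tfs_source):
    """Single-band auditory chimaera: the slow amplitude envelope (ENV)
    of one sound carried on the fast temporal fine structure (TFS) of
    another."""
    n = min(len(env_source), len(tfs_source))
    env = np.abs(analytic_signal(env_source[:n]))        # amplitude contour
    tfs = np.cos(np.angle(analytic_signal(tfs_source[:n])))  # unit-amplitude carrier
    return env * tfs
```

Swapping the two arguments swaps which cue each source contributes, which is exactly the manipulation that lets such studies measure the relative weight listeners give to ENV versus TFS.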
Affiliation(s)
- Michaela Warnecke: University of Wisconsin-Madison, Waisman Center, Madison, WI, United States of America
- Z. Ellen Peng: University of Wisconsin-Madison, Waisman Center, Madison, WI, United States of America
- Ruth Y. Litovsky: University of Wisconsin-Madison, Waisman Center, Madison, WI, United States of America
|
14
|
Pinna-Imitating Microphone Directionality Improves Sound Localization and Discrimination in Bilateral Cochlear Implant Users. Ear Hear 2020; 42:214-222. [PMID: 32701730 PMCID: PMC7757747 DOI: 10.1097/aud.0000000000000912] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
Abstract
OBJECTIVES To compare the sound-source localization, discrimination, and tracking performance of bilateral cochlear implant users with omnidirectional (OMNI) and pinna-imitating (PI) microphone directionality modes. DESIGN Twelve experienced bilateral cochlear implant users participated in the study. Their audio processors were fitted with two different programs featuring either the OMNI or PI mode. Each subject performed static and dynamic sound field spatial hearing tests in the horizontal plane. The static tests consisted of an absolute sound localization test and a minimum audible angle test, which was measured at eight azimuth directions. Dynamic sound tracking ability was evaluated by asking the subject to indicate the direction of a stimulus moving along either of two circular paths around the subject. RESULTS PI mode led to statistically significant improvements in sound localization and discrimination. For static sound localization, the greatest benefit was a reduction in the number of front-back confusions. The front-back confusion rate was reduced from 47% with OMNI mode to 35% with PI mode (p = 0.03). Discriminating sound sources directly to the sides (90° and 270° azimuth) was only possible with PI mode. The averaged minimum audible angle for the 90° and 270° positions decreased from 75.5° to 37.7° when PI mode was used (p < 0.001). Furthermore, a non-significant trend towards an improvement in the ability to track moving sound sources was observed for both trajectories tested (p = 0.34 and p = 0.27). CONCLUSIONS Our results demonstrate that PI mode can lead to improved spatial hearing performance in bilateral cochlear implant users, mainly as a consequence of improved front-back discrimination with PI mode.
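The front-back confusion rate reported above can be scored from raw localization responses by checking whether target and response fall in opposite front/back hemifields. This is a hedged sketch of one common scoring convention, not the study's actual analysis code; the exclusion of targets on the interaural axis (90°/270°) is an assumption.

```python
import numpy as np

def front_back_confusion_rate(target_az, response_az):
    """Fraction of trials in which the response lies in the opposite
    front/back hemifield from the target.  Azimuths in degrees:
    0 = front, 90 = right, 180 = back, 270 = left.  Targets exactly on
    the interaural axis (90 or 270) are excluded, since front/back is
    undefined there.
    """
    t = np.asarray(target_az) % 360
    r = np.asarray(response_az) % 360

    t_front = (t < 90) | (t > 270)
    t_back = (t > 90) & (t < 270)
    r_front = (r < 90) | (r > 270)
    r_back = (r > 90) & (r < 270)

    valid = t_front | t_back                      # drops 90/270 targets
    confusion = (t_front & r_back) | (t_back & r_front)
    return confusion[valid].mean()
```

For example, five trials with targets [0, 30, 330, 180, 150] and responses [180, 30, 330, 0, 150] contain two hemifield reversals, giving a rate of 0.4.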
|