1
Fernandez J, McCormack L, Hyvärinen P, Kressner AA. Investigating sound-field reproduction methods as perceived by bilateral hearing aid users and normal-hearing listeners. J Acoust Soc Am 2024; 155:1492-1502. [PMID: 38376347 DOI: 10.1121/10.0024875] [Received: 04/28/2023; Accepted: 01/26/2024]
Abstract
A perceptual study was conducted to investigate the perceived accuracy of two sound-field reproduction approaches as experienced by hearing-impaired (HI) and normal-hearing (NH) listeners. The methods under test were traditional signal-independent Ambisonics reproduction and a parametric signal-dependent alternative, both rendered at different Ambisonic orders. The experiment was repeated in two rooms: (1) an anechoic chamber, where the audio was delivered over an array of 44 loudspeakers; and (2) an acoustically-treated listening room with a comparable setup, which may be more easily constructed within clinical settings. Ten bilateral hearing aid users with mild-to-moderate symmetric hearing loss, wearing their devices, and 15 NH listeners were asked to rate the methods based on their perceived similarity to simulated reference conditions. In the majority of cases, the results indicate that the parametric reproduction method was rated as more similar to the reference conditions than the signal-independent alternative. This trend is evident for both groups, although the variation in responses was notably wider for the HI group. Furthermore, generally similar trends were observed between the two listening environments for the parametric method. The signal-independent approach, by contrast, was rated as more similar to the reference in the listening room.
Affiliation(s)
- Janani Fernandez
- Department of Information and Communications Engineering, Aalto University, Espoo, Finland
- Leo McCormack
- Department of Information and Communications Engineering, Aalto University, Espoo, Finland
- Petteri Hyvärinen
- Department of Information and Communications Engineering, Aalto University, Espoo, Finland
- Abigail Anne Kressner
- Department of Health Technology, Technical University of Denmark, Kongens Lyngby, Denmark
2
Valzolgher C, Capra S, Sum K, Finos L, Pavani F, Picinali L. Spatial hearing training in virtual reality with simulated asymmetric hearing loss. Sci Rep 2024; 14:2469. [PMID: 38291126 PMCID: PMC10827792 DOI: 10.1038/s41598-024-51892-0] [Received: 03/29/2023; Accepted: 01/10/2024]
Abstract
Sound localization is essential for perceiving the surrounding world and interacting with objects. This ability can be learned over time, and multisensory and motor cues play a crucial role in the learning process. A recent study demonstrated that when training localization skills, reaching to the sound source to determine its position reduced localization errors faster and to a greater extent than simply naming the sources' positions, even though in both tasks participants received the same feedback about the correct position of the sound source after a wrong response. However, it remains to be established which features made reaching to sounds more effective than naming. In the present study, we introduced a further condition in which the hand is the effector providing the response but does not reach toward the space occupied by the target source: the pointing condition. We tested three groups of participants (naming, pointing, and reaching groups), each performing a sound localization task in normal and altered listening situations (i.e., mild-moderate unilateral hearing loss) simulated through auditory virtual reality technology. The experiment comprised four blocks: during the first and last blocks, participants were tested under the normal listening condition, and during the second and third under the altered listening condition. We measured their performance, their subjective judgments (e.g., effort), and their head-related behavior (through kinematic tracking). First, performance decreased when participants were exposed to asymmetrical mild-moderate hearing impairment, particularly on the ipsilateral side and for the pointing group. Second, all groups decreased their localization errors across the altered listening blocks, but this reduction was larger for the reaching and pointing groups than for the naming group. Crucially, the reaching group showed a greater error reduction on the side where the listening alteration was applied. Furthermore, across blocks, the reaching and pointing groups increased their use of head motor behavior during the task (i.e., they increased approaching head movements toward the space of the sound) more than the naming group. Third, while performance in the unaltered blocks (first and last) was comparable, only the reaching group continued to exhibit head behavior similar to that developed during the altered blocks (second and third), corroborating the previously observed relationship between reaching to sounds and head movements. In conclusion, this study further demonstrates the effectiveness of reaching to sounds, compared to pointing and naming, in the learning process. This effect could relate both to the process of implementing goal-directed motor actions and to the role of reaching actions in fostering head-related motor strategies.
Affiliation(s)
- Chiara Valzolgher
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068, Rovereto, TN, Italy.
- Sara Capra
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068, Rovereto, TN, Italy
- Kevin Sum
- Audio Experience Design (www.axdesign.co.uk), Imperial College London, London, UK
- Livio Finos
- Department of Statistical Sciences, University of Padova, Padova, Italy
- Francesco Pavani
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068, Rovereto, TN, Italy
- Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Rovereto, Italy
- Centro Interuniversitario di Ricerca "Cognizione, Linguaggio e Sordità" (CIRCLeS), Rovereto, Italy
- Lorenzo Picinali
- Audio Experience Design (www.axdesign.co.uk), Imperial College London, London, UK
3
Valzolgher C, Bouzaid S, Grenouillet S, Gatel J, Ratenet L, Murenu F, Verdelet G, Salemme R, Gaveau V, Coudert A, Hermann R, Truy E, Farnè A, Pavani F. Training spatial hearing in unilateral cochlear implant users through reaching to sounds in virtual reality. Eur Arch Otorhinolaryngol 2023; 280:3661-3672. [PMID: 36905419 PMCID: PMC10313844 DOI: 10.1007/s00405-023-07886-1] [Received: 10/07/2022; Accepted: 02/13/2023]
Abstract
BACKGROUND AND PURPOSE: Use of a unilateral cochlear implant (UCI) is associated with limited spatial hearing skills, and evidence that training these abilities in UCI users is possible remains limited. In this study, we assessed whether a Spatial training based on hand-reaching to sounds performed in virtual reality improves spatial hearing abilities in UCI users. METHODS: Using a crossover randomized clinical trial, we compared the effects of a Spatial training protocol with those of a Non-Spatial control training. We tested 17 UCI users in a head-pointing to sound task and in an audio-visual attention orienting task, before and after each training. The study is registered at clinicaltrials.gov (NCT04183348). RESULTS: During the Spatial virtual reality training, sound localization errors in azimuth decreased. Moreover, when comparing head-pointing to sounds before vs. after training, localization errors decreased more after the Spatial training than after the control training. No training effects emerged in the audio-visual attention orienting task. CONCLUSIONS: Our results show that sound localization in UCI users improves during a Spatial training, with benefits that also extend to an untrained sound localization task (generalization). These findings have potential for novel rehabilitation procedures in clinical contexts.
Affiliation(s)
- Chiara Valzolgher
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini, 31 Rovereto, Trento, Italy.
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France.
- Sabrina Bouzaid
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Solene Grenouillet
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Francesca Murenu
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Grégoire Verdelet
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Neuroimmersion, Lyon, France
- Romeo Salemme
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Neuroimmersion, Lyon, France
- Valérie Gaveau
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Eric Truy
- Hospices Civils de Lyon, Lyon, France
- Alessandro Farnè
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Neuroimmersion, Lyon, France
- Francesco Pavani
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini, 31 Rovereto, Trento, Italy
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Centro Interuniversitario di Ricerca "Cognizione, Linguaggio e Sordità" (CIRCLeS), Trento, Italy
4
Serafin S, Adjorlu A, Percy-Smith LM. A Review of Virtual Reality for Individuals with Hearing Impairments. Multimodal Technol Interact 2023. [DOI: 10.3390/mti7040036] [Indexed: 03/30/2023]
Abstract
Virtual Reality (VR) technologies have the potential to be applied in clinical contexts to improve training and rehabilitation for individuals with hearing impairment. The introduction of such technologies into clinical audiology is in its infancy and requires devices that can be taken out of laboratory settings, as well as solid collaboration between researchers and clinicians. In this paper, we discuss the state of the art of VR in audiology, with applications to the measurement and monitoring of hearing loss, rehabilitation, and training, as well as the development of assistive technologies. We review papers that deliver VR through a head-mounted display (HMD) and either test individuals with hearing impairment or present solutions targeted at them, discussing their goals and results and analyzing how VR can be a useful tool in hearing research. The review shows the potential of VR for testing and training individuals with hearing impairment, as well as the need for more research and applications in this domain.
5
Audiovisual Training in Virtual Reality Improves Auditory Spatial Adaptation in Unilateral Hearing Loss Patients. J Clin Med 2023; 12:2357. [PMID: 36983357 PMCID: PMC10058351 DOI: 10.3390/jcm12062357] [Received: 01/30/2023; Revised: 03/13/2023; Accepted: 03/13/2023]
Abstract
Unilateral hearing loss (UHL) alters binaural cues, resulting in a significant increase in spatial errors in the horizontal plane. In this study, nineteen patients with UHL were recruited and randomized in a cross-over design into two groups: the first group (n = 9) received spatial audiovisual training in the first session and non-spatial audiovisual training in the second session (2 to 4 weeks after the first), while the second group (n = 10) received the same trainings in the opposite order (non-spatial, then spatial). A sound localization test using head-pointing (LOCATEST) was completed prior to and following each training session. The results showed a significant decrease in head-pointing localization errors after spatial training for group 1 (24.85° ± 15.8° vs. 16.17° ± 11.28°; p < 0.001). The number of head movements during the spatial training did not change across the 19 participants (p = 0.79); nonetheless, hand-pointing errors and reaction times decreased significantly by the end of the spatial training (p < 0.001). This study suggests that audiovisual spatial training can improve spatial performance and induce adaptation to a monaural deficit through the optimization of effective head movements. Virtual reality systems are relevant tools that can be used in clinics to develop training programs for patients with hearing impairments.
6
Gessa E, Giovanelli E, Spinella D, Verdelet G, Farnè A, Frau GN, Pavani F, Valzolgher C. Spontaneous head-movements improve sound localization in aging adults with hearing loss. Front Hum Neurosci 2022; 16:1026056. [PMID: 36310849 PMCID: PMC9609159 DOI: 10.3389/fnhum.2022.1026056] [Received: 08/23/2022; Accepted: 09/21/2022]
Abstract
Moving the head while a sound is playing improves its localization in human listeners, in children and adults, with or without hearing problems. It remains to be ascertained whether this benefit also extends to aging adults with hearing loss, a population in which spatial hearing difficulties are often documented and intervention solutions are scant. Here we examined the performance of older adults (61-82 years old) with symmetrical or asymmetrical age-related hearing loss while they localized sounds with their head fixed or free to move. Using motion tracking in combination with free-field sound delivery in visual virtual reality, we tested participants in two auditory spatial tasks: front-back discrimination and 3D sound localization in front space. Front-back discrimination was easier for participants with symmetrical compared to asymmetrical hearing loss, yet both groups reduced their front-back errors when head movements were allowed. In 3D sound localization, free head movements reduced errors in the horizontal dimension and in a composite measure of error in 3D space. Errors in 3D space also improved for participants with asymmetrical hearing impairment when the head was free to move. These preliminary findings extend the literature on the advantage of head movements for sound localization to aging adults with hearing loss, and suggest that the disparity of auditory cues at the two ears can modulate this benefit. These results point to the possibility of leveraging self-regulation strategies and active behavior when promoting spatial hearing skills.
Collapse
Affiliation(s)
- Elena Gessa
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy
- Elena Giovanelli
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy
- Grégoire Verdelet
- Integrative, Multisensory, Perception, Action and Cognition Team-IMPACT, Centre de Recherche en Neuroscience de Lyon, University Lyon 1, Lyon, France
- Neuro-immersion, Centre de Recherche en Neuroscience de Lyon, Lyon, France
- Alessandro Farnè
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy
- Integrative, Multisensory, Perception, Action and Cognition Team-IMPACT, Centre de Recherche en Neuroscience de Lyon, University Lyon 1, Lyon, France
- Neuro-immersion, Centre de Recherche en Neuroscience de Lyon, Lyon, France
- Francesco Pavani
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy
- Integrative, Multisensory, Perception, Action and Cognition Team-IMPACT, Centre de Recherche en Neuroscience de Lyon, University Lyon 1, Lyon, France
- Chiara Valzolgher
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy
- Integrative, Multisensory, Perception, Action and Cognition Team-IMPACT, Centre de Recherche en Neuroscience de Lyon, University Lyon 1, Lyon, France