1. Valzolgher C, Capra S, Sum K, Finos L, Pavani F, Picinali L. Spatial hearing training in virtual reality with simulated asymmetric hearing loss. Sci Rep 2024; 14:2469. [PMID: 38291126; PMCID: PMC10827792; DOI: 10.1038/s41598-024-51892-0]
Abstract
Sound localization is essential for perceiving the surrounding world and interacting with objects. This ability can be learned over time, and multisensory and motor cues play a crucial role in the learning process. A recent study demonstrated that, when training localization skills, reaching to the sound source to indicate its position reduced localization errors faster and to a greater extent than merely naming the sources' positions, even though in both tasks participants received the same feedback about the correct position of the sound source after a wrong response. However, it remains to be established which features made reaching to sounds more effective than naming. In the present study, we introduced a further condition in which the hand is the effector providing the response, but without reaching toward the space occupied by the target source: the pointing condition. We tested three groups of participants (naming, pointing, and reaching groups), each performing a sound localization task in normal and altered listening situations (i.e., mild-moderate unilateral hearing loss) simulated through auditory virtual reality technology. The experiment comprised four blocks: during the first and last blocks, participants were tested in the normal listening condition, and during the second and third in the altered listening condition. We measured their performance, their subjective judgments (e.g., effort), and their head-related behavior (through kinematic tracking). First, participants' performance decreased when exposed to asymmetrical mild-moderate hearing impairment, specifically on the ipsilateral side and for the pointing group. Second, all groups decreased their localization errors across the altered listening blocks, but this reduction was larger for the reaching and pointing groups than for the naming group. Crucially, the reaching group showed the greatest error reduction on the side where the listening alteration was applied. Furthermore, across blocks, the reaching and pointing groups increased their head motor behavior during the task (i.e., they made more approaching head movements toward the space of the sound) more than the naming group. Third, while performance in the unaltered blocks (first and last) was comparable, only the reaching group continued to exhibit head behavior similar to that developed during the altered blocks (second and third), corroborating the previously observed relationship between reaching to sounds and head movements. In conclusion, this study further demonstrates the effectiveness of reaching to sounds, as compared to pointing and naming, for the learning process. This effect could be related both to the implementation of goal-directed motor actions and to the role of reaching actions in fostering head-related motor strategies.
Affiliation(s)
- Chiara Valzolgher
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068, Rovereto, TN, Italy
- Sara Capra
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068, Rovereto, TN, Italy
- Kevin Sum
- Audio Experience Design (www.axdesign.co.uk), Imperial College London, London, UK
- Livio Finos
- Department of Statistical Sciences, University of Padova, Padova, Italy
- Francesco Pavani
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068, Rovereto, TN, Italy
- Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Rovereto, Italy
- Centro Interuniversitario di Ricerca "Cognizione, Linguaggio e Sordità" (CIRCLeS), Rovereto, Italy
- Lorenzo Picinali
- Audio Experience Design (www.axdesign.co.uk), Imperial College London, London, UK
2. Sound Localization Ability in Dogs. Vet Sci 2022; 9:vetsci9110619. [DOI: 10.3390/vetsci9110619]
Abstract
The minimum audible angle (MAA), defined as the smallest detectable difference between the azimuths of two identical sound sources, is a standard measure of spatial auditory acuity in animals. Few studies have explored the MAA of dogs, and those that have used methods that do not allow potential improvement throughout the assessment and tested very few dogs. To overcome these limits, we adopted a staircase method with 10 dogs, using a two-alternative forced-choice procedure with two sound sources and testing angles of separation from 60° to 1°. The staircase method permits the level of difficulty to be continuously adapted for each dog and allows for the observation of improvement over time. The dogs' average MAA was 7.6°, although with large interindividual variability, ranging from 1.3° to 13.2°. A global improvement was observed across the procedure, substantiated by a gradual lowering of the MAA and of choice latency across sessions. The results indicate that the staircase method is feasible and reliable for assessing auditory spatial localization in dogs, highlighting the importance of using an appropriate method in a sensory discrimination task so as to allow improvement over time. The results also reveal that the MAA of dogs is more variable than previously reported, potentially reaching values lower than 2°. Although no clear patterns of association emerged between MAA and dog characteristics such as ear shape, head shape, or age, the results suggest the value of larger-scale studies to determine whether these or other factors influence sound localization abilities in dogs.
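The adaptive logic of such a procedure can be sketched in a few lines. The following is a minimal illustration of a 2-down/1-up staircase converging on an MAA estimate; the step rule, the stopping criterion, and the placeholder response model `dog_responds_correctly` are illustrative assumptions, not the study's actual parameters.

```python
import random

def dog_responds_correctly(angle_deg, true_maa=7.6):
    """Placeholder psychometric model: chance (50%) well below the MAA,
    near-perfect discrimination well above it."""
    p = 0.5 + 0.48 * min(1.0, max(0.0, (angle_deg - true_maa) / true_maa + 0.5))
    return random.random() < p

def staircase(start=60.0, floor=1.0, step=0.8, n_reversals=8):
    """2-down/1-up staircase: the angle shrinks by `step` after two consecutive
    correct trials and grows by 1/`step` after an error; the MAA estimate is
    the mean of the reversal angles (the rule converges near 70.7% correct)."""
    angle, correct_run, last_dir = start, 0, None
    reversals = []
    while len(reversals) < n_reversals:
        if dog_responds_correctly(angle):
            correct_run += 1
            if correct_run == 2:
                correct_run, direction = 0, "down"
                new_angle = max(floor, angle * step)
            else:
                continue  # one correct trial: keep the same angle
        else:
            correct_run, direction = 0, "up"
            new_angle = min(start, angle / step)
        if last_dir and direction != last_dir:
            reversals.append(angle)  # a reversal: direction of change flipped
        last_dir, angle = direction, new_angle
    return sum(reversals) / len(reversals)

print(f"estimated MAA: {staircase():.1f} deg")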
3. Gaveau V, Coudert A, Salemme R, Koun E, Desoche C, Truy E, Farnè A, Pavani F. Benefits of active listening during 3D sound localization. Exp Brain Res 2022; 240:2817-2833. [PMID: 36071210; PMCID: PMC9587935; DOI: 10.1007/s00221-022-06456-x]
Abstract
In everyday life, sound localization entails more than just the extraction and processing of auditory cues. When determining sound position in three dimensions, the brain also considers the available visual information (e.g., visual cues to sound position) and resolves perceptual ambiguities through active listening behavior (e.g., spontaneous head movements while listening). Here, we examined to what extent spontaneous head movements improve sound localization in 3D (azimuth, elevation, and depth) by comparing static and active listening postures. To this aim, we developed a novel approach to sound localization based on sounds delivered in the environment and brought into alignment with virtual sources through a VR system. Our system proved effective for delivering sounds at predetermined and repeatable positions in 3D space, without imposing a physically constrained posture and with minimal training. In addition, it allowed measuring participant behavior (hand, head, and eye position) in real time. We report that active listening improved 3D sound localization, primarily by improving the accuracy and reducing the variability of responses in azimuth and elevation. The more participants made spontaneous head movements, the better their 3D sound localization performance. Thus, we provide proof of concept of a novel approach to the study of spatial hearing, with potential for clinical and industrial applications.
Affiliation(s)
- V Gaveau
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, 16 Av. Doyen Lépine, BRON cedex, 69500, Lyon, France
- University of Lyon 1, Lyon, France
- A Coudert
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, 16 Av. Doyen Lépine, BRON cedex, 69500, Lyon, France
- University of Lyon 1, Lyon, France
- ENT Departments, Hôpital Femme-Mère-Enfant and Edouard Herriot University Hospitals, Lyon, France
- R Salemme
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, 16 Av. Doyen Lépine, BRON cedex, 69500, Lyon, France
- University of Lyon 1, Lyon, France
- Neuro-immersion, Lyon, France
- E Koun
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, 16 Av. Doyen Lépine, BRON cedex, 69500, Lyon, France
- University of Lyon 1, Lyon, France
- C Desoche
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, 16 Av. Doyen Lépine, BRON cedex, 69500, Lyon, France
- University of Lyon 1, Lyon, France
- Neuro-immersion, Lyon, France
- E Truy
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, 16 Av. Doyen Lépine, BRON cedex, 69500, Lyon, France
- University of Lyon 1, Lyon, France
- ENT Departments, Hôpital Femme-Mère-Enfant and Edouard Herriot University Hospitals, Lyon, France
- A Farnè
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, 16 Av. Doyen Lépine, BRON cedex, 69500, Lyon, France
- University of Lyon 1, Lyon, France
- Neuro-immersion, Lyon, France
- F Pavani
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, 16 Av. Doyen Lépine, BRON cedex, 69500, Lyon, France
- University of Lyon 1, Lyon, France
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy
4. Robotham T, Rummukainen OS, Kurz M, Eckert M, Habets EAP. Comparing Direct and Indirect Methods of Audio Quality Evaluation in Virtual Reality Scenes of Varying Complexity. IEEE Trans Vis Comput Graph 2022; 28:2091-2101. [PMID: 35167464; DOI: 10.1109/tvcg.2022.3150491]
Abstract
Many quality evaluation methods are used to assess uni-modal audio or video content without considering the perceptual, cognitive, and interactive aspects present in virtual reality (VR) settings. Consequently, little is known about how the employed evaluation method, the content, and subject behavior affect quality ratings in VR. This mixed between- and within-subjects study uses four subjective audio quality evaluation methods (viz., multiple-stimulus with and without reference for direct scaling, and rank-order elimination and pairwise comparison for indirect scaling) to investigate the factors contributing to quality ratings of real-time audio rendering in multi-modal 6-DoF VR. For each method, employed between subjects, two sets of conditions in five VR scenes were evaluated within subjects. The conditions targeted attributes relevant for binaural audio reproduction, using scenes with varying amounts of user interactivity. Our results show that all referenceless methods produce similar results with both condition sets. However, rank-order elimination proved to be the fastest method, required the least repetitive motion, and yielded the highest discrimination between spatial conditions. Scene complexity was a main effect in the results, with behavioral and task-load-index results implying that more complex scenes and the interactive aspects of 6-DoF VR can impede quality judgments.
5. Honda A, Tsunokake S, Suzuki Y, Sakamoto S. Auditory Subjective-Straight-Ahead Blurs during Significantly Slow Passive Body Rotation. Iperception 2022; 13:20416695211070616. [PMID: 35024134; PMCID: PMC8744180; DOI: 10.1177/20416695211070616]
Abstract
This paper reports on the deterioration of sound-localization accuracy during listeners' head and body movements. We investigated sound-localization accuracy during passive body rotations at speeds in the range of 0.625-5°/s. Participants were asked to judge the position of a 30-ms noise stimulus relative to their subjective-straight-ahead reference. Results indicated that sound-localization resolution degraded with passive rotation, irrespective of rotation speed, even at speeds as slow as 0.625°/s.
Affiliation(s)
- Akio Honda
- Department of Information Design, Faculty of Informatics, Shizuoka Institute of Science and Technology, Fukuroi, Japan
- Sayaka Tsunokake
- Research Institute of Electrical Communication, Tohoku University, Sendai, Japan
- Yôiti Suzuki
- Research Institute of Electrical Communication, Tohoku University, Sendai, Japan
- Shuichi Sakamoto
- Research Institute of Electrical Communication, Tohoku University, Sendai, Japan
6. Neidhardt A, Schneiderwind C, Klein F. Perceptual Matching of Room Acoustics for Auditory Augmented Reality in Small Rooms - Literature Review and Theoretical Framework. Trends Hear 2022; 26:23312165221092919. [PMID: 35505625; PMCID: PMC9073123; DOI: 10.1177/23312165221092919]
Abstract
For the realization of auditory augmented reality (AAR), it is important that the room acoustical properties of the virtual elements are perceived in agreement with the acoustics of the actual environment. This perceptual matching of room acoustics is the subject reviewed in this paper. Realizations of AAR that fulfill listeners' expectations have been achieved based on a pre-characterization of the room acoustics, for example, by measuring acoustic impulse responses or creating detailed room models for acoustic simulations. For future applications, the goal is to realize online adaptation in (close to) real time. Perfect physical matching is hard to achieve under these practical constraints. For this reason, an understanding of the essential psychoacoustic cues is of interest and will help to explore options for simplification. This paper reviews a broad selection of previous studies and derives a theoretical framework to examine possibilities for the psychoacoustical optimization of room acoustical matching.
7. Pastore MT, Pulling KR, Chen C, Yost WA, Dorman MF. Effects of Bilateral Automatic Gain Control Synchronization in Cochlear Implants With and Without Head Movements: Sound Source Localization in the Frontal Hemifield. J Speech Lang Hear Res 2021; 64:2811-2824. [PMID: 34100627; PMCID: PMC8632503; DOI: 10.1044/2021_jslhr-20-00493]
Abstract
Purpose: For bilaterally implanted patients, the automatic gain control (AGC) in the left and right cochlear implant (CI) processors is usually neither linked nor synchronized. At high AGC compression ratios, this lack of coordination between the two processors can distort interaural level differences, the only useful interaural difference cue available to CI patients. This study assessed the improvement, if any, in the utility of interaural level differences for sound source localization in the frontal hemifield when AGCs were synchronized versus independent, and when listeners were stationary versus allowed to move their heads.
Method: Sound source identification of broadband noise stimuli was tested for seven bilateral CI patients using 13 loudspeakers in the frontal hemifield, under conditions where AGCs were linked and unlinked. For half the conditions, patients remained stationary; in the other half, they were encouraged to rotate or reorient their heads within a range of approximately ±30° during sound presentation.
Results: In general, those listeners who already localized reasonably well with independent AGCs gained the least from AGC synchronization, perhaps because there was less room for improvement. Those listeners who performed worst with independent AGCs gained the most from synchronization. All listeners performed as well or better with synchronization than without; however, intersubject variability was high. Head movements had little impact on the effectiveness of AGC synchronization.
Conclusion: Synchronization of AGCs offers one promising strategy for improving localization performance in the frontal hemifield for bilaterally implanted CI patients. Supplemental material: https://doi.org/10.23641/asha.14681412
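The distortion at issue can be made concrete with a toy example: under independent AGCs the louder (near) ear is compressed more than the quieter ear, shrinking the interaural level difference (ILD), whereas a shared gain preserves it. A minimal sketch, assuming a simple static compressor with an illustrative 3:1 ratio and 65 dB SPL knee rather than the devices' actual settings:

```python
# Why unlinked automatic gain control (AGC) distorts interaural level
# differences. A static compressor is assumed (ratio 3:1 above a 65 dB SPL
# knee); real CI processors apply time-varying gains, but the effect on the
# ILD is the same in spirit.
KNEE_DB, RATIO = 65.0, 3.0

def compress(level_db):
    """Output level of a static compressor for a given input level (dB SPL)."""
    if level_db <= KNEE_DB:
        return level_db
    return KNEE_DB + (level_db - KNEE_DB) / RATIO

near_ear, far_ear = 80.0, 70.0            # levels at the two ears: ILD = 10 dB

# Independent AGCs: each ear is compressed according to its own level.
ild_unlinked = compress(near_ear) - compress(far_ear)

# Linked/synchronized AGCs: both ears share the gain of the louder ear.
shared_gain = compress(near_ear) - near_ear
ild_linked = (near_ear + shared_gain) - (far_ear + shared_gain)

print(ild_unlinked)  # 3.33 dB: the 10 dB ILD has been compressed away
print(ild_linked)    # 10.0 dB: the ILD survives
```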
8. Valzolgher C, Verdelet G, Salemme R, Lombardi L, Gaveau V, Farné A, Pavani F. Reaching to sounds in virtual reality: A multisensory-motor approach to promote adaptation to altered auditory cues. Neuropsychologia 2020; 149:107665. [PMID: 33130161; DOI: 10.1016/j.neuropsychologia.2020.107665]
Abstract
When localising sounds in space, the brain relies on internal models that specify the correspondence between the auditory input reaching the ears, initial head position, and coordinates in external space. These models can be updated throughout life, setting the basis for re-learning spatial hearing abilities in adulthood. In addition, strategic behavioural adjustments allow people to quickly adapt to atypical listening situations. Until recently, the potential role of dynamic listening, involving head movements or reaching to sounds, has remained largely overlooked. Here, we exploited visual virtual reality (VR) and real-time kinematic tracking to study the role of active multisensory-motor interactions when hearing individuals adapt to altered binaural cues (one ear plugged and muffed). Participants were immersed in a VR scenario showing 17 virtual speakers at ear level. In each trial, they heard a sound delivered from a real speaker aligned with one of the virtual ones and were instructed either to reach and touch the perceived sound source (Reaching group) or to read the label associated with the speaker (Naming group). Participants were free to move their heads during the task and received audio-visual feedback on their performance. Most importantly, they performed the task under binaural or monaural listening. Results show that both groups adapted rapidly to monaural listening, improving sound localisation performance across trials and changing their head-movement behaviour. Reaching to the sounds induced faster and larger sound localisation improvements compared to just naming their positions. This benefit was linked to progressively wider head movements to explore auditory space, selectively in the Reaching group. In conclusion, reaching to sounds in an immersive visual VR context proved most effective for adapting to altered binaural listening. Head movements played an important role in adaptation, pointing to the importance of dynamic listening when implementing training protocols for improving spatial hearing.
Affiliation(s)
- Chiara Valzolgher
- IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), France
- Centre for Mind/Brain Sciences (CIMeC), University of Trento, Italy
- Romeo Salemme
- IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), France
- Neuro-immersion, Centre de Recherche en Neuroscience Lyon (CRNL), France
- Luigi Lombardi
- Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Italy
- Valerie Gaveau
- IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), France
- Alessandro Farné
- IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), France
- Centre for Mind/Brain Sciences (CIMeC), University of Trento, Italy
- Neuro-immersion, Centre de Recherche en Neuroscience Lyon (CRNL), France
- Francesco Pavani
- IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), France
- Centre for Mind/Brain Sciences (CIMeC), University of Trento, Italy
- Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Italy
9. Bermejo F, Di Paolo EA, Gilberto LG, Lunati V, Barrios MV. Learning to find spatially reversed sounds. Sci Rep 2020; 10:4562. [PMID: 32165690; PMCID: PMC7067813; DOI: 10.1038/s41598-020-61332-4]
Abstract
Adaptation to systematic visual distortions is well documented, but there is little evidence of similar adaptation to radical changes in audition. We use a pseudophone to transpose the sound streams arriving at the left and right ears, evaluating the perceptual effects it provokes and the possibility of learning to locate sounds in the reversed condition. Blindfolded participants remain seated at the center of a semicircular arrangement of 7 speakers and are asked to orient their head towards a sound source. We postulate that a key factor underlying adaptation is the self-generated activity that allows participants to learn new sensorimotor schemes. We investigate passive listening conditions (a very short stimulus that does not permit active exploration) and dynamic conditions (a continuous stimulus that allows participants time to freely move their heads or remain still). We analyze head-movement kinematics, localization errors, and qualitative reports. Results show movement-induced perceptual disruptions in the dynamic condition, with static sound sources displaying apparent movement. This effect is reduced after a short training period, and participants learn to find sounds in a left-right reversed field for all but the extreme lateral positions, where motor patterns are more restricted. Strategies become less exploratory and more direct with training. The results support the hypothesis that self-generated movements underlie adaptation to radical sensorimotor distortions.
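In signal terms, the pseudophone manipulation amounts to transposing the two ear channels. A minimal sketch, assuming a stereo buffer captured at ear level (real-time capture and playback are omitted):

```python
import numpy as np

# Pseudophone sketch: the streams reaching the left and right ears are
# transposed, mirroring the auditory field left-right. `binaural` is assumed
# to be an (n_samples, 2) array from ear-level microphones that would be
# played back over earphones.
def pseudophone(binaural: np.ndarray) -> np.ndarray:
    """Swap the left (column 0) and right (column 1) channels."""
    return binaural[:, ::-1]

stereo = np.random.randn(48000, 2)   # one second of placeholder audio at 48 kHz
reversed_field = pseudophone(stereo)
assert np.allclose(reversed_field[:, 0], stereo[:, 1])
```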
Affiliation(s)
- Fernando Bermejo
- Centro de Investigación y Transferencia en Acústica, Universidad Tecnológica Nacional - Facultad Regional Córdoba, CONICET, CP 5016, Córdoba, Argentina
- Facultad de Psicología, Universidad Nacional de Córdoba, CP 5016, Córdoba, Argentina
- Ezequiel A Di Paolo
- Ikerbasque, Basque Foundation for Science, Bilbao, Spain
- IAS-Research Center for Life, Mind, and Society, University of the Basque Country, San Sebastián, Spain
- Centre for Computational Neuroscience and Robotics, University of Sussex, Brighton, UK
- L Guillermo Gilberto
- Centro de Investigación y Transferencia en Acústica, Universidad Tecnológica Nacional - Facultad Regional Córdoba, CONICET, CP 5016, Córdoba, Argentina
- Consejo Nacional de Investigaciones Científicas y Tecnológicas (CONICET), Ciudad Autónoma de Buenos Aires, Argentina
- Valentín Lunati
- Centro de Investigación y Transferencia en Acústica, Universidad Tecnológica Nacional - Facultad Regional Córdoba, CONICET, CP 5016, Córdoba, Argentina
- Consejo Nacional de Investigaciones Científicas y Tecnológicas (CONICET), Ciudad Autónoma de Buenos Aires, Argentina
- M Virginia Barrios
- Centro de Investigación y Transferencia en Acústica, Universidad Tecnológica Nacional - Facultad Regional Córdoba, CONICET, CP 5016, Córdoba, Argentina
- Facultad de Psicología, Universidad Nacional de Córdoba, CP 5016, Córdoba, Argentina
10. Yost WA, Pastore MT, Pulling KR. Sound-source localization as a multisystem process: The Wallach azimuth illusion. J Acoust Soc Am 2019; 146:382. [PMID: 31370595; PMCID: PMC6656578; DOI: 10.1121/1.5116003]
Abstract
Wallach [J. Exp. Psychol. 27, 339-368 (1940)] described a "2-1" rotation scenario in which a sound source rotates on an azimuth circle around a rotating listener at twice the listener's rate of rotation. In this scenario, listeners often perceive an illusory stationary sound source, even though the actual sound source is rotating. This Wallach Azimuth Illusion (WAI) was studied to explore Wallach's description of sound-source localization as a required interaction of binaural and head-position cues (i.e., sound-source localization is a multisystem process). The WAI requires front-back reversed sound-source localization. To extend and consolidate the current understanding of the WAI, listeners and sound sources were rotated over larger distances and longer time periods than had been used before. The data demonstrate a strong correlation between measures of the predicted WAI locations and front-back reversals (FBRs). When sounds are unlikely to elicit FBRs, sound sources are perceived veridically as rotating, but the results are listener dependent. Listeners' eyes were always open, and under these conditions there was little evidence that changes in vestibular function affected the occurrence of the WAI. The results show that the WAI is a robust phenomenon that should be useful for further exploration of sound-source localization as a multisystem process.
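The geometry of the illusion can be worked through directly: under a front-back reversal, a head-centric azimuth a maps to 180° - a, so a source rotating at twice the listener's rate yields a constant inferred world position. A short numerical sketch, with an arbitrary rotation rate:

```python
import numpy as np

# Numerical sketch of the Wallach "2-1" geometry (angles in degrees).
# Assumptions: the listener rotates at w deg/s, the source at 2*w deg/s, and
# a front-back reversal maps a head-centric azimuth a onto 180 - a (a mirror
# across the interaural axis). The rate w is arbitrary, not from the paper.
w = 10.0                               # listener rotation rate, deg/s
t = np.arange(5.0)                     # sample times, s

theta_h = w * t                        # head orientation in world coordinates
theta_s = 2.0 * w * t                  # source azimuth in world coordinates
alpha = (theta_s - theta_h) % 360.0    # head-centric azimuth: the binaural cue

# Veridical percept: head position plus the head-centric cue recovers the
# true, rotating world position.
print((theta_h + alpha) % 360.0)             # [ 0. 20. 40. 60. 80.] rotating

# Front-back reversed percept: the same cue placed on the other side of the
# interaural axis yields a constant world position, the illusory stationary
# source.
print((theta_h + (180.0 - alpha)) % 360.0)   # [180. 180. 180. 180. 180.]
```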
Affiliation(s)
- William A Yost
- Spatial Hearing Laboratory, College of Health Solutions, Arizona State University, P.O. Box 870102, Tempe, Arizona 85287, USA
- M Torben Pastore
- Spatial Hearing Laboratory, College of Health Solutions, Arizona State University, P.O. Box 870102, Tempe, Arizona 85287, USA
- Kathryn R Pulling
- Spatial Hearing Laboratory, College of Health Solutions, Arizona State University, P.O. Box 870102, Tempe, Arizona 85287, USA
11. Kates JM, Arehart KH, Harvey LO. Integrating a remote microphone with hearing-aid processing. J Acoust Soc Am 2019; 145:3551. [PMID: 31255148; DOI: 10.1121/1.5111339]
Abstract
A remote microphone (RM) links a talker's microphone to a listener's hearing aids (HAs). The RM improves intelligibility in noise and reverberation, but the binaural cues necessary for externalization are lost. Augmenting the RM signal with synthesized binaural cues and early reflections enhances externalization, but interactions of the RM signal with the HA processing could reduce its effectiveness. These potential interactions were evaluated using RM plus HA processing in a realistic listening simulation. The HA input was the RM alone, the augmented RM signal, the acoustic inputs at the HA microphones, including reverberation measured using a dummy head, or a mixture of the augmented RM and acoustic input signals. The HA simulation implemented linear amplification or independent dynamic-range compression at the two ears and incorporated the acoustic effects of vented earmolds. Hearing-impaired listeners scored sentence stimuli for intelligibility and rated clarity, overall quality, externalization, and apparent source width. Using the RM improved intelligibility but reduced the spatial impression. Increasing the vent diameter reduced clarity and increased the spatial impression. Listener ratings reflect a trade-off between the attributes of clarity and overall quality and the attributes of externalization and source width that can be explained using the interaural cross correlation.
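The interaural cross correlation invoked in the last sentence is commonly quantified as the peak of the normalized cross-correlation between the two ear signals within about ±1 ms of lag, with lower values corresponding to wider, more diffuse images. A minimal sketch (the ±1 ms window is the conventional choice, not a parameter taken from the paper, and the signals are placeholders):

```python
import numpy as np

def iacc(left, right, fs, max_lag_ms=1.0):
    """Peak normalized interaural cross-correlation within +/-max_lag_ms."""
    max_lag = int(fs * max_lag_ms / 1000.0)
    norm = np.sqrt(np.sum(left**2) * np.sum(right**2))
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        # Correlate `right` against `left` shifted by `lag` samples.
        if lag >= 0:
            c = np.sum(left[lag:] * right[:len(right) - lag])
        else:
            c = np.sum(left[:lag] * right[-lag:])
        best = max(best, abs(c) / norm)
    return best

fs = 48000
noise = np.random.randn(fs)
print(iacc(noise, noise, fs))                 # ~1.0: fully correlated, narrow image
print(iacc(noise, np.random.randn(fs), fs))   # near 0: decorrelated, wide and diffuse
```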
Affiliation(s)
- James M Kates
- Department of Speech, Language, and Hearing Sciences, University of Colorado, Boulder, Colorado 80309, USA
- Kathryn H Arehart
- Department of Speech, Language, and Hearing Sciences, University of Colorado, Boulder, Colorado 80309, USA
- Lewis O Harvey
- Department of Psychology and Neuroscience, University of Colorado, Boulder, Colorado 80309, USA
12. Kates JM, Arehart KH, Muralimanohar RK, Sommerfeldt K. Externalization of remote microphone signals using a structural binaural model of the head and pinna. J Acoust Soc Am 2018; 143:2666. [PMID: 29857749; DOI: 10.1121/1.5032326]
Abstract
In a remote microphone (RM) system, a talker speaks into a microphone and the signal is transmitted to the hearing aids worn by the hearing-impaired listener. A difficulty with remote microphones, however, is that the signal received at the hearing aid bypasses the head and pinna, so the acoustic cues needed to externalize the sound source are missing. The objective of this paper is to process the RM signal to improve externalization when listening through earphones. The processing is based on a structural binaural model, which uses a cascade of processing modules to simulate the interaural level difference, interaural time difference, pinna reflections, ear-canal resonance, and early room reflections. The externalization results for the structural binaural model are compared to a left-right signal blend, the listener's own anechoic head-related impulse response (HRIR), and the listener's own HRIR with room reverberation. The azimuth is varied from straight ahead to 90° to one side. The results show that the structural binaural model is as effective as the listener's own HRIR plus reverberation in producing an externalized acoustic image, and that there is no significant difference in externalization between hearing-impaired and normal-hearing listeners.
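As a rough illustration of the structural approach, the sketch below implements two of the cascade's stages: an interaural time difference from Woodworth's spherical-head formula and a crude first-order low-pass on the far ear standing in for head shadow (the ILD). The pinna reflections, ear-canal resonance, and early room reflections of the published model are omitted, and all parameter values are illustrative assumptions rather than the authors' settings.

```python
import numpy as np

HEAD_RADIUS = 0.0875    # m, average adult head (assumed)
SPEED_OF_SOUND = 343.0  # m/s

def woodworth_itd(azimuth_deg):
    """ITD in seconds for a spherical head (Woodworth approximation)."""
    th = np.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (th + np.sin(th))

def render(mono, azimuth_deg, fs):
    """Return (left, right): the ITD as a whole-sample delay plus a simple
    one-pole low-pass on the ear farther from the source (head shadow)."""
    delay = int(round(woodworth_itd(abs(azimuth_deg)) * fs))
    near = mono
    # One-pole low-pass y[n] = (1-k)*x[n] + k*y[n-1]; stronger shadow for
    # larger azimuths (the coefficient is an illustrative choice).
    k = 0.3 * np.sin(np.radians(abs(azimuth_deg)))
    far = np.empty_like(mono)
    y = 0.0
    for n, x in enumerate(mono):
        y = (1.0 - k) * x + k * y
        far[n] = y
    far = np.concatenate([np.zeros(delay), far[:len(far) - delay]])
    if azimuth_deg >= 0:   # source to the right: the left ear is shadowed
        return far, near
    return near, far

fs = 48000
left, right = render(np.random.randn(fs), 45.0, fs)  # placeholder RM signal
```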
Affiliation(s)
- James M Kates
- Department of Speech, Language, and Hearing Sciences, University of Colorado, Boulder, Colorado 80309, USA
- Kathryn H Arehart
- Department of Speech, Language, and Hearing Sciences, University of Colorado, Boulder, Colorado 80309, USA
- Ramesh Kumar Muralimanohar
- Department of Speech, Language, and Hearing Sciences, University of Colorado, Boulder, Colorado 80309, USA
- Kristin Sommerfeldt
- Department of Speech, Language, and Hearing Sciences, University of Colorado, Boulder, Colorado 80309, USA
13. Jóhannesson ÓI, Balan O, Unnthorsson R, Moldoveanu A, Kristjánsson Á. The Sound of Vision Project: On the Feasibility of an Audio-Haptic Representation of the Environment, for the Visually Impaired. Brain Sci 2016; 6:brainsci6030020. [PMID: 27355966; PMCID: PMC5039449; DOI: 10.3390/brainsci6030020]
Abstract
The Sound of Vision project involves developing a sensory substitution device aimed at creating and conveying a rich auditory representation of the surrounding environment to the visually impaired. The feasibility of such an approach, however, is strongly constrained by neural flexibility, the possibilities of sensory substitution, and adaptation to changed sensory input. We review evidence for such flexibility from various perspectives. We discuss neuroplasticity of the adult brain, with an emphasis on functional changes in the visually impaired compared to sighted people. We discuss the effects of adaptation on brain activity, in particular short-term and long-term effects of repeated exposure to particular stimuli. We then discuss evidence for sensory substitution of the kind Sound of Vision involves, before finally discussing evidence for adaptation to changes in the auditory environment. We conclude that, in light of the available evidence, sensory substitution enterprises such as Sound of Vision are quite feasible.
Affiliation(s)
- Ómar I Jóhannesson
- Laboratory of Visual Perception and Visuo-motor control, Faculty of Psychology, School of Health Sciences, University of Iceland, Reykjavik 101, Iceland
- Oana Balan
- Faculty of Automatic Control and Computers, Computer Science and Engineering Department, University Politehnica of Bucharest, Bucharest 060042, Romania
- Runar Unnthorsson
- Faculty of Industrial Engineering, Mechanical Engineering and Computer Science, School of Engineering and Natural Sciences, University of Iceland, Reykjavik 101, Iceland
- Alin Moldoveanu
- Faculty of Automatic Control and Computers, Computer Science and Engineering Department, University Politehnica of Bucharest, Bucharest 060042, Romania
- Árni Kristjánsson
- Laboratory of Visual Perception and Visuo-motor control, Faculty of Psychology, School of Health Sciences, University of Iceland, Reykjavik 101, Iceland
14. Yost WA, Zhong X, Najam A. Judging sound rotation when listeners and sounds rotate: Sound source localization is a multisystem process. J Acoust Soc Am 2015; 138:3293-3310. [PMID: 26627802; DOI: 10.1121/1.4935091]
Abstract
In four experiments, listeners were either rotated or stationary, while sounds came from a stationary loudspeaker or rotated from loudspeaker to loudspeaker around an azimuth array. When either sounds or listeners rotate, the auditory cues used for sound source localization change; yet in the everyday world, listeners perceive sound rotation only when sounds rotate, not when listeners rotate. In the everyday world, sound source locations are referenced to positions in the environment (a world-centric reference system), whereas the auditory cues for sound source location indicate locations relative to the head (a head-centric reference system), not relative to the world. This paper deals with the general hypothesis that the world-centric localization of sound sources requires the auditory system to have information both about the auditory cues used for sound source location and about head position. The use of visual and vestibular information in determining rotating head position during sound rotation perception was investigated. The experiments show that sound rotation perception when sources and listeners rotate was based on acoustic, visual, and perhaps vestibular information. The findings are consistent with the general hypothesis and suggest that sound source localization is not based on acoustics alone: it is a multisystem process.
Affiliation(s)
- William A Yost
- Speech and Hearing Science, Arizona State University, P.O. Box 870102, Tempe, Arizona 85287, USA
- Xuan Zhong
- Speech and Hearing Science, Arizona State University, P.O. Box 870102, Tempe, Arizona 85287, USA
- Anbar Najam
- Speech and Hearing Science, Arizona State University, P.O. Box 870102, Tempe, Arizona 85287, USA
15. Sakurai K, Kitagawa N, Suzuki Y. An introduction to the special issue on Multisensory Perception. Iperception 2013; 4:211-212. [PMID: 24349681; PMCID: PMC3859564; DOI: 10.1068/ied0404]
Affiliation(s)
- Kenzo Sakurai
- Department of Psychology, Tohoku Gakuin University, 2-1-1 Tenjinzawa, Izumi-ku, Sendai 981-3193, Japan
- Norimichi Kitagawa
- Human Information Science Laboratory, NTT Communication Science Laboratories, 3-1 Morinosato Wakamiya, Atsugi, Kanagawa 243-0198, Japan
- Yôiti Suzuki
- Tohoku University, Research Institute of Electrical Communication, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577, Japan