1
Valzolgher C, Rosi T, Ghiselli S, Cuda D, Gullotta J, Zanetti D, Lilli G, Di Berardino F, Pozzi M, Ciorba A, Brunelli N, Musumano LB, Pavani F. Active listening modulates the spatial hearing experience: a multicentric study. Exp Brain Res 2024; 243:15. [PMID: 39636399] [DOI: 10.1007/s00221-024-06955-z] [Received: 07/30/2024] [Accepted: 10/28/2024] [Indexed: 12/07/2024]
Abstract
Although flexible and portable virtual reality technologies have simplified measuring participants' perception of acoustic space, their clinical adoption remains limited, often lacking ecological fidelity. In clinical practice, participants are typically instructed to remain still when testing sound localization, whereas head movements are crucial in daily life. Additionally, assessing spatial hearing extends beyond measuring accuracy to include meta-cognitive evaluations like perceived effort and confidence, which are rarely adopted. Our study hypothesized that allowing head movement during sound localization, compared to a static head condition, would reduce perceived listening effort and enhance confidence in normal hearing participants. Conducted across three audiology and otology hospital services in Northern Italy, the study involved personnel inexperienced with our VR equipment. This also tested the feasibility and usability of our VR approach in clinical settings. Results showed that head movements reduced subjective effort but did not significantly affect perceived confidence. However, during the active condition, participants reporting higher confidence exhibited less head movement and explored the space less. Similarly, those with less head movement reported lower listening effort. These findings underscore the importance of allowing natural posture to capture the full extent of spatial hearing capabilities and the value of including metacognitive evaluations in assessing performance. Our use of affordable, off-the-shelf VR equipment effectively measured spatial hearing in clinical settings, providing a flexible alternative to current static systems. This approach highlights the potential for more dynamic and comprehensive assessments in clinical audiology.
Affiliation(s)
- Chiara Valzolgher
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068, Rovereto, TN, Italy.
- Sara Ghiselli
- Department of Otolaryngology, AUSL Piacenza, Piacenza, Italy
- Domenico Cuda
- University of Parma, Parma, Italy
- Audiology Unit, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, Milano, Italy
- Diego Zanetti
- Audiology Unit, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, Milano, Italy
- Department of Clinical Sciences and Community Health, University of Milan, Milano, Italy
- Giorgio Lilli
- Audiology Unit, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, Milano, Italy
- Federica Di Berardino
- Audiology Unit, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, Milano, Italy
- Department of Clinical Sciences and Community Health, University of Milan, Milano, Italy
- Marco Pozzi
- Audiology Unit, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, Milano, Italy
- Andrea Ciorba
- ENT and Audiology Unit, Department of Neurosciences and Rehabilitation, University Hospital of Ferrara, Ferrara, Italy
- Nicola Brunelli
- ENT and Audiology Unit, Department of Neurosciences and Rehabilitation, University Hospital of Ferrara, Ferrara, Italy
- Lucia Belen Musumano
- ENT and Audiology Unit, Department of Neurosciences and Rehabilitation, University Hospital of Ferrara, Ferrara, Italy
- Francesco Pavani
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068, Rovereto, TN, Italy
- Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Rovereto, Italy
- Centro Interuniversitario Di Ricerca "Cognizione, Linguaggio E Sordità" (CIRCLeS), Trento, Italy
2
Rogalla MM, Quass GL, Yardley H, Martinez-Voigt C, Ford AN, Wallace G, Dileepkumar D, Corfas G, Apostolides PF. Population coding of auditory space in the dorsal inferior colliculus persists with altered binaural cues. bioRxiv 2024:2024.09.13.612867. [PMID: 39314270] [PMCID: PMC11419156] [DOI: 10.1101/2024.09.13.612867] [Indexed: 09/25/2024]
Abstract
Sound localization is critical for real-world hearing, such as segregating overlapping sound streams. For optimal flexibility, central representations of auditory space must adapt to peripheral changes in binaural cue availability, such as following asymmetric hearing loss in adulthood. However, whether the mature auditory system can reliably encode spatial auditory representations upon abrupt changes in binaural input is unclear. Here we use 2-photon Ca2+ imaging in awake head-fixed mice to determine how the higher-order "shell" layers of the inferior colliculus (IC) encode sound source location in the frontal azimuth, under binaural conditions and after acute monaural hearing loss induced by an ear plug ipsilateral to the imaged hemisphere. Spatial receptive fields were typically broad and not exclusively contralateral: neurons responded reliably to multiple positions in the contra- and ipsilateral hemifields, with preferred positions tiling the entire frontal azimuth. Ear plugging broadened receptive fields and reduced spatial selectivity in a subset of neurons, in agreement with an inhibitory influence of ipsilateral sounds. However, ear plugging also enhanced spatial tuning and/or unmasked receptive fields in other neurons, shifting the distribution of preferred angles ipsilaterally with minimal impact on the neuronal population's overall spatial resolution; these effects occurred within 2 hours of ear plugging. Consequently, linear classifiers trained on fluorescence data from control and ear-plugged conditions had similar classification accuracy when tested on held-out data from within, but not across, hearing conditions. Spatially informative neuronal population codes therefore arise rapidly following monaural hearing loss, in the absence of overt experience.
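The within- versus across-condition decoding comparison described in this abstract can be pictured with a toy linear classifier. The sketch below (NumPy only) trains a nearest-centroid decoder on synthetic "population fluorescence" and tests it within and across a simulated ear-plug condition; the tuning model, the numbers, and the circular shift standing in for the plug-induced ipsilateral shift are all hypothetical, not the study's actual analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_population(n_trials, n_neurons, n_positions, tuning):
    """Synthesize trial-by-neuron responses for sounds at n_positions azimuths."""
    labels = rng.integers(0, n_positions, n_trials)
    noise = rng.normal(0.0, 0.3, (n_trials, n_neurons))
    return tuning[labels] + noise, labels

def fit_centroids(X, y, n_classes):
    """Nearest-centroid linear decoder: store the mean response per position."""
    return np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])

def decode(X, centroids):
    dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1)

n_neurons, n_pos = 50, 8
prefs = rng.uniform(0, n_pos, n_neurons)           # broad Gaussian tuning curves
tuning_ctrl = np.exp(-0.5 * (np.arange(n_pos)[:, None] - prefs[None, :]) ** 2)
tuning_plug = np.roll(tuning_ctrl, 1, axis=0)      # crude stand-in for the plug-induced shift

X_ctrl, y_ctrl = make_population(400, n_neurons, n_pos, tuning_ctrl)
X_plug, y_plug = make_population(400, n_neurons, n_pos, tuning_plug)

centroids = fit_centroids(X_ctrl[:200], y_ctrl[:200], n_pos)
within = (decode(X_ctrl[200:], centroids) == y_ctrl[200:]).mean()
across = (decode(X_plug, centroids) == y_plug).mean()
print(f"within-condition accuracy: {within:.2f}, across-condition: {across:.2f}")
```

As in the study's result, the decoder generalizes to held-out trials from the same condition but not across conditions, because the tuning shift remaps which positions the population pattern indicates.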
Affiliation(s)
- Meike M. Rogalla
- Kresge Hearing Research Institute & Department of Otolaryngology – Head & Neck Surgery, University of Michigan Medical School, Ann Arbor, United States
- Gunnar L. Quass
- Kresge Hearing Research Institute & Department of Otolaryngology – Head & Neck Surgery, University of Michigan Medical School, Ann Arbor, United States
- Harry Yardley
- Kresge Hearing Research Institute & Department of Otolaryngology – Head & Neck Surgery, University of Michigan Medical School, Ann Arbor, United States
- Clara Martinez-Voigt
- Kresge Hearing Research Institute & Department of Otolaryngology – Head & Neck Surgery, University of Michigan Medical School, Ann Arbor, United States
- Alexander N. Ford
- Kresge Hearing Research Institute & Department of Otolaryngology – Head & Neck Surgery, University of Michigan Medical School, Ann Arbor, United States
- Gunseli Wallace
- Kresge Hearing Research Institute & Department of Otolaryngology – Head & Neck Surgery, University of Michigan Medical School, Ann Arbor, United States
- Deepak Dileepkumar
- Kresge Hearing Research Institute & Department of Otolaryngology – Head & Neck Surgery, University of Michigan Medical School, Ann Arbor, United States
- Gabriel Corfas
- Kresge Hearing Research Institute & Department of Otolaryngology – Head & Neck Surgery, University of Michigan Medical School, Ann Arbor, United States
- Pierre F. Apostolides
- Kresge Hearing Research Institute & Department of Otolaryngology – Head & Neck Surgery, University of Michigan Medical School, Ann Arbor, United States
- Department of Molecular and Integrative Physiology, University of Michigan Medical School, Ann Arbor, United States
3
Snir A, Cieśla K, Ozdemir G, Vekslar R, Amedi A. Localizing 3D motion through the fingertips: Following in the footsteps of elephants. iScience 2024; 27:109820. [PMID: 38799571] [PMCID: PMC11126990] [DOI: 10.1016/j.isci.2024.109820] [Received: 01/14/2024] [Revised: 03/07/2024] [Accepted: 04/24/2024] [Indexed: 05/29/2024]
Abstract
Each sense serves a different specific function in spatial perception, and they all form a joint multisensory spatial representation. For instance, hearing enables localization in the entire 3D external space, while touch traditionally only allows localization of objects on the body (i.e., within the peripersonal space alone). We use an in-house touch-motion algorithm (TMA) to evaluate individuals' capability to understand externalized 3D information through touch, a skill that was not acquired during an individual's development or in evolution. Four experiments demonstrate quick learning and high accuracy in localization of motion using vibrotactile inputs on fingertips and successful audio-tactile integration in background noise. Subjective responses in some participants imply spatial experiences through visualization and perception of tactile "moving" sources beyond reach. We discuss our findings with respect to developing new skills in an adult brain, including combining a newly acquired "sense" with an existing one and computation-based brain organization.
Affiliation(s)
- Adi Snir
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- Katarzyna Cieśla
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- World Hearing Centre, Institute of Physiology and Pathology of Hearing, Mokra 17, 05-830 Kajetany, Nadarzyn, Poland
- Gizem Ozdemir
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- Rotem Vekslar
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- Amir Amedi
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
4
Cervantes Constantino F, Sánchez-Costa T, Cipriani GA, Carboni A. Visuospatial attention revamps cortical processing of sound amid audiovisual uncertainty. Psychophysiology 2023; 60:e14329. [PMID: 37166096] [DOI: 10.1111/psyp.14329] [Received: 04/30/2022] [Revised: 04/13/2023] [Accepted: 04/25/2023] [Indexed: 05/12/2023]
Abstract
Selective attentional biases arising from one sensory modality manifest in others. The effects of visuospatial attention, important in visual object perception, are unclear in the auditory domain during audiovisual (AV) scene processing. We investigate temporal and spatial factors that underlie such transfer neurally. Auditory encoding of random tone pips in AV scenes was addressed via a temporal response function model (TRF) of participants' electroencephalogram (N = 30). The spatially uninformative pips were associated with spatially distributed visual contrast reversals ("flips"), through asynchronous probabilistic AV temporal onset distributions. Participants deployed visuospatial selection on these AV stimuli to perform a task. A late (~300 ms) cross-modal influence over the neural representation of pips was found in the original and a replication study (N = 21). Transfer depended on selected visual input being (i) presented during or shortly after a related sound, in relatively limited temporal distributions (<165 ms); (ii) positioned across limited (1:4) visual foreground to background ratios. Neural encoding of auditory input, as a function of visual input, was largest at visual foreground quadrant sectors and lowest at locations opposite to the target. The results indicate that ongoing neural representations of sounds incorporate visuospatial attributes for auditory stream segregation, as cross-modal transfer conveys information that specifies the identity of multisensory signals. A potential mechanism is by enhancing or recalibrating the tuning properties of the auditory populations that represent them as objects. The results account for the dynamic evolution under visual attention of multisensory integration, specifying critical latencies at which relevant cortical networks operate.
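A temporal response function of the kind used in this study is, at its core, a lagged linear regression from the stimulus to the EEG. A minimal ridge-regression sketch on synthetic single-channel data (the kernel shape, lag count, and noise level are invented for illustration; real TRF analyses typically use multichannel EEG and cross-validated regularization):

```python
import numpy as np

def trf_ridge(stimulus, eeg, n_lags, lam=1.0):
    """Estimate a temporal response function by ridge regression:
    eeg[t] ~ sum_k w[k] * stimulus[t - k] (single channel, causal lags)."""
    X = np.stack([np.roll(stimulus, k) for k in range(n_lags)], axis=1)
    X[:n_lags] = 0.0  # discard rows contaminated by np.roll wrap-around
    return np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ eeg)

rng = np.random.default_rng(2)
stim = rng.normal(0, 1, 5000)                 # stand-in for a tone-pip onset train
true_trf = np.exp(-np.arange(30) / 8) * np.sin(np.arange(30) / 3)
eeg = np.convolve(stim, true_trf)[:5000] + rng.normal(0, 0.5, 5000)

w = trf_ridge(stim, eeg, n_lags=30)
print(f"correlation with true kernel: {np.corrcoef(w, true_trf)[0, 1]:.2f}")
```

The recovered lag weights trace the kernel that was convolved into the signal; in the study, the analogous weights at each latency (e.g., ~300 ms) index how strongly the sound is represented.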
Affiliation(s)
- Francisco Cervantes Constantino
- Centro de Investigación Básica en Psicología, Facultad de Psicología, Universidad de la República, Montevideo, Uruguay
- Instituto de Fundamentos y Métodos en Psicología, Facultad de Psicología, Universidad de la República, Montevideo, Uruguay
- Instituto de Investigaciones Biológicas "Clemente Estable", Montevideo, Uruguay
- Thaiz Sánchez-Costa
- Centro de Investigación Básica en Psicología, Facultad de Psicología, Universidad de la República, Montevideo, Uruguay
- Germán A Cipriani
- Centro de Investigación Básica en Psicología, Facultad de Psicología, Universidad de la República, Montevideo, Uruguay
- Alejandra Carboni
- Centro de Investigación Básica en Psicología, Facultad de Psicología, Universidad de la República, Montevideo, Uruguay
- Instituto de Fundamentos y Métodos en Psicología, Facultad de Psicología, Universidad de la República, Montevideo, Uruguay
5
Shim L, Lee J, Han JH, Jeon H, Hong SK, Lee HJ. Feasibility of Virtual Reality-Based Auditory Localization Training With Binaurally Recorded Auditory Stimuli for Patients With Single-Sided Deafness. Clin Exp Otorhinolaryngol 2023; 16:217-224. [PMID: 37080730] [PMCID: PMC10471910] [DOI: 10.21053/ceo.2023.00206] [Received: 02/15/2023] [Revised: 04/08/2023] [Accepted: 04/15/2023] [Indexed: 04/22/2023]
Abstract
OBJECTIVES: To train participants to localize sound using virtual reality (VR) technology, appropriate auditory stimuli that contain accurate spatial cues are essential. The generic head-related transfer function that grounds the programmed spatial audio in VR does not reflect individual variation in monaural spatial cues, which is critical for auditory spatial perception in patients with single-sided deafness (SSD). As binaural difference cues are unavailable, auditory spatial perception is a typical problem in the SSD population and warrants intervention. This study assessed the applicability of binaurally recorded auditory stimuli in VR-based training for sound localization in SSD patients. METHODS: Sixteen subjects with SSD and 38 normal-hearing (NH) controls underwent VR-based training for sound localization and were assessed 3 weeks after completing training. The VR program incorporated prerecorded auditory stimuli created individually in the SSD group and over an anthropometric model in the NH group. RESULTS: Sound localization performance revealed significant improvements in both groups after training, with retained benefits lasting for an additional 3 weeks. Subjective improvements in spatial hearing were confirmed in the SSD group. CONCLUSION: VR-based training for sound localization with individually recorded binaural stimuli was effective and beneficial for individuals with SSD and NH. Furthermore, VR-based training does not require sophisticated instruments or setups. These results suggest that this technique represents a new therapeutic treatment for impaired sound localization.
Affiliation(s)
- Leeseul Shim
- Laboratory of Brain and Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Korea
- Ear and Interaction Center, Doheun Institute for Digital Innovation in Medicine (D.I.D.I.M.), Hallym University Medical Center, Anyang, Korea
- Jihyun Lee
- Laboratory of Brain and Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Korea
- Ear and Interaction Center, Doheun Institute for Digital Innovation in Medicine (D.I.D.I.M.), Hallym University Medical Center, Anyang, Korea
- Ji-Hye Han
- Laboratory of Brain and Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Korea
- Ear and Interaction Center, Doheun Institute for Digital Innovation in Medicine (D.I.D.I.M.), Hallym University Medical Center, Anyang, Korea
- Hanjae Jeon
- Laboratory of Brain and Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Korea
- Sung-Kwang Hong
- Laboratory of Brain and Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Korea
- Department of Otorhinolaryngology-Head and Neck Surgery, Hallym University College of Medicine, Chuncheon, Korea
- Hyo-Jeong Lee
- Laboratory of Brain and Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Korea
- Ear and Interaction Center, Doheun Institute for Digital Innovation in Medicine (D.I.D.I.M.), Hallym University Medical Center, Anyang, Korea
- Department of Otorhinolaryngology-Head and Neck Surgery, Hallym University College of Medicine, Chuncheon, Korea
6
Buck AN, Buchholz S, Schnupp JW, Rosskothen-Kuhl N. Interaural time difference sensitivity under binaural cochlear implant stimulation persists at high pulse rates up to 900 pps. Sci Rep 2023; 13:3785. [PMID: 36882473] [PMCID: PMC9992369] [DOI: 10.1038/s41598-023-30569-0] [Received: 09/01/2022] [Accepted: 02/27/2023] [Indexed: 03/09/2023]
Abstract
Spatial hearing remains one of the major challenges for bilateral cochlear implant (biCI) users, and early deaf patients in particular are often completely insensitive to interaural time differences (ITDs) delivered through biCIs. One popular hypothesis is that this may be due to a lack of early binaural experience. However, we have recently shown that neonatally deafened rats fitted with biCIs in adulthood quickly learn to discriminate ITDs as well as their normal hearing littermates, and perform an order of magnitude better than human biCI users. Our unique behaving biCI rat model allows us to investigate other possible limiting factors of prosthetic binaural hearing, such as the effect of stimulus pulse rate and envelope shape. Previous work has indicated that ITD sensitivity may decline substantially at the high pulse rates often used in clinical practice. We therefore measured behavioral ITD thresholds in neonatally deafened, adult implanted biCI rats to pulse trains of 50, 300, 900 and 1800 pulses per second (pps), with either rectangular or Hanning window envelopes. Our rats exhibited very high sensitivity to ITDs at pulse rates up to 900 pps for both envelope shapes, similar to those in common clinical use. However, ITD sensitivity declined to near zero at 1800 pps, for both Hanning and rectangular windowed pulse trains. Current clinical cochlear implant (CI) processors are often set to pulse rates ≥ 900 pps, but ITD sensitivity in human CI listeners has been reported to decline sharply above ~ 300 pps. Our results suggest that the relatively poor ITD sensitivity seen at > 300 pps in human CI users may not reflect the hard upper limit of biCI ITD performance in the mammalian auditory pathway. Perhaps with training or better CI strategies, good binaural hearing may be achievable at pulse rates high enough to allow good sampling of speech envelopes while delivering usable ITDs.
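The stimuli in this study can be pictured as binaural pulse trains whose rate, envelope, and interaural delay are the manipulated parameters. A simplified two-channel sketch (unit-amplitude pulses; actual CI stimulation uses charge-balanced biphasic pulses delivered to electrodes, so this is illustrative only):

```python
import numpy as np

def binaural_pulse_train(pps, duration_s, itd_s, fs=48_000, envelope="rect"):
    """Two-channel unit-amplitude pulse train; the second channel lags by itd_s.
    Note: fs quantizes the ITD (~21 microsecond steps at 48 kHz)."""
    n = int(duration_s * fs)
    period = int(round(fs / pps))
    left = np.zeros(n)
    left[::period] = 1.0
    right = np.roll(left, int(round(itd_s * fs)))  # apply the ITD as a sample shift
    if envelope == "hanning":
        win = np.hanning(n)
        left, right = left * win, right * win
    return np.stack([left, right])

# 200-ms train at 900 pps with a +100 microsecond ITD (right ear lagging)
stim = binaural_pulse_train(900, 0.2, 100e-6)
print(stim.shape)
```

Passing `envelope="hanning"` applies the ramped window contrasted with the rectangular one in the study; at 1800 pps the inter-pulse interval halves, which is the regime where the rats' ITD sensitivity collapsed.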
Affiliation(s)
- Alexa N Buck
- Department of Neuroscience, City University of Hong Kong, Kowloon Tong, Hong Kong SAR, China
- City University of Hong Kong Shenzhen Research Institute, Shenzhen, China
- Plasticity of Central Auditory Circuits, Institut de l'Audition, Institut Pasteur, Paris, France
- Sarah Buchholz
- Neurobiological Research Laboratory, Section of Clinical and Experimental Otology, Department of Oto-Rhino-Laryngology, Faculty of Medicine, Medical Center-University of Freiburg, University of Freiburg, Killianst. 5, 79106, Freiburg im Breisgau, Germany
- Jan W Schnupp
- Department of Neuroscience, City University of Hong Kong, Kowloon Tong, Hong Kong SAR, China
- City University of Hong Kong Shenzhen Research Institute, Shenzhen, China
- Nicole Rosskothen-Kuhl
- Department of Neuroscience, City University of Hong Kong, Kowloon Tong, Hong Kong SAR, China
- Neurobiological Research Laboratory, Section of Clinical and Experimental Otology, Department of Oto-Rhino-Laryngology, Faculty of Medicine, Medical Center-University of Freiburg, University of Freiburg, Killianst. 5, 79106, Freiburg im Breisgau, Germany
- Bernstein Center Freiburg and Faculty of Biology, University of Freiburg, Freiburg, Germany
7
Hong F, Badde S, Landy MS. Repeated exposure to either consistently spatiotemporally congruent or consistently incongruent audiovisual stimuli modulates the audiovisual common-cause prior. Sci Rep 2022; 12:15532. [PMID: 36109544] [PMCID: PMC9478143] [DOI: 10.1038/s41598-022-19041-7] [Received: 05/04/2022] [Accepted: 08/23/2022] [Indexed: 11/09/2022]
Abstract
To estimate an environmental property such as object location from multiple sensory signals, the brain must infer their causal relationship. Only information originating from the same source should be integrated. This inference relies on the characteristics of the measurements, the information the sensory modalities provide on a given trial, as well as on a cross-modal common-cause prior: accumulated knowledge about the probability that cross-modal measurements originate from the same source. We examined the plasticity of this cross-modal common-cause prior. In a learning phase, participants were exposed to a series of audiovisual stimuli that were either consistently spatiotemporally congruent or consistently incongruent; participants' audiovisual spatial integration was measured before and after this exposure. We fitted several Bayesian causal-inference models to the data; the models differed in the plasticity of the common-source prior. Model comparison revealed that, for the majority of the participants, the common-cause prior changed during the learning phase. Our findings reveal that short periods of exposure to audiovisual stimuli with a consistent causal relationship can modify the common-cause prior. In accordance with previous studies, both exposure conditions could either strengthen or weaken the common-cause prior at the participant level. Simulations imply that the direction of the prior update might be mediated by the degree of sensory noise, that is, the variability of the measurements of the same signal across trials during the learning phase.
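The common-cause prior in this abstract is the p(C = 1) term of the standard Gaussian causal-inference model. A minimal sketch of how that prior combines with a pair of measurements to yield a posterior probability of a common source (all parameter values are arbitrary illustrations):

```python
import numpy as np

def posterior_common_cause(x_a, x_v, sigma_a, sigma_v, sigma_p, p_common):
    """Posterior probability that auditory (x_a) and visual (x_v) measurements
    share one source, under Gaussian likelihoods and a zero-centered spatial prior."""
    va, vv, vp = sigma_a**2, sigma_v**2, sigma_p**2
    # C = 1: a single source s is integrated out, giving a joint density of (x_a, x_v)
    det1 = va * vv + va * vp + vv * vp
    like1 = np.exp(-0.5 * ((x_a - x_v)**2 * vp + x_a**2 * vv + x_v**2 * va) / det1) \
        / (2 * np.pi * np.sqrt(det1))
    # C = 2: two independent sources, so the marginal densities multiply
    like2 = np.exp(-0.5 * (x_a**2 / (va + vp) + x_v**2 / (vv + vp))) \
        / (2 * np.pi * np.sqrt((va + vp) * (vv + vp)))
    return like1 * p_common / (like1 * p_common + like2 * (1 - p_common))

# nearby measurements favor a common cause; discrepant ones favor separate causes
near = posterior_common_cause(1.0, 1.5, sigma_a=2, sigma_v=1, sigma_p=10, p_common=0.5)
far = posterior_common_cause(-8.0, 8.0, sigma_a=2, sigma_v=1, sigma_p=10, p_common=0.5)
print(f"near: {near:.2f}, far: {far:.2f}")
```

Raising or lowering `p_common` shifts both posteriors up or down, which is how a learned change in the prior alters subsequent integration behavior.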
8
Gaveau V, Coudert A, Salemme R, Koun E, Desoche C, Truy E, Farnè A, Pavani F. Benefits of active listening during 3D sound localization. Exp Brain Res 2022; 240:2817-2833. [PMID: 36071210] [PMCID: PMC9587935] [DOI: 10.1007/s00221-022-06456-x] [Received: 01/05/2022] [Accepted: 08/28/2022] [Indexed: 11/29/2022]
Abstract
In everyday life, sound localization entails more than just the extraction and processing of auditory cues. When determining sound position in three dimensions, the brain also considers the available visual information (e.g., visual cues to sound position) and resolves perceptual ambiguities through active listening behavior (e.g., spontaneous head movements while listening). Here, we examined to what extent spontaneous head movements improve sound localization in 3D (azimuth, elevation, and depth) by comparing static vs. active listening postures. To this aim, we developed a novel approach to sound localization based on sounds delivered in the environment, brought into alignment thanks to a VR system. Our system proved effective for the delivery of sounds at predetermined and repeatable positions in 3D space, without imposing a physically constrained posture, and with minimal training. In addition, it allowed measuring participant behavior (hand, head and eye position) in real time. We report that active listening improved 3D sound localization, primarily by ameliorating accuracy and variability of responses in azimuth and elevation. The more participants made spontaneous head movements, the better was their 3D sound localization performance. Thus, we provide proof of concept of a novel approach to the study of spatial hearing, with potential for clinical and industrial applications.
Affiliation(s)
- V Gaveau
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, 16 Av. Doyen Lépine, BRON cedex, 69500, Lyon, France
- University of Lyon 1, Lyon, France
- A Coudert
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, 16 Av. Doyen Lépine, BRON cedex, 69500, Lyon, France
- University of Lyon 1, Lyon, France
- ENT Departments, Hôpital Femme-Mère-Enfant and Edouard Herriot University Hospitals, Lyon, France
- R Salemme
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, 16 Av. Doyen Lépine, BRON cedex, 69500, Lyon, France
- University of Lyon 1, Lyon, France
- Neuro-immersion, Lyon, France
- E Koun
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, 16 Av. Doyen Lépine, BRON cedex, 69500, Lyon, France
- University of Lyon 1, Lyon, France
- C Desoche
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, 16 Av. Doyen Lépine, BRON cedex, 69500, Lyon, France
- University of Lyon 1, Lyon, France
- Neuro-immersion, Lyon, France
- E Truy
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, 16 Av. Doyen Lépine, BRON cedex, 69500, Lyon, France
- University of Lyon 1, Lyon, France
- ENT Departments, Hôpital Femme-Mère-Enfant and Edouard Herriot University Hospitals, Lyon, France
- A Farnè
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, 16 Av. Doyen Lépine, BRON cedex, 69500, Lyon, France
- University of Lyon 1, Lyon, France
- Neuro-immersion, Lyon, France
- F Pavani
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, 16 Av. Doyen Lépine, BRON cedex, 69500, Lyon, France
- University of Lyon 1, Lyon, France
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy
9
Klingel M, Laback B. Binaural-cue Weighting and Training-Induced Reweighting Across Frequencies. Trends Hear 2022; 26:23312165221104872. [PMID: 35791626] [PMCID: PMC9272187] [DOI: 10.1177/23312165221104872] [Indexed: 11/16/2022]
Abstract
During sound lateralization, the information provided by interaural differences in time (ITD) and level (ILD) is weighted, with ITDs and ILDs dominating for low and high frequencies, respectively. For mid frequencies, the weighting between these binaural cues can be changed via training. The present study investigated whether binaural-cue weights change gradually with increasing frequency region, whether they can be changed in various frequency regions, and whether such binaural-cue reweighting generalizes to untrained frequencies. In two experiments, a total of 39 participants lateralized 500-ms, 1/3-octave-wide noise bursts containing various ITD/ILD combinations in a virtual audio-visual environment. Binaural-cue weights were measured before and after a 2-session training in which, depending on the group, either ITDs or ILDs were visually reinforced. In experiment 1, four frequency bands (centered at 1000, 1587, 2520, and 4000 Hz) and a multiband stimulus comprising all four bands were presented during weight measurements. During training, only the 1000-, 2520-, and 4000-Hz bands were presented. In experiment 2, the weight measurements only included the two mid-frequency bands, while the training only included the 1587-Hz band. ILD weights increased gradually from low- to high-frequency bands. When ILDs were reinforced during training, they increased for the 4000- (experiment 1) and 2520-Hz band (experiment 2). When ITDs were reinforced, ITD weights increased only for the 1587-Hz band (at specific azimuths). This suggests that ILD reweighting requires high frequencies, whereas ITD reweighting requires low frequencies, excluding frequency regions that provide fine-structure ITD cues. The changes in binaural-cue weights were independent of the trained bands, suggesting some generalization of binaural-cue reweighting.
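The cue weighting measured here is commonly modeled as a weighted average of the azimuths the two cues individually imply. A sketch of that model and of recovering the ILD weight from cue-conflict responses by least squares (all numbers are hypothetical, not the study's data):

```python
import numpy as np

def lateralize(itd_az, ild_az, w_ild):
    """Perceived azimuth as a weighted average of the cue-implied azimuths."""
    return w_ild * ild_az + (1.0 - w_ild) * itd_az

def fit_ild_weight(itd_az, ild_az, responses):
    """Least-squares estimate of the ILD weight from cue-conflict trials:
    responses - itd_az = w * (ild_az - itd_az) + noise."""
    x = ild_az - itd_az
    y = responses - itd_az
    return float(x @ y / (x @ x))

rng = np.random.default_rng(1)
itd_az = rng.uniform(-45, 45, 200)   # azimuth implied by the ITD (degrees)
ild_az = rng.uniform(-45, 45, 200)   # azimuth implied by the ILD (degrees)
true_w = 0.7                          # e.g. a high-frequency band dominated by ILDs
resp = lateralize(itd_az, ild_az, true_w) + rng.normal(0, 3, 200)

w_hat = fit_ild_weight(itd_az, ild_az, resp)
print(f"recovered ILD weight: {w_hat:.2f}")
```

Reweighting by training then corresponds to a shift in the fitted `w_ild` between the pre- and post-training measurements for a given frequency band.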
Affiliation(s)
- Maike Klingel
- Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Wien, Austria
- Acoustics Research Institute, Austrian Academy of Sciences, Wien, Austria
- Bernhard Laback
- Acoustics Research Institute, Austrian Academy of Sciences, Wien, Austria
10
Hanenberg C, Schlüter MC, Getzmann S, Lewald J. Short-Term Audiovisual Spatial Training Enhances Electrophysiological Correlates of Auditory Selective Spatial Attention. Front Neurosci 2021; 15:645702. [PMID: 34276281] [PMCID: PMC8280319] [DOI: 10.3389/fnins.2021.645702] [Received: 12/23/2020] [Accepted: 06/09/2021] [Indexed: 11/13/2022]
Abstract
Audiovisual cross-modal training has been proposed as a tool to improve human spatial hearing. Here, we investigated training-induced modulations of event-related potential (ERP) components that have been associated with processes of auditory selective spatial attention when a speaker of interest has to be localized in a multiple speaker ("cocktail-party") scenario. Forty-five healthy participants were tested, including younger (19-29 years; n = 21) and older (66-76 years; n = 24) age groups. Three conditions of short-term training (duration 15 min) were compared, requiring localization of non-speech targets under "cocktail-party" conditions with either (1) synchronous presentation of co-localized auditory-target and visual stimuli (audiovisual-congruency training) or (2) immediate visual feedback on correct or incorrect localization responses (visual-feedback training), or (3) presentation of spatially incongruent auditory-target and visual stimuli presented at random positions with synchronous onset (control condition). Prior to and after training, participants were tested in an auditory spatial attention task (15 min), requiring localization of a predefined spoken word out of three distractor words, which were presented with synchronous stimulus onset from different positions. Peaks of ERP components were analyzed with a specific focus on the N2, which is known to be a correlate of auditory selective spatial attention. N2 amplitudes were significantly larger after audiovisual-congruency training compared with the remaining training conditions for younger, but not older, participants. Also, at the time of the N2, distributed source analysis revealed an enhancement of neural activity induced by audiovisual-congruency training in dorsolateral prefrontal cortex (Brodmann area 9) for the younger group. These findings suggest that cross-modal processes induced by audiovisual-congruency training under "cocktail-party" conditions at a short time scale resulted in an enhancement of correlates of auditory selective spatial attention.
Affiliation(s)
- Stephan Getzmann, Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Jörg Lewald, Faculty of Psychology, Ruhr University Bochum, Bochum, Germany
11
Coudert A, Gaveau V, Gatel J, Verdelet G, Salemme R, Farne A, Pavani F, Truy E. Spatial Hearing Difficulties in Reaching Space in Bilateral Cochlear Implant Children Improve With Head Movements. Ear Hear 2021; 43:192-205. [PMID: 34225320 PMCID: PMC8694251 DOI: 10.1097/aud.0000000000001090] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
The aim of this study was to assess three-dimensional (3D) spatial hearing abilities in reaching space of children and adolescents fitted with bilateral cochlear implants (BCI). The study also investigated the impact of spontaneous head movements on sound localization abilities.
Affiliation(s)
- Aurélie Coudert, Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, Lyon, France; Department of Pediatric Otolaryngology-Head & Neck Surgery, Femme Mere Enfant Hospital, Hospices Civils de Lyon, Lyon, France; Department of Otolaryngology-Head & Neck Surgery, Edouard Herriot Hospital, Hospices Civils de Lyon, Lyon, France; University of Lyon 1, Lyon, France; Hospices Civils de Lyon, Neuro-immersion Platform, Lyon, France; Center for Mind/Brain Sciences (CIMeC), University of Trento, Rovereto, Italy; Department of Psychology and Cognitive Sciences, University of Trento, Rovereto, Italy
12
Rosskothen-Kuhl N, Buck AN, Li K, Schnupp JW. Microsecond interaural time difference discrimination restored by cochlear implants after neonatal deafness. eLife 2021; 10:59300. [PMID: 33427644 PMCID: PMC7815311 DOI: 10.7554/elife.59300] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2020] [Accepted: 01/07/2021] [Indexed: 01/03/2023] Open
Abstract
Spatial hearing in cochlear implant (CI) patients remains a major challenge, with many early-deaf users reported to have no measurable sensitivity to interaural time differences (ITDs). Deprivation of binaural experience during an early critical period is often hypothesized to be the cause of this shortcoming. However, we show that neonatally deafened (ND) rats provided with precisely synchronized CI stimulation in adulthood can be trained to lateralize ITDs with essentially normal behavioral thresholds near 50 μs. Furthermore, comparable ND rats show high physiological sensitivity to ITDs immediately after binaural implantation in adulthood. Our finding that ND-CI rats achieve very good behavioral ITD thresholds, while prelingually deaf human CI patients often fail to develop useful ITD sensitivity, raises urgent questions about whether shortcomings in technology or treatment, rather than missing input during early development, may be behind the usually poor binaural outcomes for current CI patients.
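As a rough illustration of what a 50 μs behavioral ITD threshold means geometrically (this is not part of the study; the far-field formula ITD = d·sin(θ)/c and a 0.18 m human interaural distance are assumptions chosen for the sketch), the model can be inverted to convert an ITD into an azimuth offset:

```python
import math

def itd_to_azimuth(itd_s, head_width_m=0.18, c=343.0):
    """Invert the simple far-field model ITD = d*sin(theta)/c.

    itd_s: interaural time difference in seconds
    head_width_m: assumed interaural distance in meters
    c: speed of sound in m/s
    Returns the azimuth offset from the midline in degrees.
    """
    s = itd_s * c / head_width_m
    if abs(s) > 1.0:
        raise ValueError("ITD exceeds the maximum for this head width")
    return math.degrees(math.asin(s))

# Under these assumptions, a 50-microsecond ITD corresponds to an
# azimuth offset of roughly 5.5 degrees from straight ahead.
angle = itd_to_azimuth(50e-6)
```

Under this toy model, a microsecond-scale threshold translates into a lateral-angle resolution of only a few degrees, which is why the reported thresholds are described as essentially normal.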
Affiliation(s)
- Nicole Rosskothen-Kuhl, Department of Biomedical Sciences, City University of Hong Kong, Hong Kong, China; Neurobiological Research Laboratory, Section for Clinical and Experimental Otology, University Medical Center Freiburg, Freiburg, Germany
- Alexa N Buck, Department of Biomedical Sciences, City University of Hong Kong, Hong Kong, China
- Kongyan Li, Department of Biomedical Sciences, City University of Hong Kong, Hong Kong, China
- Jan WH Schnupp, Department of Biomedical Sciences, City University of Hong Kong, Hong Kong, China; CityU Shenzhen Research Institute, Shenzhen, China
13
Ting TM, Ahmad NS, Goh P, Mohamad-Saleh J. Binaural Modelling and Spatial Auditory Cue Analysis of 3D-Printed Ears. SENSORS 2021; 21:s21010227. [PMID: 33401407 PMCID: PMC7795785 DOI: 10.3390/s21010227] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/02/2020] [Revised: 11/07/2020] [Accepted: 11/11/2020] [Indexed: 11/16/2022]
Abstract
In this work, a binaural model resembling the human auditory system was built using a pair of three-dimensional (3D)-printed ears to localize a sound source in both vertical and horizontal directions. An analysis of the proposed model was first conducted to study the correlations between the spatial auditory cues and the 3D polar coordinates of the source. Alongside estimation techniques based on interaural and spectral cues, a property of the combined direct and reverberant energy decay curve is introduced as part of the localization strategy. The preliminary analysis reveals that the latter provides a much more accurate distance estimate than approximations via the sound pressure level approach, but is not by itself sufficient to disambiguate front-rear confusions. For vertical localization, it is also shown that the elevation angle can be robustly encoded through the spectral notches. By analysing the strengths and shortcomings of each estimation method, a new algorithm is formulated to localize the sound source, which is further improved by cross-correlating the interaural and spectral cues. The proposed technique was validated in a series of experiments in which the sound source was randomly placed at 30 different locations in an outdoor environment at distances of up to 19 m. Based on the experimental and numerical evaluations, localization performance improved significantly, with an average distance-estimation error of 0.5 m and a reduction of total ambiguous points to 3.3%.
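The interaural-cue stage of such a binaural localizer can be sketched with a minimal cross-correlation ITD estimator (a generic textbook method, not the authors' published algorithm; the sampling rate, delay, and noise signal below are invented for the demonstration):

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the ITD as the lag that maximizes the cross-correlation
    between the left- and right-ear signals. Positive values mean the
    left signal lags (arrives later than) the right one."""
    corr = np.correlate(left, right, mode="full")
    lag = int(np.argmax(corr)) - (len(right) - 1)
    return lag / fs

# Synthetic check: delay a white-noise burst by 10 samples between ears.
fs = 48_000                      # sampling rate in Hz (assumed)
delay = 10                       # interaural delay in samples
rng = np.random.default_rng(0)
sig = rng.standard_normal(1024)
left = np.concatenate([np.zeros(delay), sig])   # left ear hears it later
right = np.concatenate([sig, np.zeros(delay)])
itd = estimate_itd(left, right, fs)             # ~ +10/48000 s ≈ 208 µs
```

A full system along the lines of the abstract would combine such an interaural estimate with spectral-notch elevation cues and an energy-decay distance cue; this fragment only illustrates the lateral-angle ingredient.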
Affiliation(s)
- Te Meng Ting, School of Electrical & Electronic Engineering, Universiti Sains Malaysia, Nibong Tebal 14300, Penang, Malaysia; Flextronics Systems Sdn. Bhd., Batu Kawan Industrial Park PMT 719 Lingkaran Cassia Selatan, Simpang Ampat 14110, Penang, Malaysia
- Nur Syazreen Ahmad, School of Electrical & Electronic Engineering, Universiti Sains Malaysia, Nibong Tebal 14300, Penang, Malaysia; Correspondence: Tel.: +60-45996014
- Patrick Goh, School of Electrical & Electronic Engineering, Universiti Sains Malaysia, Nibong Tebal 14300, Penang, Malaysia
- Junita Mohamad-Saleh, School of Electrical & Electronic Engineering, Universiti Sains Malaysia, Nibong Tebal 14300, Penang, Malaysia
14
Abstract
INTRODUCTION: Cochlear implants (CIs) are biomedical devices that restore sound perception for people with severe-to-profound sensorineural hearing loss. Most postlingually deafened CI users are able to achieve excellent speech recognition in quiet environments. However, current CI sound processors remain limited in their ability to deliver fine spectrotemporal information, making it difficult for CI users to perceive complex sounds. Limited access to complex acoustic cues such as music, environmental sounds, lexical tones, and voice emotion may have significant ramifications on quality of life, social development, and community interactions.
AREAS COVERED: The purpose of this review article is to summarize the literature on CIs and music perception, with an emphasis on music training in pediatric CI recipients. The findings have implications for our understanding of noninvasive, accessible methods for improving auditory processing and may help advance our ability to improve sound quality and performance for implantees.
EXPERT OPINION: Music training, particularly in the pediatric population, may be able to continue to enhance auditory processing even after performance plateaus. The effects of these training programs appear generalizable to non-trained musical tasks, speech prosody, and emotion perception. Future studies should employ rigorous control groups involving a non-musical acoustic intervention, standardized auditory stimuli, and the provision of feedback.
Affiliation(s)
- Nicole T Jiam, Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco School of Medicine, San Francisco, CA, USA
- Charles Limb, Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco School of Medicine, San Francisco, CA, USA