1
de Graaff F, Huysmans E, Merkus P, Goverts ST, Kramer SE, Smits C. Manual switching between programs intended for specific real-life listening environments by adult cochlear implant users: do they use the intended program? Int J Audiol 2024:1-8. [PMID: 38445654] [DOI: 10.1080/14992027.2024.2321153]
Abstract
OBJECTIVE The aim of the current study was to investigate the use of manually and automatically switching programs in everyday life by adult cochlear implant (CI) users. DESIGN Participants were fitted with an automatically switching sound processor setting and two manual programs for 3-week study periods. They received an extensive counselling session. Datalog information was used to analyse the listening environments identified by the sound processor, the program used, and the number of program switches. STUDY SAMPLE Fifteen adult Cochlear CI users, average age 69 years (range: 57-85 years). RESULTS Speech recognition in noise was significantly better with the "noise" program than with the "quiet" program. On average, participants correctly classified 4 out of 5 listening environments in a laboratory setting. Participants switched between the two manual programs less than once a day on average, and the sound processor was in the intended program 60% of the time. CONCLUSION Adult CI users rarely switch between two manual programs and often leave the sound processor in a program not intended for the specific listening environment. A program that switches automatically between settings therefore seems to be a more appropriate option to optimise speech recognition performance in daily listening environments.
Affiliation(s)
- Feike de Graaff
- Amsterdam UMC Location Vrije Universiteit, Department of Otolaryngology - Head and Neck Surgery, Section Ear & Hearing, Amsterdam Public Health Research Institute, Amsterdam, The Netherlands
- Elke Huysmans
- Amsterdam UMC Location Vrije Universiteit, Department of Otolaryngology - Head and Neck Surgery, Section Ear & Hearing, Amsterdam Public Health Research Institute, Amsterdam, The Netherlands
- Paul Merkus
- Amsterdam UMC Location Vrije Universiteit, Department of Otolaryngology - Head and Neck Surgery, Section Ear & Hearing, Amsterdam Public Health Research Institute, Amsterdam, The Netherlands
- S Theo Goverts
- Amsterdam UMC Location Vrije Universiteit, Department of Otolaryngology - Head and Neck Surgery, Section Ear & Hearing, Amsterdam Public Health Research Institute, Amsterdam, The Netherlands
- Sophia E Kramer
- Amsterdam UMC Location Vrije Universiteit, Department of Otolaryngology - Head and Neck Surgery, Section Ear & Hearing, Amsterdam Public Health Research Institute, Amsterdam, The Netherlands
- Cas Smits
- Amsterdam UMC Location University of Amsterdam, Department of Otolaryngology - Head and Neck Surgery, Section Ear & Hearing, Amsterdam Public Health Research Institute, Amsterdam, The Netherlands
2
Mai J, Gargiullo R, Zheng M, Esho V, Hussein OE, Pollay E, Bowe C, Williamson LM, McElroy AF, Goolsby WN, Brooks KA, Rodgers CC. Sound-seeking before and after hearing loss in mice. bioRxiv [Preprint] 2024:2024.01.08.574475. [PMID: 38260458] [PMCID: PMC10802496] [DOI: 10.1101/2024.01.08.574475]
Abstract
How we move our bodies affects how we perceive sound. For instance, we can explore an environment to seek out the source of a sound and we can use head movements to compensate for hearing loss. How we do this is not well understood because many auditory experiments are designed to limit head and body movements. To study the role of movement in hearing, we developed a behavioral task called sound-seeking that rewarded mice for tracking down an ongoing sound source. Over the course of learning, mice more efficiently navigated to the sound. We then asked how auditory behavior was affected by hearing loss induced by surgical removal of the malleus from the middle ear. An innate behavior, the auditory startle response, was abolished by bilateral hearing loss and unaffected by unilateral hearing loss. Similarly, performance on the sound-seeking task drastically declined after bilateral hearing loss and did not recover. In striking contrast, mice with unilateral hearing loss were only transiently impaired on sound-seeking; over a recovery period of about a week, they regained high levels of performance, increasingly reliant on a different spatial sampling strategy. Thus, even in the face of permanent unilateral damage to the peripheral auditory system, mice recover their ability to perform a naturalistic sound-seeking task. This paradigm provides an opportunity to examine how body movement enables better hearing and resilient adaptation to sensory deprivation.
Affiliation(s)
- Jessica Mai
- Department of Neurosurgery, Emory University School of Medicine, Atlanta GA 30322
- Rowan Gargiullo
- Department of Neurosurgery, Emory University School of Medicine, Atlanta GA 30322
- Megan Zheng
- Department of Neurosurgery, Emory University School of Medicine, Atlanta GA 30322
- Valentina Esho
- Department of Neurosurgery, Emory University School of Medicine, Atlanta GA 30322
- Osama E Hussein
- Department of Neurosurgery, Emory University School of Medicine, Atlanta GA 30322
- Eliana Pollay
- Department of Neurosurgery, Emory University School of Medicine, Atlanta GA 30322
- Cedric Bowe
- Neuroscience Graduate Program, Emory University, Atlanta GA 30322
- William N Goolsby
- Department of Cell Biology, Emory University School of Medicine, Atlanta GA 30322
- Kaitlyn A Brooks
- Department of Otolaryngology - Head and Neck Surgery, Emory University School of Medicine, Atlanta GA 30308
- Chris C Rodgers
- Department of Neurosurgery, Emory University School of Medicine, Atlanta GA 30322
- Department of Cell Biology, Emory University School of Medicine, Atlanta GA 30322
- Department of Biomedical Engineering, Georgia Tech and Emory University School of Medicine, Atlanta GA 30322
- Department of Biology, Emory College of Arts and Sciences, Atlanta GA 30322
3
Higgins NC, Pupo DA, Ozmeral EJ, Eddins DA. Head movement and its relation to hearing. Front Psychol 2023; 14:1183303. [PMID: 37448716] [PMCID: PMC10338176] [DOI: 10.3389/fpsyg.2023.1183303]
Abstract
Head position at any point in time plays a fundamental role in shaping the auditory information that reaches a listener, information that continuously changes as the head moves and reorients to different listening situations. The connection between hearing science and the kinesthetics of head movement has gained interest due to technological advances that have increased the feasibility of providing behavioral and biological feedback to assistive listening devices that can interpret movement patterns that reflect listening intent. Increasing evidence also shows that the negative impact of hearing deficits on mobility, gait, and balance may be mitigated by prosthetic hearing device intervention. A better understanding of the relationships between head movement, full-body kinetics, and hearing health should lead to improved signal processing strategies across a range of assistive and augmented hearing devices. The purpose of this review is to introduce the wider hearing community to the kinesiology of head movement and to place it in the context of hearing and communication, with the goal of expanding the field of ecologically specific listener behavior.
Affiliation(s)
- Nathan C. Higgins
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States
- Daniel A. Pupo
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States
- School of Aging Studies, University of South Florida, Tampa, FL, United States
- Erol J. Ozmeral
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States
- David A. Eddins
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States
4
Hládek Ľ, Seeber BU. Speech Intelligibility in Reverberation is Reduced During Self-Rotation. Trends Hear 2023; 27:23312165231188619. [PMID: 37475460] [PMCID: PMC10363862] [DOI: 10.1177/23312165231188619]
Abstract
Speech intelligibility in cocktail party situations has traditionally been studied with stationary sound sources and stationary participants. Here, speech intelligibility and behavior were investigated during active self-rotation of standing participants in a spatialized speech test. We investigated whether people would rotate to improve speech intelligibility, and asked whether knowing the target location would be further beneficial. Target sentences appeared randomly at one of four possible locations (0°, ±90°, or 180° relative to the participant's initial orientation on each trial), while speech-shaped noise was presented from the front (0°). Participants responded naturally with self-rotating motion. Target sentences were presented either without (Audio-only) or with (Audio-Visual) a picture of an avatar. In a baseline (Static) condition, people stood still without visual location cues. Participants' self-orientations undershot the target location and were close to acoustically optimal. Participants oriented in an acoustically optimal way more often, and speech intelligibility was higher, in the Audio-Visual than in the Audio-only condition for the lateral targets. The intelligibility of individual words in Audio-Visual and Audio-only increased during self-rotation towards the rear target, but was reduced for the lateral targets compared to Static, which could be mostly, but not fully, attributed to changes in spatial unmasking. A speech intelligibility prediction based on a model of static spatial unmasking that considered self-rotations overestimated participant performance by 1.4 dB. The results suggest that speech intelligibility is reduced during self-rotation, and that visual cues of location help to achieve more optimal self-rotations and better speech intelligibility.
Affiliation(s)
- Ľuboš Hládek
- Audio Information Processing, Technical University of Munich, Munich, Germany
- Bernhard U. Seeber
- Audio Information Processing, Technical University of Munich, Munich, Germany
5
Huang H, Ricketts TA, Hornsby BWY, Picou EM. Effects of Critical Distance and Reverberation on Listening Effort in Adults. J Speech Lang Hear Res 2022; 65:4837-4851. [PMID: 36351258] [DOI: 10.1044/2022_jslhr-22-00109]
Abstract
PURPOSE Mixed historical data on how listening effort is affected by reverberation and listener-to-speaker distance challenge existing models of listening effort. This study investigated the effects of reverberation and listener-to-speaker distance on behavioral and subjective measures of listening effort (a) when listening at a fixed signal-to-noise ratio (SNR) and (b) at SNRs manipulated so that word recognition would be comparable across reverberation times and listening distances. It was expected that increased reverberation would increase listening effort, but only when listening outside the critical distance. METHOD Nineteen adults (21-40 years) with no hearing loss completed a dual-task paradigm. The primary task was word recognition and the secondary task was timed word categorization; response times indexed behavioral listening effort. Additionally, participants provided subjective ratings in each condition. Testing was completed at two reverberation levels (moderate and high; RT30 = 469 and 1,223 ms, respectively) and at two listener-to-speaker distances (inside and outside the critical distance for the test room; 1.25 and 4 m, respectively). RESULTS Increased reverberation and listening distance worsened word recognition performance and increased both behavioral and subjective listening effort. The effect of reverberation was exacerbated when listeners were outside the critical distance. The subjective experience of listening effort persisted even when word recognition was comparable across conditions. CONCLUSIONS Longer reverberation times and listening outside the room's critical distance negatively affected behavioral and subjective listening effort. This study extends understanding of listening effort in reverberant rooms by highlighting the effect of the listener's position relative to the room's critical distance.
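As background on the key term: the critical distance of a room is the distance from a source at which the direct and reverberant sound fields carry equal energy. A standard room-acoustics approximation (general acoustics, not a formula taken from this study) is

    $$ d_c \approx 0.057\,\sqrt{\frac{Q\,V}{T_{60}}} $$

where Q is the source directivity factor, V the room volume in cubic metres, and T60 the reverberation time in seconds. Since d_c shrinks as reverberation time grows, a fixed 4 m listening position lies farther beyond the critical distance in the high-reverberation condition, which is consistent with the interaction reported above.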
Affiliation(s)
- Haiping Huang
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- Todd A Ricketts
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- Benjamin W Y Hornsby
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- Erin M Picou
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
6
Hendrikse MME, Eichler T, Hohmann V, Grimm G. Self-motion with Hearing Impairment and (Directional) Hearing Aids. Trends Hear 2022; 26:23312165221078707. [PMID: 35341403] [PMCID: PMC8966140] [DOI: 10.1177/23312165221078707]
Abstract
When listening to a sound source in everyday situations, typical movement behavior is highly individual and may not result in the listener directly facing the sound source. Behavioral differences can affect the performance of directional algorithms in hearing aids, as was shown in previous work by using head movement trajectories of normal-hearing (NH) listeners in acoustic simulations for noise-suppression performance predictions. However, the movement behavior of hearing-impaired (HI) listeners with or without hearing aids may differ, and hearing-aid users might adapt their self-motion to improve the performance of directional algorithms. This work investigates the influence of hearing impairment on self-motion, and the interaction of hearing aids with self-motion. To this end, the self-motion of three HI participant groups (aided with an adaptive differential microphone (ADM), aided without ADM, and unaided) was measured and compared to previously measured self-motion data from younger and older NH participants. Self-motion was measured in virtual audiovisual environments (VEs) in the laboratory, and the signal-to-noise ratios (SNRs) and the SNR improvement of the ADM resulting from the head movements of the participants were estimated using acoustic simulations. Compared to NH participants, HI participants made almost all of their orienting movements with the head rather than the eyes, which led to a 0.3 dB increase in estimated SNR and to differences in the estimated SNR improvement of the ADM. However, the self-motion of the HI participants aided with ADM was similar to that of the other HI participants, indicating that the ADM did not cause listeners to adapt their self-motion.
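For readers unfamiliar with the ADM referenced in this abstract: a first-order adaptive differential microphone is commonly built by forming forward- and backward-facing cardioid signals from two closely spaced omnidirectional capsules and adapting a mixing coefficient so that the rear-facing null tracks the dominant interferer. The sketch below illustrates that general scheme only, under assumed parameters (capsule spacing, step size, NLMS-style update); it is not the specific algorithm used in the study.

    import numpy as np

    def adaptive_differential_mic(front, rear, fs, spacing=0.012, c=343.0,
                                  mu=0.05, eps=1e-8):
        """Illustrative first-order ADM built from two omni capsules."""
        d = max(1, int(round(fs * spacing / c)))  # inter-capsule delay, samples
        front_d = np.concatenate([np.zeros(d), front[:-d]])
        rear_d = np.concatenate([np.zeros(d), rear[:-d]])
        c_fwd = front - rear_d   # forward-facing cardioid (null at 180 degrees)
        c_bwd = rear - front_d   # backward-facing cardioid (null at 0 degrees)
        beta, out = 0.5, np.zeros(len(front))
        for n in range(len(front)):
            out[n] = c_fwd[n] - beta * c_bwd[n]
            # normalised LMS step: minimise output power by steering the null
            beta += mu * out[n] * c_bwd[n] / (c_bwd[n] ** 2 + eps)
            beta = min(max(beta, 0.0), 1.0)  # confine the null to the rear
        return out

Constraining beta to [0, 1] keeps the null in the rear half-plane, so sounds from the look direction are preserved while the adapting null attenuates the strongest rear interferer.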
Affiliation(s)
- Maartje M E Hendrikse
- Auditory Signal Processing and Cluster of Excellence "Hearing4all", Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
- Theda Eichler
- Auditory Signal Processing and Cluster of Excellence "Hearing4all", Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
- Volker Hohmann
- Auditory Signal Processing and Cluster of Excellence "Hearing4all", Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
- Giso Grimm
- Auditory Signal Processing and Cluster of Excellence "Hearing4all", Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
7
Lu H, McKinney MF, Zhang T, Oxenham AJ. Investigating age, hearing loss, and background noise effects on speaker-targeted head and eye movements in three-way conversations. J Acoust Soc Am 2021; 149:1889. [PMID: 33765809] [DOI: 10.1121/10.0003707]
Abstract
Although beamforming algorithms for hearing aids can enhance performance, the wearer's head may not always face the target talker, potentially limiting real-world benefits. This study aimed to determine the extent to which eye tracking improves the accuracy of locating the current talker in three-way conversations and to test the hypothesis that eye movements become more likely to track the target talker with increasing background noise levels, particularly in older and/or hearing-impaired listeners. Conversations between a participant and two confederates were held around a small table in quiet and with background noise levels of 50, 60, and 70 dB sound pressure level, while the participant's eye and head movements were recorded. Ten young normal-hearing listeners were tested, along with ten older normal-hearing listeners and eight hearing-impaired listeners. Head movements generally undershot the talker's position by 10°-15°, but head and eye movements together predicted the talker's position well. Contrary to our original hypothesis, no major differences in listening behavior were observed between the groups or between noise levels, although the hearing-impaired listeners tended to spend less time looking at the current talker than the other groups, especially at the highest noise level.
Affiliation(s)
- Hao Lu
- Department of Psychology, University of Minnesota, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Martin F McKinney
- Starkey Hearing Technologies, 6700 Washington Avenue South, Eden Prairie, Minnesota 55344, USA
- Tao Zhang
- Starkey Hearing Technologies, 6700 Washington Avenue South, Eden Prairie, Minnesota 55344, USA
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
8
Conversation in small groups: Speaking and listening strategies depend on the complexities of the environment and group. Psychon Bull Rev 2020; 28:632-640. [PMID: 33051825] [PMCID: PMC8062389] [DOI: 10.3758/s13423-020-01821-9]
Abstract
Many conversations in our day-to-day lives are held in noisy environments, impeding comprehension, and in groups, taxing auditory attention-switching processes. These situations are particularly challenging for older adults experiencing cognitive and sensory decline. In noisy environments, a variety of extra-linguistic strategies are available to speakers and listeners to facilitate communication, but while models of language account for the impact of context on word choice, there has been little consideration of the impact of context on extra-linguistic behaviour. To address this issue, we investigate how the complexity of the acoustic environment and interaction situation impacts the extra-linguistic conversation behaviour of older adults during face-to-face conversations. Specifically, we test whether the use of intelligibility-optimising strategies increases with the complexity of the background noise (from quiet to loud, and in speech-shaped vs. babble noise), and with the complexity of the conversing group (dyad vs. triad). While some communication strategies are enhanced in more complex background noise, with listeners orienting to talkers more optimally and moving closer to their partner in babble than in speech-shaped noise, this is not the case for all strategies, as we find greater vocal level increases in the less complex speech-shaped noise condition. Other behaviours are enhanced in the more complex interaction situation, with listeners using more optimal head orientations and taking longer turns when gaining the floor in triads compared to dyads. This study elucidates how different features of the conversation context impact individuals' communication strategies, which is necessary both to develop a comprehensive cognitive model of multimodal conversation behaviour and to effectively support individuals who struggle to converse.
9
Moore DR, Whiston H, Lough M, Marsden A, Dillon H, Munro KJ, Stone MA. FreeHear: A New Sound-Field Speech-in-Babble Hearing Assessment Tool. Trends Hear 2019; 23:2331216519872378. [PMID: 31599206] [PMCID: PMC6787881] [DOI: 10.1177/2331216519872378]
Abstract
Pure-tone threshold audiometry is currently the standard test of hearing. However, in everyday life, we are more concerned with listening to speech of moderate loudness and, specifically, listening to a particular talker against a background of other talkers. FreeHear delivers strings of three spoken digits (0–9, not 7) against a background babble via three loudspeakers placed in front and to either side of a listener. FreeHear is designed as a rapid, quantitative initial assessment of hearing using an adaptive algorithm. It is designed especially for children and for testing listeners who are using hearing devices. In this first report on FreeHear, we present developmental considerations and protocols, and results of testing 100 children (4–13 years old) and 23 adults (18–30 years old). Two of the six 4-year-olds and 91% of all older children completed full testing. Speech reception thresholds (SRTs) for digits and noise colocated at 0° or separated by 90° both improved linearly by 6 to 7 dB from 4 to 12 years old, with a further 2 dB improvement for the adults. These data suggested full maturation at approximately 15 years old. SRTs with 90° digits/noise separation were better by approximately 6 dB than SRTs colocated at 0°. This spatial release from masking did not change significantly across age. Test–retest reliability was similar for children and adults (standard deviation of 2.05–2.91 dB SRT), with a mean practice improvement of 0.04–0.98 dB. FreeHear shows promise as a clinical test for both children and adults. Further trials in people with hearing impairment are ongoing.
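The abstract does not specify FreeHear's adaptive rule; as general background on how digit-in-babble tests estimate a speech reception threshold, the sketch below implements a minimal one-up/one-down staircase. Every name and parameter here is hypothetical (step size, trial count, triplet-correct scoring), not a description of FreeHear itself.

    def staircase_srt(run_trial, start_snr=0.0, step_db=2.0, n_trials=25):
        """Minimal 1-up/1-down track; converges on ~50%-correct SNR (the SRT).

        run_trial(snr_db) presents one digit triplet in babble at the given
        SNR and returns True if all three digits were repeated correctly.
        """
        snr, reversals, prev_correct = start_snr, [], None
        for _ in range(n_trials):
            correct = run_trial(snr)
            if prev_correct is not None and correct != prev_correct:
                reversals.append(snr)                # direction change logged
            prev_correct = correct
            snr += -step_db if correct else step_db  # harder after a hit
        tail = reversals[-6:] or [snr]
        return sum(tail) / len(tail)                 # mean of late reversals

A one-up/one-down rule targets the 50% point of the psychometric function, which is the conventional definition of an SRT.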
Affiliation(s)
- David R Moore
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, UK
- Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, OH, USA
- Department of Otolaryngology, University of Cincinnati College of Medicine, OH, USA
- Helen Whiston
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, UK
- Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, UK
- Melanie Lough
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, UK
- Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, UK
- Antonia Marsden
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, UK
- Centre for Biostatistics, School of Health Sciences, The University of Manchester, UK
- Harvey Dillon
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, UK
- Australian Hearing Hub, Macquarie University, Macquarie Park, Australia
- Kevin J Munro
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, UK
- Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, UK
- Michael A Stone
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, UK
- Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, UK
10
Shayman CS, Peterka RJ, Gallun FJ, Oh Y, Chang NYN, Hullar TE. Frequency-dependent integration of auditory and vestibular cues for self-motion perception. J Neurophysiol 2020; 123:936-944. [PMID: 31940239] [DOI: 10.1152/jn.00307.2019]
Abstract
Recent evidence has shown that auditory information may be used to improve postural stability, spatial orientation, navigation, and gait, suggesting an auditory component of self-motion perception. To determine how auditory and other sensory cues integrate for self-motion perception, we measured motion perception during yaw rotations of the body and the auditory environment. Psychophysical thresholds in humans were measured over a range of frequencies (0.1-1.0 Hz) during self-rotation without spatial auditory stimuli, rotation of a sound source around a stationary listener, and self-rotation in the presence of an earth-fixed sound source. Unisensory perceptual thresholds and the combined multisensory thresholds were found to be frequency dependent. Auditory thresholds were better at lower frequencies, and vestibular thresholds were better at higher frequencies. Expressed in terms of peak angular velocity, multisensory vestibular and auditory thresholds ranged from 0.39°/s at 0.1 Hz to 0.95°/s at 1.0 Hz and were significantly better over low frequencies than either the auditory-only (0.54°/s to 2.42°/s at 0.1 and 1.0 Hz, respectively) or vestibular-only (2.00°/s to 0.75°/s at 0.1 and 1.0 Hz, respectively) unisensory conditions. Monaurally presented auditory cues were less effective than binaural cues in lowering multisensory thresholds. Frequency-independent thresholds were derived, assuming that vestibular thresholds depended on a weighted combination of velocity and acceleration cues, whereas auditory thresholds depended on displacement and velocity cues. These results elucidate fundamental mechanisms for the contribution of audition to balance and help explain previous findings, indicating its significance in tasks requiring self-orientation.
NEW & NOTEWORTHY Auditory information can be integrated with visual, proprioceptive, and vestibular signals to improve balance, orientation, and gait, but this process is poorly understood. Here, we show that auditory cues significantly improve sensitivity to self-motion perception below 0.5 Hz, whereas vestibular cues contribute more at higher frequencies. Motion thresholds are determined by a weighted combination of displacement, velocity, and acceleration information. These findings may help understand and treat imbalance, particularly in people with sensory deficits.
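The frequency dependence summarised above follows directly from sinusoidal kinematics. For yaw rotation at frequency f with peak angular displacement \Theta, the peak velocity and acceleration are

    $$ \Omega = 2\pi f\,\Theta, \qquad A = (2\pi f)^{2}\,\Theta, $$

so displacement-based cues (such as the angular position of an earth-fixed sound image) carry relatively more information at low frequencies, while acceleration-based vestibular cues gain with increasing frequency. The particular cue weightings are part of the authors' model and are not reproduced here.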
Affiliation(s)
- Corey S Shayman
- Department of Otolaryngology-Head and Neck Surgery, Oregon Health and Science University, Portland, Oregon
- School of Medicine, University of Utah, Salt Lake City, Utah
- Robert J Peterka
- Department of Neurology, Oregon Health and Science University, Portland, Oregon
- National Center for Rehabilitative Auditory Research, VA Portland Health Care System, Portland, Oregon
- Frederick J Gallun
- National Center for Rehabilitative Auditory Research, VA Portland Health Care System, Portland, Oregon
- Oregon Hearing Research Center, Department of Otolaryngology-Head and Neck Surgery, Oregon Health and Science University, Portland, Oregon
- Yonghee Oh
- Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville, Florida
- Nai-Yuan N Chang
- Department of Preventive and Restorative Dental Sciences, Division of Bioengineering and Biomaterials, University of California, San Francisco, San Francisco, California
- Timothy E Hullar
- Department of Otolaryngology-Head and Neck Surgery, Oregon Health and Science University, Portland, Oregon
- Department of Neurology, Oregon Health and Science University, Portland, Oregon
- National Center for Rehabilitative Auditory Research, VA Portland Health Care System, Portland, Oregon
11
Hadley LV, Brimijoin WO, Whitmer WM. Speech, movement, and gaze behaviours during dyadic conversation in noise. Sci Rep 2019; 9:10451. [PMID: 31320658] [PMCID: PMC6639257] [DOI: 10.1038/s41598-019-46416-0]
Abstract
How do people have conversations in noise and make themselves understood? While many previous studies have investigated speaking and listening in isolation, this study focuses on the behaviour of pairs of individuals in an ecologically valid context. Specifically, we report the fine-grained dynamics of natural conversation between interlocutors of varying hearing ability (n = 30), addressing how different levels of background noise affect speech, movement, and gaze behaviours. We found that as noise increased, people spoke louder and moved closer together, although these behaviours provided relatively small acoustic benefit (0.32 dB speech level increase per 1 dB noise increase). We also found that increased noise led to shorter utterances and increased gaze to the speaker's mouth. Surprisingly, interlocutors did not make use of potentially beneficial head orientations. While participants were able to sustain conversation in noise of up to 72 dB, changes in conversation structure suggested increased difficulty at 78 dB, with a significant decrease in turn-taking success. Understanding these natural conversation behaviours could inform broader models of interpersonal communication, and be applied to the development of new communication technologies. Furthermore, comparing these findings with those from isolation paradigms demonstrates the importance of investigating social processes in ecologically valid multi-person situations.
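To make the "relatively small acoustic benefit" concrete, a back-of-envelope calculation from the numbers quoted above (not an analysis taken from the paper): if speech level rises 0.32 dB for each 1 dB of added noise, the signal-to-noise ratio still falls by

    $$ \Delta\mathrm{SNR} = 0.32 - 1.00 = -0.68\ \text{dB per dB of noise}, $$

so the 6 dB step from 72 to 78 dB costs roughly 4 dB of SNR despite the louder speech, in line with the drop in turn-taking success observed at 78 dB.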
Affiliation(s)
- Lauren V Hadley
- Hearing Sciences - Scottish Section, Division of Clinical Neuroscience, University of Nottingham, Glasgow, UK
- W Owen Brimijoin
- Hearing Sciences - Scottish Section, Division of Clinical Neuroscience, University of Nottingham, Glasgow, UK
- William M Whitmer
- Hearing Sciences - Scottish Section, Division of Clinical Neuroscience, University of Nottingham, Glasgow, UK