1
Low-dimensional interference of mid-level sound statistics predicts human speech recognition in natural environmental noise. bioRxiv 2024:2024.02.13.579526. PMID: 38405870; PMCID: PMC10888804; DOI: 10.1101/2024.02.13.579526.
Abstract
Recognizing speech in noise, such as in a busy street or restaurant, is an essential listening task whose difficulty varies across acoustic environments and noise levels. Yet current cognitive models are unable to account for changing real-world hearing sensitivity. Here, using natural and perturbed background sounds, we demonstrate that the spectrum and modulation statistics of environmental backgrounds drastically impact human word recognition accuracy, and that they do so independently of the noise level. These sound statistics can facilitate or hinder recognition: at the same noise level, accuracy can range from 0% to 100%, depending on the background. To explain this perceptual variability, we optimized a biologically grounded hierarchical model consisting of frequency-tuned cochlear filters and subsequent mid-level modulation-tuned filters that account for central auditory tuning. Low-dimensional summary statistics from the mid-level model accurately predict single-trial perceptual judgments, accounting for more than 90% of the perceptual variance across backgrounds and noise levels and substantially outperforming a cochlear model. Furthermore, perceptual transfer functions in the mid-level auditory space identify multi-dimensional natural sound features that impact recognition. Thus, speech recognition in natural backgrounds involves interference from multiple summary statistics that are well described by an interpretable, low-dimensional auditory model. Because this framework relates salient natural sound cues to single-trial perceptual judgments, it may improve outcomes for auditory prosthetics and clinical measurements of real-world hearing sensitivity.
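As a rough illustration of the kind of two-stage analysis this abstract describes (a cochlear-like spectrogram followed by modulation-rate filtering and low-dimensional summary statistics), the Python sketch below computes per-band envelope and modulation-band statistics for an arbitrary background sound. The band counts, modulation bands, and STFT settings are illustrative assumptions, not the authors' model parameters.

import numpy as np
from scipy.signal import spectrogram, butter, sosfiltfilt

def midlevel_summary_stats(x, fs, n_bands=32, mod_bands=((0.5, 4), (4, 16), (16, 50))):
    """Per-band envelope statistics plus modulation-band energies (illustrative only)."""
    # Stage 1: crude "cochlear" envelopes from a spectrogram, pooled into n_bands bands.
    f, _, S = spectrogram(x, fs=fs, nperseg=512, noverlap=384)
    edges = np.linspace(0, len(f) - 1, n_bands + 1).astype(int)
    env = np.stack([S[lo:hi].mean(axis=0) for lo, hi in zip(edges[:-1], edges[1:])])
    env_fs = fs / (512 - 384)  # envelope (frame) rate in Hz

    # Stage 2: band-pass the envelopes into modulation-rate bands and summarize.
    stats = {"env_mean": env.mean(axis=1), "env_var": env.var(axis=1)}
    for lo, hi in mod_bands:
        sos = butter(2, [lo, hi], btype="bandpass", fs=env_fs, output="sos")
        mod = sosfiltfilt(sos, env, axis=1)
        stats[f"mod_energy_{lo}-{hi}Hz"] = (mod ** 2).mean(axis=1)
    return stats

# Example: summary statistics for 2 s of white noise standing in for a background recording.
fs = 16000
background = np.random.default_rng(0).standard_normal(2 * fs)
stats = midlevel_summary_stats(background, fs)
print({name: values.shape for name, values in stats.items()})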
2
A Transparent Mask and Clear Speech Benefit Speech Intelligibility in Individuals With Hearing Loss. J Speech Lang Hear Res 2023; 66:4558-4574. PMID: 37788660; DOI: 10.1044/2023_jslhr-22-00636.
Abstract
PURPOSE The purpose of this study was to investigate the impacts of a surgical mask and a transparent mask on audio-only and audiovisual speech intelligibility in noise (0 dB signal-to-noise ratio) in individuals with mild-to-profound hearing loss. The study also examined whether individuals with hearing loss can benefit from a transparent mask and clear speech for speech understanding in noise. METHOD Thirty-one individuals with hearing loss (22 to 74 years old) completed keyword identification tasks to measure face-masked speech intelligibility in noise. A mixed-effects logistic regression model was used to examine the effects of face mask (no mask, transparent mask, surgical mask), presentation mode (audio only, audiovisual), speaking style (conversational, clear), noise type (speech-shaped noise [SSN], four-talker babble [4-T babble]), hearing group (mild hearing loss [MHL] vs. greater-than-mild hearing loss [GHL]), and their interactions on the binary accuracy of keyword identification. RESULTS In the audio-only mode, the GHL group showed reduced speech intelligibility regardless of other factors, whereas the MHL group showed a larger decrease in speech intelligibility for the transparent mask than for the surgical mask. In the audiovisual mode, the transparent mask was advantageous for both hearing loss groups. Clear speech remediated the detrimental effects of face masks on speech intelligibility in noise. Both groups tended to perform better in SSN than in 4-T babble. CONCLUSIONS The findings indicate that both transparent and surgical masks negatively affect speech understanding in noise for individuals with hearing loss. Combining a transparent mask with clear speech could be a potential solution for improving speech intelligibility when communicating with face masks in noise.
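For readers unfamiliar with the analysis named here, the sketch below fits a logistic regression of binary keyword accuracy on the listed factors. It uses simulated data with assumed column names and, for brevity, omits the participant random effects that make the published model a mixed-effects model; it is an illustration of the approach, not the authors' code.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated trial-level data with assumed column names (one row per keyword response).
rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "correct": rng.integers(0, 2, n),                              # keyword identified correctly?
    "mask":    rng.choice(["none", "transparent", "surgical"], n),
    "mode":    rng.choice(["audio_only", "audiovisual"], n),
    "style":   rng.choice(["conversational", "clear"], n),
    "noise":   rng.choice(["SSN", "babble_4T"], n),
    "group":   rng.choice(["MHL", "GHL"], n),                      # hearing group
})

# Fixed-effects logistic regression; the published analysis also included
# participant random effects (a mixed-effects logistic model).
model = smf.glm("correct ~ mask * mode * style + noise + group",
                data=df, family=sm.families.Binomial()).fit()
print(model.summary())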
3
Individual differences in speech-on-speech masking are correlated with cognitive and visual task performance. J Acoust Soc Am 2023; 154:2137-2153. PMID: 37800988; PMCID: PMC10631817; DOI: 10.1121/10.0021301.
Abstract
Individual differences in spatial tuning for masked target speech identification were determined using maskers that varied in type and proximity to the target source. The maskers were chosen to produce three strengths of informational masking (IM): high [same-gender, speech-on-speech (SOS) masking], intermediate (the same masker speech time-reversed), and low (speech-shaped, speech-envelope-modulated noise). Typical for this task, individual differences increased as IM increased, while overall performance decreased. To determine the extent to which auditory performance might generalize to another sensory modality, a comparison visual task was also implemented. Visual search time was measured for identifying a cued object among "clouds" of distractors that were varied symmetrically in proximity to the target. The visual maskers also were chosen to produce three strengths of an analog of IM based on feature similarities between the target and maskers. Significant correlations were found for overall auditory and visual task performance, and both of these measures were correlated with an index of general cognitive reasoning. Overall, the findings provide qualified support for the proposition that the ability of an individual to solve IM-dominated tasks depends on cognitive mechanisms that operate in common across sensory modalities.
4
Predicting speech-in-speech recognition: Short-term audibility and spatial separation. J Acoust Soc Am 2023; 154:1827-1837. PMID: 37728286; DOI: 10.1121/10.0021069.
Abstract
Quantifying the factors that predict variability in speech-in-speech recognition represents a fundamental challenge in auditory science. Stimulus factors associated with energetic and informational masking (IM) modulate variability in speech-in-speech recognition, but energetic effects can be difficult to estimate in spectro-temporally dynamic speech maskers. The current experiment characterized the effects of short-term audibility and differences in target and masker location (or perceived location) on the horizontal plane for sentence recognition in two-talker speech. Thirty young adults with normal hearing (NH) participated. Speech reception thresholds and keyword recognition at a fixed signal-to-noise ratio (SNR) were measured in each spatial condition. Short-term audibility for each keyword was quantified using a glimpsing model. Results revealed that speech-in-speech recognition depended on the proportion of audible glimpses available in the target + masker keyword stimulus in each spatial condition, even across stimuli presented at a fixed global SNR. Short-term audibility requirements were greater for colocated than spatially separated speech-in-speech recognition, and keyword recognition improved more rapidly as a function of increases in target audibility with spatial separation. Results indicate that spatial cues enhance glimpsing efficiency in competing speech for young adults with NH and provide a quantitative framework for estimating IM for speech-in-speech recognition in different spatial configurations.
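A minimal sketch of a glimpse-proportion calculation in the spirit of the glimpsing model mentioned above: count the fraction of time-frequency cells in which the target exceeds the masker by a local-SNR criterion. The 3 dB criterion and STFT settings are assumptions, not the study's exact implementation.

import numpy as np
from scipy.signal import stft

def glimpse_proportion(target, masker, fs, criterion_db=3.0):
    """Fraction of time-frequency cells where the target exceeds the masker by criterion_db."""
    _, _, T = stft(target, fs=fs, nperseg=400, noverlap=200)   # ~25 ms frames at 16 kHz
    _, _, M = stft(masker, fs=fs, nperseg=400, noverlap=200)
    local_snr_db = 10 * np.log10((np.abs(T) ** 2 + 1e-12) / (np.abs(M) ** 2 + 1e-12))
    return float(np.mean(local_snr_db > criterion_db))

# Example with noise stand-ins for a target keyword and a two-talker masker segment.
fs = 16000
rng = np.random.default_rng(0)
target = rng.standard_normal(fs)            # 1 s of "target"
masker = 0.5 * rng.standard_normal(fs)      # quieter masker -> more glimpses
print(f"proportion of audible glimpses: {glimpse_proportion(target, masker, fs):.2f}")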
5
Energetic and informational masking place dissociable demands on listening effort: Evidence from simultaneous electroencephalography and pupillometry. J Acoust Soc Am 2023; 154:1152-1167. PMID: 37610284; PMCID: PMC10449482; DOI: 10.1121/10.0020539.
Abstract
Processing speech masked by concurrent speech or noise can pose a substantial challenge to listeners. However, performance on such tasks may not directly reflect the amount of listening effort they elicit. Changes in pupil size and neural oscillatory power in the alpha range (8-12 Hz) are prominent neurophysiological signals known to reflect listening effort; however, measurements obtained through these two approaches are rarely correlated, suggesting that they may respond differently depending on the cognitive demands (and, by extension, the type of effort) elicited by a given task. This study aimed to compare changes in pupil size and alpha power elicited by different types of auditory maskers (highly confusable intelligible speech, speech-envelope-modulated speech-shaped noise, and unmodulated speech-shaped noise) in young, normal-hearing listeners. Within each condition, the target-to-masker ratio was set at the participant's individually estimated 75% correct point on the psychometric function. The speech masking condition elicited a significantly greater increase in pupil size than either of the noise masking conditions, whereas the unmodulated noise masking condition elicited a significantly greater increase in alpha oscillatory power than the speech masking condition, suggesting that the effort needed to solve these respective tasks may have different neural origins.
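As a generic illustration of one of the measures named here, the sketch below estimates alpha-band (8-12 Hz) power from a single EEG channel with a band-pass filter and Hilbert envelope; it is not the study's analysis pipeline, and the sampling rate and epoch length are assumptions.

import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def alpha_power(eeg, fs, band=(8.0, 12.0)):
    """Mean alpha-band envelope power of one EEG channel (band-pass + Hilbert)."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    envelope = np.abs(hilbert(sosfiltfilt(sos, eeg)))
    return float(np.mean(envelope ** 2))

# Example: a simulated 2 s epoch at 500 Hz containing a 10 Hz component plus noise.
fs = 500
t = np.arange(2 * fs) / fs
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + np.random.default_rng(0).standard_normal(t.size)
print(f"alpha power: {alpha_power(eeg, fs):.2f}")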
6
Spatio-temporal Integration of Speech Reflections in Hearing-Impaired Listeners. Trends Hear 2022; 26:23312165221143901. PMID: 36537084; PMCID: PMC9772954; DOI: 10.1177/23312165221143901.
Abstract
Speech recognition in rooms requires the temporal integration of reflections, which arrive with a certain delay after the direct sound. It is commonly assumed that there is a temporal window of about 50-100 ms during which reflections can be integrated with the direct sound, while later reflections are detrimental to speech intelligibility. This concept was challenged in a recent study that employed binaural room impulse responses (RIRs) with systematically varied interaural phase differences (IPDs) and amplitudes of the direct sound, and a variable number of reflections delayed by up to 200 ms. When the amplitude or IPD favored late RIR components, normal-hearing (NH) listeners appeared to be capable of focusing on these components rather than on the preceding direct sound, which contrasts with the common concept of early RIR components as useful and late components as detrimental. The present study investigated speech intelligibility under the same conditions in hearing-impaired (HI) listeners. The data indicate that HI listeners were generally less able to "ignore" the direct sound than NH listeners when the most useful information was confined to late RIR components. Some HI listeners showed a remarkable inability to integrate across multiple reflections and to optimally "shift" their temporal integration window, quite unlike NH listeners. This effect was most pronounced in conditions requiring both spatial and temporal integration and could provide new challenges for individual prediction models of binaural speech intelligibility.
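The conventional early/late split that this study questions can be made concrete with a simple early-to-late energy ratio (similar to the clarity index C50) computed from a room impulse response; the synthetic RIR and the 50 ms boundary below are assumptions used purely for illustration.

import numpy as np

def early_late_ratio_db(rir, fs, boundary_ms=50.0):
    """Energy of a room impulse response before vs. after the boundary, in dB (C50-like)."""
    k = int(round(boundary_ms * 1e-3 * fs))
    early = np.sum(rir[:k] ** 2)
    late = np.sum(rir[k:] ** 2) + 1e-12
    return 10 * np.log10(early / late)

# Example: exponentially decaying noise standing in for an RIR with T60 of roughly 0.6 s.
fs = 16000
t = np.arange(int(0.8 * fs)) / fs
rir = np.random.default_rng(0).standard_normal(t.size) * np.exp(-6.9 * t / 0.6)
print(f"early-to-late ratio at 50 ms: {early_late_ratio_db(rir, fs):.1f} dB")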
7
Predicting speech-in-speech recognition: Short-term audibility, talker sex, and listener factors. J Acoust Soc Am 2022; 152:3010. PMID: 36456289; DOI: 10.1121/10.0015228.
Abstract
Speech-in-speech recognition can be challenging, and listeners vary considerably in their ability to accomplish this complex auditory-cognitive task. Variability in performance can be related to intrinsic listener factors as well as stimulus factors associated with energetic and informational masking. The current experiments characterized the effects of short-term audibility of the target, differences in target and masker talker sex, and intrinsic listener variables on sentence recognition in two-talker speech and speech-shaped noise. Participants were young adults with normal hearing. Each condition included the adaptive measurement of speech reception thresholds, followed by testing at a fixed signal-to-noise ratio (SNR). Short-term audibility for each keyword was quantified using a computational glimpsing model for target+masker mixtures. Scores on a psychophysical task of auditory stream segregation predicted speech recognition, with stronger effects for speech-in-speech than speech-in-noise. Both speech-in-speech and speech-in-noise recognition depended on the proportion of audible glimpses available in the target+masker mixture, even across stimuli presented at the same global SNR. Short-term audibility requirements varied systematically across stimuli, providing an estimate of the greater informational masking for speech-in-speech than speech-in-noise recognition and quantifying informational masking for matched and mismatched talker sex.
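The "adaptive measurement of speech reception thresholds" mentioned above generally works like the simple staircase sketched below: the signal-to-noise ratio is made harder after a correct response and easier after an error, and the threshold is estimated from the reversal points. The 1-down/1-up rule, step size, and simulated listener are assumptions, not the study's exact procedure.

import numpy as np

rng = np.random.default_rng(2)

def simulate_trial(snr_db, srt_true=-4.0, slope_per_db=0.5):
    """Simulated listener: logistic psychometric function centred on the true SRT."""
    p_correct = 1.0 / (1.0 + np.exp(-slope_per_db * (snr_db - srt_true)))
    return rng.random() < p_correct

def adaptive_srt(n_trials=40, start_snr_db=5.0, step_db=2.0):
    snr, reversals, last_direction = start_snr_db, [], 0
    for _ in range(n_trials):
        direction = -1 if simulate_trial(snr) else +1   # harder after correct, easier after error
        if last_direction and direction != last_direction:
            reversals.append(snr)                        # track reversal points
        last_direction = direction
        snr += direction * step_db
    return float(np.mean(reversals[-6:]))                # SRT = mean of the last reversals

print(f"estimated SRT: {adaptive_srt():.1f} dB SNR (true value -4.0)")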
8
Effect of Masker Head Orientation, Listener Age, and Extended High-Frequency Sensitivity on Speech Recognition in Spatially Separated Speech. Ear Hear 2022; 43:90-100. PMID: 34260434; PMCID: PMC8712343; DOI: 10.1097/aud.0000000000001081.
Abstract
OBJECTIVES Masked speech recognition is typically assessed as though the target and background talkers are all directly facing the listener. However, background speech in natural environments is often produced by talkers facing other directions, and talker head orientation affects the spectral content of speech, particularly at the extended high frequencies (EHFs; >8 kHz). This study investigated the effect of masker head orientation and listeners' EHF sensitivity on speech-in-speech recognition and spatial release from masking in children and adults. DESIGN Participants were 5- to 7-year-olds (n = 15) and adults (n = 34), all with normal hearing up to 8 kHz and a range of EHF hearing thresholds. Speech reception thresholds (SRTs) were measured for target sentences recorded from a microphone directly in front of the talker's mouth and presented from a loudspeaker directly in front of the listener, simulating a target directly in front of and facing the listener. The maskers were two streams of concatenated words recorded from a microphone located at either 0° or 60° azimuth, simulating masker talkers facing the listener or facing away from the listener, respectively. Maskers were presented in one of three spatial conditions: co-located with the target, symmetrically separated on either side of the target (+54° and -54° on the horizontal plane), or asymmetrically separated to the right of the target (both +54° on the horizontal plane). RESULTS Performance was poorer for the facing than for the nonfacing masker head orientation. This benefit of the nonfacing masker head orientation, or head orientation release from masking (HORM), was largest under the co-located condition, but it was also observed for the symmetric and asymmetric masker spatial separation conditions. SRTs were positively correlated with the mean 16-kHz threshold across ears in adults for the nonfacing conditions but not for the facing masker conditions. In adults with normal EHF thresholds, the HORM was comparable in magnitude to the benefit of a symmetric spatial separation of the target and maskers. Although children benefited from the nonfacing masker head orientation, their HORM was reduced compared to adults with normal EHF thresholds. Spatial release from masking was comparable across age groups for symmetric masker placement, but it was larger in adults than children for the asymmetric masker. CONCLUSIONS Masker head orientation affects speech-in-speech recognition in children and adults, particularly those with normal EHF thresholds. This is important because masker talkers do not all face the listener under most natural listening conditions, and assuming a midline orientation would tend to overestimate the effect of spatial separation. The benefits associated with EHF audibility for speech-in-speech recognition may warrant clinical evaluation of thresholds above 8 kHz.
9
Speech intelligibility and talker gender classification with noise-vocoded and tone-vocoded speech. JASA Express Lett 2021; 1:094401. PMID: 34590078; PMCID: PMC8456348; DOI: 10.1121/10.0006285.
Abstract
Vocoded speech provides less spectral information than natural, unprocessed speech, negatively affecting listener performance on speech intelligibility and talker gender classification tasks. In this study, young normal-hearing participants listened to noise-vocoded and tone-vocoded (i.e., sinewave-vocoded) sentences containing 1, 2, 4, 8, 16, or 32 channels, as well as non-vocoded sentences, and reported the words heard as well as the gender of the talker. Overall, performance was significantly better with tone-vocoded than noise-vocoded speech for both tasks. Within the talker gender classification task, biases in performance were observed for lower numbers of channels, especially when using the noise carrier.
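A minimal channel-vocoder sketch of the processing described above: band-pass analysis, envelope extraction, and resynthesis with either a noise or a sinewave (tone) carrier per channel. The band edges, filter orders, and envelope extraction method are assumptions rather than the study's parameters.

import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def vocode(x, fs, n_channels=8, carrier="noise", f_lo=100.0, f_hi=7000.0):
    """Channel vocoder with a band-limited noise or a tone carrier per channel."""
    rng = np.random.default_rng(0)
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)          # log-spaced band edges
    t = np.arange(x.size) / fs
    out = np.zeros(x.size)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        envelope = np.abs(hilbert(sosfiltfilt(sos, x)))        # band envelope
        if carrier == "noise":
            carr = sosfiltfilt(sos, rng.standard_normal(x.size))   # band-limited noise carrier
        else:                                                  # "tone": sinewave at the band centre
            carr = np.sin(2 * np.pi * np.sqrt(lo * hi) * t)
        out += envelope * carr
    return out / (np.max(np.abs(out)) + 1e-12)

# Example on a noise burst standing in for a sentence.
fs = 16000
speech = np.random.default_rng(1).standard_normal(fs)
print(vocode(speech, fs, carrier="tone").shape, vocode(speech, fs, carrier="noise").shape)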
10
Influence of Three Auditory Profiles on Aided Speech Perception in Different Noise Scenarios. Trends Hear 2021; 25:23312165211023709. PMID: 34184946; PMCID: PMC8246576; DOI: 10.1177/23312165211023709.
Abstract
Hearing aid (HA) users differ greatly in their speech-in-noise (SIN) outcomes. This could be because the degree to which current HA fittings can address individual listening needs differs across users and listening situations. In two earlier studies, an auditory test battery and a data-driven method were developed for classifying HA candidates into four distinct auditory profiles differing in audiometric hearing loss and suprathreshold hearing abilities. This study explored aided SIN outcome for three of these profiles in different noise scenarios. Thirty-one older habitual HA users and six young normal-hearing listeners participated. Two SIN tasks were administered: a speech recognition task and a “just follow conversation” task requiring the participants to self-adjust the target-speech level. Three noise conditions were tested: stationary speech-shaped noise, speech-shaped babble noise, and speech-shaped babble noise with competing dialogues. Each HA user was fitted with three HAs from different manufacturers using their recommended procedures. Real-ear measurements were performed to document the final gain settings. The results showed that HA users with mild hearing deficits performed better than HA users with pronounced hearing deficits on the speech recognition task but not the just follow conversation task. Moreover, participants with pronounced hearing deficits obtained different SIN outcomes with the tested HAs, which appeared to be related to differences in HA gain. Overall, these findings imply that current proprietary fitting strategies are limited in their ability to ensure good SIN outcomes, especially for users with pronounced hearing deficits, for whom the choice of device seems most consequential.
11
Benefits of triple acoustic beamforming during speech-on-speech masking and sound localization for bilateral cochlear-implant users. J Acoust Soc Am 2021; 149:3052. PMID: 34241104; PMCID: PMC8102069; DOI: 10.1121/10.0003933.
Abstract
Bilateral cochlear-implant (CI) users struggle to understand speech in noisy environments despite receiving some spatial-hearing benefits. One potential solution is to provide acoustic beamforming. A headphone-based experiment was conducted to compare speech understanding under natural CI listening conditions and with two non-adaptive beamformers: a single beam, and a binaural "triple beam" that provides an improved signal-to-noise ratio (beamforming benefit) and usable spatial cues by reintroducing interaural level differences. Speech reception thresholds (SRTs) for speech-on-speech masking were measured with target speech presented in front and two maskers either co-located with the target or separated by narrow or wide angles. Numerosity judgments and sound-localization performance were also measured. Natural spatial cues, single-beam, and triple-beam conditions were compared. For CI listeners, there was a negligible change in SRTs when comparing co-located with separated maskers under natural listening conditions. In contrast, there were 4.9- and 16.9-dB improvements in SRTs for the single beam and 3.5- and 12.3-dB improvements for the triple beam (narrow and wide separations, respectively). Similar results were found for normal-hearing listeners presented with vocoded stimuli. The single beam improved speech-on-speech masking performance but yielded poor sound localization. The triple beam improved both speech-on-speech masking performance (albeit less than the single beam) and sound localization. Thus, the triple beam was the most versatile across multiple spatial-hearing domains.
12
Binaural Recordings in Natural Acoustic Environments: Estimates of Speech-Likeness and Interaural Parameters. Trends Hear 2021; 24:2331216520972858. PMID: 33331242; PMCID: PMC7750905; DOI: 10.1177/2331216520972858.
Abstract
Binaural acoustic recordings were made in multiple natural environments, which were chosen to be similar to those reported to be difficult for listeners with impaired hearing. These environments include natural conversations that take place in the presence of other sound sources as found in restaurants, walking or biking in the city, and so on. Sounds from these environments were recorded binaurally with in-the-ear microphones and were analyzed with respect to speech-likeness measures and interaural difference measures. The speech-likeness measures were based on amplitude–modulation patterns within frequency bands and were estimated for 1-s time-slices. The interaural difference measures included interaural coherence, interaural time difference, and interaural level difference, which were estimated for time-slices of 20-ms duration. These binaural measures were documented for one-fourth-octave frequency bands centered at 500 Hz and for the envelopes of one-fourth-octave bands centered at 2000 Hz. For comparison purposes, the same speech-likeness and interaural difference measures were computed for a set of virtual recordings that mimic typical clinical test configurations. These virtual recordings were created by filtering anechoic waveforms with available head-related transfer functions and combining them to create multiple source combinations. Overall, the speech-likeness results show large variability within and between environments, and they demonstrate the importance of having information from both ears available. Furthermore, the interaural parameter results show that the natural recordings contain a relatively small proportion of time-slices with high coherence compared with the virtual recordings; however, when present, binaural cues might be used for selecting intervals with good speech intelligibility for individual sources.
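The interaural measures described above can be illustrated for a single short time slice as follows: interaural coherence and ITD are taken from the normalized cross-correlation, and ILD from the level ratio. The sketch omits the quarter-octave filterbank, and all parameter choices are assumptions.

import numpy as np

def interaural_params(left, right, fs, max_itd_ms=1.0):
    """Interaural coherence, ITD (s), and ILD (dB) for one short binaural slice."""
    max_lag = int(max_itd_ms * 1e-3 * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    xcorr = np.array([np.sum(left * np.roll(right, lag)) for lag in lags])
    norm = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2)) + 1e-12
    coherence = float(np.max(xcorr) / norm)
    itd_s = lags[np.argmax(xcorr)] / fs
    ild_db = 20 * np.log10((np.sqrt(np.mean(left ** 2)) + 1e-12) /
                           (np.sqrt(np.mean(right ** 2)) + 1e-12))
    return coherence, itd_s, ild_db

# Example: a 20 ms slice in which the right ear is delayed by 0.5 ms and attenuated by ~3 dB.
fs = 48000
sig = np.random.default_rng(0).standard_normal(int(0.02 * fs) + 100)
left = sig[100:]
right = 0.7 * sig[76:-24]                     # 24 samples = 0.5 ms at 48 kHz
print(interaural_params(left, right, fs))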
13
Speech perception in noise: Masking and unmasking. J Otol 2021; 16:109-119. PMID: 33777124; PMCID: PMC7985001; DOI: 10.1016/j.joto.2020.12.001.
Abstract
Speech perception is essential for daily communication. Background noise or concurrent talkers, however, can make it challenging for listeners to track the target speech (the cocktail party problem). The present study reviews and compares existing findings on speech perception and unmasking in cocktail party listening environments in English and Mandarin Chinese. The review starts with an introduction, followed by a section on related concepts in auditory masking. The next two sections review factors that release speech perception from masking in English and Mandarin Chinese, respectively. The last section presents an overall summary of the findings, with comparisons between the two languages. Future research directions are also discussed in light of differences between the two languages in the literature on this topic.
14
Older Listeners' Perception of Speech With Strengthened and Weakened Dynamic Pitch Cues in Background Noise. J Speech Lang Hear Res 2021; 64:348-358. PMID: 33439741; PMCID: PMC8632513; DOI: 10.1044/2020_jslhr-20-00116.
Abstract
Purpose Dynamic pitch, defined as the variation in fundamental frequency, is an acoustic cue that aids speech perception in noise. This study examined the effects of strengthened and weakened dynamic pitch cues on older listeners' speech perception in noise, as well as how these effects were modulated by individual factors including spectral perception ability. Method The experiment measured speech reception thresholds in noise in both younger listeners with normal hearing and older listeners whose hearing status ranged from near-normal hearing to mild-to-moderate sensorineural hearing loss. The pitch contours of the target speech were manipulated to create four levels of dynamic pitch strength: weakened, original, mildly strengthened, and strengthened. Listeners' spectral perception ability was measured using tests of spectral ripple and frequency modulation discrimination. Results Both younger and older listeners performed worse with manipulated dynamic pitch cues than with the original dynamic pitch. The effects of dynamic pitch on older listeners' speech recognition were associated with their age but not with their perception of spectral information. Older listeners who were relatively younger were more negatively affected by the dynamic pitch manipulations. Conclusions The findings suggest that the current pitch manipulation strategy is detrimental to older listeners' perception of speech in noise relative to the original dynamic pitch. While the influence of age on the effects of dynamic pitch is likely due to age-related declines in pitch perception, the spectral measures used in this study were not strong predictors of dynamic pitch effects. Taken together, these results indicate that the next steps in this line of work should focus on how to manipulate acoustic cues in speech so as to improve speech perception in noise for older listeners.
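One simple way to weaken or strengthen a dynamic pitch contour, in the spirit of the manipulation described above, is to scale each voiced frame's deviation from the mean f0 in the semitone domain (a factor of 0 flattens the contour, values above 1 exaggerate it). The toy contour and scaling factors below are assumptions, and a full implementation would also resynthesize the speech with the modified contour (e.g., via PSOLA or a vocoder).

import numpy as np

def scale_f0_contour(f0_hz, factor):
    """Scale deviations from the mean f0 (in semitones); 0 flattens, >1 exaggerates."""
    f0_hz = np.asarray(f0_hz, dtype=float)
    voiced = f0_hz > 0                                           # 0 marks unvoiced frames
    mean_f0 = np.mean(f0_hz[voiced])
    semitones = 12 * np.log2(f0_hz[voiced] / mean_f0)            # deviation re: mean f0
    out = f0_hz.copy()
    out[voiced] = mean_f0 * 2 ** (factor * semitones / 12)
    return out

contour = np.array([0, 180, 200, 230, 210, 190, 0])              # toy f0 track (Hz), 0 = unvoiced
for factor, label in [(0.25, "weakened"), (1.0, "original"), (1.75, "strengthened")]:
    print(label, np.round(scale_f0_contour(contour, factor), 1))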
15
The effect of using a PORP to reconstruct the ossicular chain under otoendoscopy with and without a malleus handle. Acta Otolaryngol 2021; 141:19-22. PMID: 33063573; DOI: 10.1080/00016489.2020.1815835.
Abstract
BACKGROUND There are many reports on the role of the malleus handle in ossicular chain reconstruction (OCR). However, the effect of the presence of the malleus handle is not clear. AIM/OBJECTIVES To compare the hearing outcomes of using a partial ossicular replacement prosthesis (PORP) to reconstruct the ossicular chain under otoendoscopy with and without a malleus handle. METHODS Records of 57 patients requiring OCR were retrospectively analyzed. They were divided into the malleus handle-present group (group 1) and the malleus handle-absent group (group 2). The audiometric results were analyzed pre- and postoperatively. A postoperative air-bone gap (ABG) ≤ 20 dB was considered a success. RESULTS The mean improvement in air conduction hearing thresholds was 19.80 dB in group 1 and 16.70 dB in group 2. The mean ABG improvement was 18.09 ± 12.79 dB for group 1 and 17.20 ± 16.44 dB for group 2. The malleus handle-present group achieved a higher success rate (65.63%) than the malleus handle-absent group (52%), although the difference was not statistically significant (p > .05). CONCLUSIONS AND SIGNIFICANCE Improvements in hearing outcomes were similar for the two groups. However, the malleus handle-present group showed a better reconstruction success rate. Our results suggest that if there is no lesion of the malleus handle, it should be retained.
16
Enhancing the perceptual segregation and localization of sound sources with a triple beamformer. J Acoust Soc Am 2020; 148:3598. PMID: 33379918; PMCID: PMC8097713; DOI: 10.1121/10.0002779.
Abstract
A triple beamformer was developed to exploit the capabilities of the binaural auditory system. The goal was to enhance the perceptual segregation of spatially separated sound sources while preserving source localization. The triple beamformer comprised a variant of a standard single-channel beamformer that routes the primary beam output, focused on the target source location, to both ears. The triple-beam algorithm adds two supplementary beams, with the left-focused beam routed only to the left ear and the right-focused beam routed only to the right ear. The rationale for the approach is that the triple-beam processing aids sound source segregation under high informational masking (IM) conditions. Furthermore, the exaggerated interaural level differences produced by the triple beam are well suited for categories of listeners (e.g., bilateral cochlear implant users) who receive limited benefit from interaural time differences. Performance with the triple beamformer was compared to normal binaural hearing (simulated using a Knowles Electronics Manikin for Acoustic Research, G.R.A.S. Sound and Vibration, Holte, DK) and to that obtained with a single-channel beamformer. Source localization in azimuth and masked speech identification with maskers at multiple locations were measured for all three algorithms. Taking both localization and speech intelligibility into account, the triple-beam algorithm was considered to be advantageous under high-IM listening conditions.
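A schematic delay-and-sum version of the triple-beam idea: one beam steered at the target is routed to both ears, a left-steered beam only to the left ear, and a right-steered beam only to the right ear, which reintroduces large interaural level differences. The array geometry, steering angles, and fractional-delay method below are assumptions, not the authors' algorithm.

import numpy as np

C = 343.0                                            # speed of sound (m/s)

def frac_delay(x, delay_s, fs):
    """Delay a signal by a (possibly fractional) number of samples via an FFT phase shift."""
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    return np.fft.irfft(np.fft.rfft(x) * np.exp(-2j * np.pi * freqs * delay_s), n=x.size)

def delay_and_sum(mics, fs, steer_deg, spacing_m=0.01):
    """Steer a linear array (channels along axis 0) toward steer_deg and average."""
    n_mics = mics.shape[0]
    delays = np.arange(n_mics) * spacing_m * np.sin(np.radians(steer_deg)) / C
    delays -= delays.min()                           # keep all delays non-negative
    return sum(frac_delay(mics[m], delays[m], fs) for m in range(n_mics)) / n_mics

def triple_beam(mics, fs, side_deg=60.0):
    front = delay_and_sum(mics, fs, 0.0)             # target beam, sent to both ears
    left_ear = front + delay_and_sum(mics, fs, -side_deg)
    right_ear = front + delay_and_sum(mics, fs, +side_deg)
    return left_ear, right_ear

# Example with a 4-microphone array and white-noise input.
fs = 16000
mics = np.random.default_rng(0).standard_normal((4, fs))
left, right = triple_beam(mics, fs)
print(left.shape, right.shape)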
17
The effect of fundamental frequency contour similarity on multi-talker listening in older and younger adults. J Acoust Soc Am 2020; 148:3527. PMID: 33379934; PMCID: PMC7863686; DOI: 10.1121/10.0002661.
Abstract
Older adults with hearing loss have greater difficulty recognizing target speech in multi-talker environments than young adults with normal hearing, especially when target and masker speech streams are perceptually similar. A difference in fundamental frequency (f0) contour depth is an effective stream segregation cue for young adults with normal hearing. This study examined whether older adults with varying degrees of sensorineural hearing loss are able to utilize differences in target/masker f0 contour depth to improve speech recognition in multi-talker listening. Speech recognition thresholds (SRTs) were measured for speech mixtures composed of target/masker streams with flat, normal, and exaggerated speaking styles, in which f0 contour depth systematically varied. Computational modeling estimated differences in energetic masking across listening conditions. Young adults had lower SRTs than older adults, a result that was partially explained by differences in audibility predicted by the model. However, audibility differences did not explain why young adults experienced a benefit from mismatched target/masker f0 contour depth while, in most conditions, older adults did not. A reduced ability to use segregation cues (differences in target/masker f0 contour depth) and deficits in grouping speech with variable f0 contours likely contribute to the difficulties experienced by older adults in challenging acoustic environments.
18
Assessing the benefit of acoustic beamforming for listeners with aphasia using modified psychoacoustic methods. J Acoust Soc Am 2020; 148:2894. PMID: 33261373; PMCID: PMC8097716; DOI: 10.1121/10.0002454.
Abstract
Acoustic beamforming has been shown to improve identification of target speech in noisy listening environments for individuals with sensorineural hearing loss. This study examined whether beamforming would provide a similar benefit for individuals with aphasia (acquired neurological language impairment). The benefit of beamforming was examined for persons with aphasia (PWA) and age- and hearing-matched controls in both a speech masking condition and a speech-shaped, speech-modulated noise masking condition. Performance was measured when natural spatial cues were provided, as well as when the target speech level was enhanced via a single-channel beamformer. Because typical psychoacoustic methods may present substantial experimental confounds for PWA, clinically guided modifications of experimental procedures were determined individually for each PWA participant. Results indicated that the beamformer provided a significant overall benefit to listeners. On an individual level, both PWA and controls who exhibited poorer performance on the speech masking condition with spatial cues benefited from the beamformer, while those who achieved better performance with spatial cues did not. All participants benefited from the beamformer in the noise masking condition. The findings suggest that a spatially tuned hearing aid may be beneficial for older listeners with relatively mild hearing loss who have difficulty taking advantage of spatial cues.
19
The effects of target-masker sex mismatch on linguistic release from masking. J Acoust Soc Am 2020; 148:2006. PMID: 33138488; PMCID: PMC7556881; DOI: 10.1121/10.0002165.
Abstract
Listeners often experience challenges understanding an interlocutor (target) in the presence of competing talkers (maskers). However, this difficulty decreases when native-language targets (English) are paired with different-language maskers (e.g., Dutch), a phenomenon known as linguistic release from masking (LRM). There is considerable evidence that the linguistic similarity between target-masker pairs determines the size of LRM. This study investigated whether and how LRM is affected when the streams also differ in talker sex. Experiment 1 investigated intelligibility for English targets in sex-matched and sex-mismatched conditions with Dutch or English maskers. While typical LRM effects were obtained when sex was matched, opposite effects were detected when sex was mismatched. In experiment 2, Mandarin maskers were used to increase linguistic dissimilarity and elicit stronger LRM effects. Despite the greater linguistic dissimilarity, the surprising reverse LRM effect in the sex-mismatched condition persisted. In experiment 3, the target stream was held constant and talker sex and language were manipulated in the masker. Here, the expected LRM effects were obtained for both the sex-matched and sex-mismatched conditions. This indicates that the locus of the dissimilarities, and not just their relative properties, affects LRM. Broadly, this study suggests that using naturally varying listening situations advances understanding of the factors underlying LRM.
20
Prediction of individual speech recognition performance in complex listening conditions. J Acoust Soc Am 2020; 147:1379. PMID: 32237817; DOI: 10.1121/10.0000759.
Abstract
This study examined how well individual speech recognition thresholds in complex listening scenarios could be predicted by a current binaural speech intelligibility model. Model predictions were compared with experimental data measured for seven normal-hearing and 23 hearing-impaired listeners who differed widely in degree of hearing loss, age, and performance in clinical speech tests. The experimental conditions included two masker types (multi-talker or two-talker maskers) and two spatial conditions (maskers co-located with the frontal target or symmetrically separated from the target). The results showed that interindividual variability could not be well predicted by a model including only individual audiograms. Predictions improved when an additional individual "proficiency factor" was derived from one of the experimental conditions or a standard speech test. Overall, the current model can predict individual performance relatively well (except in conditions high in informational masking), but the inclusion of age-related factors may lead to even further improvements.
21
The importance of processing resolution in "ideal time-frequency segregation" of masked speech and the implications for predicting speech intelligibility. J Acoust Soc Am 2020; 147:1648. PMID: 32237827; PMCID: PMC7075715; DOI: 10.1121/10.0000893.
Abstract
Ideal time-frequency segregation (ITFS) is a signal processing technique that may be used to estimate the energetic and informational components of speech-on-speech masking. A core assumption of ITFS is that it roughly emulates the effects of energetic masking (EM) in a speech mixture. Thus, when speech identification thresholds are measured for ITFS-processed stimuli and compared to thresholds for unprocessed stimuli, the difference can be attributed to informational masking (IM). Interpreting this difference as a direct metric of IM, however, is complicated by the fine time-frequency (T-F) resolution typically used during ITFS, which may yield target "glimpses" that are too narrow/brief to be resolved by the ear in the mixture. Estimates of IM, therefore, may be inflated because the full effects of EM are not accounted for. Here, T-F resolution was varied during ITFS to determine if/how estimates of IM depend on processing resolution. Speech identification thresholds were measured for speech and noise maskers after ITFS. Reduced frequency resolution yielded poorer thresholds for both masker types. Reduced temporal resolution did so for noise maskers only. Results suggest that processing resolution strongly influences estimates of IM and implies that current approaches to predicting masked speech intelligibility should be modified to account for IM.
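The core ITFS operation can be sketched with an ideal binary mask: keep mixture time-frequency cells whose local target-to-masker ratio exceeds a criterion and discard the rest, with the STFT window length setting the time-frequency resolution that this article shows matters for IM estimates. The criterion and window lengths below are assumptions, not the study's settings.

import numpy as np
from scipy.signal import stft, istft

def itfs(target, masker, fs, criterion_db=0.0, nperseg=512):
    """Apply an ideal binary mask to the mixture and resynthesize the 'glimpsed' target."""
    mixture = target + masker
    _, _, T = stft(target, fs=fs, nperseg=nperseg)
    _, _, M = stft(masker, fs=fs, nperseg=nperseg)
    _, _, X = stft(mixture, fs=fs, nperseg=nperseg)
    local_tmr_db = 10 * np.log10((np.abs(T) ** 2 + 1e-12) / (np.abs(M) ** 2 + 1e-12))
    mask = local_tmr_db > criterion_db              # ideal binary mask
    _, glimpsed = istft(X * mask, fs=fs, nperseg=nperseg)
    return glimpsed[: mixture.size], mask

# Example: the same noise stand-ins processed at two different frequency resolutions.
fs = 16000
rng = np.random.default_rng(0)
target, masker = rng.standard_normal(fs), rng.standard_normal(fs)
for nperseg in (128, 1024):
    _, mask = itfs(target, masker, fs, nperseg=nperseg)
    print(f"nperseg={nperseg}: {mask.mean():.2f} of cells retained")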
22
Binaural sensitivity and release from speech-on-speech masking in listeners with and without hearing loss. J Acoust Soc Am 2020; 147:1546. PMID: 32237845; PMCID: PMC7060089; DOI: 10.1121/10.0000812.
Abstract
Listeners with sensorineural hearing loss routinely experience less spatial release from masking (SRM) in speech mixtures than listeners with normal hearing. Hearing-impaired listeners have also been shown to have degraded temporal fine structure (TFS) sensitivity, a consequence of which is degraded access to interaural time differences (ITDs) contained in the TFS. Since these "binaural TFS" cues are critical for spatial hearing, it has been hypothesized that degraded binaural TFS sensitivity accounts for the limited SRM experienced by hearing-impaired listeners. In this study, speech stimuli were noise-vocoded using carriers that were systematically decorrelated across the left and right ears, thus simulating degraded binaural TFS sensitivity. Both (1) ITD sensitivity in quiet and (2) SRM in speech mixtures spatialized using ITDs (or binaural release from masking; BRM) were measured as a function of TFS interaural decorrelation in young normal-hearing and hearing-impaired listeners. This allowed for the examination of the relationship between ITD sensitivity and BRM over a wide range of ITD thresholds. This paper found that, for a given ITD sensitivity, hearing-impaired listeners experienced less BRM than normal-hearing listeners, suggesting that binaural TFS sensitivity can account for only a modest portion of the BRM deficit in hearing-impaired listeners. However, substantial individual variability was observed.
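The interaural decorrelation of vocoder carriers described above is commonly produced by mixing a shared noise with an independent noise, as in the generic sketch below; the study's vocoder details are not reproduced here, and the correlation values are assumptions.

import numpy as np

def correlated_carriers(n_samples, rho, rng=np.random.default_rng(0)):
    """Two noise carriers whose expected interaural correlation equals rho."""
    common = rng.standard_normal(n_samples)
    independent = rng.standard_normal(n_samples)
    left = common
    right = rho * common + np.sqrt(1.0 - rho ** 2) * independent
    return left, right

for rho in (1.0, 0.8, 0.4, 0.0):
    left, right = correlated_carriers(100_000, rho)
    measured = np.corrcoef(left, right)[0, 1]
    print(f"requested rho={rho:.1f}, measured ~{measured:.2f}")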
23
Can background noise increase the informational masking in a speech mixture? J Acoust Soc Am 2020; 147:EL144. PMID: 32113285; PMCID: PMC7015733; DOI: 10.1121/10.0000719.
Abstract
This study tested the hypothesis that adding noise to a speech mixture may cause both energetic masking by obscuring parts of the target message and informational masking by impeding the segregation of competing voices. The stimulus was the combination of two talkers (one target and one masker) presented either in quiet or in noise. Target intelligibility was measured in this mixture and for conditions in which the speech was "glimpsed" in order to quantify the energetic masking present. The results suggested that the addition of background noise exacerbated informational masking, primarily by increasing the sparseness of the speech.
24
Effects of Acquired Aphasia on the Recognition of Speech Under Energetic and Informational Masking Conditions. Trends Hear 2019; 23:2331216519884480. PMID: 31694486; PMCID: PMC7000861; DOI: 10.1177/2331216519884480.
Abstract
Persons with aphasia (PWA) often report difficulty understanding spoken language in noisy environments that require listeners to identify and selectively attend to target speech while ignoring competing background sounds or “maskers.” This study compared the performance of PWA and age-matched healthy controls (HC) on a masked speech identification task and examined the consequences of different types of masking on performance. Twelve PWA and 12 age-matched HC completed a speech identification task comprising three conditions designed to differentiate between the effects of energetic and informational masking on receptive speech processing. The target and masker speech materials were taken from a closed-set matrix-style corpus, and a forced-choice word identification task was used. Target and maskers were spatially separated from one another in order to simulate real-world listening environments and allow listeners to make use of binaural cues for source segregation. Individualized frequency-specific gain was applied to compensate for the effects of hearing loss. Although both groups showed similar susceptibility to the effects of energetic masking, PWA were more susceptible than age-matched HC to the effects of informational masking. Results indicate that this increased susceptibility cannot be attributed to age, hearing loss, or comprehension deficits and is therefore a consequence of acquired cognitive-linguistic impairments associated with aphasia. This finding suggests that aphasia may result in increased difficulty segregating target speech from masker speech, which in turn may have implications for the ability of PWA to comprehend target speech in multitalker environments, such as restaurants, family gatherings, and other everyday situations.