1
Patro C, Monfiletto A, Singer A, Srinivasan NK, Mishra SK. Midlife Speech Perception Deficits: Impact of Extended High-Frequency Hearing, Peripheral Neural Function, and Cognitive Abilities. Ear Hear 2024:00003446-990000000-00269. [PMID: 38556645] [DOI: 10.1097/aud.0000000000001504]
Abstract
OBJECTIVES The objectives of the present study were to investigate the effects of age-related changes in extended high-frequency (EHF) hearing, peripheral neural function, working memory, and executive function on speech perception deficits in middle-aged individuals with clinically normal hearing. DESIGN We administered a comprehensive assessment battery to 37 participants spanning the age range of 20 to 56 years. This battery encompassed various evaluations, including standard and EHF pure-tone audiometry, ranging from 0.25 to 16 kHz. In addition, we conducted auditory brainstem response assessments with varying stimulation rates and levels, a spatial release from masking (SRM) task, and cognitive evaluations that involved the Trail Making test (TMT) for assessing executive function and the Abbreviated Reading Span test (ARST) for measuring working memory. RESULTS The results indicated a decline in hearing sensitivities at EHFs and an increase in completion times for the TMT with age. In addition, as age increased, there was a corresponding decrease in the amount of SRM. The declines in SRM were associated with age-related declines in hearing sensitivity at EHFs and TMT performance. While we observed an age-related decline in wave I responses, this decline was primarily driven by age-related reductions in EHF thresholds. In addition, the results obtained using the ARST did not show an age-related decline. Neither the auditory brainstem response results nor ARST scores were correlated with the amount of SRM. CONCLUSIONS These findings suggest that speech perception deficits in middle age are primarily linked to declines in EHF hearing and executive function, rather than cochlear synaptopathy or working memory.
Affiliation(s)
- Chhayakanta Patro: Department of Speech Language Pathology & Audiology, Towson University, Towson, Maryland, USA
- Angela Monfiletto: Department of Speech Language Pathology & Audiology, Towson University, Towson, Maryland, USA
- Aviya Singer: Department of Speech Language Pathology & Audiology, Towson University, Towson, Maryland, USA
- Nirmal Kumar Srinivasan: Department of Speech Language Pathology & Audiology, Towson University, Towson, Maryland, USA
- Srikanta Kumar Mishra: Department of Speech, Language and Hearing Sciences, The University of Texas at Austin, Austin, Texas, USA
2
Anderson SR, Burg E, Suveg L, Litovsky RY. Review of Binaural Processing With Asymmetrical Hearing Outcomes in Patients With Bilateral Cochlear Implants. Trends Hear 2024; 28:23312165241229880. [PMID: 38545645] [PMCID: PMC10976506] [DOI: 10.1177/23312165241229880]
Abstract
Bilateral cochlear implants (BiCIs) result in several benefits, including improvements in speech understanding in noise and sound source localization. However, the benefit that bilateral implants provide varies considerably across recipients. Here we consider one of the reasons for this variability: differences in hearing function between the two ears, that is, interaural asymmetry. Thus far, investigations of interaural asymmetry have been highly specialized within various research areas. The goal of this review is to integrate these studies in one place, motivating future research in the area of interaural asymmetry. We first consider bottom-up processing, where binaural cues are represented using excitation-inhibition of signals from the left and right ears, varying with the location of the sound in space, and represented by the lateral superior olive in the auditory brainstem. We then consider top-down processing via predictive coding, which assumes that perception stems from expectations based on context and prior sensory experience, represented by a cascading series of cortical circuits. An internal, perceptual model is maintained and updated in light of incoming sensory input. Together, we hope that this amalgamation of physiological, behavioral, and modeling studies will help bridge gaps in the field of binaural hearing and promote a clearer understanding of the implications of interaural asymmetry for future research on optimal patient interventions.
Affiliation(s)
- Sean R. Anderson: Waisman Center, University of Wisconsin-Madison, Madison, WI, USA; Department of Physiology and Biophysics, University of Colorado Anschutz Medical School, Aurora, CO, USA
- Emily Burg: Waisman Center, University of Wisconsin-Madison, Madison, WI, USA; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Lukas Suveg: Waisman Center, University of Wisconsin-Madison, Madison, WI, USA
- Ruth Y. Litovsky: Waisman Center, University of Wisconsin-Madison, Madison, WI, USA; Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, WI, USA; Department of Surgery, Division of Otolaryngology, University of Wisconsin-Madison, Madison, WI, USA
3
Windle R, Dillon H, Heinrich A. A review of auditory processing and cognitive change during normal ageing, and the implications for setting hearing aids for older adults. Front Neurol 2023; 14:1122420. [PMID: 37409017] [PMCID: PMC10318159] [DOI: 10.3389/fneur.2023.1122420]
Abstract
Throughout our adult lives there is a decline in peripheral hearing, auditory processing and elements of cognition that support listening ability. Audiometry provides no information about the status of auditory processing and cognition, and older adults often struggle with complex listening situations, such as speech in noise perception, even if their peripheral hearing appears normal. Hearing aids can address some aspects of peripheral hearing impairment and improve signal-to-noise ratios. However, they cannot directly enhance central processes and may introduce distortion to sound that might act to undermine listening ability. This review paper highlights the need to consider the distortion introduced by hearing aids, specifically when considering normally-ageing older adults. We focus on patients with age-related hearing loss because they represent the vast majority of the population attending audiology clinics. We believe that it is important to recognize that the combination of peripheral and central, auditory and cognitive decline makes older adults some of the most complex patients seen in audiology services, so they should not be treated as "standard" despite the high prevalence of age-related hearing loss. We argue that a primary concern should be to avoid hearing aid settings that introduce distortion to speech envelope cues, which is not a new concept. The primary cause of distortion is the speed and range of change to hearing aid amplification (i.e., compression). We argue that slow-acting compression should be considered as a default for some users and that other advanced features should be reconsidered as they may also introduce distortion that some users may not be able to tolerate. We discuss how this can be incorporated into a pragmatic approach to hearing aid fitting that does not require increased loading on audiology services.
Affiliation(s)
- Richard Windle: Audiology Department, Royal Berkshire NHS Foundation Trust, Reading, United Kingdom
- Harvey Dillon: NIHR Manchester Biomedical Research Centre, Manchester, United Kingdom; Department of Linguistics, Macquarie University, North Ryde, NSW, Australia
- Antje Heinrich: NIHR Manchester Biomedical Research Centre, Manchester, United Kingdom; Division of Human Communication, Development and Hearing, School of Health Sciences, University of Manchester, Manchester, United Kingdom
4
Roverud E, Villard S, Kidd G. Strength of target source segregation cues affects the outcome of speech-on-speech masking experiments. J Acoust Soc Am 2023; 153:2780. [PMID: 37140176] [PMCID: PMC10319449] [DOI: 10.1121/10.0019307]
Abstract
In speech-on-speech listening experiments, some means for designating which talker is the "target" must be provided for the listener to perform better than chance. However, the relative strength of the segregation variables designating the target could affect the results of the experiment. Here, we examine the interaction of two source segregation variables, spatial separation and talker gender differences, and demonstrate that the relative strengths of these cues may affect the interpretation of the results. Participants listened to sentence pairs spoken by different-gender target and masker talkers, presented naturally or vocoded (degrading gender cues), either colocated or spatially separated. Target and masker words were temporally interleaved to eliminate energetic masking in either an every-other-word or randomized order of presentation. Results showed that the order of interleaving had no effect on recall performance. For natural speech with strong talker gender cues, spatial separation of sources yielded no improvement in performance. For vocoded speech with degraded talker gender cues, performance improved significantly with spatial separation of sources. These findings reveal that listeners may shift among target source segregation cues contingent on cue viability. Finally, performance was poor when the target was designated after stimulus presentation, indicating strong reliance on the cues.
Affiliation(s)
- Elin Roverud: Department of Speech, Language and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
- Sarah Villard: Department of Speech, Language and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
- Gerald Kidd: Department of Speech, Language and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
5
Valderrama JT, de la Torre A, McAlpine D. The hunt for hidden hearing loss in humans: From preclinical studies to effective interventions. Front Neurosci 2022; 16:1000304. [PMID: 36188462] [PMCID: PMC9519997] [DOI: 10.3389/fnins.2022.1000304]
Abstract
Many individuals experience hearing problems that are hidden under a normal audiogram. This impacts not only individual sufferers, but also clinicians who can offer little in the way of support. Animal studies using invasive methodologies have developed solid evidence for a range of pathologies underlying this hidden hearing loss (HHL), including cochlear synaptopathy, auditory nerve demyelination, elevated central gain, and neural mal-adaptation. Despite progress in pre-clinical models, evidence supporting the existence of HHL in humans remains inconclusive, and clinicians lack any non-invasive biomarkers sensitive to HHL, as well as a standardized protocol to manage hearing problems in the absence of elevated hearing thresholds. Here, we review animal models of HHL as well as the ongoing research for tools with which to diagnose and manage hearing difficulties associated with HHL. We also discuss new research opportunities facilitated by recent methodological tools that may overcome a series of barriers that have hampered meaningful progress in diagnosing and treating HHL.
Affiliation(s)
- Joaquin T. Valderrama (correspondence): National Acoustic Laboratories, Sydney, NSW, Australia; Department of Linguistics, Macquarie University Hearing, Macquarie University, Sydney, NSW, Australia
- Angel de la Torre: Department of Signal Theory, Telematics and Communications, University of Granada, Granada, Spain; Research Centre for Information and Communications Technologies (CITIC-UGR), University of Granada, Granada, Spain
- David McAlpine: Department of Linguistics, Macquarie University Hearing, Macquarie University, Sydney, NSW, Australia
|
6
|
Gibbs BE, Bernstein JGW, Brungart DS, Goupell MJ. Effects of better-ear glimpsing, binaural unmasking, and spectral resolution on spatial release from masking in cochlear-implant users. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2022; 152:1230. [PMID: 36050186 PMCID: PMC9420049 DOI: 10.1121/10.0013746] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/24/2021] [Revised: 08/04/2022] [Accepted: 08/06/2022] [Indexed: 06/15/2023]
Abstract
Bilateral cochlear-implant (BICI) listeners obtain less spatial release from masking (SRM; speech-recognition improvement for spatially separated vs co-located conditions) than normal-hearing (NH) listeners, especially for symmetrically placed maskers that produce similar long-term target-to-masker ratios at the two ears. Two experiments examined possible causes of this deficit, including limited better-ear glimpsing (using speech information from the more advantageous ear in each time-frequency unit), limited binaural unmasking (using interaural differences to improve signal-in-noise detection), or limited spectral resolution. Listeners had NH (presented with unprocessed or vocoded stimuli) or BICIs. Experiment 1 compared natural symmetric maskers, idealized monaural better-ear masker (IMBM) stimuli that automatically performed better-ear glimpsing, and hybrid stimuli that added worse-ear information, potentially restoring binaural cues. BICI and NH-vocoded SRM was comparable to NH-unprocessed SRM for idealized stimuli but was 14%-22% lower for symmetric stimuli, suggesting limited better-ear glimpsing ability. Hybrid stimuli improved SRM for NH-unprocessed listeners but degraded SRM for BICI and NH-vocoded listeners, suggesting they experienced across-ear interference instead of binaural unmasking. In experiment 2, increasing the number of vocoder channels did not change NH-vocoded SRM. BICI SRM deficits likely reflect a combination of across-ear interference, limited better-ear glimpsing, and poorer binaural unmasking that stems from cochlear-implant-processing limitations other than reduced spectral resolution.
Affiliation(s)
- Bobby E Gibbs: Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Joshua G W Bernstein: National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889, USA
- Douglas S Brungart: National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889, USA
- Matthew J Goupell: Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
7
Raghavendra S, Chun H, Lee S, Chen F, Martin BA, Tan CT. Cross-Frequency Coupling in Cortical Processing of Speech. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:25-29. [PMID: 36085847] [DOI: 10.1109/embc48229.2022.9871602]
Abstract
This study examines power-power cross-frequency coupling (CFC) between different frequency bands of cortical activity in normal-hearing (NH) listeners and its association with the processing of the temporal envelope (ENV) and temporal fine structure (TFS) of speech. CFC between the alpha and theta bands and between the gamma and theta bands was investigated when only the ENV, only the TFS, or the original speech itself was presented. Compared with the cortical activity in response to the original speech, listening to the ENV alone increased both alpha-theta and gamma-theta CFC. In contrast, listening to the TFS alone reduced gamma-theta CFC relative to the original speech, while alpha-theta CFC was comparable to that observed with the original speech. The increase in CFC may suggest greater synchrony across different bands of cortical activity when processing the ENV than the TFS. These measures could serve as indicators of whether the ENV or the TFS is being perceived.
8
Combining background noise and artificial masking to achieve privacy in sound zones. Comput Speech Lang 2022. [DOI: 10.1016/j.csl.2021.101285]
9
Oh Y, Hartling CL, Srinivasan NK, Diedesch AC, Gallun FJ, Reiss LAJ. Factors underlying masking release by voice-gender differences and spatial separation cues in multi-talker listening environments in listeners with and without hearing loss. Front Neurosci 2022; 16:1059639. [PMID: 36507363] [PMCID: PMC9726925] [DOI: 10.3389/fnins.2022.1059639]
Abstract
Voice-gender differences and spatial separation are important cues for auditory object segregation. The goal of this study was to investigate the relationship of voice-gender difference benefit to the breadth of binaural pitch fusion, the perceptual integration of dichotic stimuli that evoke different pitches across ears, and the relationship of spatial separation benefit to localization acuity, the ability to identify the direction of a sound source. Twelve bilateral hearing aid (HA) users (age from 30 to 75 years) and eleven normal hearing (NH) listeners (age from 36 to 67 years) were tested in the following three experiments. First, speech-on-speech masking performance was measured as the threshold target-to-masker ratio (TMR) needed to understand a target talker in the presence of either same- or different-gender masker talkers. These target-masker gender combinations were tested with two spatial configurations (maskers co-located or 60° symmetrically spatially separated from the target) in both monaural and binaural listening conditions. Second, binaural pitch fusion range measurements were conducted using harmonic tone complexes around a 200-Hz fundamental frequency. Third, absolute localization acuity was measured using broadband (125-8000 Hz) noise and one-third octave noise bands centered at 500 and 3000 Hz. Voice-gender differences between target and maskers improved TMR thresholds for both listener groups in the binaural condition as well as both monaural (left ear and right ear) conditions, with greater benefit in co-located than spatially separated conditions. Voice-gender difference benefit was correlated with the breadth of binaural pitch fusion in the binaural condition, but not the monaural conditions, ruling out a role of monaural abilities in the relationship between binaural fusion and voice-gender difference benefits. Spatial separation benefit was not significantly correlated with absolute localization acuity. In addition, greater spatial separation benefit was observed in NH listeners than in bilateral HA users, indicating a decreased ability of HA users to benefit from spatial release from masking (SRM). These findings suggest that sharp binaural pitch fusion may be important for maximal speech perception in multi-talker environments for both NH listeners and bilateral HA users.
Affiliation(s)
- Yonghee Oh (correspondence): Department of Otolaryngology and Communicative Disorders, University of Louisville, Louisville, KY, United States; National Center for Rehabilitative Auditory Research, VA Portland Health Care System, Portland, OR, United States
- Curtis L. Hartling: Department of Otolaryngology, Oregon Health & Science University, Portland, OR, United States
- Nirmal Kumar Srinivasan: Department of Speech-Language Pathology & Audiology, Towson University, Towson, MD, United States
- Anna C. Diedesch: Department of Communication Sciences and Disorders, Western Washington University, Bellingham, WA, United States
- Frederick J. Gallun: National Center for Rehabilitative Auditory Research, VA Portland Health Care System, Portland, OR, United States; Department of Otolaryngology, Oregon Health & Science University, Portland, OR, United States
- Lina A. J. Reiss: National Center for Rehabilitative Auditory Research, VA Portland Health Care System, Portland, OR, United States; Department of Otolaryngology, Oregon Health & Science University, Portland, OR, United States
10
Flanagan SA, Moore BCJ, Wilson AM, Gabrielczyk FC, MacFarlane A, Mandke K, Goswami U. Development of binaural temporal fine structure sensitivity in children. J Acoust Soc Am 2021; 150:2967. [PMID: 34717481] [DOI: 10.1121/10.0006665]
Abstract
The highest frequency for which the temporal fine structure (TFS) of a sinewave can be compared across ears varies between listeners, with an upper limit of about 1400 Hz for young normal-hearing adults (YNHA). In this study, binaural TFS sensitivity was investigated for 63 typically developing children, aged 5 years 6 months to 9 years 4 months, using the temporal fine structure-adaptive frequency (TFS-AF) test of Füllgrabe, Harland, Sęk, and Moore [Int. J. Audiol. 56, 926-935 (2017)]. The test assesses the highest frequency at which an interaural phase difference (IPD) of ϕ° can be distinguished from an IPD of 0°. The values of ϕ were 30° and 180°. The starting frequency was 200 Hz. The thresholds for the children were significantly lower (worse) than the thresholds reported by Füllgrabe, Harland, Sęk, and Moore for YNHA. For both values of ϕ, the median age of the children who performed above chance level was significantly higher (p < 0.001) than that of those who performed at chance. For the subgroup of 40 children who performed above chance for ϕ = 180°, linear regression analyses showed that the thresholds for ϕ = 180° increased (improved) significantly with increasing age (p < 0.001), with adult-like thresholds predicted to be reached at 10 years 2 months of age. The implications for spatial release from masking are discussed.
Affiliation(s)
- Sheila A Flanagan: Department of Psychology, Centre for Neuroscience in Education, University of Cambridge, Downing Street, Cambridge CB2 3EB, United Kingdom
- Brian C J Moore: Department of Psychology, Centre for Neuroscience in Education, University of Cambridge, Downing Street, Cambridge CB2 3EB, United Kingdom
- Angela M Wilson: Department of Psychology, Centre for Neuroscience in Education, University of Cambridge, Downing Street, Cambridge CB2 3EB, United Kingdom
- Fiona C Gabrielczyk: Department of Psychology, Centre for Neuroscience in Education, University of Cambridge, Downing Street, Cambridge CB2 3EB, United Kingdom
- Annabel MacFarlane: Department of Psychology, Centre for Neuroscience in Education, University of Cambridge, Downing Street, Cambridge CB2 3EB, United Kingdom
- Kanad Mandke: Department of Psychology, Centre for Neuroscience in Education, University of Cambridge, Downing Street, Cambridge CB2 3EB, United Kingdom
- Usha Goswami: Department of Psychology, Centre for Neuroscience in Education, University of Cambridge, Downing Street, Cambridge CB2 3EB, United Kingdom
11
Xu N, Luo L, Chen L, Ding Y, Li L. Different binaural processing of the envelope component and the temporal fine structure component of a narrowband noise in rat inferior colliculus. Hear Res 2021; 411:108354. [PMID: 34583218] [DOI: 10.1016/j.heares.2021.108354]
Abstract
Complex broadband sounds are decomposed by peripheral auditory filters into a series of relatively narrowband signals, each with a slowly varying envelope (ENV) and a rapidly fluctuating temporal fine structure (TFS). ENV and TFS information at the bilateral ears contribute differentially to auditory perception. However, whether the difference could be attributed to mechanisms of binaural integration remains an open question. As a potential neural correlate, subsets of neurons in the central nucleus of the inferior colliculus (ICC) are known to integrate binaural information via binaural inhibition or binaural summation. Therefore, we recorded the frequency-following responses (FFRs) to the ENV and TFS components of narrowband noises in the ICC of anesthetized rats and examined changes in FFR amplitude and stimulus-response coherence under various sound-delivery settings. We showed that binaural FFRENV was predominantly elicited by contralateral inputs and inhibited by ipsilateral inputs, exhibiting a "binaural-inhibition"-like property. On the other hand, binaural FFRTFS received a balanced contribution from both sides, echoing the "binaural-summation" mechanism. What is more, binaural FFRENV was significantly correlated with contralateral-evoked but not ipsilateral-evoked FFRENV, while binaural FFRTFS correlated with both contralateral- and ipsilateral-evoked FFRTFS. Overall, these results suggest distinct binaural processing of ENV and TFS information at the midbrain level.
Affiliation(s)
- Na Xu: School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing 100080, China; Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
- Lu Luo: School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing 100080, China; School of Psychology, Beijing Sport University, Beijing 100084, China
- Liangjie Chen: School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing 100080, China
- Yu Ding: School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing 100080, China; Division of Sports Science and Physical Education, Tsinghua University, Beijing 100084, China
- Liang Li: School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing 100080, China; Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing 100871, China; Beijing Institute for Brain Disorders, Beijing 100096, China
12
Wang C, Wang Z, Xie B, Shi X, Yang P, Liu L, Qu T, Qin Q, Xing Y, Zhu W, Teipel SJ, Jia J, Zhao G, Li L, Tang Y. Binaural processing deficit and cognitive impairment in Alzheimer's disease. Alzheimers Dement 2021; 18:1085-1099. [PMID: 34569690] [DOI: 10.1002/alz.12464]
Abstract
Speech comprehension in noisy environments depends on central auditory functions, which are vulnerable in Alzheimer's disease (AD). Binaural processing exploits the sounds received at the two ears to optimally process degraded sound information; its characteristics in AD are poorly understood. We studied behavioral and electrophysiological alterations in binaural processing among 121 participants (AD = 27; amnestic mild cognitive impairment [aMCI] = 33; subjective cognitive decline [SCD] = 30; cognitively normal [CN] = 31). We observed impairment of binaural processing in AD and aMCI, and detected a U-shaped change in phase synchrony (declining from CN to SCD and to aMCI, but increasing from aMCI to AD). This increase in phase synchrony accompanying more severe cognitive stages could reflect neural adaptation for binaural processing. Moreover, increased phase synchrony is associated with worse memory during the stages when neural adaptation apparently occurs. These findings support a hypothesis that neural adaptation for binaural processing deficit may exacerbate cognitive impairment, which could help identify biomarkers and therapeutic targets in AD.
Collapse
Affiliation(s)
- Changming Wang
- Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, China
- Zhibin Wang
- Innovation Center for Neurological Disorders, Department of Neurology, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, China
- Beijia Xie
- Innovation Center for Neurological Disorders, Department of Neurology, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, China
- Xinrui Shi
- Innovation Center for Neurological Disorders, Department of Neurology, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, China
- Pengcheng Yang
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China; Speech and Hearing Research Center, Peking University, Beijing, China
- Lei Liu
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China; Speech and Hearing Research Center, Peking University, Beijing, China
- Tianshu Qu
- Speech and Hearing Research Center, Peking University, Beijing, China; Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing, China
- Qi Qin
- Innovation Center for Neurological Disorders, Department of Neurology, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, China
- Yi Xing
- Innovation Center for Neurological Disorders, Department of Neurology, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, China; Key Laboratory of Neurodegenerative Diseases, Ministry of Education of the People's Republic of China, Beijing, China
- Wei Zhu
- Innovation Center for Neurological Disorders, Department of Neurology, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, China
- Stefan J Teipel
- Department of Psychosomatic Medicine, University Medicine Rostock, Rostock, Germany; DZNE, German Center for Neurodegenerative Diseases, Rostock, Germany
- Jianping Jia
- Innovation Center for Neurological Disorders, Department of Neurology, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, China; Key Laboratory of Neurodegenerative Diseases, Ministry of Education of the People's Republic of China, Beijing, China; Center of Alzheimer's Disease, Beijing Institute for Brain Disorders, Beijing, China; Beijing Key Laboratory of Geriatric Cognitive Disorders, Beijing, China; National Clinical Research Center for Geriatric Disorders, Beijing, China
- Guoguang Zhao
- Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, China
- Liang Li
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China; Speech and Hearing Research Center, Peking University, Beijing, China; Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing, China; Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China; Beijing Institute for Brain Disorders, Beijing, China
- Yi Tang
- Innovation Center for Neurological Disorders, Department of Neurology, Xuanwu Hospital, Capital Medical University, National Center for Neurological Disorders, Beijing, China; Key Laboratory of Neurodegenerative Diseases, Ministry of Education of the People's Republic of China, Beijing, China
13
Villard S, Kidd G. Speech intelligibility and talker gender classification with noise-vocoded and tone-vocoded speech. JASA Express Lett 2021; 1:094401. [PMID: 34590078] [PMCID: PMC8456348] [DOI: 10.1121/10.0006285]
Abstract
Vocoded speech provides less spectral information than natural, unprocessed speech, negatively affecting listener performance on speech intelligibility and talker gender classification tasks. In this study, young normal-hearing participants listened to noise-vocoded and tone-vocoded (i.e., sinewave-vocoded) sentences containing 1, 2, 4, 8, 16, or 32 channels, as well as non-vocoded sentences, and reported both the words heard and the gender of the talker. Overall, performance was significantly better with tone-vocoded than noise-vocoded speech on both tasks. Within the talker gender classification task, response biases were observed at lower numbers of channels, especially with the noise carrier.
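To make the vocoding manipulation concrete, here is a minimal noise-vocoder sketch in Python. It is illustrative only: the band edges, filter order, and carrier are assumptions, not the processing used in the study.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(signal, fs, n_channels, f_lo=100.0, f_hi=7000.0):
    """Minimal noise vocoder: split the input into log-spaced bands,
    extract each band's Hilbert envelope, and use it to modulate
    band-limited noise. Band edges and filter order are illustrative."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    rng = np.random.default_rng(0)
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, signal)            # analysis band
        env = np.abs(hilbert(band))            # temporal envelope
        carrier = sosfilt(sos, rng.standard_normal(len(signal)))
        out += env * carrier                   # envelope-modulated noise
    return out
```

Replacing the noise carrier with a sinusoid at each band's center frequency gives the tone-vocoded counterpart; with few channels, spectral detail (and hence cues to talker identity) is largely discarded.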
Affiliation(s)
- Sarah Villard
- Department of Speech, Language and Hearing Sciences & Hearing Research Center, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215
- Gerald Kidd
- Department of Speech, Language and Hearing Sciences & Hearing Research Center, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215
14
Yun D, Jennings TR, Kidd G, Goupell MJ. Benefits of triple acoustic beamforming during speech-on-speech masking and sound localization for bilateral cochlear-implant users. J Acoust Soc Am 2021; 149:3052. [PMID: 34241104] [PMCID: PMC8102069] [DOI: 10.1121/10.0003933]
Abstract
Bilateral cochlear-implant (CI) users struggle to understand speech in noisy environments despite receiving some spatial-hearing benefits. One potential solution is to provide acoustic beamforming. A headphone-based experiment was conducted to compare speech understanding under natural CI listening conditions and with two non-adaptive beamformers, one single beam and one binaural, called "triple beam," which provides an improved signal-to-noise ratio (beamforming benefit) and usable spatial cues by reintroducing interaural level differences. Speech reception thresholds (SRTs) for speech-on-speech masking were measured with target speech presented in front and two maskers either co-located with the target or at narrow/wide separations. Numerosity judgments and sound-localization performance were also measured. Natural spatial cues, single-beam, and triple-beam conditions were compared. For CI listeners, there was a negligible change in SRTs when comparing co-located to separated maskers under natural listening conditions. In contrast, there were 4.9- and 16.9-dB improvements in SRTs for the single beam and 3.5- and 12.3-dB improvements for the triple beam (narrow and wide separations, respectively). Similar results were found for normal-hearing listeners presented with vocoded stimuli. The single beam improved speech-on-speech masking performance but yielded poor sound localization. The triple beam improved both speech-on-speech masking performance (albeit less than the single beam) and sound localization. Thus, the triple beam was the most versatile across multiple spatial-hearing domains.
Affiliation(s)
- David Yun
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Todd R Jennings
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
- Gerald Kidd
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
- Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
15
Marrufo-Pérez MI, Araquistain-Serrat L, Eustaquio-Martín A, Lopez-Poveda EA. On the importance of interaural noise coherence and the medial olivocochlear reflex for binaural unmasking in free-field listening. Hear Res 2021; 405:108246. [PMID: 33872834] [DOI: 10.1016/j.heares.2021.108246]
Abstract
For speech in competition with a noise source in the free field, normal-hearing (NH) listeners recognize speech better when listening binaurally than when listening monaurally with the ear that has the better acoustic signal-to-noise ratio (SNR). This benefit of binaural listening is known as binaural unmasking and indicates that the brain combines information from the two ears to improve intelligibility. Here, we address three questions pertaining to binaural unmasking in NH listeners. First, we investigate whether binaural unmasking results from combining the speech and/or the noise from the two ears. In a simulated acoustic free field with speech and noise sources at 0° and 270° azimuth, respectively, we found comparable unmasking regardless of whether the speech was present or absent in the ear with the worse SNR. This indicates that binaural unmasking probably involves combining only the noise at the two ears. Second, we investigate whether binaurally coherent location cues for the noise signal are sufficient for binaural unmasking to occur. We found no unmasking when location cues were coherent but the noise signals themselves were incoherent, or when the noise was processed unilaterally through a hearing aid with linear, minimal amplification. This indicates that binaural unmasking requires interaurally coherent noise signals, source location cues, and processing. Third, we investigate whether the hypothesized antimasking benefits of the medial olivocochlear reflex (MOCR) contribute to binaural unmasking. We found comparable unmasking regardless of whether speech tokens (words) were delayed sufficiently from the noise onset to fully activate the MOCR. Moreover, unmasking was absent when the noise was binaurally incoherent, whereas the physiological antimasking effects of the MOCR are similar for coherent and incoherent noises. This indicates that the MOCR is unlikely to be involved in binaural unmasking.
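The role of interaural noise coherence can be illustrated with a simple metric: the peak of the normalized interaural cross-correlation over physiologically plausible ITDs. This is a sketch for illustration (the lag range and normalization are assumptions), not the study's signal processing.

```python
import numpy as np

def interaural_coherence(left, right, fs, max_itd=0.001):
    """Peak of the normalized interaural cross-correlation within
    +/- max_itd seconds (roughly the largest ITD for a human head).
    Returns ~1 for interaurally coherent signals, ~0 for independent ones."""
    max_lag = int(max_itd * fs)
    left = (left - left.mean()) / left.std()
    right = (right - right.mean()) / right.std()
    n = len(left)
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        a = left[max(0, -lag):n - max(0, lag)]
        b = right[max(0, lag):n - max(0, -lag)]
        best = max(best, float(np.dot(a, b)) / n)
    return best

rng = np.random.default_rng(1)
noise = rng.standard_normal(16000)
coherent = interaural_coherence(noise, noise, 16000)                      # ~1
independent = interaural_coherence(noise, rng.standard_normal(16000), 16000)  # ~0
```

In these terms, the study's finding is that unmasking vanishes once this coherence is destroyed, even when location cues are preserved.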
Affiliation(s)
- Miriam I Marrufo-Pérez
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Calle Pintor Fernando Gallego 1, Salamanca 37007, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca 37007, Spain
- Leire Araquistain-Serrat
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Calle Pintor Fernando Gallego 1, Salamanca 37007, Spain
- Almudena Eustaquio-Martín
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Calle Pintor Fernando Gallego 1, Salamanca 37007, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca 37007, Spain
- Enrique A Lopez-Poveda
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Calle Pintor Fernando Gallego 1, Salamanca 37007, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca 37007, Spain; Departamento de Cirugía, Facultad de Medicina, Universidad de Salamanca, Salamanca 37007, Spain
16
Schroeer A, Corona-Strauss FI, Ozdamar O, Bohorquez J, Strauss DJ. Speech induced binaural beats: Electrophysiological assessment of binaural interaction. J Acoust Soc Am 2021; 149:788. [PMID: 33639814] [DOI: 10.1121/10.0003442]
Abstract
This paper introduces and evaluates a speech signal manipulation scheme that generates transient speech-induced binaural beats (SBBs). These SBBs can only be perceived when different signals are presented dichotically (one to each ear). Event-related potentials were recorded in 22 normal-hearing subjects. Dichotic presentation of such manipulated signals reliably evoked auditory late responses (ALRs) in all subjects. As control measurements, diotic stimulation modalities were presented to confirm that the ALRs were evoked neither by the speech signal itself nor by audible artifacts created by the signal manipulation scheme. Since the diotic measurements evoked no ALRs, responses to dichotic stimulation are a pure correlate of binaural interaction. While there are several auditory stimuli (mostly modulated sinusoids or noise) that share this characteristic, none of them are based on running speech. Because SBBs can be added to any arbitrary speech signal, they could easily be combined with psychoacoustic tests, for example speech reception thresholds, adding an objective measure of binaural interaction.
Affiliation(s)
- Andreas Schroeer
- Systems Neuroscience and Neurotechnology Unit, Faculty of Medicine, Saarland University and School of Engineering, htw saar, 66421 Homburg/Saar, Germany
- Farah I Corona-Strauss
- Systems Neuroscience and Neurotechnology Unit, Faculty of Medicine, Saarland University and School of Engineering, htw saar, 66421 Homburg/Saar, Germany
- Ozcan Ozdamar
- Department of Biomedical Engineering, College of Engineering, University of Miami, McArthur Engineering Building, 1251 Memorial Drive, Coral Gables, Florida 33124, USA
- Jorge Bohorquez
- Department of Biomedical Engineering, College of Engineering, University of Miami, McArthur Engineering Building, 1251 Memorial Drive, Coral Gables, Florida 33124, USA
- Daniel J Strauss
- Systems Neuroscience and Neurotechnology Unit, Faculty of Medicine, Saarland University and School of Engineering, htw saar, 66421 Homburg/Saar, Germany
17
Binaural Frequency Modulation Detection in School-Age Children, Young Adults, and Older Adults: Effects of Interaural Modulator Phase. Ear Hear 2020; 42:691-699. [PMID: 33306546] [DOI: 10.1097/aud.0000000000000975]
Abstract
OBJECTIVES The purpose of this study was to measure low-rate binaural frequency modulation (FM) detection across the lifespan as a gauge of temporal fine structure processing. Children and older adults were expected to perform more poorly than young adults but for different reasons. DESIGN Detection of 2-Hz FM carried by a 500-Hz pure tone was measured for modulators that were either in-phase or out-of-phase across ears. Thresholds were measured in quiet and in noise. Participants were school-age children (n = 44), young adults (n = 11), and older adults (n = 17) with normal or near-normal hearing. RESULTS Thresholds were lower for out-of-phase than in-phase modulators among all listening groups. Detection thresholds improved with child age, with larger effects of age for dichotic than diotic FM. Introduction of masking noise tended to elevate thresholds; this effect was larger for the dichotic condition than the diotic condition, and larger for older adults than young adults. In noise, young adults received the greatest dichotic benefit, followed by older adults, then young children. The relative effects of noise on dichotic benefit did not differ for young adults compared to young children and older adults; however, young children saw greater reduction in benefit due to noise than older adults. CONCLUSION The difference in dichotic benefit between children and young adults is consistent with maturation of central auditory processing. Differences in the effect of noise on dichotic benefit in young children and older adults support the idea that different factors or combinations of factors limit performance in these two groups. Although dichotic FM detection appears to be more sensitive to the effects of development and aging than diotic FM detection, the positive correlation between diotic and dichotic FM detection thresholds for all listeners suggests contribution of one or more factors common to both conditions.
18
Kidd G, Jennings TR, Byrne AJ. Enhancing the perceptual segregation and localization of sound sources with a triple beamformer. J Acoust Soc Am 2020; 148:3598. [PMID: 33379918] [PMCID: PMC8097713] [DOI: 10.1121/10.0002779]
Abstract
A triple beamformer was developed to exploit the capabilities of the binaural auditory system. The goal was to enhance the perceptual segregation of spatially separated sound sources while preserving source localization. The triple beamformer comprised a variant of a standard single-channel beamformer that routes the output of a primary beam, focused on the target source location, to both ears. The triple-beam algorithm adds two supplementary beams, with the left-focused beam routed only to the left ear and the right-focused beam routed only to the right ear. The rationale for the approach is that the triple-beam processing supports sound source segregation under high informational masking (IM) conditions. Furthermore, the exaggerated interaural level differences produced by the triple beam are well suited for categories of listeners (e.g., bilateral cochlear implant users) who receive limited benefit from interaural time differences. Performance with the triple beamformer was compared to normal binaural hearing (simulated using a Knowles Electronic Manikin for Auditory Research, G.R.A.S. Sound and Vibration, Holte, DK) and to that obtained with a single-channel beamformer. Source localization in azimuth and masked speech identification for multiple masker locations were measured for all three algorithms. Taking both localization and speech intelligibility into account, the triple-beam algorithm was considered advantageous under high-IM listening conditions.
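Each of the three beams can be thought of as a conventional fixed beamformer steered to a different azimuth. Below is a minimal delay-and-sum sketch for a linear microphone array; the array geometry, sign conventions, and frequency-domain delay method are assumptions for illustration, not the beamformer design used in the study.

```python
import numpy as np

def delay_and_sum(mic_signals, fs, mic_positions, azimuth, c=343.0):
    """Steer a linear array to `azimuth` (radians, 0 = broadside) by
    applying per-microphone fractional delays in the frequency domain,
    then averaging so the steered direction sums coherently."""
    delays = np.asarray(mic_positions) * np.sin(azimuth) / c  # seconds
    n = mic_signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    out = np.zeros(n)
    for sig, d in zip(mic_signals, delays):
        # fractional-sample delay as a linear phase shift
        spec = np.fft.rfft(sig) * np.exp(-2j * np.pi * freqs * d)
        out += np.fft.irfft(spec, n)
    return out / len(mic_signals)
```

A triple-beam front end in this spirit would run the beamformer three times (left-, center-, and right-steered) and route the left and right beam outputs only to the corresponding ears, reintroducing exaggerated interaural level differences.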
Affiliation(s)
- Gerald Kidd
- Department of Speech, Language and Hearing Sciences and Hearing Research Center, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215, USA
- Todd R Jennings
- Department of Speech, Language and Hearing Sciences and Hearing Research Center, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215, USA
- Andrew J Byrne
- Department of Speech, Language and Hearing Sciences and Hearing Research Center, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215, USA
19
The Temporal Fine Structure of Background Noise Determines the Benefit of Bimodal Hearing for Recognizing Speech. J Assoc Res Otolaryngol 2020; 21:527-544. [PMID: 33104927] [PMCID: PMC7644728] [DOI: 10.1007/s10162-020-00772-1]
Abstract
Cochlear implant (CI) users have more difficulty understanding speech in temporally modulated noise than in steady-state (SS) noise. This is thought to be caused by the limited low-frequency information that CIs provide, as well as by the envelope coding in CIs that discards the temporal fine structure (TFS). Contralateral amplification with a hearing aid, referred to as bimodal hearing, can potentially provide CI users with TFS cues to complement the envelope cues provided by the CI signal. In this study, we investigated whether the use of a CI alone provides access to only envelope cues and whether acoustic amplification can provide additional access to TFS cues. To this end, we evaluated speech recognition in bimodal listeners, using SS noise and two amplitude-modulated noise types, namely babble noise and amplitude-modulated steady-state (AMSS) noise. We hypothesized that speech recognition in noise depends on the envelope of the noise, but not on its TFS when listening with a CI. Secondly, we hypothesized that the amount of benefit gained by the addition of a contralateral hearing aid depends on both the envelope and TFS of the noise. The two amplitude-modulated noise types decreased speech recognition more effectively than SS noise. Against expectations, however, we found that babble noise decreased speech recognition more effectively than AMSS noise in the CI-only condition. Therefore, we rejected our hypothesis that TFS is not available to CI users. In line with expectations, we found that the bimodal benefit was highest in babble noise. However, there was no significant difference between the bimodal benefit obtained in SS and AMSS noise. Our results suggest that a CI alone can provide TFS cues and that bimodal benefits in noise depend on TFS, but not on the envelope of the noise.
20
Baltzell LS, Swaminathan J, Cho AY, Lavandier M, Best V. Binaural sensitivity and release from speech-on-speech masking in listeners with and without hearing loss. J Acoust Soc Am 2020; 147:1546. [PMID: 32237845] [PMCID: PMC7060089] [DOI: 10.1121/10.0000812]
Abstract
Listeners with sensorineural hearing loss routinely experience less spatial release from masking (SRM) in speech mixtures than listeners with normal hearing. Hearing-impaired listeners have also been shown to have degraded temporal fine structure (TFS) sensitivity, a consequence of which is degraded access to interaural time differences (ITDs) contained in the TFS. Since these "binaural TFS" cues are critical for spatial hearing, it has been hypothesized that degraded binaural TFS sensitivity accounts for the limited SRM experienced by hearing-impaired listeners. In this study, speech stimuli were noise-vocoded using carriers that were systematically decorrelated across the left and right ears, thus simulating degraded binaural TFS sensitivity. Both (1) ITD sensitivity in quiet and (2) SRM in speech mixtures spatialized using ITDs (or binaural release from masking; BRM) were measured as a function of TFS interaural decorrelation in young normal-hearing and hearing-impaired listeners. This allowed for the examination of the relationship between ITD sensitivity and BRM over a wide range of ITD thresholds. This paper found that, for a given ITD sensitivity, hearing-impaired listeners experienced less BRM than normal-hearing listeners, suggesting that binaural TFS sensitivity can account for only a modest portion of the BRM deficit in hearing-impaired listeners. However, substantial individual variability was observed.
Affiliation(s)
- Lucas S Baltzell
- Department of Speech, Language, and Hearing Sciences, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215, USA
- Jayaganesh Swaminathan
- Department of Speech, Language, and Hearing Sciences, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215, USA
- Adrian Y Cho
- Department of Speech, Language, and Hearing Sciences, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215, USA
- Mathieu Lavandier
- University of Lyon, ENTPE, Laboratoire Génie Civil et Bâtiment, Rue Maurice Audin, F-69518 Vaulx-en-Velin Cedex, France
- Virginia Best
- Department of Speech, Language, and Hearing Sciences, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215, USA
21
Luo L, Xu N, Wang Q, Li L. Disparity in interaural time difference improves the accuracy of neural representations of individual concurrent narrowband sounds in rat inferior colliculus and auditory cortex. J Neurophysiol 2020; 123:695-706. [PMID: 31891521] [DOI: 10.1152/jn.00284.2019]
Abstract
The central mechanisms underlying binaural unmasking for spectrally overlapping concurrent sounds, which are unresolved in the peripheral auditory system, remain largely unknown. In this study, frequency-following responses (FFRs) to two binaurally presented independent narrowband noises (NBNs) with overlapping spectra were recorded simultaneously in the inferior colliculus (IC) and auditory cortex (AC) of anesthetized rats. The results showed that for both IC FFRs and AC FFRs, introducing an interaural time difference (ITD) disparity between the two concurrent NBNs enhanced representation fidelity, reflected by increased coherence between the responses evoked by double-NBN stimulation and the responses evoked by the single NBNs. The ITD disparity effect varied across frequency bands, being more marked for higher frequency bands in the IC and lower frequency bands in the AC. Moreover, the coherence between IC responses and AC responses was also enhanced by the ITD disparity, and this enhancement was most prominent for low-frequency bands and for the IC and AC on the same side. These results suggest a critical role of the ITD cue in the neural segregation of spectrotemporally overlapping sounds.

NEW & NOTEWORTHY When two spectrally overlapping narrowband noises are presented at the same time at the same sound-pressure level, they mask each other. Introducing a disparity in interaural time difference between these two narrowband noises improves the accuracy of the neural representation of the individual sounds in both the inferior colliculus and the auditory cortex. The lower-frequency signal transformation from the inferior colliculus to the auditory cortex on the same side is also enhanced, showing the effect of binaural unmasking.
Affiliation(s)
- Lu Luo
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Na Xu
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Qian Wang
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China; Beijing Key Laboratory of Epilepsy, Epilepsy Center, Department of Functional Neurosurgery, Sanbo Brain Hospital, Capital Medical University, Beijing, China
- Liang Li
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China; Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing, China; Beijing Institute for Brain Disorders, Beijing, China
22
Heeringa AN, Zhang L, Ashida G, Beutelmann R, Steenken F, Köppl C. Temporal Coding of Single Auditory Nerve Fibers Is Not Degraded in Aging Gerbils. J Neurosci 2019; 40:343-354. [PMID: 31719164] [DOI: 10.1523/JNEUROSCI.2784-18.2019]
Abstract
People suffering from age-related hearing loss typically present with deficits in temporal processing tasks. Temporal processing deficits have also been shown in single-unit studies at the level of the auditory brainstem, midbrain, and cortex of aged animals. In this study, we explored whether temporal coding is already affected at the level of the input to the central auditory system. Single-unit auditory nerve fiber recordings were obtained from 41 Mongolian gerbils of either sex, divided between young, middle-aged, and old gerbils. Temporal coding quality was evaluated as vector strength in response to tones at best frequency, and by constructing shuffled and cross-stimulus autocorrelograms, and reverse correlations, from responses to 1 s noise bursts at 10-30 dB sensation level (dB above threshold). At comparable sensation levels, all measures showed that temporal coding was not altered in auditory nerve fibers of aging gerbils. Furthermore, both temporal fine structure and envelope coding remained unaffected. However, spontaneous rates were decreased in aging gerbils. Importantly, despite elevated pure tone thresholds, the frequency tuning of auditory nerve fibers was not affected. These results suggest that age-related temporal coding deficits arise more centrally, possibly due to a loss of auditory nerve fibers (or their peripheral synapses) but not due to qualitative changes in the responses of remaining auditory nerve fibers. The reduced spontaneous rate and elevated thresholds, but normal frequency tuning, of aged auditory nerve fibers can be explained by the well known reduction of endocochlear potential due to strial dysfunction in aged gerbils.

SIGNIFICANCE STATEMENT As our society ages, age-related hearing deficits become ever more prevalent. Apart from decreased hearing sensitivity, elderly people often suffer from a reduced ability to communicate in daily settings, which is thought to be caused by known age-related deficits in auditory temporal processing. The current study demonstrated, using several different stimuli and analysis techniques, that these putative temporal processing deficits are not apparent in responses of single-unit auditory nerve fibers of quiet-aged gerbils. This suggests that age-related temporal processing deficits may develop more central to the auditory nerve, possibly due to a reduced population of active auditory nerve fibers, which will be of importance for the development of treatments for age-related hearing disorders.
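Vector strength, the phase-locking metric used above, is the length of the mean resultant vector of spike phases relative to the stimulus period. A minimal sketch (illustrative, not the authors' analysis code):

```python
import numpy as np

def vector_strength(spike_times, freq):
    """Vector strength of spike times relative to a stimulus frequency:
    each spike maps to a phase on the unit circle, and the metric is the
    magnitude of the mean phasor (1 = perfect locking, ~0 = uniform phases)."""
    phases = 2 * np.pi * freq * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))

# Spikes locked to every cycle of a 500-Hz tone vs. random spike times
locked = np.arange(100) / 500.0
rng = np.random.default_rng(0)
unlocked = rng.uniform(0.0, 0.2, 100)
```

Comparing such values at matched sensation levels across age groups is the kind of analysis that underlies the "not degraded" conclusion.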
Affiliation(s)
- Amarins N Heeringa
- Cluster of Excellence "Hearing4all" and Research Centre Neurosensory Science, Department of Neuroscience, School of Medicine and Health Science, Carl von Ossietzky University Oldenburg, 26129 Oldenburg, Germany
- Lichun Zhang
- Cluster of Excellence "Hearing4all" and Research Centre Neurosensory Science, Department of Neuroscience, School of Medicine and Health Science, Carl von Ossietzky University Oldenburg, 26129 Oldenburg, Germany
- Go Ashida
- Cluster of Excellence "Hearing4all" and Research Centre Neurosensory Science, Department of Neuroscience, School of Medicine and Health Science, Carl von Ossietzky University Oldenburg, 26129 Oldenburg, Germany
- Rainer Beutelmann
- Cluster of Excellence "Hearing4all" and Research Centre Neurosensory Science, Department of Neuroscience, School of Medicine and Health Science, Carl von Ossietzky University Oldenburg, 26129 Oldenburg, Germany
- Friederike Steenken
- Cluster of Excellence "Hearing4all" and Research Centre Neurosensory Science, Department of Neuroscience, School of Medicine and Health Science, Carl von Ossietzky University Oldenburg, 26129 Oldenburg, Germany
- Christine Köppl
- Cluster of Excellence "Hearing4all" and Research Centre Neurosensory Science, Department of Neuroscience, School of Medicine and Health Science, Carl von Ossietzky University Oldenburg, 26129 Oldenburg, Germany
Collapse
|
23
|
Temporal Coding of Single Auditory Nerve Fibers Is Not Degraded in Aging Gerbils. J Neurosci 2019; 40:343-354. [PMID: 31719164 DOI: 10.1523/jneurosci.2784-18.2019] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2018] [Revised: 10/25/2019] [Accepted: 11/04/2019] [Indexed: 02/03/2023] Open
Abstract
People suffering from age-related hearing loss typically present with deficits in temporal processing tasks. Temporal processing deficits have also been shown in single-unit studies at the level of the auditory brainstem, midbrain, and cortex of aged animals. In this study, we explored whether temporal coding is already affected at the level of the input to the central auditory system. Single-unit auditory nerve fiber recordings were obtained from 41 Mongolian gerbils of either sex, divided among young, middle-aged, and old groups. Temporal coding quality was evaluated as vector strength in response to tones at best frequency, and by constructing shuffled and cross-stimulus autocorrelograms, and reverse correlations, from responses to 1-s noise bursts at 10-30 dB sensation level (dB above threshold). At comparable sensation levels, all measures showed that temporal coding was not altered in auditory nerve fibers of aging gerbils. Furthermore, both temporal fine structure and envelope coding remained unaffected. However, spontaneous rates were decreased in aging gerbils. Importantly, despite elevated pure-tone thresholds, the frequency tuning of auditory nerve fibers was not affected. These results suggest that age-related temporal coding deficits arise more centrally, possibly due to a loss of auditory nerve fibers (or their peripheral synapses) but not due to qualitative changes in the responses of remaining auditory nerve fibers. The reduced spontaneous rate and elevated thresholds, but normal frequency tuning, of aged auditory nerve fibers can be explained by the well-known reduction of endocochlear potential due to strial dysfunction in aged gerbils.
SIGNIFICANCE STATEMENT As our society ages, age-related hearing deficits become ever more prevalent. Apart from decreased hearing sensitivity, elderly people often suffer from a reduced ability to communicate in daily settings, which is thought to be caused by known age-related deficits in auditory temporal processing. The current study demonstrated, using several different stimuli and analysis techniques, that these putative temporal processing deficits are not apparent in responses of single-unit auditory nerve fibers of quiet-aged gerbils. This suggests that age-related temporal processing deficits may develop more central to the auditory nerve, possibly due to a reduced population of active auditory nerve fibers, which will be of importance for the development of treatments for age-related hearing disorders.
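The vector-strength metric used in this study has a compact definition: each spike time is mapped to a phase of the stimulus cycle and the phases are averaged as unit vectors, giving 1 for perfect phase locking and values near 0 for random firing. A minimal sketch (synthetic spike times, not the study's data):

```python
import numpy as np

def vector_strength(spike_times, freq):
    """Vector strength of phase locking: 1 = perfect, ~0 = none.

    spike_times : spike times in seconds
    freq        : stimulus frequency in Hz
    """
    phases = 2 * np.pi * freq * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))

# Perfectly phase-locked spikes (one per cycle of a 500 Hz tone)
locked = np.arange(100) / 500.0
# Uniformly random spike times over 1 s
rng = np.random.default_rng(0)
random_spikes = rng.uniform(0, 1, 10000)

print(vector_strength(locked, 500.0))        # ~1.0
print(vector_strength(random_spikes, 500.0)) # near 0
```

Shuffled autocorrelograms generalize this idea to broadband stimuli, but the phase-projection step above is the core of the tone-based measure.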
24
Teng X, Cogan GB, Poeppel D. Speech fine structure contains critical temporal cues to support speech segmentation. Neuroimage 2019; 202:116152. [PMID: 31484039 DOI: 10.1016/j.neuroimage.2019.116152] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2018] [Revised: 08/10/2019] [Accepted: 08/31/2019] [Indexed: 11/16/2022] Open
Abstract
Segmenting the continuous speech stream into units for further perceptual and linguistic analyses is fundamental to speech recognition. The speech amplitude envelope (SE) has long been considered a fundamental temporal cue for segmenting speech. Does the temporal fine structure (TFS), a significant part of speech signals often considered to contain primarily spectral information, contribute to speech segmentation? Using magnetoencephalography, we show that the TFS entrains cortical responses between 3 and 6 Hz and demonstrate, using mutual information analysis, that (i) the temporal information in the TFS can be reconstructed from a measure of frame-to-frame spectral change and correlates with the SE, and (ii) spectral resolution is key to the extraction of such temporal information. Furthermore, we show behavioural evidence that, when the SE is temporally distorted, the TFS provides cues for speech segmentation and aids speech recognition significantly. Our findings show that it is insufficient to investigate solely the SE to understand temporal speech segmentation, as the SE and the TFS derived from a band-filtering method convey comparable, if not inseparable, temporal information. We argue for a more synthetic view of speech segmentation: the auditory system groups speech signals coherently in both temporal and spectral domains.
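The SE/TFS decomposition this work builds on is conventionally obtained from the analytic signal of a band-limited waveform: the magnitude gives the slowly varying envelope and the cosine of the phase gives the fine structure. A minimal sketch with a synthetic amplitude-modulated tone (all parameters illustrative):

```python
import numpy as np
from scipy.signal import hilbert

fs = 16000
t = np.arange(int(0.5 * fs)) / fs
# A 1 kHz carrier (the TFS) modulated by a slow 4 Hz envelope (the SE)
carrier = np.cos(2 * np.pi * 1000 * t)
envelope = 1 + 0.8 * np.sin(2 * np.pi * 4 * t)
x = envelope * carrier

analytic = hilbert(x)
recovered_env = np.abs(analytic)            # slowly varying envelope (SE)
recovered_tfs = np.cos(np.angle(analytic))  # rapidly varying fine structure (TFS)

# The recovered envelope matches the imposed one (here essentially exactly,
# because the signal is periodic over the analysis window)
err = np.max(np.abs(recovered_env - envelope))
print(err)
```

In a full speech analysis this decomposition is applied per channel of a filterbank rather than to the broadband signal, as in the band-filtering method the abstract refers to.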
Affiliation(s)
- Xiangbin Teng
- Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Frankfurt, 60322, Germany
- Gregory B Cogan
- Department of Neurosurgery, Duke University, Durham, NC 27710, USA
- David Poeppel
- Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Frankfurt, 60322, Germany; Department of Psychology, New York University, New York, NY 10003, USA
25
Vercammen C, Goossens T, Undurraga J, Wouters J, van Wieringen A. Electrophysiological and Behavioral Evidence of Reduced Binaural Temporal Processing in the Aging and Hearing Impaired Human Auditory System. Trends Hear 2019; 22:2331216518785733. [PMID: 30022734 PMCID: PMC6053861 DOI: 10.1177/2331216518785733] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
A person’s ability to process temporal fine structure information is indispensable for speech understanding. As speech understanding typically deteriorates throughout adult life, this study aimed to disentangle age- and hearing impairment (HI)-related changes in binaural temporal processing. This was achieved by examining neural and behavioral processing of interaural phase differences (IPDs). Neural IPD processing was studied electrophysiologically through steady-state activity in the electroencephalogram evoked by periodic changes in IPDs over time, embedded in the temporal fine structure of acoustic stimulation. In addition, behavioral IPD discrimination thresholds were determined for the same stimuli. To disentangle potential effects of age from those of HI, both measures were applied to six participant groups: young, middle-aged, and older persons, with either normal hearing or sensorineural HI. All participants passed a cognitive screening, and stimulus audibility was controlled for in participants with HI. The results demonstrated that HI changes neural processing of binaural temporal information for all age groups included in this study. These HI-related effects were superimposed on age-related changes that emerge between young adulthood and middle age. Poorer neural outcomes were also associated with poorer behavioral performance, even though the behavioral IPD discrimination thresholds were affected by age rather than by HI. The neural outcomes of this study are the first to demonstrate and disentangle the dual load of age and HI on binaural temporal processing. These results could be a valuable first step toward future research on rehabilitation.
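Stimuli in such IPD-following paradigms embed periodic interaural phase changes in the fine structure of a low-frequency carrier. A simplified sketch of how such a stimulus can be constructed (carrier frequency and switch rate are assumed for illustration; real paradigms typically switch the phase at envelope minima of an amplitude-modulated tone to avoid audible transients):

```python
import numpy as np

fs = 44100
dur = 1.0
t = np.arange(int(fs * dur)) / fs
f_carrier = 500.0   # low-frequency carrier, where TFS-based IPD cues are usable
f_switch = 4.0      # assumed IPD alternation rate in Hz

left = np.sin(2 * np.pi * f_carrier * t)
# Square-wave phase modulation: the IPD alternates between 0 and pi radians
ipd = np.pi * (np.sin(2 * np.pi * f_switch * t) >= 0)
right = np.sin(2 * np.pi * f_carrier * t + ipd)

# During the 0-IPD half-cycles the two channels are identical;
# during the pi-IPD half-cycles they are inverted copies of each other
same = np.isclose(ipd, 0)
print(np.allclose(left[same], right[same]))  # True
```

The periodic 0-to-pi alternation is what drives the steady-state EEG response at the switch rate, while the carrier itself conveys the binaural TFS.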
Affiliation(s)
- Charlotte Vercammen
- Department of Neurosciences, Research Group Experimental Oto-Rhino-Laryngology, KU Leuven-University of Leuven, Belgium
- Tine Goossens
- Department of Neurosciences, Research Group Experimental Oto-Rhino-Laryngology, KU Leuven-University of Leuven, Belgium
- Jaime Undurraga
- Department of Linguistics, The Australian Hearing Hub, Macquarie University, Sydney, Australia; Ear Institute, University College London, London, UK
- Jan Wouters
- Department of Neurosciences, Research Group Experimental Oto-Rhino-Laryngology, KU Leuven-University of Leuven, Belgium
- Astrid van Wieringen
- Department of Neurosciences, Research Group Experimental Oto-Rhino-Laryngology, KU Leuven-University of Leuven, Belgium
26
Chen H, Xing Y, Zhang Z, Tao S, Wang H, Aiken S, Yin S, Yu D, Wang J. Coding-in-Noise Deficits are Not Seen in Responses to Amplitude Modulation in Subjects with cochlear Synaptopathy Induced by a Single Noise Exposure. Neuroscience 2019; 400:62-71. [PMID: 30615912 DOI: 10.1016/j.neuroscience.2018.12.048] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2018] [Revised: 12/22/2018] [Accepted: 12/27/2018] [Indexed: 11/30/2022]
Abstract
Since the first report of noise-induced synaptic damage in animals without permanent threshold shifts (PTSs), the concept of noise-induced hidden hearing loss (NIHHL) has been proposed to cover the functional deficits in hearing associated with noise-induced synaptopathy. Moreover, the potential functional deficit associated with noise-induced synaptopathy has been largely attributed to the loss of auditory nerve fibers (ANFs) with a low spontaneous spike rate (SSR). As this group of ANFs is critical for coding at suprathreshold levels and in noisy backgrounds, coding-in-noise deficits (CINDs) have been considered the main consequence of the synaptopathy. However, such deficits have not been verified after a single, brief exposure to noise without PTS. In the present study, synaptopathy was generated by such noise exposure in both mice and guinea pigs. Responses to amplitude modulation (AM) were recorded at a high sound level in combination with masking to evaluate the existence of CINDs that might be associated with the loss of low-SSR ANFs. An overall reduction in response amplitude was seen in the AM-evoked compound action potential (CAP). However, no such reduction was seen in the scalp-recorded envelope following response (EFR), suggesting compensation due to increased central gain. Moreover, there was no significant difference in masking effect between the control and noise groups. The results suggest either that there is no significant CIND after the synaptopathy we created, or that the AM response tested with our protocol was not sufficiently sensitive to detect such a deficit and the far-field EFR is not sensitive to this cochlear pathology.
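The EFR itself is typically quantified as the spectral amplitude of the scalp recording at the AM rate. A toy sketch of that measurement on a simulated response (signal amplitude, noise level, and AM rate are all assumed values):

```python
import numpy as np

fs = 8000
t = np.arange(int(fs * 1.0)) / fs
f_mod = 100.0  # AM rate in Hz

# Toy "scalp recording": a component at the modulation rate buried in noise
rng = np.random.default_rng(1)
response = 0.5 * np.sin(2 * np.pi * f_mod * t) + rng.normal(0, 1.0, t.size)

spectrum = np.fft.rfft(response) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
bin_at_fm = int(np.argmin(np.abs(freqs - f_mod)))

efr_amp = 2 * np.abs(spectrum[bin_at_fm])  # one-sided amplitude at the AM rate
print(round(efr_amp, 2))  # close to the simulated 0.5
```

In practice the amplitude at the AM rate is compared against neighboring noise bins to decide whether a response is present, and many epochs are averaged before the FFT.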
Affiliation(s)
- Hengchao Chen
- Otolaryngology Research Institute, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai, China
- Yazhi Xing
- Otolaryngology Research Institute, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai, China
- Zhen Zhang
- Otolaryngology Research Institute, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai, China
- Shan Tao
- Department of Neonatal Pediatrics, Children's Hospital, Xiamen, China
- Hui Wang
- Otolaryngology Research Institute, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai, China
- Steve Aiken
- School of Communication Sciences and Disorders, Dalhousie University, Halifax, Canada
- Shankai Yin
- Otolaryngology Research Institute, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai, China
- Dongzhen Yu
- Otolaryngology Research Institute, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai, China
- Jian Wang
- Otolaryngology Research Institute, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai, China; School of Communication Sciences and Disorders, Dalhousie University, Halifax, Canada
27
Effects of lifetime noise exposure on the middle-age human auditory brainstem response, tinnitus and speech-in-noise intelligibility. Hear Res 2018; 365:36-48. [DOI: 10.1016/j.heares.2018.06.003] [Citation(s) in RCA: 80] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/21/2017] [Revised: 05/25/2018] [Accepted: 06/08/2018] [Indexed: 01/03/2023]
28
Lotfi Y, Ahmadi T, Moossavi A, Bakhshi E. Binaural sensitivity to temporal fine structure and lateralization ability in children with suspected (central) auditory processing disorder. Auris Nasus Larynx 2018; 46:64-69. [PMID: 29954636 DOI: 10.1016/j.anl.2018.06.005] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2018] [Revised: 06/11/2018] [Accepted: 06/17/2018] [Indexed: 11/26/2022]
Abstract
OBJECTIVE Previous studies have shown that a subgroup of children with suspected (central) auditory processing disorder (SusCAPD) have insufficient ability to use binaural cues to benefit from spatial processing. Thus, they experience considerable listening difficulties in challenging auditory environments, such as classrooms. Some researchers have also indicated the probable role of binaural temporal fine structure (TFS) in the perceptual segregation of a target signal from noise and hence in speech perception in noise. Therefore, in the present study, in order to further investigate the underlying reason for listening problems in background noise in this group of children, their performance was measured using a binaural TFS sensitivity test (TFS-LF) as well as a behavioral auditory lateralization-in-noise test, both of which are based on binaural temporal cue processing. METHODS Participants in this analytical study included 91 children with normal hearing and no listening problems and 41 children (9-12 years old) with SusCAPD who found it challenging to understand speech in noise. Initially, the ability to use binaural TFS was measured at three frequencies (250, 500, and 750 Hz) in both groups, and the results of these preliminary evaluations were compared between the normal children and those with SusCAPD. Thereafter, the binaural performance of the 16 children with SusCAPD who had higher thresholds than the normal group at all three frequencies tested in the TFS-LF test was examined using the lateralization test at 7 spatial locations. RESULTS In total, 16 of the 41 children with SusCAPD (39%) showed poor performance on the TFS-LF test at all three frequencies, compared to both the normal children and the other children in the SusCAPD group (p<0.05). Furthermore, children in the SusCAPD group with binaural TFS coding deficits at all three frequencies showed significant differences in the lateralization test results compared to normal children (p<0.05). CONCLUSION The findings of the current study demonstrate that one underlying cause of the difficulty understanding speech in noisy environments experienced by a subgroup of children with SusCAPD can be a reduced ability to benefit from binaural TFS information. The study also showed that this reduced ability to use binaural TFS cues was accompanied by reduced binaural processing abilities in the lateralization test, which supports the presence of binaural temporal processing deficits in this group of children.
Affiliation(s)
- Yones Lotfi
- Department of Audiology, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran
- Tayebeh Ahmadi
- Department of Audiology, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran
- Abdollah Moossavi
- Department of Otolaryngology, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Enayatollah Bakhshi
- Department of Statistics, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran
29
Neural representation of interaural correlation in human auditory brainstem: Comparisons between temporal-fine structure and envelope. Hear Res 2018; 365:165-173. [PMID: 29853322 DOI: 10.1016/j.heares.2018.05.015] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/26/2017] [Revised: 05/05/2018] [Accepted: 05/20/2018] [Indexed: 11/24/2022]
Abstract
Central processing of interaural correlation (IAC), which depends on the precise representation of acoustic signals from the two ears, is essential for both localization and recognition of auditory objects. A complex soundwave is initially filtered by the peripheral auditory system into multiple narrowband waves, which are further decomposed into two functionally distinctive components: the quickly-varying temporal-fine structure (TFS) and the slowly-varying envelope. In rats, a narrowband noise can evoke auditory-midbrain frequency-following responses (FFRs) that contain both the TFS component (FFRTFS) and the envelope component (FFREnv), which represent the TFS and envelope of the narrowband noise, respectively. These two components differ in sensitivity to interaural time disparity. In human listeners, the present study investigated whether the FFRTFS and FFREnv components of brainstem FFRs to a narrowband noise differ in sensitivity to IAC and whether there are potential brainstem mechanisms underlying the integration of the two components. The results showed that although both the amplitude of FFRTFS and that of FFREnv were significantly affected by shifts of IAC between 1 and 0, the stimulus-to-response correlation for FFRTFS, but not that for FFREnv, was sensitive to the IAC shifts. Moreover, in addition to the correlation between the binaurally evoked FFRTFS and FFREnv, the correlation between the IAC-shift-induced change of FFRTFS and that of FFREnv was significant. Thus, TFS information is more precisely represented in the human auditory brainstem than envelope information, and the correlation between FFRTFS and FFREnv for the same narrowband noise suggests a brainstem binding mechanism underlying the perceptual integration of the TFS and envelope signals.
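A common way such FFR components are separated in practice is by adding and subtracting responses to opposite-polarity presentations of the same stimulus: the envelope-following part survives the sum, while the fine-structure-following part survives the difference. A toy illustration with idealized responses (not real recordings; the decomposition is exact here only because the toy responses are perfectly polarity-symmetric):

```python
import numpy as np

fs = 10000
t = np.arange(int(fs * 0.2)) / fs
env = 1 + 0.5 * np.sin(2 * np.pi * 20 * t)   # slow envelope
tfs = np.sin(2 * np.pi * 500 * t)            # fast fine structure

# Toy neural responses to the two stimulus polarities:
# the TFS-following part inverts with polarity, the envelope part does not
r_pos = env + tfs
r_neg = env - tfs

ffr_env = (r_pos + r_neg) / 2   # envelope-following component (FFR_ENV)
ffr_tfs = (r_pos - r_neg) / 2   # fine-structure component (FFR_TFS)

print(np.allclose(ffr_env, env), np.allclose(ffr_tfs, tfs))  # True True
```

Real recordings contain noise and rectification distortion, so the two derived components are approximations rather than clean separations.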
30
Factors Affecting Speech Reception in Background Noise with a Vocoder Implementation of the FAST Algorithm. J Assoc Res Otolaryngol 2018; 19:467-478. [PMID: 29744731 DOI: 10.1007/s10162-018-0672-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2017] [Accepted: 04/23/2018] [Indexed: 10/16/2022] Open
Abstract
Speech segregation in background noise remains a difficult task for individuals with hearing loss. Several signal processing strategies have been developed to improve the efficacy of hearing assistive technologies in complex listening environments. The present study measured speech reception thresholds in normal-hearing listeners attending to a vocoder based on the Fundamental Asynchronous Stimulus Timing algorithm (FAST: Smith et al. 2014), which triggers pulses based on the amplitudes of channel magnitudes in order to preserve envelope timing cues, with two different reconstruction bandwidths (narrowband and broadband) to control the degree of spectrotemporal resolution. Five types of background noise were used including same male talker, female talker, time-reversed male talker, time-reversed female talker, and speech-shaped noise to probe the contributions of different types of speech segregation cues and to elucidate how degradation affects speech reception across these conditions. Maskers were spatialized using head-related transfer functions in order to create co-located and spatially separated conditions. Results indicate that benefits arising from voicing and spatial cues can be preserved using the FAST algorithm but are reduced with a reduction in spectral resolution.
31
Xia J, Xu B, Pentony S, Xu J, Swaminathan J. Effects of reverberation and noise on speech intelligibility in normal-hearing and aided hearing-impaired listeners. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2018; 143:1523. [PMID: 29604671 DOI: 10.1121/1.5026788] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
Many hearing-aid wearers have difficulty understanding speech in reverberant, noisy environments. This study evaluated the effects of reverberation and noise on speech recognition in normal-hearing listeners and hearing-impaired listeners wearing hearing aids. Sixteen typical acoustic scenes with different amounts of reverberation and various types of noise maskers were simulated using a loudspeaker array in an anechoic chamber. Results showed that, across all listening conditions, speech intelligibility of aided hearing-impaired listeners was poorer than that of their normal-hearing counterparts. Once scores were corrected for ceiling effects, the differences in the effects of reverberation on speech intelligibility between the two groups were much smaller, suggesting that at least part of the difference in susceptibility to reverberation between normal-hearing and hearing-impaired listeners was due to ceiling effects. Across both groups, a complex interaction between the noise characteristics and reverberation was observed in the speech intelligibility scores. Further fine-grained analyses of the perception of consonants showed that, for both listener groups, final consonants were more susceptible to reverberation than initial consonants. However, differences in the perception of specific consonant features were observed between the groups.
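Ceiling compression of proportion-correct scores is often tempered with a variance-stabilizing transform before group comparisons. The abstract does not specify which correction was used; the arcsine-square-root transform below is one generic, standard option, shown purely as a sketch:

```python
import numpy as np

def arcsine_transform(p):
    """Variance-stabilizing arcsine-square-root transform (in degrees) for
    proportion-correct scores. Scores near ceiling (p -> 1) are stretched
    apart, reducing ceiling compression when comparing conditions.
    Not necessarily the correction method used in the study."""
    return np.degrees(np.arcsin(np.sqrt(np.asarray(p, dtype=float))))

# Near ceiling, an equal 2-point raw difference maps to a larger
# transformed gap than the same difference near the middle of the scale
print(arcsine_transform(0.98) - arcsine_transform(0.96))
print(arcsine_transform(0.52) - arcsine_transform(0.50))
```

Rationalized arcsine units (RAU) are a closely related, commonly reported variant in audiology.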
Affiliation(s)
- Jing Xia
- Starkey Hearing Research Center, 2150 Shattuck Avenue, Suite 408, Berkeley, California 94704, USA
- Buye Xu
- Starkey Hearing Technologies, 6600 Washington Avenue South, Eden Prairie, Minnesota 55344, USA
- Shareka Pentony
- Starkey Hearing Research Center, 2150 Shattuck Avenue, Suite 408, Berkeley, California 94704, USA
- Jingjing Xu
- Starkey Hearing Technologies, 6600 Washington Avenue South, Eden Prairie, Minnesota 55344, USA
- Jayaganesh Swaminathan
- Starkey Hearing Research Center, 2150 Shattuck Avenue, Suite 408, Berkeley, California 94704, USA
32
Bissmeyer SRS, Goldsworthy RL. Adaptive spatial filtering improves speech reception in noise while preserving binaural cues. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2017; 142:1441. [PMID: 28964069 PMCID: PMC8267853 DOI: 10.1121/1.5002691] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/25/2015] [Revised: 08/08/2017] [Accepted: 08/28/2017] [Indexed: 06/02/2023]
Abstract
Hearing loss greatly reduces an individual's ability to comprehend speech in the presence of background noise. Over the past decades, numerous signal-processing algorithms have been developed to improve speech reception in these situations for cochlear implant and hearing aid users. One challenge is to reduce background noise without introducing interaural distortion that would degrade binaural hearing. The present study evaluates a noise reduction algorithm, referred to as binaural Fennec, that was designed to improve speech reception in background noise while preserving binaural cues. Speech reception thresholds were measured for normal-hearing listeners in a simulated environment with target speech generated in front of the listener and background noise originating 90° to the right of the listener. Lateralization thresholds were also measured in the presence of background noise. These measures were conducted in anechoic and reverberant environments. Results indicate that the algorithm improved speech reception thresholds, even in highly reverberant environments, and that it improved lateralization thresholds in the anechoic environment while not affecting lateralization thresholds in the reverberant environments. These results provide clear evidence that this algorithm can improve speech reception in background noise while preserving the binaural cues used to lateralize sound.
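Speech reception thresholds of the kind reported here are commonly measured with a simple adaptive 1-down/1-up track that converges on the 50%-correct SNR. A sketch against a simulated listener (the step size, reversal count, and listener model are all assumed for illustration; this is not the study's procedure):

```python
import numpy as np

def measure_srt(respond, start_snr=10.0, step=2.0, n_reversals=8):
    """1-down/1-up adaptive track: lower the SNR after a correct response,
    raise it after an incorrect one, and average the SNRs at the last
    direction reversals as the threshold estimate (~50%-correct point)."""
    snr, going_down, reversals = start_snr, None, []
    while len(reversals) < n_reversals:
        correct = respond(snr)
        if going_down is not None and correct != going_down:
            reversals.append(snr)  # direction changed: record a reversal
        going_down = correct
        snr += -step if correct else step
    return float(np.mean(reversals[-6:]))

# Simulated listener: logistic psychometric function, 50% point at -3 dB SNR
rng = np.random.default_rng(2)
def sim_listener(snr, srt=-3.0, slope=1.0):
    p = 1 / (1 + np.exp(-(snr - srt) * slope))
    return bool(rng.random() < p)

est = measure_srt(sim_listener)
print(round(est, 1))  # estimated SRT in dB SNR
```

Real SRT procedures score keywords in sentences and often use a larger initial step that shrinks after the first reversals, but the convergence logic is the same.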
Affiliation(s)
- Susan R S Bissmeyer
- Caruso Department of Otolaryngology, Caruso Center for Childhood Communication, University of Southern California, 806 West Adams Boulevard, Los Angeles, California 90007, USA
- Raymond L Goldsworthy
- Caruso Department of Otolaryngology, Caruso Center for Childhood Communication, University of Southern California, 806 West Adams Boulevard, Los Angeles, California 90007, USA
33
Neural representations of concurrent sounds with overlapping spectra in rat inferior colliculus: Comparisons between temporal-fine structure and envelope. Hear Res 2017; 353:87-96. [PMID: 28655419 DOI: 10.1016/j.heares.2017.06.005] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/19/2017] [Revised: 05/21/2017] [Accepted: 06/12/2017] [Indexed: 11/24/2022]
Abstract
Perceptual segregation of multiple sounds, which overlap in both time and spectra, into individual auditory streams is critical for hearing in natural environments. Some cues such as interaural time disparities (ITDs) play an important role in the segregation, especially when sounds are separated in space. In this study, we investigated the neural representation of two uncorrelated narrowband noises that shared the identical spectrum in the rat inferior colliculus (IC) using frequency-following-response (FFR) recordings, when the ITD for each noise stimulus was manipulated. The results of this study showed that recorded FFRs exhibited two distinctive components: the fast-varying temporal fine structure (TFS) component (FFRTFS) and the slow-varying envelope component (FFRENV). When a single narrowband noise was presented alone, the FFRTFS, but not the FFRENV, was sensitive to ITDs. When two narrowband noises were presented simultaneously, the FFRTFS took advantage of the ITD disparity that was associated with perceived spatial separation between the two concurrent sounds, and displayed a better linear synchronization to the sound with an ipsilateral-leading ITD. However, no effects of ITDs were found on the FFRENV. These results suggest that the FFRTFS and FFRENV represent two distinct types of signal processing in the auditory brainstem and contribute differentially to sound segregation based on spatial cues: the FFRTFS is more critical to spatial release from masking.
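The ITD cues manipulated in this study can be illustrated with a basic cross-correlation model of binaural processing: the lag that maximizes the interaural cross-correlation recovers the imposed delay. A minimal sketch with synthetic noise and an assumed ITD (a simplified stand-in for neural coincidence detection, not the study's analysis):

```python
import numpy as np

fs = 48000
rng = np.random.default_rng(3)
noise = rng.normal(size=fs // 10)  # 100 ms of white noise
itd_samples = 24                   # ~0.5 ms at 48 kHz, an assumed ITD

left = noise
right = np.roll(noise, itd_samples)  # right-ear signal lags the left

# Interaural cross-correlation over a range of candidate lags
lags = np.arange(-50, 51)
xcorr = [np.dot(left, np.roll(right, -lag)) for lag in lags]
best_lag = int(lags[int(np.argmax(xcorr))])
print(best_lag)  # 24: the imposed ITD is recovered
```

With two overlapping sources carrying different ITDs, the cross-correlation pattern develops two peaks, which is the spatial disparity that the FFRTFS exploited in this study.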
34
King A, Hopkins K, Plack CJ, Pontoppidan NH, Bramsløw L, Hietkamp RK, Vatti M, Hafez A. The effect of tone-vocoding on spatial release from masking for old, hearing-impaired listeners. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2017; 141:2591. [PMID: 28464637 DOI: 10.1121/1.4979593] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Old, hearing-impaired listeners generally benefit little from lateral separation of multiple talkers when listening to one of them. This study aimed to determine how spatial release from masking (SRM) in such listeners is affected when the interaural time differences (ITDs) in the temporal fine structure (TFS) are manipulated by tone-vocoding (TVC) at the ears by a master hearing aid system. Word recall was compared, with and without TVC, when target and masker sentences from a closed set were played simultaneously from the front loudspeaker (co-located) and when the maskers were played 45° to the left and right of the listener (separated). For 20 hearing-impaired listeners aged 64 to 86, SRM was 3.7 dB smaller with TVC than without TVC. This difference in SRM correlated with mean audiometric thresholds below 1.5 kHz, even when monaural TFS sensitivity (discrimination of frequency shifts in identically filtered complexes) was partialed out, suggesting that low-frequency audiometric thresholds may be a good indicator of candidacy for hearing aids that preserve ITDs. The TVC difference in SRM was not correlated with age, pure-tone ITD thresholds, or fundamental frequency difference limens, and was correlated with monaural TFS sensitivity only before controlling for low-frequency audiometric thresholds.
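Tone-vocoding of the kind used here keeps each channel's envelope but replaces its temporal fine structure with a pure tone at the channel centre frequency, thereby destroying TFS-borne ITD cues. A generic sketch of that manipulation (channel edges, filter order, and the geometric-mean centre frequency are assumptions, not the parameters of the master hearing aid system used in the study):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def tone_vocode(x, fs, edges):
    """Replace each band's temporal fine structure with a tone at the
    channel centre frequency, keeping only the band's Hilbert envelope."""
    t = np.arange(x.size) / fs
    out = np.zeros_like(x, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        env = np.abs(hilbert(band))              # keep the channel envelope
        fc = np.sqrt(lo * hi)                    # geometric centre frequency
        out += env * np.sin(2 * np.pi * fc * t)  # discard the original TFS
    return out

fs = 16000
t = np.arange(fs) / fs
# 1 s test signal: an amplitude-modulated 440 Hz tone
x = np.sin(2 * np.pi * 440 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
edges = [100, 300, 700, 1500, 3100, 6000]  # illustrative channel edges
y = tone_vocode(x, fs, edges)
print(y.shape == x.shape)  # True
```

Applying the same vocoder independently at the two ears preserves interaural envelope cues while removing interaural TFS cues, which is the contrast the study exploits.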
Affiliation(s)
- Andrew King
- Manchester Centre for Audiology and Deafness, University of Manchester, Manchester Academic Health Science Centre, Manchester, United Kingdom
- Kathryn Hopkins
- Manchester Centre for Audiology and Deafness, University of Manchester, Manchester Academic Health Science Centre, Manchester, United Kingdom
- Christopher J Plack
- Manchester Centre for Audiology and Deafness, University of Manchester, Manchester Academic Health Science Centre, Manchester, United Kingdom
- Lars Bramsløw
- Eriksholm Research Centre, Oticon A/S, Rørtangvej 20, Snekkersten, Denmark
- Renskje K Hietkamp
- Eriksholm Research Centre, Oticon A/S, Rørtangvej 20, Snekkersten, Denmark
- Marianna Vatti
- Eriksholm Research Centre, Oticon A/S, Rørtangvej 20, Snekkersten, Denmark
- Atefeh Hafez
- Eriksholm Research Centre, Oticon A/S, Rørtangvej 20, Snekkersten, Denmark