1. Tepe V, Guillory L, Boudin-George A, Cantelmo T, Murphy S. Central Auditory Processing Dysfunction in Service Members and Veterans: Treatment Considerations and Strategies. J Speech Lang Hear Res 2023:1-28. PMID: 37379242. DOI: 10.1044/2023_jslhr-23-00095.
Abstract
PURPOSE: Military risk factors such as blast exposure, noise exposure, head trauma, and neurotoxin exposure place Service members and Veterans at risk for deficits associated with auditory processing dysfunction. However, there is no clinical guidance specific to the treatment of auditory processing deficits in this unique population. We provide an overview of available treatments and their limited supporting evidence for use in adults, emphasizing the need for multidisciplinary case management and interdisciplinary research to support evidence-based solutions.
METHOD: We explored relevant literature to inform the treatment of auditory processing dysfunction in adults, with emphasis on findings involving active or former military personnel. We were able to identify a limited number of studies, pertaining primarily to the treatment of auditory processing deficits through the use of assistive technologies and training strategies. We assessed the current state of the science for knowledge gaps that warrant additional study.
CONCLUSIONS: Auditory processing deficits often co-occur with other military injuries and may pose significant risk in military operational and occupational settings. Research is needed to advance clinical diagnostic and rehabilitative capabilities, guide treatment planning, support effective multidisciplinary management, and inform fitness-for-duty standards. We emphasize the need for an inclusive approach to the assessment and treatment of auditory processing concerns in Service members and Veterans and for evidence-based solutions to address complex military risk factors and injuries.
Affiliations
- Victoria Tepe: Department of Defense Hearing Center of Excellence, JBSA Lackland, TX; The Geneva Foundation, Tacoma, WA
- Lisa Guillory: Harry S. Truman Memorial Veterans' Hospital, Columbia, MO
- Amy Boudin-George: Department of Defense Hearing Center of Excellence, JBSA Lackland, TX
- Tasha Cantelmo: Alexander T. Augusta Military Medical Center, Fort Belvoir, VA
- Sara Murphy: Department of Defense Hearing Center of Excellence, JBSA Lackland, TX; The Geneva Foundation, Tacoma, WA
2. Iva P, Martin R, Fielding J, Clough M, White O, Godic B, van der Walt A, Rajan R. Discriminating spatialised speech in complex environments in multiple sclerosis. Cortex 2023; 159:217-232. PMID: 36640621. DOI: 10.1016/j.cortex.2022.11.014.
Abstract
People with multiple sclerosis (pwMS) frequently present with deficits in the binaural processing used for sound localization. This study examined spatial release from speech-on-speech masking in pwMS, which involves binaural processing and additional higher-level mechanisms underlying streaming, such as spatial attention. Twenty-six pwMS with mild severity (Expanded Disability Status Scale score <3) and 20 age-matched controls listened via headphones to pre-recorded sentences from a standard list presented simultaneously with eight-talker babble. Virtual acoustic techniques were used to simulate sentences originating from 0°, 20°, or 50° on the interaural horizontal plane around the listener whilst babble was presented continuously at 0° azimuth, and participants verbally repeated the target sentence. In a separate task, two simultaneous sentences, each containing a colour and a number, were presented, and participants were required to report the target colour and number. Both competing sentences could originate from 0°, 20°, or 50° on the azimuthal plane. Participants also completed a series of neuropsychological assessments, an auditory questionnaire, and a three-alternative forced-choice task that involved the detection of interaural time differences (ITDs) in noise bursts. Spatial release from masking was observed in both pwMS and controls, as response accuracy in the two speech discrimination tasks improved in the spatially separated conditions (20° and 50°) compared with the co-localised condition. However, pwMS demonstrated significantly less spatial release (18%) than controls (28%) when discriminating colour/number coordinates. At 50° separation, pwMS discriminated significantly fewer coordinates (77%) than controls (89%). In contrast, pwMS performed similarly to controls when sentences were presented in babble, and on the basic ITD discrimination task. Significant correlations between speech discrimination performance and standardized neuropsychological scores were observed across all spatial conditions. Our findings suggest that spatial hearing is likely to be impaired in pwMS, thereby affecting the perception of competing speech originating from various locations.
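Spatial release from masking (SRM), as reported in this abstract, is simply the gain in percent-correct when target and masker are spatially separated rather than co-located. A minimal sketch: the function and variable names are my own, and the co-located scores below are hypothetical values chosen only so that the computed release matches the 28% (controls) and 18% (pwMS) figures reported above.

```python
def spatial_release(separated_acc, colocated_acc):
    """Spatial release from masking, in percentage points:
    accuracy gain when target and masker are spatially separated
    versus co-located."""
    return separated_acc - colocated_acc

# Co-located scores are illustrative assumptions, not study data;
# they are chosen so the computed release matches the reported values.
controls_srm = spatial_release(separated_acc=89.0, colocated_acc=61.0)  # 28.0
pwms_srm = spatial_release(separated_acc=77.0, colocated_acc=59.0)      # 18.0
```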
Affiliations
- Pippa Iva: Department of Physiology, Biomedicine Discovery Institute, Monash University, Melbourne, VIC, Australia
- Russell Martin: Department of Physiology, Biomedicine Discovery Institute, Monash University, Melbourne, VIC, Australia
- Joanne Fielding: Department of Neurosciences, Central Clinical School, Alfred Hospital, Monash University, Melbourne, VIC, Australia
- Meaghan Clough: Department of Neurosciences, Central Clinical School, Alfred Hospital, Monash University, Melbourne, VIC, Australia
- Owen White: Department of Neurosciences, Central Clinical School, Alfred Hospital, Monash University, Melbourne, VIC, Australia
- Branislava Godic: Department of Physiology, Biomedicine Discovery Institute, Monash University, Melbourne, VIC, Australia
- Anneke van der Walt: Department of Neurosciences, Central Clinical School, Alfred Hospital, Monash University, Melbourne, VIC, Australia
- Ramesh Rajan: Department of Physiology, Biomedicine Discovery Institute, Monash University, Melbourne, VIC, Australia
3. Wang H, Chen R, Yan Y, McGettigan C, Rosen S, Adank P. Perceptual Learning of Noise-Vocoded Speech Under Divided Attention. Trends Hear 2023; 27:23312165231192297. PMID: 37547940. PMCID: PMC10408355. DOI: 10.1177/23312165231192297.
Abstract
Speech perception performance for degraded speech can improve with practice or exposure. Such perceptual learning is thought to rely on attention, and theoretical accounts like the predictive coding framework suggest a key role for attention in supporting learning. However, it is unclear whether speech perceptual learning requires undivided attention. We evaluated the role of divided attention in speech perceptual learning in two online experiments (N = 336). Experiment 1 tested the reliance of perceptual learning on undivided attention. Participants completed a speech recognition task in which they repeated forty noise-vocoded sentences in a between-group design. Participants performed the speech task alone or concurrently with a domain-general visual task (dual task) at one of three difficulty levels. We observed perceptual learning under divided attention for all four groups, moderated by dual-task difficulty. Listeners in the easy and intermediate visual conditions improved as much as the single-task group. Those who completed the most challenging visual task showed faster learning and achieved similar ending performance compared to the single-task group. Experiment 2 tested whether learning relies on domain-specific or domain-general processes. Participants completed a single speech task or performed it together with a dual task designed to recruit domain-specific (lexical or phonological) or domain-general (visual) processes. All secondary task conditions produced patterns and amounts of learning comparable to the single speech task. Our results demonstrate that the impact of divided attention on perceptual learning is not strictly dependent on domain-general or domain-specific processes, and that speech perceptual learning persists under divided attention.
Affiliations
- Han Wang: Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK
- Rongru Chen: Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK
- Yu Yan: Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK
- Carolyn McGettigan: Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK
- Stuart Rosen: Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK
- Patti Adank: Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK
4. Taitelbaum-Swead R, Fostick L. The Effect of Age, Type of Noise, and Cochlear Implants on Adaptive Sentence-in-Noise Task. J Clin Med 2022; 11:5872. PMID: 36233739. PMCID: PMC9571224. DOI: 10.3390/jcm11195872.
Abstract
Adaptive tests of sentences in noise mimic the challenge of daily listening situations. The aims of the present study were to validate an adaptive version of the HeBio sentence test on normal-hearing (NH) adults; to evaluate the effect of age and type of noise on the speech reception threshold in noise (SRTn); and to test the adaptive version on prelingual adults with cochlear implants (CI). In Experiment 1, 45 NH young adults listened to two lists accompanied by four-talker babble noise (4TBN). Experiment 2 presented the sentences amidst 4TBN or speech-shaped noise (SSN) to 80 participants in four age groups. In Experiment 3, 18 adult CI users with prelingual bilateral profound hearing loss performed the test amidst SSN, along with HeBio sentences and monosyllabic words in quiet and a forward digit span task. The main findings were as follows: SRTn for NH participants was normally distributed and had high test-retest reliability; SRTn was lower among adolescents and young adults than among middle-aged and older adults, and was better for SSN than for 4TBN; SRTn for CI users was higher and more variable than for NH listeners, and correlated with speech perception tests in quiet, digit span, and age at first CI. This suggests that the adaptive HeBio test can be implemented in clinical and research settings with various populations.
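Adaptive sentence-in-noise tests like the one validated above vary the SNR trial by trial, making the task harder after a correct response and easier after an error, so the track converges on the SRTn. A minimal one-down/one-up sketch under assumed parameters: the step size, trial count, averaging rule, and toy listener are illustrative, not the HeBio procedure itself.

```python
def adaptive_srt(respond, start_snr=0.0, step=2.0, n_trials=20):
    """One-down/one-up adaptive track: decrease SNR (dB) after a
    correct response, increase it after an error, so the track
    oscillates around the 50%-correct point. The SRT estimate is
    the mean SNR of the last ten trials (one common convention;
    real tests differ in their rules)."""
    snr = start_snr
    visited = []
    for _ in range(n_trials):
        visited.append(snr)
        snr += -step if respond(snr) else step
    return sum(visited[-10:]) / 10

# Toy deterministic listener: correct whenever SNR is at or above -4 dB.
def listener(snr):
    return snr >= -4.0

estimated_srt = adaptive_srt(listener)  # track oscillates between -4 and -6 dB
```

With this deterministic responder, the estimate settles midway between the two SNRs the track oscillates over.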
Affiliations
- Riki Taitelbaum-Swead (corresponding author): Department of Communication Disorders, Ariel University, Ariel 4077625, Israel; Medical Division, Meuhedet Health Services, Tel Aviv 6203854, Israel
- Leah Fostick: Department of Communication Disorders, Ariel University, Ariel 4077625, Israel
5. Encoding speech rate in challenging listening conditions: White noise and reverberation. Atten Percept Psychophys 2022; 84:2303-2318. PMID: 35996057. PMCID: PMC9481500. DOI: 10.3758/s13414-022-02554-8.
Abstract
Temporal contrasts in speech are perceived relative to the speech rate of the surrounding context. That is, following a fast context sentence, listeners interpret a given target sound as longer than following a slow context, and vice versa. This rate effect, often referred to as "rate-dependent speech perception," has been suggested to be the result of a robust, low-level perceptual process, typically examined in quiet laboratory settings. However, speech perception often occurs in more challenging listening conditions. Therefore, we asked whether rate-dependent perception would be (partially) compromised by signal degradation relative to a clear listening condition. Specifically, we tested effects of white noise and reverberation, with the latter specifically distorting temporal information. We hypothesized that signal degradation would reduce the precision of encoding the speech rate in the context and thereby reduce the rate effect relative to a clear context. This prediction was borne out for both types of degradation in Experiment 1, where the context sentences but not the subsequent target words were degraded. However, in Experiment 2, which compared rate effects when contexts and targets were coherent in terms of signal quality, no reduction of the rate effect was found. This suggests that, when confronted with coherently degraded signals, listeners adapt to challenging listening situations, eliminating the difference between rate-dependent perception in clear and degraded conditions. Overall, the present study contributes towards understanding the consequences of different types of listening environments on the functioning of low-level perceptual processes that listeners use during speech perception.
6. Iva P, Fielding J, Clough M, White O, Godic B, Martin R, Rajan R. Speech Discrimination Tasks: A Sensitive Sensory and Cognitive Measure in Early and Mild Multiple Sclerosis. Front Neurosci 2021; 14:604991. PMID: 33424540. PMCID: PMC7786116. DOI: 10.3389/fnins.2020.604991.
Abstract
There is a need for reliable and objective measures of early and mild symptomology in multiple sclerosis (MS), as deficits can be subtle and difficult to quantify objectively in patients without overt physical deficits. We hypothesized that a speech-in-noise (SiN) task would be sensitive to demyelinating effects on precise neural timing and on the diffuse higher-level networks required for speech intelligibility, and would therefore be a useful tool for monitoring sensory and cognitive changes in early MS. The objective of this study was to develop a SiN task for clinical use that sensitively monitors disease activity in MS subjects with mild severity [Expanded Disability Status Scale (EDSS) score < 3] at early (<5 years) and late (>10 years) disease stages. Pre-recorded Bamford-Kowal-Bench sentences and isolated keywords were presented at five signal-to-noise ratios (SNRs) in one of two background noises: speech-weighted noise and eight-talker babble. All speech and noise were presented via headphones to controls (n = 38), early MS (n = 23), and late MS (n = 12) subjects, who were required to verbally repeat the target speech. MS subjects also completed extensive neuropsychological testing, which included the Paced Auditory Serial Addition Test, Digit Span Test, and California Verbal Learning Test. Despite normal hearing thresholds, subjects with early and late mild MS displayed speech discrimination deficits when sentences and words were presented in babble, but not in speech-weighted noise. Significant correlations between SiN performance and standardized neuropsychological assessments indicated that MS subjects with lower functional scores also had poorer speech discrimination. Furthermore, a quick 5-min task with words and keywords presented in multi-talker babble at an SNR of -1 dB was 82% accurate in discriminating mildly impaired MS individuals (median EDSS = 0) from healthy controls. Quantifying functional deficits in mild MS will help clinicians maximize opportunities to preserve neurological reserve through appropriate therapeutic management, particularly in the earliest stages. Given that physical assessments are not informative in this fully ambulatory cohort, such a task could serve as a complementary clinical measure due to its speed and ease of use.
Affiliations
- Pippa Iva: Department of Physiology, Biomedicine Discovery Institute, Monash University, Melbourne, VIC, Australia
- Joanne Fielding: Department of Neuroscience, Central Clinical School, Monash University, Alfred Centre, Melbourne, VIC, Australia
- Meaghan Clough: Department of Neuroscience, Central Clinical School, Monash University, Alfred Centre, Melbourne, VIC, Australia
- Owen White: Department of Neuroscience, Central Clinical School, Monash University, Alfred Centre, Melbourne, VIC, Australia
- Branislava Godic: Department of Physiology, Biomedicine Discovery Institute, Monash University, Melbourne, VIC, Australia
- Russell Martin: Department of Physiology, Biomedicine Discovery Institute, Monash University, Melbourne, VIC, Australia
- Ramesh Rajan: Department of Physiology, Biomedicine Discovery Institute, Monash University, Melbourne, VIC, Australia
7.
Abstract
OBJECTIVE: Acoustic distortions to the speech signal impair spoken language recognition, but healthy listeners exhibit adaptive plasticity consistent with rapid adjustments in how the distorted speech input maps to speech representations, perhaps through engagement of supervised error-driven learning. This puts adaptive plasticity in speech perception in an interesting position with regard to developmental dyslexia, inasmuch as dyslexia impacts speech processing and may involve dysfunction in the neurobiological systems hypothesized to support adaptive plasticity.
METHOD: Here, we examined typical young adult listeners (N = 17) and listeners with dyslexia (N = 16) as they reported the identity of native-language monosyllabic spoken words to which signal processing had been applied to create a systematic acoustic distortion. During training, all participants experienced incremental increases in signal distortion from mildly distorted speech, along with orthographic and auditory feedback indicating word identity following each response, across a brief 250-trial training block. During pretest and posttest phases, no feedback was provided.
RESULTS: Word recognition of severely distorted speech was poor at pretest and equivalent across groups. Training led to improved recognition of the most severely distorted speech at posttest, with evidence that adaptive plasticity generalized to support recognition of new tokens not previously experienced under distortion. However, training-related recognition gains for listeners with dyslexia were significantly less robust than for control listeners.
CONCLUSIONS: Less efficient adaptive plasticity to speech distortions may impact the ability of individuals with dyslexia to deal with variability arising from sources like acoustic noise and foreign-accented speech.
8. Iva P, Fielding J, Clough M, White O, Noffs G, Godic B, Martin R, van der Walt A, Rajan R. Speech discrimination performance in multiple sclerosis dataset. Data Brief 2020; 33:106614. PMID: 33318987. PMCID: PMC7726651. DOI: 10.1016/j.dib.2020.106614.
Abstract
The most complex interactions between human beings occur through speech, and often in the presence of background noise. Understanding speech in noisy environments requires the integrity of highly integrated and widespread auditory networks likely to be impacted by multiple sclerosis (MS) related neurogenic injury. Despite the impact auditory communication has on a person's ability to navigate the world, build relationships, and maintain employability, studies of speech-in-noise (SiN) perception in people with MS (pwMS) have been minimal to date. Thus, this paper presents a dataset related to the acquisition of pure-tone thresholds, SiN performance, and questionnaire responses in age-matched controls and pwMS. Bilateral pure-tone hearing thresholds were obtained at 250, 500, 750, 1000, 1500, 2000, 4000, 6000, and 8000 hertz (Hz), with a hearing threshold defined as the lowest level at which the tone was perceived 50% of the time. Thresholds at 500, 1000, 2000, and 4000 Hz were used to calculate the four-tone average for each participant, and only those with a bilateral four-tone average of ≤ 25 dB HL were included in the analysis. To investigate SiN performance in pwMS, pre-recorded Bamford-Kowal-Bench (BKB) sentences were presented binaurally through headphones at five signal-to-noise ratios (SNRs) in two noise conditions: speech-weighted noise and multi-talker babble. Participants were required to verbally repeat each sentence they had just heard, or to indicate that they were unable to do so. A 33-item questionnaire, based on validated inventories for specific adult clinical populations with abnormal auditory processing, was used to evaluate auditory processing in daily life for pwMS. For analysis, pwMS were grouped according to their Expanded Disability Status Scale (EDSS) score as rated by a neurologist: those with EDSS scores ≤ 1.5 were classified as ‘mild’ (n = 20), those between 2 and 4.5 as ‘moderate’ (n = 16), and those between 5 and 7 as ‘advanced’ (n = 10); all were compared to neurologically healthy controls (n = 38). The outcomes of the SiN task conducted in pwMS can be found in Iva et al. (2021). These data have important implications for the timing and delivery of preparatory education to patients, family, and caregivers about communication abilities in pwMS. The dataset will also be valuable for the reuse and reanalysis required for future investigations into the clinical utility of SiN tasks for monitoring disease progression.
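The normal-hearing inclusion criterion described above (bilateral four-tone average ≤ 25 dB HL across 500, 1000, 2000, and 4000 Hz) can be sketched directly; the function names and dictionary layout are illustrative, not taken from the dataset's own code.

```python
FOUR_TONE_FREQS = (500, 1000, 2000, 4000)  # Hz, as in the paper

def four_tone_average(thresholds_db_hl):
    """Mean pure-tone threshold (dB HL) at 500, 1000, 2000 and 4000 Hz."""
    return sum(thresholds_db_hl[f] for f in FOUR_TONE_FREQS) / len(FOUR_TONE_FREQS)

def passes_inclusion(left_ear, right_ear, cutoff_db_hl=25.0):
    """True if both ears meet the <= 25 dB HL four-tone-average criterion."""
    return (four_tone_average(left_ear) <= cutoff_db_hl
            and four_tone_average(right_ear) <= cutoff_db_hl)

# Hypothetical audiogram (thresholds in dB HL), not data from the study:
left = {500: 10, 1000: 15, 2000: 20, 4000: 25}
right = {500: 5, 1000: 10, 2000: 15, 4000: 30}
included = passes_inclusion(left, right)  # averages 17.5 and 15.0 -> True
```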
Affiliations
- Pippa Iva (corresponding author): Neuroscience Discovery Program, Biomedicine Discovery Institute, Department of Physiology, Monash University, Melbourne, Australia
- Joanne Fielding: Department of Neurosciences, Central Clinical School, Alfred Hospital, Monash University, Melbourne, Australia
- Meaghan Clough: Department of Neurosciences, Central Clinical School, Alfred Hospital, Monash University, Melbourne, Australia
- Owen White: Department of Neurosciences, Central Clinical School, Alfred Hospital, Monash University, Melbourne, Australia
- Gustavo Noffs: Centre for Neuroscience of Speech, University of Melbourne, Melbourne, Australia
- Branislava Godic: Neuroscience Discovery Program, Biomedicine Discovery Institute, Department of Physiology, Monash University, Melbourne, Australia
- Russell Martin: Neuroscience Discovery Program, Biomedicine Discovery Institute, Department of Physiology, Monash University, Melbourne, Australia
- Anneke van der Walt: Neuroscience Discovery Program, Biomedicine Discovery Institute, Department of Physiology, Monash University, Melbourne, Australia
- Ramesh Rajan: Neuroscience Discovery Program, Biomedicine Discovery Institute, Department of Physiology, Monash University, Melbourne, Australia
9. Iva P, Fielding J, Clough M, White O, Noffs G, Godic B, Martin R, van der Walt A, Rajan R. Speech discrimination impairments as a marker of disease severity in multiple sclerosis. Mult Scler Relat Disord 2020; 47:102608. PMID: 33189020. DOI: 10.1016/j.msard.2020.102608.
Abstract
BACKGROUND: Multiple sclerosis (MS) pathology is likely to disrupt central auditory pathways, thereby affecting an individual's ability to discriminate speech from noise. Despite the importance of speech discrimination in daily communication, its characterization in the context of MS remains limited. This cross-sectional study evaluated speech discrimination in MS under "real world" conditions, in which sentences were presented in ecologically valid multi-talker speech or broadband noise at several signal-to-noise ratios (SNRs).
METHODS: Pre-recorded Bamford-Kowal-Bench sentences were presented at five SNRs in one of two background noises: speech-weighted noise and eight-talker babble. All auditory stimuli were presented via headphones to control listeners (n = 38) and MS listeners with mild (n = 20), moderate (n = 16), and advanced (n = 10) disability. Disability was quantified by the Kurtzke Expanded Disability Status Scale (EDSS), scored by a neurologist. All participants passed a routine audiometric examination.
RESULTS: Despite normal hearing, the psychometric discrimination curves of MS listeners, which model the relationship between SNR and sentence discrimination accuracy in speech-weighted noise and babble, did not change in slope (sentences/dB) but were shifted to higher SNRs (dB) compared with controls. The magnitude of this shift increased systematically with greater disability. Furthermore, mixed-effects models identified EDSS score as the most significant predictor of speech discrimination in noise (odds ratio = 0.81; p < 0.001). Neither age, sex, disease phenotype, nor disease duration was significantly associated with speech discrimination performance in noise. Only MS listeners with advanced disability self-reported audio-attentional difficulty in a questionnaire designed to reflect auditory processing behaviours in daily life.
CONCLUSION: Speech discrimination performance worsened systematically with greater disability, independent of age, sex, education, disease duration, or disease phenotype. These results identify novel auditory processing deficits in MS and highlight that speech discrimination tasks may provide a viable, non-invasive, and sensitive means of disease monitoring in MS.
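The finding of unchanged slope but a shift toward higher SNRs can be made concrete with a logistic psychometric function: the groups share a slope parameter and differ only in the 50%-correct threshold. All parameter values below are invented for illustration; they are not the paper's fitted values.

```python
import math

def psychometric(snr_db, threshold_db, slope):
    """Logistic psychometric function: proportion of sentences correct
    as a function of SNR (dB). `threshold_db` is the SNR at 50% correct;
    `slope` sets steepness (per dB)."""
    return 1.0 / (1.0 + math.exp(-slope * (snr_db - threshold_db)))

SLOPE = 0.8                   # shared by both groups: slope does not change
control_threshold = -6.0      # hypothetical 50%-correct SNR for controls
ms_threshold = -3.0           # hypothetical, shifted to a higher (worse) SNR

# A pure lateral shift: the whole curve moves rightward by this amount.
lateral_shift_db = ms_threshold - control_threshold  # 3.0 dB
```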
Affiliations
- Pippa Iva: Neuroscience Discovery Program, Biomedicine Discovery Institute, Department of Physiology, Monash University, Melbourne, Australia
- Joanne Fielding: Department of Neurosciences, Central Clinical School, Alfred Hospital, Monash University, Melbourne, Australia
- Meaghan Clough: Department of Neurosciences, Central Clinical School, Alfred Hospital, Monash University, Melbourne, Australia
- Owen White: Department of Neurosciences, Central Clinical School, Alfred Hospital, Monash University, Melbourne, Australia
- Gustavo Noffs: Centre for Neuroscience of Speech, University of Melbourne, Melbourne, Australia
- Branislava Godic: Neuroscience Discovery Program, Biomedicine Discovery Institute, Department of Physiology, Monash University, Melbourne, Australia
- Russell Martin: Neuroscience Discovery Program, Biomedicine Discovery Institute, Department of Physiology, Monash University, Melbourne, Australia
- Anneke van der Walt: Department of Neurosciences, Central Clinical School, Alfred Hospital, Monash University, Melbourne, Australia
- Ramesh Rajan: Neuroscience Discovery Program, Biomedicine Discovery Institute, Department of Physiology, Monash University, Melbourne, Australia
10. Willberg T, Sivonen V, Hurme S, Aarnisalo AA, Löppönen H, Dietz A. The long-term learning effect related to the repeated use of the Finnish matrix sentence test and the Finnish digit triplet test. Int J Audiol 2020; 59:753-762. PMID: 32338546. DOI: 10.1080/14992027.2020.1753893.
Abstract
Objectives: To assess whether there are learning-related improvements in the speech reception thresholds (SRTs) for the Finnish matrix sentence test (FMST) and the Finnish digit triplet test (FDTT) with repeated use over 12 months.
Design: Test sessions were scheduled at 0, 1, 3, 6, and 12 months, and each session included five FMST measurements and four FDTT measurements. The within-session and inter-session improvements in SRTs were analysed with a linear mixed model.
Study sample: Fifteen young normal-hearing participants.
Results: Statistically significant mean improvements of 2.0 dB SNR and 1.2 dB SNR were detected for the FMST and the FDTT, respectively, over the 12-month follow-up period. For the FMST, the majority of the improvement occurred during the first two test sessions. For the FDTT, statistically significant differences were detected only in comparison to the first test session and to the first measurement of each session.
Conclusions: Repeated use of the FMST led to significant learning-related improvements, but these appeared to plateau by the third test session. For the FDTT, the overall improvements were smaller, but a significant within-session difference between the first and subsequent FDTT measurements persisted throughout the test sessions.
Affiliations
- Tytti Willberg: Institute of Clinical Medicine, University of Eastern Finland, Kuopio, Finland; Department of Otorhinolaryngology, Turku University Hospital, Turku, Finland
- Ville Sivonen: Department of Otorhinolaryngology, Helsinki University Hospital, Helsinki, Finland
- Saija Hurme: Department of Biostatistics, University of Turku, Turku, Finland
- Antti A Aarnisalo: Department of Otorhinolaryngology, Helsinki University Hospital, Helsinki, Finland
- Heikki Löppönen: Institute of Clinical Medicine, University of Eastern Finland, Kuopio, Finland; Department of Otorhinolaryngology, Kuopio University Hospital, Kuopio, Finland
- Aarno Dietz: Department of Otorhinolaryngology, Kuopio University Hospital, Kuopio, Finland
11. Kennedy-Higgins D, Devlin JT, Adank P. Cognitive mechanisms underpinning successful perception of different speech distortions. J Acoust Soc Am 2020; 147:2728. PMID: 32359293. DOI: 10.1121/10.0001160.
Abstract
Few studies thus far have investigated whether perception of distorted speech is consistent across different types of distortion. This study investigated whether participants show a consistent perceptual profile across three speech distortions: time-compressed speech, noise-vocoded speech, and speech in noise. Additionally, it investigated whether and how individual differences in performance on a battery of audiological and cognitive tasks are linked to perception. Eighty-eight participants completed a speeded sentence-verification task, with increases in accuracy and reductions in response times used to index performance. Audiological and cognitive measures included pure-tone audiometry, speech recognition threshold, working memory, vocabulary knowledge, attention switching, and pattern analysis. Despite previous studies suggesting that temporal and spectral/environmental distortions require different lexical or phonological mechanisms, this study shows significant positive correlations in accuracy and response-time performance across all distortions. Results of a principal component analysis and multiple linear regressions suggest that a component based on vocabulary knowledge and working memory predicted performance in the speech-in-quiet, time-compressed, and speech-in-noise conditions. These results suggest that listeners employ a similar cognitive strategy to perceive different temporal and spectral/environmental speech distortions, and that this mechanism is supported by vocabulary knowledge and working memory.
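The analysis pipeline described above (principal component analysis followed by regression of performance on a component built from vocabulary and working-memory scores) can be sketched on synthetic data. Nothing below reproduces the study's actual data, component loadings, or fitted coefficients; the generative model is an invented stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 88  # matching the sample size reported above

# Synthetic, correlated predictor scores (standard-normal units).
vocab = rng.normal(size=n)
working_memory = 0.6 * vocab + 0.8 * rng.normal(size=n)
accuracy = 0.5 * (vocab + working_memory) + rng.normal(scale=0.5, size=n)

# Standardize predictors, then take the first principal component via SVD.
X = np.column_stack([vocab, working_memory])
X = (X - X.mean(axis=0)) / X.std(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
pc1 = X @ Vt[0]  # first-component scores (sign is arbitrary)

# Ordinary least squares: regress accuracy on an intercept and PC1.
design = np.column_stack([np.ones(n), pc1])
beta, *_ = np.linalg.lstsq(design, accuracy, rcond=None)
```

Because the sign of a principal component is arbitrary, only the magnitude of the PC1-accuracy relationship is meaningful here.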
Affiliation(s)
- Dan Kennedy-Higgins, Department of Speech, Hearing and Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London, WC1N 1PF, United Kingdom
- Joseph T Devlin, Department of Experimental Psychology, University College London, 26 Bedford Way, London, WC1H 0AP, United Kingdom
- Patti Adank, Department of Speech, Hearing and Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London, WC1N 1PF, United Kingdom
12
Pitchaimuthu A, Arora A, Bhat JS, Kanagokar V. Effect of systematic desensitization training on acceptable noise levels in adults with normal hearing sensitivity. Noise Health 2018; 20:83-89. [PMID: 29785973] [PMCID: PMC5965005] [DOI: 10.4103/nah.nah_58_17]
Abstract
Context: The willingness of a person to accept noise while listening to speech can be measured using the acceptable noise level (ANL) test. Individuals with high (poor) ANLs are unlikely to become successful hearing aid users, so it is important to enhance an individual's ability to accept noise. The current study investigated whether systematic desensitization training can improve ANL in individuals with high ANL. Aims: To investigate the effect of systematic desensitization training on ANLs in individuals with normal hearing sensitivity. Settings and Design: Observational study design. Materials and Methods: Thirty-eight normally hearing adults aged 18-25 years participated. Baseline ANL was first measured for all participants, who were then categorized into three groups: low, mid, and high ANL. Participants with high ANL were trained using a systematic desensitization procedure, whereas individuals with low and mid ANL received no training and served as comparison groups. After the training period, ANL was measured again for all participants. Statistical Analysis Used: Repeated-measures analysis of variance with follow-up paired t tests. Results: Analysis revealed a significant main effect of systematic desensitization training on ANL: participants with high ANL improved significantly, whereas individuals with low and mid ANL showed no significant difference between baseline and follow-up. Conclusions: Systematic desensitization training can improve ANL, enhancing an individual's ability to accept noise, which may in turn facilitate better hearing aid fitting and acceptance.
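A brief sketch of how ANL is conventionally computed, since the abstract does not spell it out: ANL is the most comfortable listening level for speech (MCL) minus the highest background noise level the listener will accept while following that speech (BNL). This is the standard Nabelek-style definition, assumed here; the function name and example levels are illustrative, not from the study.

```python
def acceptable_noise_level(mcl_db: float, bnl_db: float) -> float:
    """ANL = MCL - BNL (both in dB).

    A smaller ANL means the listener tolerates noise closer to their
    comfortable speech level, which is associated with better hearing
    aid acceptance; a larger ANL marks the "high ANL" group targeted
    for training in this study.
    """
    return mcl_db - bnl_db

# A listener comfortable with speech at 50 dB who accepts noise up to 43 dB:
anl = acceptable_noise_level(50.0, 43.0)  # ANL of 7 dB
```

Under this definition, "improving ANL" through desensitization training means the accepted background noise level rises toward the comfortable speech level, shrinking the difference.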
Affiliation(s)
- Arivudainambi Pitchaimuthu, Department of Audiology and Speech Language Pathology, Kasturba Medical College, Manipal Academy of Higher Education, Mangalore, Karnataka, India
- Anshul Arora, Advanced Behavioural Learning Environment (ABLE UK), Dubai Healthcare City, Dubai, UAE
- Jayashree S Bhat, Department of Audiology and Speech Language Pathology, Kasturba Medical College, Manipal Academy of Higher Education, Mangalore, Karnataka, India
- Vibha Kanagokar, Department of Audiology and Speech Language Pathology, Kasturba Medical College, Manipal Academy of Higher Education, Mangalore, Karnataka, India
13
Kowalewski V, Patterson R, Hartos J, Bugnariu N. Hearing Loss Contributes to Balance Difficulties in both Younger and Older Adults. ACTA ACUST UNITED AC 2018; 3. [PMID: 29951645] [PMCID: PMC6017998] [DOI: 10.21767/2572-5483.100033]
Abstract
Objective: The number of steps required to regain balance is an easily obtainable clinical outcome measure. This study assessed whether the number of steps taken during a loss of balance could identify older adults with hearing loss who have balance deficits. We aimed to answer two questions: (1) Does hearing loss negatively affect the ability to regain balance, as reflected by an increased number of steps needed to respond to a perturbation while simultaneously attending to speech in noise? (2) Do hearing aids improve balance control, as reflected by a decrease in the number of steps needed to regain balance? Methods: Twenty young adults and 20 older adults with normal hearing, and 19 older adults with hearing loss, performed an auditory-balance dual task. Participants were asked to listen to and repeat back sentences from a standardized audiology test while simultaneously responding to backward surface translations. Outcome measures were performance on the auditory test and the number of steps needed to regain balance. Repeated-measures ANCOVA models were run using group, time, hearing level, and perturbation level as predictors. Results: Auditory scores confirmed difficulty hearing speech in noise in older adults with hearing loss and no hearing aids, and in young and older adults with normal hearing and simulated hearing loss. Group, auditory condition, and balance condition were significantly related to both outcome measures; time was not significant for steps. Older adults with hearing loss needed significantly more steps to regain balance than young adults and older adults with normal hearing. Conclusion: Number of steps may be an appropriate clinical assessment tool for identifying fall risk in older adults with hearing loss. Further research is needed to identify proper assessments and treatment interventions for older adults with hearing loss who have balance deficits.
14
Yang X, Jiang M, Zhao Y. Effects of Noise on English Listening Comprehension among Chinese College Students with Different Learning Styles. Front Psychol 2017; 8:1764. [PMID: 29085317] [PMCID: PMC5650695] [DOI: 10.3389/fpsyg.2017.01764]
Abstract
This study was intended to determine whether the effects of noise on English listening comprehension would vary among Chinese college students with different learning styles. A total of 89 participants with different learning styles measured using Kolb’s (1985) Learning Style Inventory finished English listening comprehension tests in quiet and in white noise, Chinese two-talker babble, and English two-talker babble respectively. The results showed that the participants in general had significantly poorer performance in the two babble conditions than in quiet and white noise. However, the participants with assimilative and divergent learning styles performed relatively better in Chinese babble, and exhibited stable performance across the three noisy conditions, while the participants with convergent and accommodative learning styles had more impaired performance in both Chinese babble and English babble than in white noise. Moreover, of Kolb’s four learning modes, reflective observation had a facilitative effect on listening performance in Chinese babble and English babble. These findings suggest that differences in learning style might lead to differential performance in foreign language listening comprehension in noise.
Affiliation(s)
- Xiaohu Yang, Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai, China
- Meng Jiang, Language & Brain Research Center, Sichuan International Studies University, Chongqing, China
- Yong Zhao, Department of Translation and Interpreting, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai, China
15
Malagón N, Risso A. Discriminación auditiva en entornos de ruido, en personas que usan auriculares de forma habitual || Perception of phonemes in noise environments, in people who usually use headphones. Revista de Estudios e Investigación en Psicología y Educación 2017. [DOI: 10.17979/reipe.2017.4.1.2213]
Abstract
The aim of this study was to examine how prolonged daily headphone use, together with background noise, affects speech comprehension. The literature provides evidence that habitual headphone use can result in hearing loss at the 3000 Hz frequency, and the comprehension difficulties that arise under high levels of ambient noise are also well known. This study therefore sought a sample of people who have scarcely been studied but who, because of their occupation, have spent considerable time under the influence of both factors. The experimental group comprised 24 call-center operators, with 20 people unconnected to the profession serving as an equivalent control group. A shortened Spanish-language version of the Speech Perception in Noise Test was used for assessment. The task consisted of repeating the last word of a series of pre-recorded sentences presented against different background sounds, varying in the phonemes involved and in the predictability of the words. The results showed that habitual headphone use has a negative effect on auditory perception at certain frequencies, and that call-center noise impairs speech comprehension even more than traffic noise does. These findings apply to both occupational and educational settings and demonstrate the importance of raising public awareness about appropriate headphone use.
16
Zaballos MTP, Plasencia DP, González MLZ, de Miguel AR, Macías ÁR. Air traffic controllers' long-term speech-in-noise training effects: A control group study. Noise Health 2016; 18:376-381. [PMID: 27991470] [PMCID: PMC5227019] [DOI: 10.4103/1463-1741.195804]
Abstract
Introduction: Speech perception in noise relies on the capacity of the auditory system to process complex sounds using sensory and cognitive skills. Whether these can be trained during adulthood is of special interest for auditory disorders in which speech-in-noise perception becomes compromised. Air traffic controllers (ATC) are constantly exposed to radio communication, a situation that seems to produce auditory learning. The objective of this study was to quantify this effect. Subjects and Methods: Nineteen ATC and 19 normal-hearing individuals underwent a speech-in-noise test at three signal-to-noise ratios: +5, 0, and −5 dB. Noise and speech were presented through two different loudspeakers at azimuth. Speech tokens were presented at 65 dB SPL, while white noise was presented at 60, 65, and 70 dB SPL, respectively. Results: ATC outperformed the control group in all conditions (P < 0.05, ANOVA and Mann-Whitney U tests). Group differences were largest in the most difficult condition, SNR = −5 dB. However, no correlation between experience and performance was found for any of the conditions tested; the reason might be that ceiling performance is achieved much faster than the minimum experience time recorded (5 years), although intrinsic cognitive abilities cannot be disregarded. Discussion and Conclusion: ATC demonstrated an enhanced ability to hear speech in challenging listening environments. This study provides evidence that long-term auditory training is useful for achieving better speech-in-noise understanding even in adverse conditions, although good cognitive abilities are likely a basic requirement for this training to be effective.
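The presentation levels in this entry imply the three stated SNRs directly: with speech fixed at 65 dB SPL and noise at 60, 65, and 70 dB SPL, the SNR is +5, 0, and −5 dB. A minimal sketch of that arithmetic (variable names are illustrative, not from the study):

```python
# Signal-to-noise ratios used in the air-traffic-controller study:
# speech fixed at 65 dB SPL, white noise at 60, 65, and 70 dB SPL.
speech_level_db = 65.0

for noise_level_db in (60.0, 65.0, 70.0):
    # In dB, SNR is simply the level difference between speech and noise.
    snr_db = speech_level_db - noise_level_db
    # Equivalent linear amplitude (pressure) ratio: 10 ** (SNR / 20).
    amplitude_ratio = 10 ** (snr_db / 20)
    print(f"noise {noise_level_db:.0f} dB SPL -> SNR {snr_db:+.0f} dB "
          f"(amplitude ratio {amplitude_ratio:.2f})")
```

Because decibel values are logarithmic, the level difference in dB corresponds to a multiplicative amplitude ratio; at SNR = −5 dB the speech amplitude is roughly 0.56 times the noise amplitude, which is why that condition is the hardest.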
Affiliation(s)
- Maria T P Zaballos, Laboratorio de Psicoacústica, Complejo Hospitalario Universitario Insular Materno Infantil, Las Palmas de Gran Canaria, Las Palmas, Spain
- Daniel P Plasencia, ENT Department & Departamento de CC Quirúrgicas, Universidad de Las Palmas de Gran Canaria, Complejo Hospitalario Universitario Insular Materno Infantil, Las Palmas de Gran Canaria, Las Palmas, Spain
- María L Z González, ENT Department, Complejo Hospitalario Universitario Insular Materno Infantil, Las Palmas de Gran Canaria, Las Palmas, Spain
- Angel R de Miguel, Instituto Universitario de Sistemas Inteligentes y Aplicaciones Numéricas, Universidad de Las Palmas de Gran Canaria, Las Palmas de Gran Canaria, Las Palmas, Spain
- Ángel R Macías, ENT Department & Departamento de CC Quirúrgicas, Universidad de Las Palmas de Gran Canaria, Complejo Hospitalario Universitario Insular Materno Infantil, Las Palmas de Gran Canaria, Las Palmas, Spain
17
Dunlop WA, Enticott PG, Rajan R. Speech Discrimination Difficulties in High-Functioning Autism Spectrum Disorder Are Likely Independent of Auditory Hypersensitivity. Front Hum Neurosci 2016; 10:401. [PMID: 27555814] [PMCID: PMC4977299] [DOI: 10.3389/fnhum.2016.00401]
Abstract
Autism Spectrum Disorder (ASD), characterized by impaired communication skills and repetitive behaviors, can also result in differences in sensory perception. Individuals with ASD often perform normally in simple auditory tasks but poorly compared to typically developed (TD) individuals on complex auditory tasks like discriminating speech from complex background noise. A common trait of individuals with ASD is hypersensitivity to auditory stimulation. No studies to our knowledge consider whether hypersensitivity to sounds is related to differences in speech-in-noise discrimination. We provide novel evidence that individuals with high-functioning ASD show poor performance compared to TD individuals in a speech-in-noise discrimination task with an attentionally demanding background noise, but not in a purely energetic noise. Further, we demonstrate in our small sample that speech-hypersensitivity does not appear to predict performance in the speech-in-noise task. The findings support the argument that an attentional deficit, rather than a perceptual deficit, affects the ability of individuals with ASD to discriminate speech from background noise. Finally, we piloted a novel questionnaire that measures difficulty hearing in noisy environments, and sensitivity to non-verbal and verbal sounds. Psychometric analysis using 128 TD participants provided novel evidence for a difference in sensitivity to non-verbal and verbal sounds, and these findings were reinforced by participants with ASD who also completed the questionnaire. The study was limited by a small and high-functioning sample of participants with ASD. Future work could test larger sample sizes and include lower-functioning ASD participants.
Affiliation(s)
- William A. Dunlop, Neuroscience Program, Biomedicine Discovery Institute, Department of Physiology, Monash University, Melbourne, VIC, Australia
- Peter G. Enticott, Cognitive Neuroscience Unit, School of Psychology, Deakin University, Geelong, VIC, Australia; Monash Alfred Psychiatry Research Centre, Monash University, Melbourne, VIC, Australia
- Ramesh Rajan, Neuroscience Program, Biomedicine Discovery Institute, Department of Physiology, Monash University, Melbourne, VIC, Australia; Ear Sciences Institute of Australia, Perth, WA, Australia
18
Karawani H, Bitan T, Attias J, Banai K. Auditory Perceptual Learning in Adults with and without Age-Related Hearing Loss. Front Psychol 2016; 6:2066. [PMID: 26869944] [PMCID: PMC4737899] [DOI: 10.3389/fpsyg.2015.02066]
Abstract
Introduction: Speech recognition in adverse listening conditions becomes more difficult as we age, particularly for individuals with age-related hearing loss (ARHL). Whether these difficulties can be eased with training remains debated, because it is not clear whether the outcomes are sufficiently general to be of use outside the training context. The aim of the current study was to compare training-induced learning and generalization between normal-hearing older adults and those with ARHL. Methods: Fifty-six listeners (60-72 years old) participated: 35 with ARHL and 21 with normal hearing. The study used a crossover design with three groups (immediate training, delayed training, and no training). Trained participants received 13 sessions of home-based auditory training over the course of 4 weeks, targeting three adverse listening conditions: (1) speech in noise, (2) time-compressed speech, and (3) competing speakers; training outcomes were compared between the normal-hearing and ARHL groups. All participants completed pre- and post-test sessions. Outcome measures included tests on all trained conditions as well as a series of untrained conditions designed to assess transfer of learning to other speech and non-speech conditions. Results: Both the ARHL and normal-hearing groups improved significantly on all trained conditions over the course of training. Normal-hearing participants learned more than participants with ARHL in the speech-in-noise condition but showed similar patterns of learning in the other conditions. Greater pre- to post-test changes were observed in trained than in untrained listeners on all trained conditions. In addition, the ability of trained listeners in the ARHL group to discriminate minimally different pseudowords in noise also improved with training.
Conclusions: ARHL did not preclude auditory perceptual learning, but there was little generalization to untrained conditions. We suggest that most training-related changes occurred in higher-level, task-specific cognitive processes in both groups, enhanced by high-quality perceptual representations in the normal-hearing group. In contrast, some training-related changes also occurred at the level of phonemic representations in the ARHL group, consistent with an interaction between bottom-up and top-down processes.
Affiliation(s)
- Hanin Karawani, Department of Communication Sciences and Disorders, Faculty of Social Welfare and Health Sciences, University of Haifa, Haifa, Israel
- Tali Bitan, Department of Psychology, Faculty of Social Sciences, University of Haifa, Haifa, Israel
- Joseph Attias, Department of Communication Sciences and Disorders, Faculty of Social Welfare and Health Sciences, University of Haifa, Haifa, Israel
- Karen Banai, Department of Communication Sciences and Disorders, Faculty of Social Welfare and Health Sciences, University of Haifa, Haifa, Israel
19
Guediche S, Blumstein SE, Fiez JA, Holt LL. Speech perception under adverse conditions: insights from behavioral, computational, and neuroscience research. Front Syst Neurosci 2014; 7:126. [PMID: 24427119] [PMCID: PMC3879477] [DOI: 10.3389/fnsys.2013.00126]
Abstract
Adult speech perception reflects the long-term regularities of the native language, but it is also flexible such that it accommodates and adapts to adverse listening conditions and short-term deviations from native-language norms. The purpose of this article is to examine how the broader neuroscience literature can inform and advance research efforts in understanding the neural basis of flexibility and adaptive plasticity in speech perception. Specifically, we highlight the potential role of learning algorithms that rely on prediction error signals and discuss specific neural structures that are likely to contribute to such learning. To this end, we review behavioral studies, computational accounts, and neuroimaging findings related to adaptive plasticity in speech perception. Already, a few studies have alluded to a potential role of these mechanisms in adaptive plasticity in speech perception. Furthermore, we consider research topics in neuroscience that offer insight into how perception can be adaptively tuned to short-term deviations while balancing the need to maintain stability in the perception of learned long-term regularities. Consideration of the application and limitations of these algorithms in characterizing flexible speech perception under adverse conditions promises to inform theoretical models of speech.
Affiliation(s)
- Sara Guediche, Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA
- Sheila E. Blumstein, Department of Cognitive, Linguistic, and Psychological Sciences and Brain Institute, Brown University, Providence, RI, USA
- Julie A. Fiez, Department of Neuroscience and Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA; Department of Psychology, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Lori L. Holt, Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA; Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
20
Age-related laterality shifts in auditory and attention networks with normal ageing: Effects on a working memory task. ACTA ACUST UNITED AC 2013. [DOI: 10.1016/j.npbr.2013.09.001]
21
Early and Late Shift of Brain Laterality in STG, HG, and Cerebellum with Normal Aging during a Short-Term Memory Task. ISRN Neurology 2013; 2013:892072. [PMID: 23533806] [PMCID: PMC3600174] [DOI: 10.1155/2013/892072]
Abstract
Evidence suggests that cognitive performance deteriorates in noisy backgrounds and the problems are more pronounced in older people due to brain deficits and changes. The present study used functional MRI (fMRI) to investigate the neural correlates of this phenomenon during short-term memory using a forward repeat task performed in quiet (STMQ) and in noise: 5-dB SNR (STMN) on four groups of participants of different ages. The performance of short-term memory tasks was measured behaviourally. No significant difference was found across age groups in STMQ. However, older adults (50–65 year olds) performed relatively poorly on the STMN. fMRI results on the laterality index indicate changes in hemispheric laterality in the superior temporal gyrus (STG), Heschl's gyrus (HG), and cerebellum, and a leftward asymmetry in younger participants which changes to a more rightward asymmetry in older participants. The results also indicate that the onset of the laterality shift varies from one brain region to another. STG and HG show a late shift while the cerebellum shows an earlier shift. The results also reveal that noise influences this shifting. Finally, the results support the hypothesis that functional networks that underlie STG, HG, and cerebellum undergo reorganization to compensate for the neural deficit/cognitive decline.
22
Mann C, Canny BJ, Reser DH, Rajan R. Poorer verbal working memory for a second language selectively impacts academic achievement in university medical students. PeerJ 2013; 1:e22. [PMID: 23638357] [PMCID: PMC3628612] [DOI: 10.7717/peerj.22]
Abstract
Working memory (WM) is often poorer for a second language (L2). In low-noise conditions, people listening to a language other than their first language (L1) may have auditory perception skills for that L2 similar to native listeners', but do worse in high-noise conditions, which has been attributed to the poorer WM for L2. Given that WM is critical for academic success in children and young adults, these speech-in-noise effects have implications for academic performance when the language of instruction is a student's L2. We used a well-established speech-in-noise task as a verbal WM (vWM) test and developed a model correlating vWM and measures of English proficiency and/or usage with scholastic outcomes in a multi-faceted assessment medical education program. Significant differences in speech-to-noise ratio (SNR50) values were observed between medical undergraduates who had learned English before or after five years of age, with the latter group worse at extracting whole connected speech in the presence of background multi-talker babble (Student's t tests, p < 0.001). Significant negative correlations were observed between the SNR50 and seven of the nine variables of English usage, learning styles, stress, and musical abilities in a questionnaire administered to the students previously. The remaining two variables, the Perceived Stress Scale (PSS) and the Age of Acquisition of English (AoAoE), were significantly positively correlated with the SNR50, showing that those with a poorer capacity to discriminate simple English sentences from noise had learnt English later in life and had higher levels of stress, all characteristics of the international students. Local students exhibited significantly lower SNR50 scores and were significantly younger when they first learnt English. No significant correlation was detected between the SNR50 and the students' Visual/Verbal Learning Style (r = −0.023).
Standard multiple regression assessed the relationship between language proficiency and verbal working memory (SNR50) using five variables of L2 proficiency; this model significantly predicted the variance in SNR50 (r2 = 0.335). Hierarchical multiple regression then tested the ability of three independent measures (SNR50, age of acquisition of English, and English proficiency) to predict academic performance in a factor analysis model, which predicted significant performance differences in an assessment requiring communication skills (p = 0.008), but not in a companion assessment requiring knowledge of procedural skills or in other assessments requiring factual knowledge. Thus, impaired vWM for an L2 appears to affect specific communication-based assessments in university medical students.
Affiliation(s)
- Collette Mann, Department of Physiology, Faculty of Medicine, Nursing and Health Sciences, Monash University, Clayton, VIC, Australia
23
Song JH, Skoe E, Banai K, Kraus N. Training to improve hearing speech in noise: biological mechanisms. Cereb Cortex 2011; 22:1180-90. [PMID: 21799207] [DOI: 10.1093/cercor/bhr196]
Abstract
We investigated training-related improvements in listening in noise and the biological mechanisms mediating these improvements. Training-related malleability was examined using a program that incorporates cognitively based listening exercises to improve speech-in-noise perception. Before and after training, auditory brainstem responses to a speech syllable were recorded in quiet and multitalker noise from adults who ranged in their speech-in-noise perceptual ability. Controls did not undergo training but were tested at intervals equivalent to the trained subjects. Trained subjects exhibited significant improvements in speech-in-noise perception that were retained 6 months later. Subcortical responses in noise demonstrated training-related enhancements in the encoding of pitch-related cues (the fundamental frequency and the second harmonic), particularly for the time-varying portion of the syllable that is most vulnerable to perceptual disruption (the formant transition region). Subjects with the largest strength of pitch encoding at pretest showed the greatest perceptual improvement. Controls exhibited neither neurophysiological nor perceptual changes. We provide the first demonstration that short-term training can improve the neural representation of cues important for speech-in-noise perception. These results implicate and delineate biological mechanisms contributing to learning success, and they provide a conceptual advance to our understanding of the kind of training experiences that can influence sensory processing in adulthood.
Affiliation(s)
- Judy H Song, Auditory Neuroscience Laboratory, Northwestern University, 2240 Campus Drive, Evanston, IL 60208, USA
24

25
Taft D, Grayden D, Burkitt A. Across-Frequency Delays Based on the Cochlear Traveling Wave: Enhanced Speech Presentation for Cochlear Implants. IEEE Trans Biomed Eng 2010; 57:596-606. [DOI: 10.1109/tbme.2009.2034014]
26
Ageing without hearing loss or cognitive impairment causes a decrease in speech intelligibility only in informational maskers. Neuroscience 2008; 154:784-95. [DOI: 10.1016/j.neuroscience.2008.03.067]