1. Cosper SH, Männel C, Mueller JL. Auditory associative word learning in adults: The effects of musical experience and stimulus ordering. Brain Cogn 2024; 180:106207. PMID: 39053199. DOI: 10.1016/j.bandc.2024.106207.
Abstract
Evidence for sequential associative word learning in the auditory domain has been found in infants, whereas adults have shown difficulties. To better understand which factors may facilitate auditory associative word learning in adults, we assessed the role of auditory expertise as a learner-related property and of stimulus order as a stimulus-related manipulation in the association of auditory objects with novel labels. In the first experiment, we tested auditorily trained musicians against athletes (a high-performing control group); in the second, we manipulated stimulus ordering, contrasting object-label with label-object presentation. Learning was evaluated from event-related potentials (ERPs) recorded during training and subsequent testing phases, analyzed with a cluster-based permutation approach, as well as from accuracy-judgement responses during test. For musicians, results revealed a late positive component in the ERP during testing, but neither an N400 (400-800 ms) nor behavioral effects at test, whereas athletes showed no effect of learning at all. Moreover, the object-label group exhibited only emerging association effects during training, while the label-object group showed a trend-level late ERP effect (800-1200 ms) during test as well as above-chance accuracy-judgement scores. Our results thus suggest that both the learner-related property of auditory expertise and the stimulus-related manipulation of stimulus ordering modulate auditory associative word learning in adults.
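The cluster-based permutation approach named above can be sketched for a single-channel ERP contrast. This is an illustrative simplification (sign-flip permutations, absolute-t cluster mass over time), not the authors' analysis pipeline; the cluster-forming threshold, permutation count, and data shapes are assumptions.

```python
import numpy as np

def cluster_masses(tvals, thresh):
    """Sum |t| within each contiguous run of supra-threshold samples."""
    masses, current = [], 0.0
    for t in tvals:
        if abs(t) > thresh:
            current += abs(t)
        elif current:
            masses.append(current)
            current = 0.0
    if current:
        masses.append(current)
    return masses

def cluster_permutation_test(diff, thresh=2.0, n_perm=500, seed=0):
    """diff: (n_subjects, n_times) per-subject condition differences.
    Returns a p-value for the largest observed cluster mass, built from
    a sign-flip permutation null distribution."""
    rng = np.random.default_rng(seed)
    n = diff.shape[0]

    def tvals(d):
        # one-sample t-statistic at each time point
        return d.mean(0) / (d.std(0, ddof=1) / np.sqrt(n))

    obs = max(cluster_masses(tvals(diff), thresh), default=0.0)
    null = []
    for _ in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(n, 1))  # flip each subject
        null.append(max(cluster_masses(tvals(diff * signs), thresh),
                        default=0.0))
    return (1 + sum(m >= obs for m in null)) / (n_perm + 1)
```

A strong effect confined to a time window produces one large cluster whose mass is rarely matched under sign flips, yielding a small p-value without any per-sample multiple-comparison correction.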
Affiliations
- Samuel H Cosper: Chair of Lifespan Developmental Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany
- Claudia Männel: Department of Audiology and Phoniatrics, Charité-Universitätsmedizin Berlin, Berlin, Germany; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Jutta L Mueller: Department of Linguistics, University of Vienna, Vienna, Austria
2. Fantoni M, Federici A, Camponogara I, Handjaras G, Martinelli A, Bednaya E, Ricciardi E, Pavani F, Bottari D. The impact of face masks on face-to-face neural tracking of speech: Auditory and visual obstacles. Heliyon 2024; 10:e34860. PMID: 39157360. PMCID: PMC11328033. DOI: 10.1016/j.heliyon.2024.e34860.
Abstract
Face masks provide fundamental protection against the transmission of respiratory viruses but hamper communication. We quantified the auditory and visual obstacles that face masks impose on communication by measuring the neural tracking of speech. To this end, we recorded the EEG while participants were exposed to naturalistic audio-visual speech, embedded in 5-talker noise, in three contexts: (i) no mask (audio-visual information fully available), (ii) virtual mask (occluded lips, but intact audio), and (iii) real mask (occluded lips and degraded audio). Neural tracking of lip movements and of the sound envelope of speech was measured through backward modeling, that is, by reconstructing stimulus properties from neural activity. Behaviorally, face masks increased perceived listening difficulty and phonological errors in speech content retrieval. At the neural level, we observed that occlusion of the mouth abolished lip tracking and dampened neural tracking of the speech envelope at the earliest processing stages. By contrast, the degraded acoustic information caused by face-mask filtering altered neural tracking of the speech envelope at later processing stages. Finally, a consistent link emerged between the increase in perceived listening difficulty and the drop in reconstruction performance of the speech envelope when attending to a speaker wearing a face mask. The results clearly dissociated the visual and auditory impact of face masks on the neural tracking of speech. While the visual obstacle related to face masks hampered the ability to predict and integrate audio-visual speech, the auditory filter generated by face masks impacted neural processing stages typically associated with auditory selective attention. The link between perceived difficulty and the drop in neural tracking also provides evidence of the impact of face masks on the metacognitive processes underlying face-to-face communication.
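Backward modeling, i.e., reconstructing stimulus properties from neural activity, can be sketched as a ridge-regression decoder over time-lagged EEG channels. This is a minimal sketch, not the authors' pipeline; the lag range, regularization strength, and the use of training-set correlation as the accuracy metric are assumptions.

```python
import numpy as np

def lagged(X, max_lag):
    """Stack time-lagged copies of multichannel data X (n_times, n_ch)."""
    n_t, n_ch = X.shape
    out = np.zeros((n_t, n_ch * (max_lag + 1)))
    for k in range(max_lag + 1):
        # row t of `out` holds X[t - k] for lag k (zeros before the start)
        out[k:, k * n_ch:(k + 1) * n_ch] = X[:n_t - k]
    return out

def backward_model(eeg, envelope, max_lag=10, alpha=1.0):
    """Fit a ridge decoder mapping lagged EEG to the speech envelope;
    return the decoder weights and the reconstruction accuracy
    (Pearson r between reconstructed and actual envelope)."""
    X = lagged(eeg, max_lag)
    # closed-form ridge solution: (X'X + aI)^-1 X'y
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]),
                        X.T @ envelope)
    rec = X @ w
    r = np.corrcoef(rec, envelope)[0, 1]
    return w, r
```

In practice the reconstruction accuracy would be computed on held-out data via cross-validation; the drop in this r between mask conditions is the kind of effect the study reports.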
Affiliations
- M. Fantoni: MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- A. Federici: MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- G. Handjaras: MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- E. Bednaya: MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- E. Ricciardi: MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- F. Pavani: Centro Interdipartimentale Mente/Cervello–CIMEC, University of Trento, Italy; Centro Interuniversitario di Ricerca “Cognizione Linguaggio e Sordità”–CIRCLeS, University of Trento, Italy
- D. Bottari: MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
3. Bhatt IS, Garay JAR, Bhagavan SG, Ingalls V, Dias R, Torkamani A. A genome-wide association study reveals a polygenic architecture of speech-in-noise deficits in individuals with self-reported normal hearing. Sci Rep 2024; 14:13089. PMID: 38849415. PMCID: PMC11161523. DOI: 10.1038/s41598-024-63972-2.
Abstract
Speech-in-noise (SIN) perception is a primary complaint of individuals with audiometric hearing loss. SIN performance varies drastically, even among individuals with normal hearing. The present genome-wide association study (GWAS) investigated the genetic basis of SIN deficits in individuals with self-reported normal hearing in quiet situations. The GWAS was performed on 279,911 individuals from the UK Biobank cohort, 58,847 of whom reported SIN deficits despite reporting normal hearing in quiet. It identified 996 single-nucleotide polymorphisms (SNPs) achieving genome-wide significance (p < 5 × 10^-8) across four genomic loci, and a further 720 SNPs across 21 loci achieving suggestive significance (p < 10^-6). GWAS signals were enriched in brain tissues, such as the anterior cingulate cortex, dorsolateral prefrontal cortex, entorhinal cortex, frontal cortex, hippocampus, and inferior temporal cortex. Cochlear cell types revealed no significant association with SIN deficits. SIN deficits were associated with various health traits, including neuropsychiatric, sensory, cognitive, metabolic, cardiovascular, and inflammatory conditions. A replication analysis was conducted on 242 healthy young adults using self-reported speech perception, hearing thresholds (0.25-16 kHz), and distortion-product otoacoustic emissions (1-16 kHz). Seventy-three SNPs were replicated with the self-reported speech perception measure; 211 were replicated with at least one audiological measure, and 66 with at least two. Twelve SNPs near or within MAPT, GRM3, and HLA-DQA1 were replicated for all audiological measures. The present study highlights a polygenic architecture underlying SIN deficits in individuals with self-reported normal hearing.
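The two thresholds used above are conventional in GWAS: genome-wide significance at p < 5 × 10^-8 (roughly a Bonferroni correction for ~10^6 independent common variants) and suggestive significance at p < 10^-6. Operationally this is simple tiered filtering of per-SNP p-values, sketched below; the SNP identifiers are hypothetical placeholders, not variants from the study.

```python
def classify_snps(pvals, genome_wide=5e-8, suggestive=1e-6):
    """Split SNP p-values into genome-wide-significant and
    suggestive-but-not-significant tiers."""
    sig = {snp for snp, p in pvals.items() if p < genome_wide}
    sugg = {snp for snp, p in pvals.items() if genome_wide <= p < suggestive}
    return sig, sugg
```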
Affiliations
- Ishan Sunilkumar Bhatt: Department of Communication Sciences and Disorders, University of Iowa, 250 Hawkins Dr, Iowa City, IA, 52242, USA
- Juan Antonio Raygoza Garay: Department of Communication Sciences and Disorders, University of Iowa, 250 Hawkins Dr, Iowa City, IA, 52242, USA; Holden Comprehensive Cancer Center, University of Iowa, Iowa City, IA, 52242, USA
- Srividya Grama Bhagavan: Department of Communication Sciences and Disorders, University of Iowa, 250 Hawkins Dr, Iowa City, IA, 52242, USA
- Valerie Ingalls: Department of Communication Sciences and Disorders, University of Iowa, 250 Hawkins Dr, Iowa City, IA, 52242, USA
- Raquel Dias: Department of Microbiology and Cell Science, University of Florida, Gainesville, FL, 32608, USA
- Ali Torkamani: Department of Integrative Structural and Computational Biology, Scripps Research Institute, La Jolla, CA, 92037, USA
4. Smith ED, Holt LL, Dick F. A one-man bilingual cocktail party: linguistic and non-linguistic effects on bilinguals' speech recognition in Mandarin and English. Cogn Res Princ Implic 2024; 9:35. PMID: 38834918. DOI: 10.1186/s41235-024-00562-w.
Abstract
Multilingual speakers can find speech recognition in everyday environments like restaurants and open-plan offices particularly challenging. In a world where speaking multiple languages is increasingly common, effective clinical and educational interventions will require a better understanding of how factors like multilingual contexts and listeners' language proficiency interact with adverse listening environments. For example, word and phrase recognition is facilitated when competing voices speak different languages. Is this due to a "release from masking" driven by lower-level acoustic differences between languages and talkers, or by higher-level cognitive and linguistic factors? To address this question, we created a "one-man bilingual cocktail party" selective attention task using English and Mandarin speech from one bilingual talker to reduce low-level acoustic cues. In Experiment 1, 58 listeners more accurately recognized English targets when distracting speech was Mandarin compared to English. Bilingual Mandarin-English listeners experienced significantly more interference and intrusions from the Mandarin distractor than did English listeners, exacerbated by challenging target-to-masker ratios. In Experiment 2, 29 Mandarin-English bilingual listeners exhibited linguistic release from masking in both languages. Bilinguals experienced greater release from masking when attending to English, confirming an influence of linguistic knowledge on the "cocktail party" paradigm that is separate from primarily energetic masking effects. Effects of higher-order language processing and expertise emerged only in the most demanding target-to-masker contexts. The "one-man bilingual cocktail party" establishes a useful tool for future investigations and characterization of communication challenges in the large and growing worldwide community of Mandarin-English bilinguals.
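Target-to-masker ratio (TMR) manipulations of the kind described above come down to scaling the masker relative to the target before mixing. A minimal sketch, assuming RMS-based level matching (the study's exact calibration procedure is not specified here); the signals in the test are synthetic placeholders:

```python
import numpy as np

def mix_at_tmr(target, masker, tmr_db):
    """Scale the masker so the target-to-masker ratio (RMS, in dB)
    equals tmr_db, then return the mixture."""
    def rms(x):
        return np.sqrt(np.mean(x ** 2))
    # gain that places the masker tmr_db below (or above) the target
    gain = rms(target) / (rms(masker) * 10 ** (tmr_db / 20))
    return target + gain * masker
```

Negative TMRs (masker louder than target) are the "most demanding" contexts where the higher-order linguistic effects in the abstract emerged.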
Affiliations
- Erin D Smith: Department of Psychology, Carnegie Mellon University, Pittsburgh, USA
- Lori L Holt: College of Liberal Arts, Department of Psychology, The University of Texas at Austin, Sarah M. & Charles E. Seay Building, 108 E Dean Keeton St, Austin, TX, 78712, USA
- Frederic Dick: Experimental Psychology, University College London, London, United Kingdom
5. Creff G, Lambert C, Coudert P, Pean V, Laurent S, Godey B. Comparison of Tonotopic and Default Frequency Fitting for Speech Understanding in Noise in New Cochlear Implantees: A Prospective, Randomized, Double-Blind, Cross-Over Study. Ear Hear 2024; 45:35-52. PMID: 37823850. DOI: 10.1097/aud.0000000000001423.
Abstract
OBJECTIVES While cochlear implants (CIs) have provided benefits for speech recognition in quiet for subjects with severe-to-profound hearing loss, speech recognition in noise remains challenging. A body of evidence suggests that reducing frequency-to-place mismatch may positively affect speech perception. Thus, a fitting method based on a tonotopic map may improve speech perception results in quiet and in noise. The aim of our study was to assess the impact of a tonotopic map on speech perception in noise and in quiet in new CI users. DESIGN A prospective, randomized, double-blind, two-period cross-over study of 26 new CI users was performed over a 6-month period. New CI users older than 18 years with bilateral severe-to-profound sensorineural hearing loss or complete hearing loss for less than 5 years were recruited at the University Hospital Centre of Rennes, France. An anatomical tonotopic map was created using postoperative flat-panel computed tomography and reconstruction software based on the Greenwood function. Each participant was randomized to receive a conventional map followed by a tonotopic map, or vice versa. Each setting was maintained for 6 weeks, at the end of which participants performed speech perception tasks. The primary outcome measure was speech recognition in noise. Participants were allocated to sequences by block randomization of size two with a 1:1 ratio (CONSORT guidelines). Participants and outcome assessors were blinded to the intervention. RESULTS Thirteen participants were randomized to each sequence. Two of the 26 participants recruited (one in each sequence) had to be excluded due to the COVID-19 pandemic, leaving 24 participants in the analysis. Speech recognition in noise was significantly better with the tonotopic fitting at all signal-to-noise ratio (SNR) levels tested [SNR = +9 dB, p = 0.002, mean effect (ME) = 12.1%, 95% confidence interval (95% CI) = 4.9 to 19.2, standardized effect size (SES) = 0.71; SNR = +6 dB, p < 0.001, ME = 16.3%, 95% CI = 9.8 to 22.7, SES = 1.07; SNR = +3 dB, p < 0.001, ME = 13.8%, 95% CI = 6.9 to 20.6, SES = 0.84; SNR = 0 dB, p = 0.003, ME = 10.8%, 95% CI = 4.1 to 17.6, SES = 0.68]. Neither period nor interaction effects were observed for any signal level. Speech recognition in quiet (p = 0.66) and tonal audiometry (p = 0.203) did not differ significantly between the two settings. Ninety-two percent of participants kept the tonotopy-based map after the study period. No correlation was found between speech-in-noise perception and age, duration of hearing deprivation, angular insertion depth, or the position or width of the frequency filters allocated to the electrodes. CONCLUSION For new CI users, tonotopic fitting appears more efficient than default frequency fitting, allowing better speech recognition in noise without compromising understanding in quiet.
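The Greenwood function underlying the anatomical tonotopic map relates relative position along the cochlea to characteristic frequency, F = A(10^(ax) − k), with the usual human constants A ≈ 165.4, a ≈ 2.1, k ≈ 0.88. The sketch below is illustrative only: the linear conversion from angular insertion depth to fractional cochlear length and the ~900° total spiral are simplifying assumptions, not the reconstruction software's actual method.

```python
import numpy as np

def greenwood_frequency(x, A=165.4, a=2.1, k=0.88):
    """Characteristic frequency (Hz) at relative cochlear position x
    (0 = apex, 1 = base), via Greenwood's F = A * (10**(a*x) - k)."""
    return A * (10 ** (a * np.asarray(x)) - k)

def electrode_center_frequencies(insertion_depths_deg, total_deg=900.0):
    """Map electrode angular insertion depths (degrees from the round
    window) to Greenwood frequencies, naively assuming a linear
    angle-to-length relation over a ~900-degree cochlear spiral."""
    x = 1.0 - np.asarray(insertion_depths_deg) / total_deg  # deeper -> lower x
    return greenwood_frequency(x)
```

With these constants the apex maps to roughly 20 Hz and the base to roughly 20 kHz, and deeper electrodes receive lower center frequencies, which is the mismatch a tonotopic fitting tries to respect.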
Affiliations
- Gwenaelle Creff: Department of Otolaryngology-Head and Neck Surgery (HNS), University Hospital, Rennes, France; MediCIS, LTSI (Image and Signal Processing Laboratory), INSERM, U1099, Rennes, France
- Cassandre Lambert: Department of Otolaryngology-Head and Neck Surgery (HNS), University Hospital, Rennes, France
- Paul Coudert: Department of Otolaryngology-Head and Neck Surgery (HNS), University Hospital, Rennes, France
- Benoit Godey: Department of Otolaryngology-Head and Neck Surgery (HNS), University Hospital, Rennes, France; MediCIS, LTSI (Image and Signal Processing Laboratory), INSERM, U1099, Rennes, France; Hearing Aid Academy, Javene, France
6. Mohammadi Y, Graversen C, Østergaard J, Andersen OK, Reichenbach T. Phase-locking of Neural Activity to the Envelope of Speech in the Delta Frequency Band Reflects Differences between Word Lists and Sentences. J Cogn Neurosci 2023; 35:1301-1311. PMID: 37379482. DOI: 10.1162/jocn_a_02016.
Abstract
The envelope of a speech signal is tracked by neural activity in the cerebral cortex. The cortical tracking occurs mainly in two frequency bands, theta (4-8 Hz) and delta (1-4 Hz). Tracking in the faster theta band has been mostly associated with lower-level acoustic processing, such as the parsing of syllables, whereas the slower tracking in the delta band relates to higher-level linguistic information of words and word sequences. However, much regarding the more specific association between cortical tracking and acoustic as well as linguistic processing remains to be uncovered. Here, we recorded EEG responses to both meaningful sentences and random word lists in different levels of signal-to-noise ratios (SNRs) that lead to different levels of speech comprehension as well as listening effort. We then related the neural signals to the acoustic stimuli by computing the phase-locking value (PLV) between the EEG recordings and the speech envelope. We found that the PLV in the delta band increases with increasing SNR for sentences but not for the random word lists, showing that the PLV in this frequency band reflects linguistic information. When attempting to disentangle the effects of SNR, speech comprehension, and listening effort, we observed a trend that the PLV in the delta band might reflect listening effort rather than the other two variables, although the effect was not statistically significant. In summary, our study shows that the PLV in the delta band reflects linguistic information and might be related to listening effort.
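The phase-locking value between an EEG channel and the speech envelope can be sketched as band-pass filtering both signals, extracting instantaneous phase with the Hilbert transform, and averaging the unit phasors of the phase difference. Assumptions in this sketch (not necessarily the authors' settings): a 4th-order Butterworth filter, single-channel signals, and the delta band (1-4 Hz) as the default.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(eeg, envelope, fs, band=(1.0, 4.0)):
    """PLV = |mean(exp(i * (phase_eeg - phase_env)))| in `band`.
    Returns a value in [0, 1]; 1 means perfectly constant phase lag."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    ph_eeg = np.angle(hilbert(filtfilt(b, a, eeg)))
    ph_env = np.angle(hilbert(filtfilt(b, a, envelope)))
    return np.abs(np.mean(np.exp(1j * (ph_eeg - ph_env))))
```

Because the PLV discards amplitude, it isolates the consistency of phase tracking, which is what the abstract reports increasing with SNR for sentences but not word lists.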
7. Xu S, Fan J, Zhang H, Zhang M, Zhao H, Jiang X, Ding H, Zhang Y. Hearing Assistive Technology Facilitates Sentence-in-Noise Recognition in Chinese Children With Autism Spectrum Disorder. J Speech Lang Hear Res 2023:1-21. PMID: 37418749. DOI: 10.1044/2023_jslhr-22-00589.
Abstract
PURPOSE Hearing assistive technology (HAT) has been shown to be a viable solution to the speech-in-noise perception (SPIN) issue in children with autism spectrum disorder (ASD); however, little is known about its efficacy in tonal language speakers. This study compared sentence-level SPIN performance between Chinese children with ASD and neurotypical (NT) children and evaluated HAT use in improving SPIN performance and easing SPIN difficulty. METHOD Children with ASD (n = 26) and NT children (n = 19) aged 6-12 years performed two adaptive tests in steady-state noise and three fixed-level tests in quiet and steady-state noise with and without using HAT. Speech recognition thresholds (SRTs) and accuracy rates were assessed using adaptive and fixed-level tests, respectively. Parents or teachers of the ASD group completed a questionnaire regarding children's listening difficulty under six circumstances before and after a 10-day trial period of HAT use. RESULTS Although the two groups of children had comparable SRTs, the ASD group showed a significantly lower SPIN accuracy rate than the NT group. Also, a significant impact of noise was found in the ASD group's accuracy rate but not in that of the NT group. There was a general improvement in the ASD group's SPIN performance with HAT and a decrease in their listening difficulty ratings across all conditions after the device trial. CONCLUSIONS The findings indicated inadequate SPIN in the ASD group using a relatively sensitive measure to gauge SPIN performance among children. The markedly increased accuracy rate in noise during HAT-on sessions for the ASD group confirmed the feasibility of HAT for improving SPIN performance in controlled laboratory settings, and the reduced post-use ratings of listening difficulty further confirmed the benefits of HAT use in daily scenarios.
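Adaptive tests that estimate speech recognition thresholds (SRTs) typically follow a staircase rule that adjusts SNR trial by trial. Below is a generic 1-up/1-down sketch converging on the ~50%-correct point; the step size, reversal count, and response interface are assumptions for illustration, not this study's protocol.

```python
def adaptive_srt(respond, start_snr=10.0, step=2.0, n_reversals=8):
    """Simple 1-up/1-down adaptive track. `respond(snr)` returns True
    for a correct trial. SNR decreases after a correct response and
    increases after an error; the SRT estimate is the mean SNR at the
    recorded reversal points."""
    snr, prev_correct, reversals = start_snr, None, []
    while len(reversals) < n_reversals:
        correct = respond(snr)
        if prev_correct is not None and correct != prev_correct:
            reversals.append(snr)  # direction change: record a reversal
        snr += -step if correct else step
        prev_correct = correct
    return sum(reversals) / len(reversals)
```

With a deterministic listener whose threshold is 0 dB, the track oscillates just above threshold, so the reversal mean lands near (here, exactly half a step above) the true value.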
Affiliations
- Suyun Xu: Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Juan Fan: Department of Child and Adolescent Psychiatry, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, China
- Hua Zhang: Department of Child and Adolescent Psychiatry, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, China
- Minyue Zhang: Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Hang Zhao: Faculty of Education, East China Normal University, Shanghai
- Xiaoming Jiang: Institute of Linguistics, Shanghai International Studies University, China
- Hongwei Ding: Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Yang Zhang: Department of Speech-Language-Hearing Sciences and Masonic Institute for the Developing Brain, University of Minnesota, Minneapolis
8. Makov S, Pinto D, Har-Shai Yahav P, Miller LM, Zion Golumbic E. "Unattended, distracting or irrelevant": Theoretical implications of terminological choices in auditory selective attention research. Cognition 2023; 231:105313. PMID: 36344304. DOI: 10.1016/j.cognition.2022.105313.
Abstract
For seventy years, auditory selective attention research has focused on the cognitive mechanisms that prioritize the processing of a 'main' task-relevant stimulus in the presence of 'other' stimuli. However, a closer look at this body of literature reveals deep empirical inconsistencies and theoretical confusion regarding the extent to which this 'other' stimulus is processed. We argue that many key debates regarding attention arise, at least in part, from inappropriate terminological choices for experimental variables that may not accurately map onto the cognitive constructs they are meant to describe. Here we critically review the most common and most disruptive terminological ambiguities, differentiate between methodology-based and theory-derived terms, and unpack the theoretical assumptions underlying different terminological choices. In particular, we offer an in-depth analysis of the terms 'unattended' and 'distractor' and demonstrate how their use can lead to conflicting theoretical inferences. We also offer a framework for thinking about terminology in a more productive and precise way, in the hope of fostering clearer debates and promoting more nuanced and accurate cognitive models of selective attention.
Affiliations
- Shiri Makov: The Gonda Multidisciplinary Center for Brain Research, Bar Ilan University, Israel
- Danna Pinto: The Gonda Multidisciplinary Center for Brain Research, Bar Ilan University, Israel
- Paz Har-Shai Yahav: The Gonda Multidisciplinary Center for Brain Research, Bar Ilan University, Israel
- Lee M Miller: The Center for Mind and Brain, University of California, Davis, CA, United States of America; Department of Neurobiology, Physiology, & Behavior, University of California, Davis, CA, United States of America; Department of Otolaryngology / Head and Neck Surgery, University of California, Davis, CA, United States of America
- Elana Zion Golumbic: The Gonda Multidisciplinary Center for Brain Research, Bar Ilan University, Israel
9. Idiopathic sudden sensorineural hearing loss: A critique on corticosteroid therapy. Hear Res 2022; 422:108565. PMID: 35816890. DOI: 10.1016/j.heares.2022.108565.
Abstract
Idiopathic sudden sensorineural hearing loss (ISSNHL) is a condition affecting 5-30 per 100,000 individuals, with the potential to significantly reduce quality of life. The true incidence of this condition is not known because it often goes undiagnosed and/or recovers within a few days. ISSNHL is defined as a ≥30 dB loss of hearing over 3 consecutive audiometric octaves within 3 days with no known cause. The disorder is typically unilateral, and most cases spontaneously recover to functional hearing within 30 days. High-frequency losses, ageing, and vertigo are associated with a poorer prognosis. Multiple causes of ISSNHL have been postulated; the most commonly proposed are vascular obstruction, viral infection, and labyrinthine membrane breaks. Corticosteroids are the standard treatment option, but this practice is not without opposition. Post-mortem analyses of temporal bones from ISSNHL cases have been inconclusive. This report analyzed corticosteroid-treatment studies of ISSNHL that met strict inclusion criteria and identified a number of methodological shortcomings that compromise the interpretation of results. We discuss the issues and conclude that the data do not support present treatment practices. The current state of evidence on ISSNHL calls for a multi-institutional, randomized, double-blind trial with validated outcome measures to provide science-based treatment guidance.