1. Beckers L, Philips B, Huinck W, Mylanus E, Büchner A, Kral A. Auditory working memory in noise in cochlear implant users: Insights from behavioural and neuronal measures. Hear Res 2025;456:109167. PMID: 39719815. DOI: 10.1016/j.heares.2024.109167.
Abstract
OBJECTIVE: We investigated auditory working memory using behavioural measures and electroencephalography (EEG) in adult cochlear implant (CI) users with varying degrees of CI performance.
METHODS: Twenty-four adult CI listeners (age: M = 61.38, SD = 12.45) performed the Sternberg auditory-digit-in-working-memory task while EEG, accuracy, and promptness were recorded. Participants were presented with 2, 4, or 6 digits at signal-to-noise ratios (SNRs) of 0, +5, and +10 dB and had to identify whether a probe stimulus was present in the preceding sequence. ANOVA models were used to compare conditions.
RESULTS: Increasing memory load (ML) decreased task performance, and CI performance interacted with ML and SNR. Centro-parietal alpha power increased during memory encoding but did not differ between conditions. Frontal alpha power correlated positively with accuracy in the conditions most affected by SNR (r = 0.57, r = 0.52), and frontal theta power in the conditions most affected by ML (r = 0.55, r = 0.57).
CONCLUSIONS: While parietal alpha power is modulated by the task, it is frontal alpha that relates quantitatively to sensory aspects of processing (noise) and frontal theta to memory load in this group of CI listeners.
SIGNIFICANCE: Alpha and theta show distinct relationships to behaviour, providing additional insight into neurocognitive (auditory working-memory) processes in CI users.
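The Sternberg probe-recognition procedure described in the abstract can be sketched in a few lines (a minimal illustration of the trial logic only; the digit range and the 50% probe rate are assumptions for illustration, not details from the paper):

```python
import random

def sternberg_trial(memory_load, rng):
    """One trial of a Sternberg-style digit task: present `memory_load`
    distinct digits, then a probe; the listener judges whether the probe
    occurred in the sequence."""
    sequence = rng.sample(range(10), memory_load)  # 2, 4, or 6 digits
    probe_present = rng.random() < 0.5             # assumed 50% "yes" trials
    if probe_present:
        probe = rng.choice(sequence)
    else:
        probe = rng.choice([d for d in range(10) if d not in sequence])
    return sequence, probe, probe_present

# Accuracy is the proportion of trials on which the listener's "present"
# judgement matches probe_present, tabulated per memory load and SNR.
```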
Affiliation(s)
- Loes Beckers
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud university medical center, Geert Grooteplein Zuid 10, 6525 GA Nijmegen, The Netherlands; Cochlear Ltd., Schaliënhoevedreef 20 Building i, B-2800 Mechelen, Belgium.
- Birgit Philips
- Cochlear Ltd., Schaliënhoevedreef 20 Building i, B-2800 Mechelen, Belgium.
- Wendy Huinck
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud university medical center, Geert Grooteplein Zuid 10, 6525 GA Nijmegen, The Netherlands.
- Emmanuel Mylanus
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud university medical center, Geert Grooteplein Zuid 10, 6525 GA Nijmegen, The Netherlands.
- Andreas Büchner
- Clinics of Otolaryngology, Hannover Medical School, Hearing Center Hannover (DHZ), Karl-Wiechert-Allee 3, 30625 Hannover, Germany.
- Andrej Kral
- Clinics of Otolaryngology, Hannover Medical School, Hearing Center Hannover (DHZ), Karl-Wiechert-Allee 3, 30625 Hannover, Germany; Institute of AudioNeuroTechnology (VIANNA) & Dept. of Experimental Otology, Hannover Medical School, Stadtfelddamm 34, 30625 Hannover, Germany.
2. Moberly AC, Du L, Tamati TN. Individual Differences in the Recognition of Spectrally Degraded Speech: Associations With Neurocognitive Functions in Adult Cochlear Implant Users and With Noise-Vocoded Simulations. Trends Hear 2025;29:23312165241312449. PMID: 39819389. PMCID: PMC11742172. DOI: 10.1177/23312165241312449.
Abstract
When listening to speech under adverse conditions, listeners compensate using neurocognitive resources. A clinically relevant form of adverse listening is listening through a cochlear implant (CI), which provides a spectrally degraded signal. CI listening is often simulated through noise-vocoding. This study investigated the neurocognitive mechanisms supporting recognition of spectrally degraded speech in adult CI users and normal-hearing (NH) peers listening to noise-vocoded speech, with the hypothesis that an overlapping set of neurocognitive functions would contribute to speech recognition in both groups. Ninety-seven adults with either a CI (54 individuals, mean age 66.6 years, range 45-87 years) or age-normal hearing (43 individuals, mean age 66.8 years, range 50-81 years) participated. Listeners heard materials varying in linguistic complexity, consisting of isolated words, meaningful sentences, anomalous sentences, high-variability sentences, and audiovisually (AV) presented sentences. Participants were also tested for vocabulary knowledge, nonverbal reasoning, working memory capacity, inhibition-concentration, and speed of lexical and phonological access. Linear regression analyses with robust standard errors were performed, regressing performance on each speech recognition task on the neurocognitive measures. Nonverbal reasoning contributed to meaningful sentence recognition in NH peers and to anomalous sentence recognition in CI users. Speed of lexical access contributed to performance on most speech tasks for CI users but not for NH peers. Finally, inhibition-concentration and vocabulary knowledge contributed to AV sentence recognition in NH listeners alone. Findings suggest that the complexity of speech materials may determine the particular contributions of neurocognitive skills, and that NH processing of noise-vocoded speech may not represent how CI listeners process speech.
Affiliation(s)
- Aaron C. Moberly
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Liping Du
- Department of Biostatistics, Vanderbilt University Medical Center, Nashville, TN, USA
- Terrin N. Tamati
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
3. Amini AE, Naples JG, Cortina L, Hwa T, Morcos M, Castellanos I, Moberly AC. A Scoping Review and Meta-Analysis of the Relations Between Cognition and Cochlear Implant Outcomes and the Effect of Quiet Versus Noise Testing Conditions. Ear Hear 2024;45:1339-1352. PMID: 38953851. PMCID: PMC11493527. DOI: 10.1097/aud.0000000000001527.
Abstract
OBJECTIVES: Evidence continues to emerge of associations between cochlear implant (CI) outcomes and cognitive functions in postlingually deafened adults. While multiple factors appear to affect these associations, the impact of the background condition of speech recognition testing (i.e., in quiet versus noise) has not been systematically explored. The two aims of this study were to (1) identify associations between speech recognition following cochlear implantation and performance on cognitive tasks, and (2) investigate the impact of testing speech in quiet versus noise on these associations. Ultimately, we want to understand the conditions that impact this complex relationship between CI outcomes and cognition.
DESIGN: A scoping review following Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines was performed on published literature evaluating the relation between outcomes of cochlear implantation and cognition. The review covers 39 papers that reported associations between over 30 cognitive assessments and speech recognition tests in adult patients with CIs. Six cognitive domains were evaluated: Global Cognition, Inhibition-Concentration, Memory and Learning, Controlled Fluency, Verbal Fluency, and Visuospatial Organization. Meta-analysis was conducted on three cognitive assessments among 12 studies to evaluate relations with speech recognition outcomes. Subgroup analyses were performed to identify whether speech recognition testing in quiet versus in background noise affected its association with cognitive performance.
RESULTS: Significant associations between cognition and speech recognition in a background of quiet or noise were found in 69% of studies. Tests of Global Cognition and Inhibition-Concentration skills yielded the highest overall frequency of significant associations with speech recognition (45% and 57%, respectively). Despite the modest proportion of significant associations reported, pooling effect sizes across samples through meta-analysis revealed moderate positive correlations between postoperative speech recognition and tests of Global Cognition (r = +0.37, p < 0.01) and Verbal Fluency (r = +0.44, p < 0.01). Tests of Memory and Learning were the most frequently used in the CI setting (26 of 39 included studies), yet meta-analysis revealed nonsignificant associations with speech recognition performance in a background of quiet (r = +0.30, p = 0.18) and of noise (r = -0.06, p = 0.78).
CONCLUSIONS: Background conditions of speech recognition testing may influence the relation between speech recognition outcomes and cognition, and the magnitude of this effect appears to vary with the cognitive construct being assessed. Overall, Global Cognition and Inhibition-Concentration skills are potentially useful in explaining speech recognition skills following cochlear implantation. Future work should continue to evaluate these relations to appropriately unify cognitive testing opportunities in the setting of cochlear implantation.
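Pooled correlations like those reported in this meta-analysis are typically obtained by averaging Fisher z-transformed per-study correlations with inverse-variance weights. A minimal sketch (a fixed-effect form shown for brevity; the per-study correlations and sample sizes below are invented for illustration):

```python
import math

def pool_correlations(studies):
    """Pool per-study correlations via the Fisher z transform with
    inverse-variance weights; `studies` is a list of (r, n) pairs."""
    # Fisher z stabilises the variance of r: var(z) is approximately 1/(n - 3)
    zs = [math.atanh(r) for r, n in studies]
    ws = [n - 3.0 for r, n in studies]
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(z_bar)  # back-transform the pooled z to a correlation

# Invented example: three studies with correlations 0.30, 0.45, and 0.35
pooled = pool_correlations([(0.30, 40), (0.45, 60), (0.35, 25)])
```

A random-effects pool, as used in the review, additionally adds a between-study variance component to each weight.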
Affiliation(s)
- Andrew E Amini
- Department of Otolaryngology Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts, USA
- These authors contributed equally to this work
- James G Naples
- Division of Otolaryngology-Head and Neck Surgery, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts, USA
- These authors contributed equally to this work
- Luis Cortina
- Department of Otolaryngology Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts, USA
- Tiffany Hwa
- Division of Otology, Neurotology, & Lateral Skull Base Surgery, Department of Otolaryngology-Head and Neck Surgery, Hospital of the University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Mary Morcos
- Department of Otolaryngology Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts, USA
- Irina Castellanos
- Department of Otolaryngology-Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA
- Aaron C Moberly
- Department of Otolaryngology-Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
4. Stronks HC, Tops AL, Quach KW, Briaire JJ, Frijns JHM. Listening Effort Measured With Pupillometry in Cochlear Implant Users Depends on Sound Level, But Not on the Signal to Noise Ratio When Using the Matrix Test. Ear Hear 2024;45:1461-1473. PMID: 38886888. PMCID: PMC11486951. DOI: 10.1097/aud.0000000000001529.
Abstract
OBJECTIVES: We investigated whether listening effort depends on task difficulty for cochlear implant (CI) users when using the Matrix speech-in-noise test. To this end, we measured peak pupil dilation (PPD) at a wide range of signal-to-noise ratios (SNRs) by systematically changing the noise level at a constant speech level, and vice versa.
DESIGN: A group of mostly elderly CI users performed the Dutch/Flemish Matrix test in quiet and in multitalker babble at different SNRs. SNRs were set relative to the speech-recognition threshold (SRT), namely at SRT and at 5 and 10 dB above SRT (0, +5, and +10 dB re SRT). The latter two conditions were obtained either by varying speech level (at a fixed noise level of 60 dBA) or by varying noise level (at a fixed speech level). We compared these PPDs with those of a group of typical-hearing (TH) listeners. In addition, listening effort was assessed with subjective ratings on a Likert scale.
RESULTS: PPD for the CI group did not depend significantly on SNR, whereas SNR significantly affected PPDs for TH listeners. Subjective effort ratings depended significantly on SNR for both groups. For CI users, PPDs were significantly larger and effort was rated higher when speech level was varied and noise was fixed. By contrast, for TH listeners effort ratings were significantly higher and performance scores lower when noise level was varied and speech was fixed.
CONCLUSIONS: The lack of a significant effect of varying SNR on PPD suggests that the Matrix test may not be feasible for measuring listening effort with pupillometric measures in CI users. A rating test appeared more promising in this population, corroborating earlier reports that subjective measures may reflect different dimensions of listening effort than pupil dilation. Establishing the SNR by varying speech or noise level can have subtle but significant effects on measures of listening effort, and these effects can differ between TH listeners and CI users.
Affiliation(s)
- Hendrik Christiaan Stronks
- Department of Otorhinolaryngology and Head & Neck Surgery, Leiden University Medical Center, Leiden, the Netherlands
- Leiden Institute for Brain and Cognition, Leiden, the Netherlands
- Annemijn Laura Tops
- Department of Otorhinolaryngology and Head & Neck Surgery, Leiden University Medical Center, Leiden, the Netherlands
- Kwong Wing Quach
- Department of Otorhinolaryngology and Head & Neck Surgery, Leiden University Medical Center, Leiden, the Netherlands
- Jeroen Johannes Briaire
- Department of Otorhinolaryngology and Head & Neck Surgery, Leiden University Medical Center, Leiden, the Netherlands
- Johan Hubertus Maria Frijns
- Department of Otorhinolaryngology and Head & Neck Surgery, Leiden University Medical Center, Leiden, the Netherlands
- Leiden Institute for Brain and Cognition, Leiden, the Netherlands
- Department of Bioelectronics, Delft University of Technology, Delft, the Netherlands
5. McMurray B, Smith FX, Huffman M, Rooff K, Muegge JB, Jeppsen C, Kutlu E, Colby S. Underlying dimensions of real-time word recognition in cochlear implant users. Nat Commun 2024;15:7382. PMID: 39209837. PMCID: PMC11362525. DOI: 10.1038/s41467-024-51514-3.
Abstract
Word recognition is a gateway to language, linking sound to meaning. Prior work has characterized its cognitive mechanisms as a form of competition between similar-sounding words. However, it has not identified dimensions along which this competition varies across people. We sought to identify these dimensions in a population of cochlear implant users with heterogeneous backgrounds and audiological profiles, and in a lifespan sample of people without hearing loss. Our study characterizes the process of lexical competition using the Visual World Paradigm. A principal component analysis reveals that people's ability to resolve lexical competition varies along three dimensions that mirror prior small-scale studies. These dimensions capture the degree to which lexical access is delayed ("Wait-and-See"), the degree to which competition fully resolves ("Sustained-Activation"), and the overall rate of activation. Each dimension is predicted by different auditory skills and demographic factors (onset of deafness, age, cochlear implant experience). Moreover, each dimension predicts outcomes (speech perception in quiet and noise, subjective listening success) over and above auditory fidelity. Higher degrees of Wait-and-See and Sustained-Activation predict poorer outcomes. These results suggest that the mechanisms of word recognition vary along a few underlying dimensions, which help explain variable performance among listeners encountering auditory challenge.
Affiliation(s)
- Bob McMurray
- Dept. of Psychological & Brain Sciences, University of Iowa, Iowa City, IA, USA
- Dept. of Communication Sciences & Disorders, University of Iowa, Iowa City, IA, USA
- Dept. of Otolaryngology-Head and Neck Surgery, University of Iowa, Iowa City, IA, USA
- Dept. of Linguistics, University of Iowa, Iowa City, IA, USA
- Francis X Smith
- Dept. of Psychological & Brain Sciences, University of Iowa, Iowa City, IA, USA
- Dept. of Communication Sciences & Disorders, University of Iowa, Iowa City, IA, USA
- Marissa Huffman
- Dept. of Otolaryngology-Head and Neck Surgery, University of Iowa, Iowa City, IA, USA
- Kristin Rooff
- Dept. of Otolaryngology-Head and Neck Surgery, University of Iowa, Iowa City, IA, USA
- John B Muegge
- Dept. of Psychological & Brain Sciences, University of Iowa, Iowa City, IA, USA
- Charlotte Jeppsen
- Dept. of Psychological & Brain Sciences, University of Iowa, Iowa City, IA, USA
- Ethan Kutlu
- Dept. of Psychological & Brain Sciences, University of Iowa, Iowa City, IA, USA
- Dept. of Linguistics, University of Iowa, Iowa City, IA, USA
- Sarah Colby
- Dept. of Psychological & Brain Sciences, University of Iowa, Iowa City, IA, USA
- Dept. of Otolaryngology-Head and Neck Surgery, University of Iowa, Iowa City, IA, USA
6. Bosen AK, Doria GM. Identifying Links Between Latent Memory and Speech Recognition Factors. Ear Hear 2024;45:351-369. PMID: 37882100. PMCID: PMC10922378. DOI: 10.1097/aud.0000000000001430.
Abstract
OBJECTIVES: The link between memory ability and speech recognition accuracy is often examined by correlating summary measures of performance across various tasks, but interpretation of such correlations critically depends on assumptions about how these measures map onto underlying factors of interest. The present work presents an alternative approach, wherein latent factor models are fit to trial-level data from multiple tasks to directly test hypotheses about the underlying structure of memory and the extent to which latent memory factors are associated with individual differences in speech recognition accuracy. Latent factor models with different numbers of factors were fit to the data and compared to one another to select the structures that best explained vocoded sentence recognition in a two-talker masker across a range of target-to-masker ratios, performance on three memory tasks, and the link between sentence recognition and memory.
DESIGN: Young adults with normal hearing (N = 52 for the memory tasks, of whom 21 also completed the sentence recognition task) completed three memory tasks and one sentence recognition task: reading span, auditory digit span, visual free recall of words, and recognition of 16-channel vocoded Perceptually Robust English Sentence Test Open-set sentences in the presence of a two-talker masker at target-to-masker ratios between +10 and 0 dB. Correlations between summary measures of memory task performance and sentence recognition accuracy were calculated for comparison to prior work, and latent factor models were fit to trial-level data and compared against one another to identify the number of latent factors that best explains the data. Models with one or two latent factors were fit to the sentence recognition data, and models with one, two, or three latent factors were fit to the memory task data. Based on findings with these models, full models that linked one speech factor to one, two, or three memory factors were fit to the full data set. Models were compared via expected log pointwise predictive density and post hoc inspection of model parameters.
RESULTS: Summary measures were positively correlated across memory tasks and sentence recognition. Latent factor models revealed that sentence recognition accuracy was best explained by a single factor that varied across participants. Memory task performance was best explained by two latent factors, of which one was generally associated with performance on all three tasks and the other was specific to digit span recall accuracy for lists of six digits or more. When these models were combined, the general memory factor was closely related to the sentence recognition factor, whereas the factor specific to digit span had no apparent association with sentence recognition.
CONCLUSIONS: Comparison of latent factor models enables testing hypotheses about the underlying structure linking cognition and speech recognition. This approach showed that multiple memory tasks assess a common latent factor that is related to individual differences in sentence recognition, although performance on some tasks was associated with multiple factors. Thus, while these tasks provide some convergent assessment of common latent factors, caution is needed when interpreting what they tell us about speech recognition.
7. Bosen AK. Characterizing correlations in partial credit speech recognition scoring with beta-binomial distributions. JASA Express Lett 2024;4:025202. PMID: 38299983. PMCID: PMC10848658. DOI: 10.1121/10.0024633.
Abstract
Partial credit scoring for speech recognition tasks can improve measurement precision. However, assessing the magnitude of this improvement with partial credit scoring is challenging because meaningful speech contains contextual cues, which create correlations between the probabilities of correctly identifying each token in a stimulus. Here, beta-binomial distributions were used to estimate recognition accuracy and intraclass correlation for phonemes in words and words in sentences in listeners with cochlear implants (N = 20). Estimates demonstrated substantial intraclass correlation in recognition accuracy within stimuli. These correlations were invariant across individuals. Intraclass correlations should be addressed in power analysis of partial credit scoring.
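The beta-binomial model referenced above captures within-stimulus correlation directly: if each stimulus draws a latent accuracy p from a Beta(a, b) distribution and its tokens are then scored as conditionally independent Bernoulli trials, the intraclass correlation of token outcomes is 1/(a + b + 1). A small simulation sketch (parameter values are illustrative, not estimates from the paper):

```python
import random

def icc(a, b):
    """Intraclass correlation of token outcomes under a beta-binomial:
    all tokens in a stimulus share the same latent accuracy p ~ Beta(a, b)."""
    return 1.0 / (a + b + 1.0)

def beta_binomial_sample(n_tokens, a, b, rng):
    """Number of tokens correct in one stimulus: draw a per-stimulus
    accuracy p, then score each token as an independent Bernoulli(p)."""
    p = rng.betavariate(a, b)
    return sum(rng.random() < p for _ in range(n_tokens))

rng = random.Random(1)
# e.g., 5 words per sentence, mean accuracy a/(a+b) = 0.75, ICC = 1/9
counts = [beta_binomial_sample(5, 6, 2, rng) for _ in range(20000)]
mean_correct = sum(counts) / len(counts)  # should be near 5 * 0.75 = 3.75
```

The resulting counts are more variable than an ordinary binomial with the same mean, which is why ignoring the intraclass correlation overstates the precision gained from partial credit scoring.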
Affiliation(s)
- Adam K Bosen
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska 68131,
8. Camarena A, Ardis M, Fujioka T, Fitzgerald MB, Goldsworthy RL. The Relationship of Pitch Discrimination with Segregation of Tonal and Speech Streams for Cochlear Implant Users. Trends Hear 2024;28:23312165241305049. PMID: 39668613. PMCID: PMC11639003. DOI: 10.1177/23312165241305049.
Abstract
Cochlear implant (CI) users often report difficulty with music appreciation and with speech recognition in background noise, both of which depend on segregating sound sources into perceptual streams. The present study examined relationships of frequency and fundamental frequency (F0) discrimination with stream segregation of tonal and speech streams for CI users and peers with no known hearing loss. Frequency and F0 discrimination were measured for 1,000 Hz pure tones and 110 Hz complex tones, respectively. Stream segregation was measured for pure and complex tones using a lead/lag delay detection task. Spondee word identification was measured in competing speech with high levels of informational masking that required listeners to use F0 to segregate speech. The hypotheses were that frequency and F0 discrimination would explain a significant portion of the variance in outcomes for tonal segregation and speech reception. On average, CI users received a large benefit for stream segregation of tonal streams when either the frequency or F0 of the competing stream was shifted relative to the target stream. A linear relationship accounted for 42% of the covariance between measures of stream segregation and complex tone discrimination for CI users. In contrast, such benefits were absent when the F0 of the competing speech was shifted relative to the target speech. The large benefit observed for tonal streams is promising for music listening if it transfers to separating instruments within a song; however, the lack of benefit for speech suggests separate mechanisms, or special requirements, for speech processing.
Affiliation(s)
- Andres Camarena
- Auditory Research Center, Caruso Department of Otolaryngology — Head and Neck Surgery, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Matthew Ardis
- Auditory Research Center, Caruso Department of Otolaryngology — Head and Neck Surgery, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Takako Fujioka
- Center for Computer Research in Music and Acoustics, Stanford University, Stanford, California, USA
- Matthew B. Fitzgerald
- Stanford Ear Institute, Department of Otolaryngology – Head & Neck Surgery, Stanford University School of Medicine, Stanford University, Stanford, California, USA
- Raymond L. Goldsworthy
- Auditory Research Center, Caruso Department of Otolaryngology — Head and Neck Surgery, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
9. Everhardt MK, Jung DE, Stiensma B, Lowie W, Başkent D, Sarampalis A. Foreign Language Acquisition in Adolescent Cochlear Implant Users. Ear Hear 2024;45:174-185. PMID: 37747307. PMCID: PMC10718217. DOI: 10.1097/aud.0000000000001410.
Abstract
OBJECTIVES: This study explores to what degree adolescent cochlear implant (CI) users can learn a foreign language in a school setting similarly to their normal-hearing (NH) peers, despite the degraded auditory input.
DESIGN: A group of native Dutch adolescent CI users (age range 13 to 17 years) learning English as a foreign language at secondary school and a group of NH controls (age range 12 to 15 years) were assessed on their Dutch and English language skills using various language tasks that relied either on the processing of auditory information (i.e., a listening task) or on the processing of orthographic information (i.e., a reading and/or gap-fill task). The test battery also included various auditory and cognitive tasks to assess whether the auditory and cognitive functioning of the learners could explain the potential variation in language skills.
RESULTS: Adolescent CI users can learn English as a foreign language, as the English language skills of the CI users and their NH peers were comparable when assessed with reading or gap-fill tasks. However, the performance of the adolescent CI users was lower on English listening tasks. This discrepancy in task performance was not observed in their native language, Dutch. The auditory tasks confirmed that the adolescent CI users had coarser temporal and spectral resolution than their NH peers, supporting the notion that the difference in foreign-language listening skills may be due to a difference in auditory functioning. No differences in the cognitive functioning of the CI users and their NH peers were found that could explain the variation in the foreign-language listening tasks.
CONCLUSIONS: Acquiring a foreign language with degraded auditory input appears to affect foreign-language listening skills, yet does not appear to impact foreign-language skills assessed with tasks that rely on the processing of orthographic information. CI users could take advantage of orthographic information to facilitate foreign language acquisition and potentially support the development of listening-based foreign-language skills.
Affiliation(s)
- Marita K. Everhardt
- Center for Language and Cognition Groningen, University of Groningen, Netherlands
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands
- Dorit Enja Jung
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands
- Department of Psychology, University of Groningen, Netherlands
- Berrit Stiensma
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands
- Wander Lowie
- Center for Language and Cognition Groningen, University of Groningen, Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands
- W.J. Kolff Institute for Biomedical Engineering and Materials Science, University Medical Center Groningen, University of Groningen, Netherlands
- Anastasios Sarampalis
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands
- Department of Psychology, University of Groningen, Netherlands
10. Amini AE, Naples JG, Hwa T, Larrow DC, Campbell FM, Qiu M, Castellanos I, Moberly AC. Emerging Relations among Cognitive Constructs and Cochlear Implant Outcomes: A Systematic Review and Meta-Analysis. Otolaryngol Head Neck Surg 2023;169:792-810. PMID: 37365967. DOI: 10.1002/ohn.344.
Abstract
OBJECTIVE: Hearing loss has a detrimental impact on cognitive function. However, there is a lack of consensus on the impact of cochlear implants on cognition. This review systematically evaluates whether cochlear implants in adult patients lead to cognitive improvements and investigates the relations of cognition with speech recognition outcomes.
DATA SOURCES: A literature review was performed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Studies evaluating cognition and cochlear implant outcomes in postlingual, adult patients from January 1996 to December 2021 were included. Of 2510 total references, 52 studies were included in the qualitative analysis and 11 in meta-analyses.
REVIEW METHODS: Proportions were extracted from studies of (1) the significant impacts of cochlear implantation on six cognitive domains and (2) associations between cognition and speech recognition outcomes. Meta-analyses were performed using random-effects models on mean differences between pre- and postoperative performance on four cognitive assessments.
RESULTS: Only half of the reported outcomes suggested cochlear implantation had a significant impact on cognition (50.8%), with the highest proportions in assessments of memory and learning and of inhibition-concentration. Meta-analyses revealed significant improvements in global cognition and inhibition-concentration. Finally, 40.4% of associations between cognition and speech recognition outcomes were significant.
CONCLUSION: Findings relating cochlear implantation and cognition vary depending on the cognitive domain assessed and the study goal. Nonetheless, assessments of memory and learning, global cognition, and inhibition-concentration may represent tools to assess cognitive benefit after implantation and help explain variability in speech recognition outcomes. Enhanced selectivity in assessments of cognition is needed for clinical applicability.
Affiliation(s)
- Andrew E Amini
  - Harvard Medical School, Boston, Massachusetts, USA
  - Division of Otolaryngology-Head and Neck Surgery, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts, USA
- James G Naples
  - Harvard Medical School, Boston, Massachusetts, USA
  - Division of Otolaryngology-Head and Neck Surgery, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts, USA
- Tiffany Hwa
  - Division of Otology, Neurotology, and Lateral Skull Base Surgery, Department of Otolaryngology-Head and Neck Surgery, Hospital of the University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Danielle C Larrow
  - Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, Massachusetts, USA
- Frank M Campbell
  - Biotech Commons, Johnson Pavilion, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Maylene Qiu
  - Biotech Commons, Johnson Pavilion, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Irina Castellanos
  - Department of Otolaryngology-Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA
- Aaron C Moberly
  - Department of Otolaryngology-Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
11
Kallioinen P, Olofsson JK, von Mentzer CN. Semantic processing in children with Cochlear Implants: A review of current N400 studies and recommendations for future research. Biol Psychol 2023; 182:108655. [PMID: 37541539] [DOI: 10.1016/j.biopsycho.2023.108655] [Received: 01/25/2023] [Revised: 07/28/2023] [Accepted: 08/01/2023]
Abstract
Deaf and hard of hearing children with cochlear implants (CI) often display impaired spoken language skills. While a large number of studies have investigated brain responses to sounds in this population, relatively few have focused on semantic processing. Here we summarize and discuss findings from four studies of the N400, a cortical response that reflects semantic processing, in children with CI. A study with auditory target stimuli found N400 effects at delayed latencies at 12 months after implantation, but at 18 and 24 months after implantation the effects had typical latencies. In studies with visual target stimuli, N400 effects in children with CI were larger than, or similar to, those of controls, despite lower semantic abilities. We propose that in children with CI, the observed large N400 effect reflects a stronger reliance on top-down predictions relative to bottom-up language processing. Recent behavioral studies of children and adults with CI suggest that top-down processing is a common compensatory strategy, but one with distinct limitations, such as being effortful. A majority of the studies have small sample sizes (N < 20), and only responses to image targets were studied repeatedly in similar paradigms; this precludes strong conclusions. We give suggestions for future research and ways to overcome the scarcity of participants, including extending research to children with conventional hearing aids, an understudied group.
Affiliation(s)
- Petter Kallioinen
  - Department of Linguistics, Stockholm University, Stockholm, Sweden
  - Lund University Cognitive Science, Lund University, Lund, Sweden
- Jonas K Olofsson
  - Department of Psychology, Stockholm University, Stockholm, Sweden
12
Sud P, Munjal SK, Panda N. Challenges faced by Indian parents in raising a child with a cochlear implant - Impact on communication outcomes. Int J Pediatr Otorhinolaryngol 2023; 172:111695. [PMID: 37567086] [DOI: 10.1016/j.ijporl.2023.111695] [Received: 02/06/2023] [Revised: 07/30/2023] [Accepted: 08/04/2023]
Abstract
OBJECTIVES The objectives of the present study were to understand parental views regarding stress and its effect on language and auditory outcomes. The study also aimed to examine the relationship between parental stress and the child's age. DESIGN, SETTING AND PARTICIPANTS A retrospective study was performed at a tertiary medical hospital. Fifty parents of cochlear implant recipients were recruited. The parents were interviewed and the children were tested using a test battery. The average age at implantation was 4.29 years, and the average hearing age was 3.23 years. MAIN OUTCOME MEASURES The parents were interviewed about their child's needs and experience with the cochlear implant using the Strengths and Difficulties Questionnaire in Hindi, the Questionnaire on Resources and Stress-Short Form, the Family Environment Scale, and a closed-format questionnaire capturing parental views and experiences. Language outcomes were studied using the Integrated Scales of Development (ISD) and the Revised Categories of Auditory Performance. Factor analysis and chi-square tests were performed to examine potential relationships between parental stress and child language and/or auditory outcomes. RESULTS Five main factors accounted for significant variance: financial stress (30.1%), hyperactivity (15.2%), lack of personal rewards (13%), peer problems (10.9%), and emotional problems (9.2%). Acquisition of language was strongly influenced by stress and the caregiver's lack of personal rewards. Financial stress and hyperactive behavior of the child significantly affected the receptive language acquisition of a hearing-impaired child. The factors of greatest concern to parents were well-being and happiness (0.885), followed by social relationships (0.830), communication (0.736), the process of implantation (0.695), and the decision to implant (0.681). Financial stress among parents increased marginally (0.024) as the child's age progressed. CONCLUSION Parental stress is ongoing, and its impact on the child's expressive language development is significant. Parents' greatest concerns relate to the financial aspects of a cochlear implant and the lifespan care of their child. Hence, professionals should provide regular, context-specific counseling after implantation to understand parents' concerns and provide appropriate remediation.
Affiliation(s)
- Parul Sud
  - Department of Otolaryngology, Post Graduate Institute of Medical Education and Research, Chandigarh, India
- Sanjay Kumar Munjal
  - Department of Otolaryngology, Post Graduate Institute of Medical Education and Research, Chandigarh, India
- Naresh Panda
  - Department of Otolaryngology, Post Graduate Institute of Medical Education and Research, Chandigarh, India
13
DeFreese AJ, Lindquist NR, Shi L, Holder JT, Berg KA, Haynes DS, Gifford RH. The Impact of Daily Processor Use on Adult Cochlear Implant Outcomes: Reexamining the Roles of Duration of Deafness and Age at Implantation. Otol Neurotol 2023; 44:672-678. [PMID: 37367733] [PMCID: PMC10524754] [DOI: 10.1097/mao.0000000000003920]
Abstract
OBJECTIVE To quantify the roles of and relationships between age at implantation, duration of deafness (DoD), and daily processor use (via data logging) in speech recognition outcomes for postlingually deafened adults with cochlear implants. STUDY DESIGN Retrospective case review. SETTING Cochlear implant (CI) program at a tertiary medical center. PATIENTS Six hundred fourteen postlingually deafened adult ears with CIs (mean age, 63 yr; 44% female) were included. MAIN OUTCOME MEASURES A stepwise multiple regression analysis was completed to investigate the combined effects of age, DoD, and daily processor use on CI-aided speech recognition (Consonant-Nucleus-Consonant monosyllables and AzBio sentences). RESULTS Results indicated that only daily processor use was significantly related to Consonant-Nucleus-Consonant word scores (R2 = 0.194, p < 0.001) and AzBio in quiet scores (R2 = 0.198, p < 0.001), whereas neither age nor DoD was significantly related. In addition, there was no significant relationship between daily processor use, age at implantation, or DoD and AzBio sentences in noise (R2 = 0.026, p = 0.005). CONCLUSIONS Considering the clinical factors of age at implantation, DoD, and daily processor use, only daily processor use significantly predicted the ~20% of variance in postoperative outcomes (CI-aided speech recognition) accounted for by these clinical factors.
Affiliation(s)
- Andrea J DeFreese
  - Department of Hearing and Speech Sciences, Vanderbilt University Medical Center
- Nathan R Lindquist
  - Department of Otolaryngology-Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee
- Linjie Shi
  - Department of Hearing and Speech Sciences, Vanderbilt University Medical Center
- Jourdan T Holder
  - Department of Otolaryngology-Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee
- Katelyn A Berg
  - Department of Hearing and Speech Sciences, Vanderbilt University Medical Center
- David S Haynes
  - Department of Otolaryngology-Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee
- René H Gifford
  - Department of Hearing and Speech Sciences, Vanderbilt University Medical Center
14
Gundogdu O, Serbetcioglu MB, Kara E, Eser BN. Effects of Cognitive Functions on Speech Recognition in Noise in Cochlear Implant Recipients. ORL J Otorhinolaryngol Relat Spec 2023; 85:208-214. [PMID: 37331341] [DOI: 10.1159/000530233] [Received: 01/03/2023] [Accepted: 03/08/2023]
Abstract
INTRODUCTION There are substantial differences in speech recognition performance of adult cochlear implant (CI) recipients. This study investigated the effects of cognitive function on speech recognition in CI recipients. METHODS The verbal working memory of 36 adults with unilateral CIs was tested using digit span tests. Attention and inhibition abilities were assessed by using the Stroop test (both congruent and incongruent tasks). Speech recognition in noise was measured using the Turkish matrix test. RESULTS A moderate negative correlation was observed between the critical signal-to-noise ratio obtained via speech recognition in noise test and the digit span test scores (backward and digit span total scores). There was no correlation between Stroop test scores and speech recognition in noise in CI recipients. CONCLUSION The findings indicated that verbal working memory correlated well with speech recognition outcomes in adult CI recipients and that higher working memory capacity led to better speech recognition performance in noise.
Affiliation(s)
- Oğulcan Gundogdu
  - Department of Audiology, Graduate School of Health Sciences, Istanbul Medipol University, Istanbul, Turkey
- Eyyup Kara
  - Department of Audiology, Faculty of Health Sciences, Istanbul University-Cerrahpasa, Istanbul, Turkey
- Busra Nur Eser
  - Department of Audiology, Graduate School of Health Sciences, Istanbul Medipol University, Istanbul, Turkey
15
Schvartz-Leyzac KC, Giordani B, Pfingst BE. Association of Aging and Cognition With Complex Speech Understanding in Cochlear-Implanted Adults: Use of a Modified National Institutes of Health (NIH) Toolbox Cognitive Assessment. JAMA Otolaryngol Head Neck Surg 2023; 149:239-246. [PMID: 36701145] [PMCID: PMC9880868] [DOI: 10.1001/jamaoto.2022.4806] [Received: 07/30/2022] [Accepted: 12/01/2022]
Abstract
Importance The association between cognitive function and outcomes in cochlear implant (CI) users is not completely understood, partly because some cognitive tests are confounded by auditory status. It is important to determine appropriate cognitive tests to use in a cohort of CI recipients. Objective To provide proof of concept for using an adapted version of the National Institutes of Health (NIH) Toolbox Cognition Battery in a cohort of patients with CIs and to explore how hearing in noise with a CI is affected by cognitive status using the adapted test. Design, Setting, and Participants In this prognostic study, participants listened to sentences presented in speech-shaped background noise. Cognitive tests consisted of 7 subtests of the NIH Toolbox Cognition Battery that were adapted for hearing-impaired individuals by including written instructions and visual stimuli. Participants were prospectively recruited from and evaluated at a tertiary medical center. All participants had at least 6 months' experience with their CI. Main Outcomes and Measures The main outcomes were performance on the adapted cognitive test and a speech recognition in noise task. Results Participants were 20 adult perilingually or postlingually deafened CI users (50% male; median [range] age, 66 [26-80] years). Performance on a sentence recognition in noise task was negatively associated with the chronological age of the listener (R2 = 0.29; β = 0.16; SE = 0.06; t = 2.63; 95% confidence interval, 0.03-0.27). Testing with the adapted version of the NIH Toolbox Cognition Battery revealed that a test of processing speed was also associated with performance, using a standardized score that accounted for contributions of other demographic factors (R2 = 0.28; 95% confidence interval, -0.42 to -0.05). Conclusions and Relevance In this prognostic study, older CI users showed poorer performance on a sentence-in-noise test compared with younger users. This poorer performance was correlated with a deficit in processing speed when cognitive function was assessed using a test battery adapted for participants with hearing loss. These results provide initial proof of concept for using a standardized, adapted cognitive test battery in CI recipients.
Affiliation(s)
- Kara C. Schvartz-Leyzac
  - Kresge Hearing Research Institute, Department of Otolaryngology, University of Michigan Health Systems, Ann Arbor
  - Hearing Rehabilitation Center, Department of Otolaryngology, University of Michigan Health Systems, Ann Arbor
  - Medical University of South Carolina, Charleston
- Bruno Giordani
  - Department of Psychiatry & Michigan Alzheimer’s Disease Center, University of Michigan Health Systems, Ann Arbor
- Bryan E. Pfingst
  - Kresge Hearing Research Institute, Department of Otolaryngology, University of Michigan Health Systems, Ann Arbor
16
Beckers L, Tromp N, Philips B, Mylanus E, Huinck W. Exploring neurocognitive factors and brain activation in adult cochlear implant recipients associated with speech perception outcomes-A scoping review. Front Neurosci 2023; 17:1046669. [PMID: 36816114] [PMCID: PMC9932917] [DOI: 10.3389/fnins.2023.1046669] [Received: 09/16/2022] [Accepted: 01/05/2023]
Abstract
Background Cochlear implants (CIs) are considered an effective treatment for severe-to-profound sensorineural hearing loss. However, speech perception outcomes are highly variable among adult CI recipients. Top-down neurocognitive factors have been hypothesized to contribute to this variation, which is currently only partly explained by biological and audiological factors. Studies investigating this use varying methods and observe varying outcomes, and their relevance has yet to be evaluated in a review. Gathering and structuring this evidence in this scoping review provides a clear overview of where this research line currently stands, with the aim of guiding future research. Objective To understand to what extent different neurocognitive factors influence speech perception in adult CI users with a postlingual onset of hearing loss, by systematically reviewing the literature. Methods A systematic scoping review was performed according to the PRISMA guidelines. Studies investigating the influence of one or more neurocognitive factors on speech perception post-implantation were included. Word and sentence perception in quiet and noise were included as speech perception outcome metrics, and six key neurocognitive domains, as defined by the DSM-5, were covered during the literature search (protocol in open science registries: 10.17605/OSF.IO/Z3G7W; searches in June 2020 and April 2022). Results From 5,668 retrieved articles, 54 were included and grouped into three categories by the measures used to relate to speech perception outcomes: (1) nineteen studies investigating brain activation, (2) thirty-one investigating performance on cognitive tests, and (3) eighteen investigating linguistic skills. Conclusion The use of cognitive functions (recruiting the frontal cortex), the use of visual cues (recruiting the occipital cortex), and a temporal cortex that remains available for language processing are beneficial for adult CI users. Cognitive assessments indicate that performance on non-verbal intelligence tasks correlated positively with speech perception outcomes. Performance on auditory or visual working memory, learning, memory, and vocabulary tasks was unrelated to speech perception outcomes, and performance on the Stroop task was unrelated to word perception in quiet. However, there are still many uncertainties regarding the explanation of inconsistent results between papers, and more comprehensive studies are needed, e.g., including different assessment times or combining neuroimaging and behavioral measures. Systematic review registration https://doi.org/10.17605/OSF.IO/Z3G7W.
Affiliation(s)
- Loes Beckers
  - Cochlear Ltd., Mechelen, Belgium
  - Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, Netherlands
- Nikki Tromp
  - Cochlear Ltd., Mechelen, Belgium
  - Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, Netherlands
- Emmanuel Mylanus
  - Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, Netherlands
- Wendy Huinck
  - Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, Netherlands
17
Lansford KL, Barrett TS, Borrie SA. Cognitive Predictors of Perception and Adaptation to Dysarthric Speech in Young Adult Listeners. J Speech Lang Hear Res 2023; 66:30-47. [PMID: 36480697] [PMCID: PMC10023189] [DOI: 10.1044/2022_jslhr-22-00391] [Received: 07/01/2022] [Revised: 08/09/2022] [Accepted: 09/02/2022]
Abstract
PURPOSE Although recruitment of cognitive-linguistic resources to support dysarthric speech perception and adaptation is presumed by theoretical accounts of effortful listening and supported by cross-disciplinary empirical findings, prospective relationships have received limited attention in the disordered speech literature. This study aimed to examine the predictive relationships between cognitive-linguistic parameters and intelligibility outcomes associated with familiarization with dysarthric speech in young adult listeners. METHOD A cohort of 156 listener participants between the ages of 18 and 50 years completed a three-phase perceptual training protocol (pretest, training, and posttest) with one of three speakers with dysarthria. Additionally, listeners completed the National Institutes of Health Toolbox Cognition Battery to obtain measures of the following cognitive-linguistic constructs: working memory, inhibitory control of attention, cognitive flexibility, processing speed, and vocabulary knowledge. RESULTS Elastic net regression models revealed that select cognitive-linguistic measures and their two-way interactions predicted both initial intelligibility and intelligibility improvement of dysarthric speech. While some consistency across models was shown, unique constellations of select cognitive factors and their interactions predicted initial intelligibility and intelligibility improvement of the three different speakers with dysarthria. CONCLUSIONS Current findings extend empirical support for theoretical models of speech perception in adverse listening conditions to dysarthric speech signals. Although predictive relationships were complex, vocabulary knowledge, working memory, and cognitive flexibility often emerged as important variables across the models.
Affiliation(s)
- Kaitlin L. Lansford
  - School of Communication Science & Disorders, Florida State University, Tallahassee
- Stephanie A. Borrie
  - Department of Communicative Disorders and Deaf Education, Utah State University, Logan
18
Intensive Training of Spatial Hearing Promotes Auditory Abilities of Bilateral Cochlear Implant Adults: A Pilot Study. Ear Hear 2023; 44:61-76. [PMID: 35943235] [DOI: 10.1097/aud.0000000000001256]
Abstract
OBJECTIVE The aim of this study was to evaluate the feasibility of a virtual reality-based spatial hearing training protocol in bilateral cochlear implant (CI) users and to provide pilot data on the impact of this training on different qualities of hearing. DESIGN Twelve bilateral CI adults aged 19 to 69 years followed an intensive 10-week rehabilitation program comprising eight virtual reality training sessions (two per week) interspersed with several evaluation sessions (2 weeks before training started, after four and eight training sessions, and 1 month after the end of training). During each 45-minute training session, participants localized a sound source whose position varied in azimuth and/or in elevation. At the start of each trial, CI users received no information about sound location, but after each response, feedback was given to enable error correction. Participants were divided into two groups: a multisensory feedback group (audiovisual spatial cue) and a unisensory group (visual spatial cue) that only received feedback in a wholly intact sensory modality. Training benefits were measured at each evaluation point using three tests: 3D sound localization in virtual reality, the French Matrix test, and the Speech, Spatial and other Qualities of Hearing questionnaire. RESULTS The training was well accepted and all participants attended the whole rehabilitation program. Four training sessions spread across 2 weeks were insufficient to induce significant performance changes, whereas performance on all three tests improved after eight training sessions. Front-back confusions decreased from 32% to 14.1% (p = 0.017); the speech recognition threshold improved from 1.5 dB to -0.7 dB signal-to-noise ratio (p = 0.029), and eight CI users achieved a negative signal-to-noise ratio. One month after the end of structured training, these performance improvements were still present, and quality of life was significantly improved for both self-reports of sound localization (from 5.3 to 6.7, p = 0.015) and speech understanding (from 5.2 to 5.9, p = 0.048). CONCLUSIONS This pilot study shows the feasibility and potential clinical relevance of this type of intervention involving a sensorially immersive environment and could pave the way for more systematic rehabilitation programs after cochlear implantation.
19
O’Leary RM, Neukam J, Hansen TA, Kinney AJ, Capach N, Svirsky MA, Wingfield A. Strategic Pauses Relieve Listeners from the Effort of Listening to Fast Speech: Data Limited and Resource Limited Processes in Narrative Recall by Adult Users of Cochlear Implants. Trends Hear 2023; 27:23312165231203514. [PMID: 37941344] [PMCID: PMC10637151] [DOI: 10.1177/23312165231203514] [Received: 01/18/2023] [Revised: 08/11/2023] [Accepted: 09/08/2023]
Abstract
Speech that has been artificially accelerated through time compression produces a notable deficit in recall of the speech content. This is especially so for adults with cochlear implants (CI). At the perceptual level, this deficit may be due to the sharply degraded CI signal, combined with the reduced richness of compressed speech. At the cognitive level, the rapidity of time-compressed speech can deprive the listener of the ordinarily available processing time present when speech is delivered at a normal speech rate. Two experiments are reported. Experiment 1 was conducted with 27 normal-hearing young adults as a proof-of-concept demonstration that restoring lost processing time by inserting silent pauses at linguistically salient points within a time-compressed narrative ("time-restoration") returns recall accuracy to a level approximating that for a normal speech rate. Noise vocoder conditions with 10 and 6 channels reduced the effectiveness of time-restoration. Pupil dilation indicated that additional effort was expended by participants while attempting to process the time-compressed narratives, with the effortful demand on resources reduced with time restoration. In Experiment 2, 15 adult CI users tested with the same (unvocoded) materials showed a similar pattern of behavioral and pupillary responses, but with the notable exception that meaningful recovery of recall accuracy with time-restoration was limited to a subgroup of CI users identified by better working memory spans, and better word and sentence recognition scores. Results are discussed in terms of sensory-cognitive interactions in data-limited and resource-limited processes among adult users of cochlear implants.
Affiliation(s)
- Ryan M. O’Leary
  - Department of Psychology, Brandeis University, Waltham, Massachusetts, USA
- Jonathan Neukam
  - Department of Otolaryngology, NYU Langone Medical Center, New York, New York, USA
- Thomas A. Hansen
  - Department of Psychology, Brandeis University, Waltham, Massachusetts, USA
- Nicole Capach
  - Department of Otolaryngology, NYU Langone Medical Center, New York, New York, USA
- Mario A. Svirsky
  - Department of Otolaryngology, NYU Langone Medical Center, New York, New York, USA
- Arthur Wingfield
  - Department of Psychology, Brandeis University, Waltham, Massachusetts, USA
20
Fleming JT, Winn MB. Strategic perceptual weighting of acoustic cues for word stress in listeners with cochlear implants, acoustic hearing, or simulated bimodal hearing. J Acoust Soc Am 2022; 152:1300. [PMID: 36182279] [PMCID: PMC9439712] [DOI: 10.1121/10.0013890] [Received: 01/27/2022] [Revised: 08/08/2022] [Accepted: 08/16/2022]
Abstract
Perception of word stress is an important aspect of recognizing speech, guiding the listener toward candidate words based on the perceived stress pattern. Cochlear implant (CI) signal processing is likely to disrupt some of the available cues for word stress, particularly vowel quality and pitch contour changes. In this study, we used a cue weighting paradigm to investigate differences in stress cue weighting patterns between participants listening with CIs and those with normal hearing (NH). We found that participants with CIs gave less weight to frequency-based pitch and vowel quality cues than NH listeners but compensated by upweighting vowel duration and intensity cues. Nonetheless, CI listeners' stress judgments were also significantly influenced by vowel quality and pitch, and they modulated their usage of these cues depending on the specific word pair in a manner similar to NH participants. In a series of separate online experiments with NH listeners, we simulated aspects of bimodal hearing by combining low-pass filtered speech with a vocoded signal. In these conditions, participants upweighted pitch and vowel quality cues relative to a fully vocoded control condition, suggesting that bimodal listening holds promise for restoring the stress cue weighting patterns exhibited by listeners with NH.
Affiliation(s)
- Justin T Fleming
  - Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, Minnesota 55455, USA
- Matthew B Winn
  - Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, Minnesota 55455, USA
21
Zhou X, Feng M, Hu Y, Zhang C, Zhang Q, Luo X, Yuan W. The Effects of Cortical Reorganization and Applications of Functional Near-Infrared Spectroscopy in Deaf People and Cochlear Implant Users. Brain Sci 2022; 12:1150. [PMID: 36138885] [PMCID: PMC9496692] [DOI: 10.3390/brainsci12091150] [Received: 07/29/2022] [Revised: 08/19/2022] [Accepted: 08/24/2022]
Abstract
A cochlear implant (CI) is currently the only FDA-approved biomedical device that can restore hearing for the majority of patients with severe-to-profound sensorineural hearing loss (SNHL). While prelingually and postlingually deaf individuals benefit substantially from CI, the outcomes after implantation vary greatly. Numerous studies have attempted to study the variables that affect CI outcomes, including the personal characteristics of CI candidates, environmental variables, and device-related variables. Up to 80% of the results remained unexplainable because all these variables could only roughly predict auditory performance with a CI. Brain structure/function differences after hearing deprivation, that is, cortical reorganization, has gradually attracted the attention of neuroscientists. The cross-modal reorganization in the auditory cortex following deafness is thought to be a key factor in the success of CI. In recent years, the adaptive and maladaptive effects of this reorganization on CI rehabilitation have been argued because the neural mechanisms of how this reorganization impacts CI learning and rehabilitation have not been revealed. Due to the lack of brain processes describing how this plasticity affects CI learning and rehabilitation, the adaptive and deleterious consequences of this reorganization on CI outcomes have recently been the subject of debate. This review describes the evidence for different roles of cross-modal reorganization in CI performance and attempts to explore the possible reasons. Additionally, understanding the core influencing mechanism requires taking into account the cortical changes from deafness to hearing restoration. However, methodological issues have restricted longitudinal research on cortical function in CI. Functional near-infrared spectroscopy (fNIRS) has been increasingly used for the study of brain function and language assessment in CI because of its unique advantages, which are considered to have great potential. 
Here, we review studies on the cross-modal reorganization of the auditory cortex in deaf patients and CI recipients, and we illustrate the feasibility of fNIRS as a neuroimaging tool for predicting and assessing speech performance in CI recipients.
Affiliation(s)
- Xiaoqing Zhou
- Department of Otolaryngology, Chongqing General Hospital, Chongqing 401147, China
- Chongqing Medical University, Chongqing 400042, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chongqing Institute of Green and Intelligent Technology, University of Chinese Academy of Sciences, Chongqing 400714, China
| | - Menglong Feng
- Department of Otolaryngology, Chongqing General Hospital, Chongqing 401147, China
- Chongqing Medical University, Chongqing 400042, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chongqing Institute of Green and Intelligent Technology, University of Chinese Academy of Sciences, Chongqing 400714, China
| | - Yaqin Hu
- Department of Otolaryngology, Chongqing General Hospital, Chongqing 401147, China
- Chongqing Medical University, Chongqing 400042, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chongqing Institute of Green and Intelligent Technology, University of Chinese Academy of Sciences, Chongqing 400714, China
| | - Chanyuan Zhang
- Department of Otolaryngology, Chongqing General Hospital, Chongqing 401147, China
- Chongqing Medical University, Chongqing 400042, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chongqing Institute of Green and Intelligent Technology, University of Chinese Academy of Sciences, Chongqing 400714, China
| | - Qingling Zhang
- Department of Otolaryngology, Chongqing General Hospital, Chongqing 401147, China
- Chongqing Medical University, Chongqing 400042, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chongqing Institute of Green and Intelligent Technology, University of Chinese Academy of Sciences, Chongqing 400714, China
| | - Xiaoqin Luo
- Department of Otolaryngology, Chongqing General Hospital, Chongqing 401147, China
- Chongqing Medical University, Chongqing 400042, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chongqing Institute of Green and Intelligent Technology, University of Chinese Academy of Sciences, Chongqing 400714, China
| | - Wei Yuan
- Department of Otolaryngology, Chongqing General Hospital, Chongqing 401147, China
- Chongqing Medical University, Chongqing 400042, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chongqing Institute of Green and Intelligent Technology, University of Chinese Academy of Sciences, Chongqing 400714, China
- Correspondence: ; Tel.: +86-23-63535180
|
22
|
Na Y, Joo H, Trang LT, Quan LDA, Woo J. Objective speech intelligibility prediction using a deep learning model with continuous speech-evoked cortical auditory responses. Front Neurosci 2022; 16:906616. [PMID: 36061597 PMCID: PMC9433707 DOI: 10.3389/fnins.2022.906616] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2022] [Accepted: 07/25/2022] [Indexed: 11/29/2022] Open
Abstract
Auditory prostheses provide an opportunity for rehabilitation of hearing-impaired patients. Speech intelligibility can be used to estimate the extent to which the auditory prosthesis improves the user’s speech comprehension. Although behavior-based speech intelligibility is the gold standard, precise evaluation is limited by its subjectivity. Here, we used a convolutional neural network to predict speech intelligibility from electroencephalography (EEG). Sixty-four–channel EEGs were recorded from 87 adult participants with normal hearing. Sentences spectrally degraded by a 2-, 3-, 4-, 5-, and 8-channel vocoder were used to set relatively low speech intelligibility conditions. A Korean sentence recognition test was used. The speech intelligibility scores were divided into 41 discrete levels ranging from 0 to 100%, with a step of 2.5%. Three scores, namely 30.0, 37.5, and 40.0%, were not collected. The speech features, i.e., the speech temporal envelope (ENV) and phoneme (PH) onset, were used to extract continuous-speech EEGs for speech intelligibility prediction. The deep learning model was trained on a dataset of event-related potentials (ERPs), correlation coefficients between the ERPs and ENVs, between the ERPs and PH onsets, or between the ERPs and the product of PH and ENV (PHENV). The speech intelligibility prediction accuracies were 97.33% (ERP), 99.42% (ENV), 99.55% (PH), and 99.91% (PHENV). The models were interpreted using the occlusion sensitivity approach. While the informative electrodes of the ENV model were located in the occipital area, the occlusion sensitivity maps showed that the informative electrodes of the phoneme models (PH and PHENV) were located in the language-processing area. Of the models tested, the PHENV model obtained the best speech intelligibility prediction accuracy. This model may promote clinical prediction of speech intelligibility using a comfortable speech intelligibility test.
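One of the input features described above, the correlation between a cortical response and the speech temporal envelope (ENV), reduces to a Pearson coefficient. A minimal sketch, assuming single-channel, equal-length signals (the function name is illustrative and not taken from the study's pipeline):

```python
import numpy as np

def envelope_correlation(response, envelope):
    """Pearson correlation between an EEG response trace and the
    speech temporal envelope (ENV). Illustrative helper only; the
    study's actual feature-extraction pipeline is not reproduced."""
    r = np.asarray(response, dtype=float)
    e = np.asarray(envelope, dtype=float)
    r = (r - r.mean()) / r.std()  # z-score both signals
    e = (e - e.mean()) / e.std()
    return float(np.mean(r * e))  # mean of z-score products = Pearson r
```

Such per-electrode coefficients, stacked across channels and feature types, form the kind of input matrix a convolutional classifier can be trained on.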
Affiliation(s)
- Youngmin Na
- Department of Biomedical Engineering, University of Ulsan, Ulsan, South Korea
| | - Hyosung Joo
- Department of Electrical, Electronic and Computer Engineering, University of Ulsan, Ulsan, South Korea
| | - Le Thi Trang
- Department of Electrical, Electronic and Computer Engineering, University of Ulsan, Ulsan, South Korea
| | - Luong Do Anh Quan
- Department of Electrical, Electronic and Computer Engineering, University of Ulsan, Ulsan, South Korea
| | - Jihwan Woo
- Department of Biomedical Engineering, University of Ulsan, Ulsan, South Korea
- Department of Electrical, Electronic and Computer Engineering, University of Ulsan, Ulsan, South Korea
- *Correspondence: Jihwan Woo,
|
23
|
Skidmore J, Ramekers D, Colesa DJ, Schvartz-Leyzac KC, Pfingst BE, He S. A Broadly Applicable Method for Characterizing the Slope of the Electrically Evoked Compound Action Potential Amplitude Growth Function. Ear Hear 2022; 43:150-164. [PMID: 34241983 PMCID: PMC8674380 DOI: 10.1097/aud.0000000000001084] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/03/2023]
Abstract
OBJECTIVES Amplitudes of electrically evoked compound action potentials (eCAPs) as a function of the stimulation level constitute the eCAP amplitude growth function (AGF). The slope of the eCAP AGF (i.e., rate of growth of eCAP amplitude as a function of stimulation level), recorded from subjects with cochlear implants (CIs), has been widely used as an indicator of survival of cochlear nerve fibers. However, substantial variation in the approach used to calculate the slope of the eCAP AGF makes it difficult to compare results across studies. In this study, we developed an improved slope-fitting method by addressing the limitations of previously used approaches and ensuring that it can be applied to estimate the maximum slopes of the eCAP AGFs recorded in both animal models and human listeners with various etiologies. DESIGN The new eCAP AGF fitting method was designed based on sliding window linear regression. Slopes of the eCAP AGF estimated using this new fitting method were calculated and compared with those estimated using four other fitting methods reported in the literature. These four methods were nonlinear regression with a sigmoid function, linear regression, gradient calculation, and boxcar smoothing. The comparison was based on the fitting results of 72 eCAP AGFs recorded from 18 acutely implanted guinea pigs, 46 eCAP AGFs recorded from 23 chronically implanted guinea pigs, and 2094 eCAP AGFs recorded from 200 human CI users from 4 patient populations. The effect of the choice of input units of the eCAP AGF (linear versus logarithmic) on fitting results was also evaluated. RESULTS The slope of the eCAP AGF was significantly influenced by the slope-fitting method and by the choice of input units. Overall, slopes estimated using all five fitting methods reflected known patterns of neural survival in human patient populations and were significantly correlated with speech perception scores. 
However, slopes estimated using the newly developed method showed the highest correlation with spiral ganglion neuron density among all five fitting methods for animal models. In addition, this new method could reliably and accurately estimate the slope for 4 human patient populations, while the performance of the other methods was highly influenced by the morphology of the eCAP AGF. CONCLUSIONS The novel slope-fitting method presented in this study addressed the limitations of the other methods reported in the literature and successfully characterized the slope of the eCAP AGF for various animal models and CI patient populations. This method may be useful for researchers in conducting scientific studies and for clinicians in providing clinical care for CI users.
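The sliding-window linear-regression idea behind the new fitting method can be sketched as follows. This is an illustrative reconstruction under simplifying assumptions (fixed window size, no preprocessing, no window-selection rules), not the published implementation:

```python
import numpy as np

def max_agf_slope(levels, amplitudes, window=4):
    """Estimate the maximum slope of an amplitude growth function by
    fitting a straight line within each sliding window of points and
    keeping the steepest fit. Sketch only; the published method adds
    preprocessing and rules for choosing the window."""
    x = np.asarray(levels, dtype=float)
    y = np.asarray(amplitudes, dtype=float)
    best = float("-inf")
    for i in range(len(x) - window + 1):
        # linear fit within the window; polyfit returns [slope, intercept]
        slope, _ = np.polyfit(x[i:i + window], y[i:i + window], 1)
        best = max(best, slope)
    return best
```

Because the fit is local, the estimate tracks the steepest growth region of a sigmoidal AGF instead of being flattened by the plateau, which is the motivation for a windowed rather than a global fit.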
Affiliation(s)
- Jeffrey Skidmore
- Department of Otolaryngology – Head and Neck Surgery, The Ohio State University, 915 Olentangy River Road, Columbus, OH 43212, USA
| | - Dyan Ramekers
- Department of Otorhinolaryngology and Head & Neck Surgery, University Medical Center Utrecht, Utrecht University, Room G.02.531, P.O. Box 85500, 3508 GA Utrecht, The Netherlands
- UMC Utrecht Brain Center, Utrecht University, Utrecht, The Netherlands
| | - Deborah J. Colesa
- Kresge Hearing Research Institute, Department of Otolaryngology-Head and Neck Surgery, Michigan Medicine, 1150 West Medical Center Drive, Ann Arbor, MI 48109-5616, USA
| | - Kara C. Schvartz-Leyzac
- Department of Otolaryngology, Medical University of South Carolina, 135 Rutledge Ave, MSC 550, Charleston, SC 29425, USA
| | - Bryan E. Pfingst
- Kresge Hearing Research Institute, Department of Otolaryngology-Head and Neck Surgery, Michigan Medicine, 1150 West Medical Center Drive, Ann Arbor, MI 48109-5616, USA
| | - Shuman He
- Department of Otolaryngology – Head and Neck Surgery, The Ohio State University, 915 Olentangy River Road, Columbus, OH 43212, USA
- Department of Audiology, Nationwide Children’s Hospital, 700 Children’s Drive, Columbus, OH 43205, USA
|
24
|
Martin IA, Goupell MJ, Huang YT. Children's syntactic parsing and sentence comprehension with a degraded auditory signal. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2022; 151:699. [PMID: 35232101 PMCID: PMC8816517 DOI: 10.1121/10.0009271] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/02/2020] [Revised: 10/15/2021] [Accepted: 12/16/2021] [Indexed: 06/14/2023]
Abstract
During sentence comprehension, young children anticipate syntactic structures using early-arriving words and have difficulties revising incorrect predictions using late-arriving words. However, nearly all work to date has focused on syntactic parsing in idealized speech environments, and little is known about how children's strategies for predicting and revising meanings are affected by signal degradation. This study compares comprehension of active and passive sentences in natural and vocoded speech. In a word-interpretation task, 5-year-olds inferred the meanings of novel words in sentences that (1) encouraged agent-first predictions (e.g., The blicket is eating the seal implies The blicket is the agent), (2) required revising predictions (e.g., The blicket is eaten by the seal implies The blicket is the theme), or (3) weakened predictions by placing familiar nouns in sentence-initial position (e.g., The seal is eating/eaten by the blicket). When novel words promoted agent-first predictions, children misinterpreted passives as actives, and errors increased with vocoded compared to natural speech. However, when familiar words were sentence-initial that weakened agent-first predictions, children accurately interpreted passives, with no signal-degradation effects. This demonstrates that signal quality interacts with interpretive processes during sentence comprehension, and the impacts of speech degradation are greatest when late-arriving information conflicts with predictions.
Affiliation(s)
- Isabel A Martin
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
| | - Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
| | - Yi Ting Huang
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
|
25
|
Ratnanather JT, Wang LC, Bae SH, O'Neill ER, Sagi E, Tward DJ. Visualization of Speech Perception Analysis via Phoneme Alignment: A Pilot Study. Front Neurol 2022; 12:724800. [PMID: 35087462 PMCID: PMC8787339 DOI: 10.3389/fneur.2021.724800] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2021] [Accepted: 12/13/2021] [Indexed: 11/13/2022] Open
Abstract
Objective: Speech tests assess the ability of people with hearing loss to comprehend speech with a hearing aid or cochlear implant. The tests are usually at the word or sentence level. However, few tests analyze errors at the phoneme level. There is thus a need for an automated program to visualize in real time the accuracy of phonemes in these tests. Method: The program reads in stimulus-response pairs and obtains their phonemic representations from an open-source digital pronouncing dictionary. The stimulus phonemes are aligned with the response phonemes via a modification of the Levenshtein Minimum Edit Distance algorithm. Alignment is achieved via dynamic programming with modified costs based on phonological features for insertions, deletions, and substitutions. The accuracy for each phoneme is based on the F1-score. Accuracy is visualized with respect to place and manner (consonants) or height (vowels). Confusion matrices for the phonemes are used in an information transfer analysis of ten phonological features. A histogram of the information transfer for the features over a frequency-like range is presented as a phonemegram. Results: The program was applied to two datasets. One consisted of test data at the sentence and word levels. Stimulus-response sentence pairs from six volunteers with different degrees of hearing loss and modes of amplification were analyzed. Four volunteers listened to sentences from a mobile auditory training app while two listened to sentences from a clinical speech test. Stimulus-response word pairs from three lists were also analyzed. The other dataset consisted of published stimulus-response pairs from experiments of 31 participants with cochlear implants listening to 400 Basic English Lexicon sentences via different talkers at four different SNR levels. In all cases, visualization was obtained in real time. Analysis of 12,400 actual and random pairs showed that the program was robust to the nature of the pairs. 
Conclusion: It is possible to automate the alignment of phonemes extracted from stimulus-response pairs from speech tests in real time. The alignment then makes it possible to visualize the accuracy of responses via phonological features in two ways. Such visualization of phoneme alignment and accuracy could aid clinicians and scientists.
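The alignment step described above is a weighted edit-distance dynamic program. A minimal sketch with uniform costs (the published tool derives substitution and indel costs from phonological features, so the costs and the function name here are simplifying assumptions):

```python
def align_cost(stim, resp, sub_cost=lambda a, b: 0 if a == b else 1, indel_cost=1):
    """Levenshtein-style alignment cost between stimulus and response
    phoneme sequences via dynamic programming. Uniform costs are an
    illustrative simplification of the feature-based costs in the paper."""
    n, m = len(stim), len(resp)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = d[i - 1][0] + indel_cost  # delete all stimulus phonemes
    for j in range(1, m + 1):
        d[0][j] = d[0][j - 1] + indel_cost  # insert all response phonemes
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(
                d[i - 1][j] + indel_cost,                              # deletion
                d[i][j - 1] + indel_cost,                              # insertion
                d[i - 1][j - 1] + sub_cost(stim[i - 1], resp[j - 1]),  # match/substitution
            )
    return d[n][m]
```

Replacing `sub_cost` with a function that charges less for phonologically similar pairs (e.g., /t/ vs /d/) yields the feature-weighted alignment the paper uses; backtracking through `d` recovers the aligned pairs for scoring.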
Affiliation(s)
- J Tilak Ratnanather
- Center for Imaging Science and Institute for Computational Medicine, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
| | - Lydia C Wang
- Center for Imaging Science and Institute for Computational Medicine, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
| | - Seung-Ho Bae
- Center for Imaging Science and Institute for Computational Medicine, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
| | - Erin R O'Neill
- Center for Applied and Translational Sensory Sciences, University of Minnesota, Minneapolis, MN, United States
| | - Elad Sagi
- Department of Otolaryngology, New York University School of Medicine, New York, NY, United States
| | - Daniel J Tward
- Center for Imaging Science and Institute for Computational Medicine, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States.,Departments of Computational Medicine and Neurology, University of California, Los Angeles, Los Angeles, CA, United States
|
26
|
More Than Words: the Relative Roles of Prosody and Semantics in the Perception of Emotions in Spoken Language by Postlingual Cochlear Implant Users. Ear Hear 2022; 43:1378-1389. [PMID: 35030551 DOI: 10.1097/aud.0000000000001199] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
OBJECTIVES The processing of emotional speech calls for the perception and integration of semantic and prosodic cues. Although cochlear implants allow for significant auditory improvements, they are limited in the transmission of spectro-temporal fine-structure information and may not support the processing of voice pitch cues. The goal of the current study is to compare the performance of postlingual cochlear implant (CI) users and a matched control group on perception, selective attention, and integration of emotional semantics and prosody. DESIGN Fifteen CI users and 15 normal hearing (NH) peers (age range, 18-65 years) listened to spoken sentences composed of different combinations of four discrete emotions (anger, happiness, sadness, and neutrality) presented in prosodic and semantic channels (T-RES: Test for Rating Emotions in Speech). In three separate tasks, listeners were asked to attend to the sentence as a whole, thus integrating both speech channels (integration), or to focus on one channel only (rating of target emotion) and ignore the other (selective attention). Their task was to rate how much they agreed that the sentence conveyed each of the predefined emotions. In addition, all participants performed standard tests of speech perception. RESULTS When asked to focus on one channel, semantics or prosody, both groups rated emotions similarly with comparable levels of selective attention. When the task called for channel integration, group differences were found. CI users appeared to use semantic emotional information more than did their NH peers. CI users assigned higher ratings than did their NH peers to sentences that did not present the target emotion, indicating some degree of confusion. In addition, for CI users, individual differences in speech comprehension over the phone and identification of intonation were significantly related to emotional semantic and prosodic ratings, respectively. 
CONCLUSIONS CI users and NH controls did not differ in perception of prosodic and semantic emotions and in auditory selective attention. However, when the task called for integration of prosody and semantics, CI users overused the semantic information (as compared with NH). We suggest that as CI users adopt diverse cue weighting strategies with device experience, their weighting of prosody and semantics differs from those used by NH. Finally, CI users may benefit from rehabilitation strategies that strengthen perception of prosodic information to better understand emotional speech.
|
27
|
Luo X, Azuma T, Kolberg C, Pulling KR. The effects of stimulus modality, task complexity, and cuing on working memory and the relationship with speech recognition in older cochlear implant users. JOURNAL OF COMMUNICATION DISORDERS 2022; 95:106170. [PMID: 34839068 DOI: 10.1016/j.jcomdis.2021.106170] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/18/2020] [Revised: 11/15/2021] [Accepted: 11/15/2021] [Indexed: 06/13/2023]
Abstract
INTRODUCTION The role of working memory (WM) in speech recognition of older cochlear implant (CI) users remains unclear. This study 1) examined the effects of aging and CI on WM performance across different modalities (auditory vs. visual) and cuing conditions, and 2) assessed how specific WM measures relate to sentence and word recognition in noise. METHOD Fourteen Older CI users, 12 Older acoustic-hearing (AH) listeners with age-appropriate hearing loss, and 15 Young normal-hearing (NH) listeners were tested. Participants completed two simple span tasks (auditory digit and visual letter span), two complex WM tasks (reading span and cued-modality WM with simultaneously presented auditory digits and visual letters), and two speech recognition tasks (sentence and word recognition in speech-babble noise). RESULTS The groups showed similar simple span performance, except that Older CI users had lower auditory digit span than Young NH listeners. Both older groups had similar reading span performance, but scored significantly lower than Young NH listeners, indicating age-related declines in attentional and phonological processing. A similar group effect was observed in the cued-modality WM task. All groups showed higher recall for auditory digits than for visual letters and the advantage was most evident without modality cuing. All groups displayed greater cuing benefits for visual recall than for auditory recall, suggesting that participants consistently allocated more attention to auditory stimuli regardless of cuing. For Older CI users, after controlling for the previously reported spectral resolution, auditory-uncued WM performance was significantly correlated with word recognition but not sentence recognition. CONCLUSIONS Complex WM was significantly affected by aging but not by CI. Neither aging nor CI significantly affected modality cuing benefits in the WM task. 
For Older CI users, complex auditory WM with attentional control may better reflect the cognitive load of speech recognition in noise than simple span or complex visual WM.
Affiliation(s)
- Xin Luo
- Program of Speech and Hearing Science, College of Health Solutions, Arizona State University, Tempe, AZ, United States of America.
| | - Tamiko Azuma
- Program of Speech and Hearing Science, College of Health Solutions, Arizona State University, Tempe, AZ, United States of America
| | - Courtney Kolberg
- Program of Speech and Hearing Science, College of Health Solutions, Arizona State University, Tempe, AZ, United States of America
| | - Kathryn R Pulling
- Program of Speech and Hearing Science, College of Health Solutions, Arizona State University, Tempe, AZ, United States of America
|
28
|
Moberly AC, Lewis JH, Vasil KJ, Ray C, Tamati TN. Bottom-Up Signal Quality Impacts the Role of Top-Down Cognitive-Linguistic Processing During Speech Recognition by Adults with Cochlear Implants. Otol Neurotol 2021; 42:S33-S41. [PMID: 34766942 PMCID: PMC8597903 DOI: 10.1097/mao.0000000000003377] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
HYPOTHESES Significant variability persists in speech recognition outcomes in adults with cochlear implants (CIs). Sensory ("bottom-up") and cognitive-linguistic ("top-down") processes help explain this variability. However, the interactions of these bottom-up and top-down factors remain unclear. One hypothesis was tested: top-down processes would contribute differentially to speech recognition, depending on the fidelity of bottom-up input. BACKGROUND Bottom-up spectro-temporal processing, assessed using a Spectral-Temporally Modulated Ripple Test (SMRT), is associated with CI speech recognition outcomes. Similarly, top-down cognitive-linguistic skills relate to outcomes, including working memory capacity, inhibition-concentration, speed of lexical access, and nonverbal reasoning. METHODS Fifty-one adult CI users were tested for word and sentence recognition, along with performance on the SMRT and a battery of cognitive-linguistic tests. The group was divided into "low-," "intermediate-," and "high-SMRT" groups, based on SMRT scores. Separate correlation analyses were performed for each subgroup between a composite score of cognitive-linguistic processing and speech recognition. RESULTS Associations of top-down composite scores with speech recognition were not significant for the low-SMRT group. In contrast, these associations were significant and of medium effect size (Spearman's rho = 0.44-0.46) for two sentence types for the intermediate-SMRT group. For the high-SMRT group, top-down scores were associated with both word and sentence recognition, with medium to large effect sizes (Spearman's rho = 0.45-0.58). CONCLUSIONS Top-down processes contribute differentially to speech recognition in CI users based on the quality of bottom-up input. Findings have clinical implications for individualized treatment approaches relying on bottom-up device programming or top-down rehabilitation approaches.
Affiliation(s)
- Aaron C Moberly
- Department of Otolaryngology - Head & Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA
| | - Jessica H Lewis
- Department of Otolaryngology - Head & Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA
| | - Kara J Vasil
- Department of Otolaryngology - Head & Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA
| | - Christin Ray
- Department of Otolaryngology - Head & Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA
| | - Terrin N Tamati
- Department of Otolaryngology - Head & Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA
- Department of Otorhinolaryngology - Head and Neck Surgery, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
|
29
|
Development and Evaluation of a Language-Independent Test of Auditory Discrimination for Referrals for Cochlear Implant Candidacy Assessment. Ear Hear 2021; 43:1151-1163. [PMID: 34812793 PMCID: PMC9197147 DOI: 10.1097/aud.0000000000001166] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES The purpose of this study was to (1) develop a Language-independent Test of Auditory Discrimination (LIT-AD) between speech sounds so that people with hearing loss who derive limited speech perception benefits from hearing aids (HAs) may be identified for consideration of cochlear implantation and (2) examine the relationship between the scores for the new discrimination test and those of a standard sentence test for adults wearing either HAs or cochlear implants (CIs). DESIGN The test measures the ability of the listener to correctly discriminate pairs of nonsense syllables, presented as sequential triplets in an odd-one-out format, implemented as a game-based software tool for self-administration using a tablet computer. Stage 1 first included a review of the phonemic inventories of the 40 most common languages in the world to select the consonants and vowels. Second, discrimination testing of 50 users of CIs at several signal-to-noise ratios (SNRs) was carried out to generate psychometric functions. These were used to calculate the corrections in SNR for each consonant-pair and vowel combination required to equalize difficulty across items. Third, all items were individually equalized in difficulty and the overall difficulty was set. Stage 2 involved the validation of the LIT-AD in English-speaking listeners by comparing discrimination scores with performance in a standard sentence test. Forty-one users of HAs and 40 users of CIs were assessed. Correlation analyses were conducted to examine test-retest reliability and the relationship between performance in the two tests. Multiple regression analyses were used to examine the relationship between demographic characteristics and performance in the LIT-AD. The scores of the CI users were used to estimate the probability of superior performance with CIs for a non-CI user having a given LIT-AD score and duration of hearing loss. 
RESULTS The LIT-AD comprises 81 pairs of vowel-consonant-vowel syllables that were equalized in difficulty to discriminate. The test can be self-administered on a tablet computer, and it takes about 10 min to complete. The software automatically scores the responses and gives an overall score and a list of confusable items as output. There was good test-retest reliability. On average, higher LIT-AD discrimination scores were associated with better sentence perception for users of HAs (r = -0.54, p <0.001) and users of CIs (r = -0.73, p <0.001). The probability of superior performance with CIs for a certain LIT-AD score was estimated, after allowing for the effect of duration of hearing loss. CONCLUSIONS The LIT-AD could increase access to CIs by screening for those who obtain limited benefits from HAs to facilitate timely referrals for CI candidacy evaluation. The test results can be used to provide patients and professionals with practical information about the probability of potential benefits for speech perception from cochlear implantation. The test will need to be evaluated for speakers of languages other than English to facilitate adoption in different countries.
|
30
|
Abstract
OBJECTIVES First, to evaluate the effect of laboratory-based test realism on speech intelligibility outcomes of cochlear implant users. Second, to conduct an exploratory investigation of speech intelligibility of cochlear implant users, including bilateral benefit, under realistic laboratory conditions. DESIGN For the first goal, the authors measured speech intelligibility scores of 15 bilateral cochlear implant recipients under three different test realism levels at two different signal-to-noise ratios (SNRs). The levels included (1) standard Bamford-Kowal-Bench-like sentences with spatially separated standard babble noise; (2) standard Bamford-Kowal-Bench-like sentences with three-dimensional recordings of actual situations; and (3) a variation of the second realism level where the sentences were obtained from natural effortful conversations. For the second goal, speech intelligibility of the realistic speech material was measured in six different acoustic scenes with realistic signal-to-noise ratios ranging from -5.8 dB to 3.2 dB. RESULTS Speech intelligibility was consistently highest in the most artificial (standard) test and lowest in the most realistic test. The effect of the realistic noise and that of the realistic speech material resulted in distinct SNR-dependent performance shifts with respect to their baselines. Speech intelligibility in realistic laboratory conditions was in general low, with mean scores around 60% at the highest SNR. Bilateral listening provided on average a 7% benefit over unilateral speech understanding in the better-performing ear. CONCLUSIONS The results obtained here suggest that standard speech-in-noise tests overestimate the performance of cochlear implant recipients in the real world. To address this limitation, future assessments need to improve the realism over current tests by considering the realism of both the speech and the noise materials. 
Likewise, speech intelligibility data under realistic conditions suggest that, insofar as these results can be considered representative of real-life performance, conversational speech and noise levels common to cochlear implant recipients are challenging in terms of speech intelligibility, with average scores around 60%. The findings and limitations are discussed alongside the factors affecting speech intelligibility.
31
Heffner CC, Jaekel BN, Newman RS, Goupell MJ. Accuracy and cue use in word segmentation for cochlear-implant listeners and normal-hearing listeners presented vocoded speech. J Acoust Soc Am 2021; 150:2936. [PMID: 34717484 PMCID: PMC8528550 DOI: 10.1121/10.0006448]
Abstract
Cochlear-implant (CI) listeners experience signal degradation, which leads to poorer speech perception than normal-hearing (NH) listeners. In the present study, difficulty with word segmentation, the process of perceptually parsing the speech stream into separate words, is considered as a possible contributor to this decrease in performance. CI listeners were compared to a group of NH listeners (presented with unprocessed speech and eight-channel noise-vocoded speech) in their ability to segment phrases with word segmentation ambiguities (e.g., "an iceman" vs "a nice man"). The results showed that CI listeners and NH listeners were worse at segmenting words when hearing processed speech than NH listeners were when presented with unprocessed speech. When viewed at a broad level, all of the groups used cues to word segmentation in similar ways. Detailed analyses, however, indicated that the two processed speech groups weighted top-down knowledge cues to word boundaries more and weighted acoustic cues to word boundaries less relative to NH listeners presented with unprocessed speech.
Affiliation(s)
- Christopher C Heffner
- Program in Neuroscience and Cognitive Science, University of Maryland, College Park, Maryland 20742, USA
- Brittany N Jaekel
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Rochelle S Newman
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
32
Kang H, Macherey O, Roman S, Pressnitzer D. Auditory memory for random time patterns in cochlear implant listeners. J Acoust Soc Am 2021; 150:1934. [PMID: 34598651 DOI: 10.1121/10.0005728]
Abstract
Learning about new sounds is essential for cochlear-implant and normal-hearing listeners alike, with the additional challenge for implant listeners that spectral resolution is severely degraded. Here, a task measuring the rapid learning of slow or fast stochastic temporal sequences [Kang, Agus, and Pressnitzer (2017). J. Acoust. Soc. Am. 142, 2219-2232] was performed by cochlear-implant (N = 10) and normal-hearing (N = 9) listeners, using electric or acoustic pulse sequences, respectively. Rapid perceptual learning was observed for both groups, with highly similar characteristics. Moreover, for cochlear-implant listeners, an additional condition tested ultra-fast electric pulse sequences that would be impossible to represent temporally when presented acoustically. This condition also demonstrated learning. Overall, the results suggest that cochlear-implant listeners have access to the neural plasticity mechanisms needed for the rapid perceptual learning of complex temporal sequences.
Affiliation(s)
- HiJee Kang
- Laboratoire des Systèmes Perceptifs, Département d'études Cognitives, École Normale Supérieure, PSL University, CNRS, 29 Rue d'Ulm, 75005 Paris, France
- Olivier Macherey
- Aix-Marseille University, CNRS, Centrale Marseille, LMA, 4 impasse Nikola Tesla, CS40006, 13453 Marseille, Cedex 13, France
- Stéphane Roman
- Department of Pediatric Otolaryngology and Neck Surgery, Aix-Marseille University, 264 Rue Saint Pierre, 13005 Marseille, France
- Daniel Pressnitzer
- Laboratoire des Systèmes Perceptifs, Département d'études Cognitives, École Normale Supérieure, PSL University, CNRS, 29 Rue d'Ulm, 75005 Paris, France
33
Bosen AK, Sevich VA, Cannon SA. Forward Digit Span and Word Familiarity Do Not Correlate With Differences in Speech Recognition in Individuals With Cochlear Implants After Accounting for Auditory Resolution. J Speech Lang Hear Res 2021; 64:3330-3342. [PMID: 34251908 PMCID: PMC8740688 DOI: 10.1044/2021_jslhr-20-00574]
Abstract
Purpose In individuals with cochlear implants, speech recognition is not associated with tests of working memory that primarily reflect storage, such as forward digit span. In contrast, our previous work found that vocoded speech recognition in individuals with normal hearing was correlated with performance on a forward digit span task. A possible explanation for this difference across groups is that variability in auditory resolution across individuals with cochlear implants could conceal the true relationship between speech and memory tasks. Here, our goal was to determine if performance on forward digit span and speech recognition tasks are correlated in individuals with cochlear implants after controlling for individual differences in auditory resolution. Method We measured sentence recognition ability in 20 individuals with cochlear implants with Perceptually Robust English Sentence Test Open-set sentences. Spectral and temporal modulation detection tasks were used to assess individual differences in auditory resolution, auditory forward digit span was used to assess working memory storage, and self-reported word familiarity was used to assess vocabulary. Results Individual differences in speech recognition were predicted by spectral and temporal resolution. A correlation was found between forward digit span and speech recognition, but this correlation was not significant after controlling for spectral and temporal resolution. No relationship was found between word familiarity and speech recognition. Forward digit span performance was not associated with individual differences in auditory resolution. Conclusions Our findings support the idea that sentence recognition in individuals with cochlear implants is primarily limited by individual differences in working memory processing, not storage. Studies examining the relationship between speech and memory should control for individual differences in auditory resolution.
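The control analysis described above, a correlation that loses significance once auditory resolution is accounted for, amounts to a partial correlation. The following is a minimal numpy sketch of that statistic, not the authors' analysis code; the toy data and variable names are illustrative only.

```python
import numpy as np

def partial_correlation(x, y, covariates):
    """Correlate x and y after regressing out covariates from both.

    x, y: 1-D arrays (e.g., digit span and speech recognition scores).
    covariates: 2-D array, one column per control variable
    (e.g., spectral and temporal modulation detection thresholds).
    """
    # Design matrix with an intercept column.
    Z = np.column_stack([np.ones(len(x)), covariates])
    # Residualize x and y with ordinary least squares.
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    # Pearson correlation of the residuals.
    return np.corrcoef(rx, ry)[0, 1]

# Toy data: x and y are correlated only through a shared covariate,
# so the raw correlation is high but the partial correlation is small.
rng = np.random.default_rng(0)
c = rng.normal(size=200)
x = c + 0.1 * rng.normal(size=200)
y = c + 0.1 * rng.normal(size=200)
r_raw = np.corrcoef(x, y)[0, 1]
r_partial = partial_correlation(x, y, c.reshape(-1, 1))
```

This residualization approach is equivalent to the textbook partial-correlation formula but extends directly to several covariates at once.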
Affiliation(s)
- Victoria A. Sevich
- Boys Town National Research Hospital, Omaha, NE
- The Ohio State University, Columbus
34
Perception of Child-Directed Versus Adult-Directed Emotional Speech in Pediatric Cochlear Implant Users. Ear Hear 2021; 41:1372-1382. [PMID: 32149924 DOI: 10.1097/aud.0000000000000862]
Abstract
OBJECTIVES Cochlear implants (CIs) are remarkable in allowing individuals with severe to profound hearing loss to perceive speech. Despite these gains in speech understanding, however, CI users often struggle to perceive elements such as vocal emotion and prosody, as CIs are unable to transmit the spectro-temporal detail needed to decode affective cues. This issue becomes particularly important for children with CIs, but little is known about their emotional development. In a previous study, pediatric CI users showed deficits in voice emotion recognition with child-directed stimuli featuring exaggerated prosody. However, the large intersubject variability and differential developmental trajectory known in this population incited us to question the extent to which exaggerated prosody would facilitate performance in this task. Thus, the authors revisited the question with both adult-directed and child-directed stimuli. DESIGN Vocal emotion recognition was measured using both child-directed (CDS) and adult-directed (ADS) speech conditions. Pediatric CI users, aged 7-19 years old, with no cognitive or visual impairments and who communicated through oral communication with English as the primary language participated in the experiment (n = 27). Stimuli comprised 12 sentences selected from the HINT database. The sentences were spoken by male and female talkers in a CDS or ADS manner, in each of the five target emotions (happy, sad, neutral, scared, and angry). The chosen sentences were semantically emotion-neutral. Percent correct emotion recognition scores were analyzed for each participant in each condition (CDS vs. ADS). Children also completed cognitive tests of nonverbal IQ and receptive vocabulary, while parents completed questionnaires of CI and hearing history. It was predicted that the reduced prosodic variations found in the ADS condition would result in lower vocal emotion recognition scores compared with the CDS condition. 
Moreover, it was hypothesized that cognitive factors, perceptual sensitivity to complex pitch changes, and elements of each child's hearing history may serve as predictors of performance on vocal emotion recognition. RESULTS Consistent with our hypothesis, pediatric CI users scored higher on CDS than on ADS speech stimuli, suggesting that speaking with exaggerated prosody, akin to "motherese," may be a viable way to convey emotional content. Significant talker effects were also observed, in that higher scores were found for the female talker in both conditions. Multiple regression analysis showed that nonverbal IQ was a significant predictor of CDS emotion recognition scores, while years of CI use was a significant predictor of ADS scores. Confusion matrix analyses revealed a dependence of results on specific emotions; for the CDS condition's female talker, participants had high sensitivity (d' scores) to happy and low sensitivity to the neutral sentences, while for the ADS condition, low sensitivity was found for the scared sentences. CONCLUSIONS In general, participants showed better vocal emotion recognition in the CDS condition, which had more variability in pitch and intensity (and thus more exaggerated prosody) than the ADS condition. Results suggest that pediatric CI users struggle with vocal emotion perception in general, and particularly with adult-directed speech. The authors believe these results have broad implications for understanding how CI users perceive emotions, both from an auditory communication standpoint and from a socio-developmental perspective.
35
Imsiecke M, Büchner A, Lenarz T, Nogueira W. Amplitude Growth Functions of Auditory Nerve Responses to Electric Pulse Stimulation With Varied Interphase Gaps in Cochlear Implant Users With Ipsilateral Residual Hearing. Trends Hear 2021; 25:23312165211014137. [PMID: 34181493 PMCID: PMC8243142 DOI: 10.1177/23312165211014137]
Abstract
Amplitude growth functions (AGFs) of electrically evoked compound action potentials (eCAPs) with varying interphase gaps (IPGs) were measured in cochlear implant users with ipsilateral residual hearing (electric-acoustic stimulation [EAS]). It was hypothesized that IPG effects on AGFs provide an objective measure to estimate neural health. This hypothesis was tested in EAS users, as residual low-frequency hearing might imply survival of hair cells and hence better neural health in apical compared to basal cochlear regions. A total of 16 MED-EL EAS subjects participated, as well as a control group of 16 deaf cochlear implant users. The IPG effect on the AGF characteristics of slope, threshold, dynamic range, and stimulus level at 50% maximum eCAP amplitude (level50%) was investigated. AGF threshold and level50% were significantly affected by the IPG in both the EAS and control groups. The magnitude of the AGF characteristics correlated with electrode impedance and electrode-modiolus distance (EMD) in both groups. In contrast, the change of the AGF characteristics with increasing IPG was independent of these electrode-specific measures. The IPG effect on the AGF level50% in both groups, as well as on the threshold in EAS users, correlated with the duration of hearing loss, which is a predictor of neural health. In EAS users, a significantly different IPG effect on level50% was found between apical and medial electrodes. This outcome is consistent with our hypothesis that the influence of IPG effects on AGF characteristics provides a sensitive measurement and may indicate better neural health in the apex compared to the medial cochlear region in EAS users.
Affiliation(s)
- Marina Imsiecke
- Clinic for Otorhinolaryngology, Hannover Medical School, Hannover, Germany
- Andreas Büchner
- Clinic for Otorhinolaryngology, Hannover Medical School, Hannover, Germany; Cluster of Excellence "Hearing4All," Hannover, Germany
- Thomas Lenarz
- Clinic for Otorhinolaryngology, Hannover Medical School, Hannover, Germany; Cluster of Excellence "Hearing4All," Hannover, Germany
- Waldo Nogueira
- Clinic for Otorhinolaryngology, Hannover Medical School, Hannover, Germany; Cluster of Excellence "Hearing4All," Hannover, Germany
36
Schvartz-Leyzac KC, Zwolan TA, Pfingst BE. Using the electrically-evoked compound action potential (ECAP) interphase gap effect to select electrode stimulation sites in cochlear implant users. Hear Res 2021; 406:108257. [PMID: 34020316 DOI: 10.1016/j.heares.2021.108257]
Abstract
Studies in cochlear implanted animals show that the IPG Effect for ECAP growth functions (i.e., the magnitude of the change in ECAP amplitude growth function (AGF) slope or peak amplitude when the interphase gap (IPG) is increased) can be used to estimate the densities of spiral ganglion neurons (SGNs) near the electrode stimulation and recording sites. In humans, the same ECAP IPG Effect measures correlate with speech recognition performance. The present study examined the efficacy of selecting electrode sites for stimulation based on the IPG Effect, in order to improve the performance of CI users on speech recognition tasks. We measured the ECAP IPG Effect for peak amplitude in adult (>18 years old) CI users (N = 18 ears), and created experimental programs that stimulated electrodes with either the highest or lowest ECAP IPG Effect for peak amplitude. Subjects also listened to a program without any electrodes deactivated. In a subset of subject ears (11/18), we compared performance differences between the experimental programs with post-operative computerized tomography (CT) scans to examine underlying factors that might contribute to the efficacy of an electrode site-selection approach. For sentences in noise, average performance was better when subjects listened to the experimental program that stimulated electrodes with the highest rather than the lowest IPG Effect for ECAP peak amplitude. A similar pattern was noted for transmission and perception of consonant place cues in a consonant recognition task. However, on average, performance when listening to a program with higher IPG Effect values was equal to that when listening with all electrodes activated. Results also suggest that scalar location (scala tympani or vestibuli) should be considered when using an ECAP-based electrode site-selection procedure to optimize CI performance.
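The slope variant of the IPG Effect described above reduces to fitting each amplitude growth function and differencing the slopes across IPG conditions. A minimal numpy sketch under simplified assumptions (a fully linear growth region, arbitrary units); this is not the authors' analysis pipeline.

```python
import numpy as np

def agf_slope(levels, ecap_amplitudes):
    """Slope of the ECAP amplitude growth function (least-squares fit).

    levels: stimulus current levels; ecap_amplitudes: eCAP peak
    amplitudes at those levels. A straight line is fit to the whole
    series; real analyses would first isolate the linear growth region.
    """
    slope, _intercept = np.polyfit(levels, ecap_amplitudes, 1)
    return slope

def ipg_effect(levels, amp_short_ipg, amp_long_ipg):
    """IPG Effect for slope: slope(long IPG) minus slope(short IPG)."""
    return agf_slope(levels, amp_long_ipg) - agf_slope(levels, amp_short_ipg)

# Toy growth functions: the longer IPG yields steeper growth.
levels = np.arange(100.0, 200.0, 10.0)    # arbitrary current units
short = 2.0 * (levels - 100.0) + 5.0      # slope 2.0
long_ = 2.6 * (levels - 100.0) + 5.0      # slope 2.6
effect = ipg_effect(levels, short, long_)
```

A larger positive effect, on this convention, would be read as the growth function steepening more with the longer gap.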
Affiliation(s)
- Kara C Schvartz-Leyzac
- Kresge Hearing Research Institute, Department of Otolaryngology, Michigan Medicine, 1150 West Medical Center Drive, Ann Arbor, MI 48109-5616, United States; Hearing Rehabilitation Center, Department of Otolaryngology, Michigan Medicine, 475 W. Market Place, Building 1, Suite A, Ann Arbor, MI 48108, United States.
- Teresa A Zwolan
- Hearing Rehabilitation Center, Department of Otolaryngology, Michigan Medicine, 475 W. Market Place, Building 1, Suite A, Ann Arbor, MI 48108, United States
- Bryan E Pfingst
- Kresge Hearing Research Institute, Department of Otolaryngology, Michigan Medicine, 1150 West Medical Center Drive, Ann Arbor, MI 48109-5616, United States
37
Mesik J, Ray L, Wojtczak M. Effects of Age on Cortical Tracking of Word-Level Features of Continuous Competing Speech. Front Neurosci 2021; 15:635126. [PMID: 33867920 PMCID: PMC8047075 DOI: 10.3389/fnins.2021.635126]
Abstract
Speech-in-noise comprehension difficulties are common among the elderly population, yet traditional objective measures of speech perception are largely insensitive to this deficit, particularly in the absence of clinical hearing loss. In recent years, a growing body of research in young normal-hearing adults has demonstrated that high-level features related to speech semantics and lexical predictability elicit strong centro-parietal negativity in the EEG signal around 400 ms following the word onset. Here we investigate effects of age on cortical tracking of these word-level features within a two-talker speech mixture, and their relationship with self-reported difficulties with speech-in-noise understanding. While undergoing EEG recordings, younger and older adult participants listened to a continuous narrative story in the presence of a distractor story. We then utilized forward encoding models to estimate cortical tracking of four speech features: (1) word onsets, (2) "semantic" dissimilarity of each word relative to the preceding context, (3) lexical surprisal for each word, and (4) overall word audibility. Our results revealed robust tracking of all features for attended speech, with surprisal and word audibility showing significantly stronger contributions to neural activity than dissimilarity. Additionally, older adults exhibited significantly stronger tracking of word-level features than younger adults, especially over frontal electrode sites, potentially reflecting increased listening effort. Finally, neuro-behavioral analyses revealed trends of a negative relationship between subjective speech-in-noise perception difficulties and the model goodness-of-fit for attended speech, as well as a positive relationship between task performance and the goodness-of-fit, indicating behavioral relevance of these measures. 
Together, our results demonstrate the utility of modeling cortical responses to multi-talker speech using complex, word-level features and the potential for their use to study changes in speech processing due to aging and hearing loss.
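Forward encoding models of the kind used above are commonly estimated as temporal response functions: regularized regression from time-lagged stimulus features to the EEG signal. Below is a minimal single-feature, single-channel sketch with toy data; real analyses use several features, many channels, and cross-validated regularization.

```python
import numpy as np

def lagged_design(feature, max_lag):
    """Stack time-lagged copies of a stimulus feature (lags 0..max_lag)."""
    n = len(feature)
    X = np.zeros((n, max_lag + 1))
    for lag in range(max_lag + 1):
        X[lag:, lag] = feature[: n - lag]
    return X

def fit_trf(feature, eeg, max_lag, alpha=1.0):
    """Ridge-regression temporal response function (forward model)."""
    X = lagged_design(feature, max_lag)
    # Closed-form ridge solution: (X'X + alpha*I)^-1 X'y
    XtX = X.T @ X + alpha * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ eeg)

# Toy data: "EEG" is a word-onset train convolved with a known kernel,
# so the fitted TRF should recover that kernel.
rng = np.random.default_rng(1)
onsets = (rng.random(5000) < 0.02).astype(float)
kernel = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
eeg = np.convolve(onsets, kernel)[:5000] + 0.01 * rng.normal(size=5000)
trf = fit_trf(onsets, eeg, max_lag=4, alpha=1e-3)
```

The model goodness-of-fit mentioned in the abstract would then be the correlation between the EEG predicted from the fitted TRF and the held-out EEG.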
Affiliation(s)
- Juraj Mesik
- Department of Psychology, University of Minnesota, Minneapolis, MN, United States
38
O'Neill ER, Parke MN, Kreft HA, Oxenham AJ. Role of semantic context and talker variability in speech perception of cochlear-implant users and normal-hearing listeners. J Acoust Soc Am 2021; 149:1224. [PMID: 33639827 PMCID: PMC7895533 DOI: 10.1121/10.0003532]
Abstract
This study assessed the impact of semantic context and talker variability on speech perception by cochlear-implant (CI) users and compared their overall performance and between-subjects variance with that of normal-hearing (NH) listeners under vocoded conditions. Thirty post-lingually deafened adult CI users were tested, along with 30 age-matched and 30 younger NH listeners, on sentences with and without semantic context, presented in quiet and noise, spoken by four different talkers. Additional measures included working memory, non-verbal intelligence, and spectral-ripple detection and discrimination. Semantic context and between-talker differences influenced speech perception to similar degrees for both CI users and NH listeners. Between-subjects variance for speech perception was greatest in the CI group but remained substantial in both NH groups, despite the uniformly degraded stimuli in these two groups. Spectral-ripple detection and discrimination thresholds in CI users were significantly correlated with speech perception, but a single set of vocoder parameters for NH listeners was not able to capture average CI performance in both speech and spectral-ripple tasks. The lack of difference in the use of semantic context between CI users and NH listeners suggests no overall differences in listening strategy between the groups, when the stimuli are similarly degraded.
Affiliation(s)
- Erin R O'Neill
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Morgan N Parke
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Heather A Kreft
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
39
Icht M, Mama Y, Taitelbaum-Swead R. Visual and Auditory Verbal Memory in Older Adults: Comparing Postlingually Deaf Cochlear Implant Users to Normal-Hearing Controls. J Speech Lang Hear Res 2020; 63:3865-3876. [PMID: 33049151 DOI: 10.1044/2020_jslhr-20-00170]
Abstract
Purpose The aim of this study was to test whether a group of older postlingually deafened cochlear implant users (OCIs) use similar verbal memory strategies to those used by older normal-hearing adults (ONHs). Verbal memory functioning was assessed in the visual and auditory modalities separately, enabling us to eliminate possible modality-based biases. Method Participants performed two separate visual and auditory verbal memory tasks. In each task, the visually or aurally presented study words were learned by vocal production (saying aloud) or by no production (reading silently or listening), followed by a free recall test. Twenty-seven older adults (> 60 years) participated (OCI = 13, ONH = 14), all of whom demonstrated intact cognitive abilities. All OCIs showed good open-set speech perception results in quiet. Results Both ONHs and OCIs showed production benefits (higher recall rates for vocalized than nonvocalized words) in the visual and auditory tasks. The ONHs showed similar production benefits in the visual and auditory tasks. The OCIs demonstrated a smaller production effect in the auditory task. Conclusions These results may indicate that different modality-specific memory strategies were used by the ONHs and the OCIs. The group differences in memory performance suggest that, even when deafness occurs after the completion of language acquisition, the reduced and distorted external auditory stimulation leads to a deterioration in the phonological representation of sounds. Possibly, this deterioration leads to a less efficient auditory long-term verbal memory.
Affiliation(s)
- Michal Icht
- Department of Communication Disorders, Ariel University, Israel
- Yaniv Mama
- Department of Behavioral Sciences and Psychology, Ariel University, Israel
- Riki Taitelbaum-Swead
- Department of Communication Disorders, Ariel University, Israel
- Meuhedet Health Services, Tel Aviv, Israel
40
O'Neill ER, Parke MN, Kreft HA, Oxenham AJ. Development and Validation of Sentences Without Semantic Context to Complement the Basic English Lexicon Sentences. J Speech Lang Hear Res 2020; 63:3847-3854. [PMID: 33049146 PMCID: PMC8582750 DOI: 10.1044/2020_jslhr-20-00174]
Abstract
Purpose The goal of this study was to develop and validate a new corpus of sentences without semantic context to facilitate research aimed at isolating the effects of semantic context in speech perception. Method The newly developed corpus contains nonsensical sentences but is matched in vocabulary and syntactic structure to the existing Basic English Lexicon (BEL) corpus. It consists of 20 lists, with each list containing 25 sentences and each sentence having four keywords. Each new list contains the same keywords as the respective list in the original BEL corpus, but the keywords within each list are scrambled across sentences to eliminate semantic context within each sentence, while maintaining the original syntactic structure. All sentences in the original and nonsense BEL corpora were recorded by the same two male and two female talkers. Results Mean intelligibility scores for each list were estimated by calculating the mean proportion of correct keywords achieved by 40 normal-hearing listeners for one male and one female talker. Although small but significant differences were found between some pairs of lists, mean performance for all 20 lists fell within the 95% confidence intervals of the mean. Conclusions Lists in the newly developed nonsense corpus are reasonably well equated for difficulty and can be used interchangeably in a randomized experimental design. Both the original and nonsense BEL sentences, all recorded by the same four talkers, are publicly available. Supplemental Material https://doi.org/10.23641/asha.13022900.
Affiliation(s)
- Erin R. O'Neill
- Department of Psychology, University of Minnesota, Minneapolis
- Morgan N. Parke
- Department of Psychology, University of Minnesota, Minneapolis
41
Skidmore JA, Vasil KJ, He S, Moberly AC. Explaining Speech Recognition and Quality of Life Outcomes in Adult Cochlear Implant Users: Complementary Contributions of Demographic, Sensory, and Cognitive Factors. Otol Neurotol 2020; 41:e795-e803. [PMID: 32558759 PMCID: PMC7875311 DOI: 10.1097/mao.0000000000002682]
Abstract
HYPOTHESES Adult cochlear implant (CI) outcomes depend on demographic, sensory, and cognitive factors. However, these factors have not been examined together comprehensively for relations to different outcome types, such as speech recognition versus quality of life (QOL). Three hypotheses were tested: 1) speech recognition will be explained most strongly by sensory factors, whereas QOL will be explained more strongly by cognitive factors. 2) Different speech recognition outcome domains (sentences versus words) and different QOL domains (physical versus social versus psychological functioning) will be explained differentially by demographic, sensory, and cognitive factors. 3) Including cognitive factors as predictors will provide more power to explain outcomes than demographic and sensory predictors alone. BACKGROUND A better understanding of the contributors to CI outcomes is needed to prognosticate outcomes before surgery, explain outcomes after surgery, and tailor rehabilitation efforts. METHODS Forty-one adult postlingual experienced CI users were assessed for sentence and word recognition, as well as hearing-related QOL, along with a broad collection of predictors. Partial least squares regression was used to identify factors that were most predictive of outcome measures. RESULTS Supporting our hypotheses, speech recognition abilities were most strongly dependent on sensory skills, while QOL outcomes required a combination of cognitive, sensory, and demographic predictors. The inclusion of cognitive measures increased the ability to explain outcomes, mainly for QOL. CONCLUSIONS Explaining variability in adult CI outcomes requires a broad assessment approach. Identifying the most important predictors depends on the particular outcome domain and even the particular measure of interest.
Affiliation(s)
- Jeffrey A Skidmore
- The Ohio State University Wexner Medical Center, Department of Otolaryngology-Head & Neck Surgery, Columbus, Ohio
42
Zaltz Y, Bugannim Y, Zechoval D, Kishon-Rabin L, Perez R. Listening in Noise Remains a Significant Challenge for Cochlear Implant Users: Evidence from Early Deafened and Those with Progressive Hearing Loss Compared to Peers with Normal Hearing. J Clin Med 2020; 9:jcm9051381. [PMID: 32397101 PMCID: PMC7290476 DOI: 10.3390/jcm9051381]
Abstract
Cochlear implants (CIs) are the state-of-the-art therapy for individuals with severe to profound hearing loss, providing them with good functional hearing. Nevertheless, speech understanding in background noise remains a significant challenge. The purposes of this study were to: (1) conduct a novel within-study comparison of speech-in-noise performance across ages in different populations of CI and normal hearing (NH) listeners using an adaptive sentence-in-noise test, and (2) examine the relative contribution of sensory information and cognitive–linguistic factors to performance. Forty CI users (mean age 20 years) were divided into “early-implanted” <4 years (n = 16) and “late-implanted” >6 years (n = 11), all prelingually deafened, and “progressively deafened” (n = 13). The control group comprised 136 NH subjects (80 children, 56 adults). Testing included the Hebrew Matrix test, word recognition in quiet, and linguistic and cognitive tests. Results show poorer performance in noise for CI users across populations and ages compared to NH peers, and age at implantation and word recognition in quiet were found to be contributing factors. For those recognizing 50% or more of the words in quiet (n = 27), non-verbal intelligence and receptive vocabulary explained 63% of the variance in noise. This information helps delineate the relative contribution of top-down and bottom-up skills for speech recognition in noise and can help set expectations in CI counseling.
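Adaptive sentence-in-noise tests such as the Matrix test converge on the SNR that yields 50% intelligibility (the speech reception threshold, SRT) by adjusting the SNR after every trial. The sketch below is a deliberately simplified 1-up/1-down staircase with a simulated listener, not the actual Matrix procedure (which adapts on word-level scores and uses variable step sizes); all numbers are illustrative.

```python
import math
import random

def adaptive_snr_track(respond, start_snr=0.0, step=2.0, n_trials=30):
    """Simplified 1-up/1-down SNR staircase.

    respond(snr) -> True if the listener repeated the sentence correctly.
    SNR drops after a correct answer and rises after an error, so the
    track oscillates around the 50%-correct point. Returns the SNR
    history and a crude SRT estimate (mean of the second half of the
    track; real procedures average reversal points instead).
    """
    snr = start_snr
    history = []
    for _ in range(n_trials):
        correct = respond(snr)
        history.append(snr)
        snr += -step if correct else step
    second_half = history[n_trials // 2:]
    return history, sum(second_half) / len(second_half)

# Toy listener whose true 50% point sits at +3 dB SNR.
random.seed(0)
def listener(snr, srt=3.0, slope=0.5):
    p_correct = 1.0 / (1.0 + math.exp(-slope * (snr - srt)))
    return random.random() < p_correct

history, srt_estimate = adaptive_snr_track(listener, start_snr=10.0)
```

With enough trials the track settles near the simulated listener's SRT, which is the quantity a Matrix-style test reports.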
Affiliation(s)
- Yael Zaltz (correspondence), Yossi Bugannim, Doreen Zechoval, Liat Kishon-Rabin: The Department of Communication Disorders, Steyer School of Health Professions, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv-Yafo 6997801, Israel
- Ronen Perez: Department of Otolaryngology and Head and Neck Surgery, Shaare Zedek Medical Center Affiliated to The Hebrew University Medical School, Jerusalem 9190501, Israel
43
Bosen AK, Barry MF. Serial Recall Predicts Vocoded Sentence Recognition Across Spectral Resolutions. J Speech Lang Hear Res 2020; 63:1282-1298. [PMID: 32213149 PMCID: PMC7242981 DOI: 10.1044/2020_jslhr-19-00319]
Abstract
Purpose The goal of this study was to determine how various aspects of cognition predict speech recognition ability across different levels of speech vocoding within a single group of listeners. Method We tested the ability of young adults (N = 32) with normal hearing to recognize Perceptually Robust English Sentence Test Open-set (PRESTO) sentences that were degraded with a vocoder to produce different levels of spectral resolution (16, eight, and four carrier channels). Participants also completed tests of cognition (fluid intelligence, short-term memory, and attention), which were used as predictors of sentence recognition. Sentence recognition was compared across vocoder conditions, predictors were correlated with individual differences in sentence recognition, and the relationships between predictors were characterized. Results PRESTO sentence recognition declined with a decreasing number of vocoder channels, with no evident floor or ceiling performance in any condition. Individual ability to recognize PRESTO sentences was consistent relative to the group across vocoder conditions. Short-term memory, as measured with serial recall of digit sequences, was a moderate predictor of sentence recognition (ρ = 0.65) and was constant across vocoder conditions. Fluid intelligence was marginally correlated with serial recall, but not with sentence recognition. Attentional measures had no discernible relationship to sentence recognition and only a marginal relationship with serial recall. Conclusions Verbal serial recall is a substantial predictor of vocoded sentence recognition, and this predictive relationship is independent of spectral resolution. In populations with variable speech recognition outcomes, such as listeners with cochlear implants, it should be possible to account for the independent effects of spectral resolution and verbal serial recall on speech recognition ability.
Supplemental Material https://doi.org/10.23641/asha.12021051.