1
Lebiecka-Johansen P, Zekveld AA, Wendt D, Koelewijn T, Muhammad AI, Kramer SE. Classification of Hearing Status Based on Pupil Measures During Sentence Perception. J Speech Lang Hear Res 2025; 68:1188-1208. [PMID: 39951463] [DOI: 10.1044/2024_jslhr-24-00005]
Abstract
PURPOSE Speech understanding in noise can be effortful, especially for people with hearing impairment. To compensate for reduced acuity, hearing-impaired (HI) listeners may allocate listening effort differently than normal-hearing (NH) peers, and we expected this to influence measures derived from the pupil dilation response. To investigate this in more detail, we assessed the sensitivity of pupil measures to hearing-related changes in effort allocation, using a machine learning-based classification framework capable of combining and ranking measures to examine hearing-related, stimulus-related (signal-to-noise ratio [SNR]), and task response-related changes in pupil measures. METHOD Pupil data from 32 NH (40-70 years old, M = 51.3 years, six males) and 32 HI (31-76 years old, M = 59 years, 13 males) listeners were recorded during an adaptive speech reception threshold test. Peak pupil dilation (PPD), mean pupil dilation (MPD), principal pupil components (rotated principal components [RPCs]), and baseline pupil size (BPS) were calculated. As a precondition for ranking pupil measures, we assessed whether hearing status (NH/HI), SNR (high/low), and task response (correct/incorrect) could be classified above chance level. This precondition was met when classifying hearing status in subsets of data with varying SNR and task response, SNR in the NH group, and task response in the HI group. RESULTS A combination of pupil measures was necessary to classify the dependent factors. Hearing status, SNR, and task response were predicted primarily by the established measures PPD (maximum effort), RPC2 (speech processing), and BPS (task anticipation), and by the novel measures RPC1 (listening) and RPC3 (response preparation) in tasks involving SNR as an outcome or as a difficulty criterion.
CONCLUSIONS A machine learning-based classification framework can assess sensitivity of, and rank the importance of, pupil measures in relation to three effort modulators (factors) during speech perception in noise. This indicates that the effects of these factors on the pupil measures allow for reasonable classification performance. Moreover, the varying contributions of each measure to the classification models suggest they are not equally affected by these factors. Thus, this study enhances our understanding of pupil responses and their sensitivity to relevant factors. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.28225199.
Affiliation(s)
- Patrycja Lebiecka-Johansen
- Department of Otolaryngology/Head & Neck Surgery, Vrije Universiteit Amsterdam, Amsterdam UMC, Amsterdam Public Health Research Institute, the Netherlands
- Eriksholm Research Centre, Snekkersten, Denmark
- Adriana A Zekveld
- Department of Otolaryngology/Head & Neck Surgery, Vrije Universiteit Amsterdam, Amsterdam UMC, Amsterdam Public Health Research Institute, the Netherlands
- Dorothea Wendt
- Eriksholm Research Centre, Snekkersten, Denmark
- Department of Health Technology, Technical University of Denmark, Kongens Lyngby
- Thomas Koelewijn
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, the Netherlands
- Research School of Behavioral and Cognitive Neuroscience, Graduate School of Medical Sciences, University of Groningen, the Netherlands
- Afaan I Muhammad
- Department of Otolaryngology/Head & Neck Surgery, Vrije Universiteit Amsterdam, Amsterdam UMC, Amsterdam Public Health Research Institute, the Netherlands
- Sophia E Kramer
- Department of Otolaryngology/Head & Neck Surgery, Vrije Universiteit Amsterdam, Amsterdam UMC, Amsterdam Public Health Research Institute, the Netherlands
2
Harel-Arbeli T, Shaposhnik H, Palgi Y, Ben-David BM. Taking the Extra Listening Mile: Processing Spoken Semantic Context Is More Effortful for Older Than Young Adults. Ear Hear 2025; 46:315-324. [PMID: 39219019] [DOI: 10.1097/aud.0000000000001582]
Abstract
OBJECTIVES Older adults use semantic context to generate predictions in speech processing, compensating for aging-related sensory and cognitive changes. This study aimed to gauge aging-related changes in effort exertion related to context use. DESIGN The study revisited data from Harel-Arbeli et al. (2023), which used a "visual-world" eye-tracking paradigm. Data on efficiency of context use (response latency and the probability of gazing at the target before hearing it) and effort exertion (pupil dilation) were extracted from a subset of 14 young adults (21 to 27 years old) and 13 older adults (65 to 79 years old). RESULTS Both age groups showed a similar pattern of context benefits for response latency and target word predictions; however, only the older adult group showed overall increased pupil dilation when listening to context sentences. CONCLUSIONS Older adults' efficient use of spoken semantic context appears to come at the cost of increased effort exertion.
Affiliation(s)
- Tami Harel-Arbeli
- Department of Gerontology, Haifa University, Haifa, Israel
- Communication, Aging and Neuropsychology Lab, Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Department of Communication Disorders, Achva Academic College, Arugot, Israel
- Hagit Shaposhnik
- Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Yuval Palgi
- Department of Gerontology, Haifa University, Haifa, Israel
- Boaz M Ben-David
- Communication, Aging and Neuropsychology Lab, Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- KITE, Toronto Rehabilitation Institute, University Health Networks, Toronto, Ontario, Canada
- Department of Speech-Language Pathology, University of Toronto, Toronto, Ontario, Canada
3
Abdel-Latif KHA, Koelewijn T, Başkent D, Meister H. Assessment of Speech Processing and Listening Effort Associated With Speech-on-Speech Masking Using the Visual World Paradigm and Pupillometry. Trends Hear 2025; 29:23312165241306091. [PMID: 39800920] [PMCID: PMC11726529] [DOI: 10.1177/23312165241306091]
Abstract
Speech-on-speech masking is a common and challenging situation in everyday verbal communication. The ability to segregate competing auditory streams is a necessary requirement for focusing attention on the target speech. The Visual World Paradigm (VWP) provides insight into speech processing by capturing gaze fixations on visually presented icons that reflect the speech signal. This study aimed to propose a new VWP to examine the time course of speech segregation when competing sentences are presented and to collect pupil size data as a measure of listening effort. Twelve young normal-hearing participants were presented with competing matrix sentences (structure "name-verb-numeral-adjective-object") diotically via headphones at four target-to-masker ratios (TMRs), corresponding to intermediate to near perfect speech recognition. The VWP visually presented the number and object words from both the target and masker sentences. Participants were instructed to gaze at the corresponding words of the target sentence without providing verbal responses. The gaze fixations consistently reflected the different TMRs for both number and object words. The slopes of the fixation curves were steeper, and the proportion of target fixations increased with higher TMRs, suggesting more efficient segregation under more favorable conditions. Temporal analysis of pupil data using Bayesian paired sample t-tests showed a corresponding reduction in pupil dilation with increasing TMR, indicating reduced listening effort. The results support the conclusion that the proposed VWP and the captured eye movements and pupil dilation are suitable for objective assessment of sentence-based speech-on-speech segregation and the corresponding listening effort.
Affiliation(s)
- Khaled H. A. Abdel-Latif
- Faculty of Medicine and University Hospital Cologne, Department of Otorhinolaryngology, Head and Neck Surgery, University of Cologne, Cologne, Germany
- Jean Uhrmacher Institute for Clinical ENT-Research, University of Cologne, Cologne, Germany
- Thomas Koelewijn
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Research School of Behavioural and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, The Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Research School of Behavioural and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, The Netherlands
- Hartmut Meister
- Faculty of Medicine and University Hospital Cologne, Department of Otorhinolaryngology, Head and Neck Surgery, University of Cologne, Cologne, Germany
- Jean Uhrmacher Institute for Clinical ENT-Research, University of Cologne, Cologne, Germany
4
Silcox JW, Bennett K, Copeland A, Ferguson SH, Payne BR. The Costs (and Benefits?) of Effortful Listening for Older Adults: Insights from Simultaneous Electrophysiology, Pupillometry, and Memory. J Cogn Neurosci 2024; 36:997-1020. [PMID: 38579256] [DOI: 10.1162/jocn_a_02161]
Abstract
Although the impact of acoustic challenge on speech processing and memory increases as a person ages, older adults may engage in strategies that help them compensate for these demands. In the current preregistered study, older adults (n = 48) listened to sentences, presented in quiet or in noise, that were high constraint with either expected or unexpected endings, or low constraint with unexpected endings. Pupillometry and EEG were simultaneously recorded, and subsequent sentence recognition and word recall were measured. Like young adults in prior work, we found that noise led to increases in pupil size, delayed and reduced ERP responses, and decreased recall for unexpected words. However, in contrast to prior work in young adults, where a larger pupillary response predicted a recovery of the N400 at the cost of poorer memory performance in noise, older adults did not show an associated recovery of the N400 despite decreased memory performance. Instead, we found that in quiet, increases in pupil size were associated with delays in N400 onset latencies and increased recognition memory performance. In conclusion, we found that transient variation in pupil-linked arousal predicted trade-offs between real-time lexical processing and memory that emerged at lower levels of task demand in aging. Moreover, with increased acoustic challenge, older adults still exhibited costs associated with transient increases in arousal without the corresponding benefits.
5
Shin J, Noh S, Park J, Sung JE. Syntactic complexity differentially affects auditory sentence comprehension performance for individuals with age-related hearing loss. Front Psychol 2023; 14:1264994. [PMID: 37965654] [PMCID: PMC10641445] [DOI: 10.3389/fpsyg.2023.1264994]
Abstract
Objectives This study examined whether older adults with hearing loss (HL) experience greater difficulties in auditory sentence comprehension than those with typical hearing (TH) when the linguistic burdens of syntactic complexity were systematically manipulated by varying either sentence type (active vs. passive) or sentence length (3 vs. 4 phrases). Methods A total of 22 individuals with HL and 24 controls participated in the study, completing a sentence comprehension test (SCT), standardized memory assessments, and pure-tone audiometry tests. Generalized linear mixed effects models were employed to compare the effects of sentence type and length on SCT accuracy, while Pearson correlation coefficients were computed to explore the relationships between SCT accuracy and other factors. Additionally, stepwise regression analyses were used to identify memory-related predictors of sentence comprehension ability. Results Older adults with HL exhibited poorer performance on passive than on active sentences compared to controls when sentence length was controlled. Greater difficulty with passive sentences was linked to working memory capacity, which emerged as the most significant predictor of passive-sentence comprehension among participants with HL. Conclusion Our findings contribute to the understanding of the linguistic-cognitive deficits linked to age-related hearing loss by demonstrating its detrimental impact on the processing of passive sentences. Cognitively healthy adults with hearing difficulties may face challenges in comprehending syntactically more complex sentences that impose higher computational demands, particularly on working memory allocation.
Affiliation(s)
- Jee Eun Sung
- Department of Communication Disorders, Ewha Womans University, Seoul, Republic of Korea
6
Trau-Margalit A, Fostick L, Harel-Arbeli T, Nissanholtz-Gannot R, Taitelbaum-Swead R. Speech recognition in noise task among children and young-adults: a pupillometry study. Front Psychol 2023; 14:1188485. [PMID: 37425148] [PMCID: PMC10328119] [DOI: 10.3389/fpsyg.2023.1188485]
Abstract
Introduction Children experience unique challenges when listening to speech in noisy environments. The present study used pupillometry, an established method for quantifying listening and cognitive effort, to detect temporal changes in pupil dilation during a speech-recognition-in-noise task among school-aged children and young adults. Methods Thirty school-aged children and 31 young adults listened to sentences amidst four-talker babble noise in two signal-to-noise ratio (SNR) conditions: a high accuracy condition (+10 dB and +6 dB for children and adults, respectively) and a low accuracy condition (+5 dB and +2 dB for children and adults, respectively). They were asked to repeat the sentences while pupil size was measured continuously during the task. Results During the auditory processing phase, both groups displayed pupil dilation; however, adults exhibited greater dilation than children, particularly in the low accuracy condition. In the second phase (retention), only children demonstrated increased pupil dilation, whereas adults consistently exhibited a decrease in pupil size. Additionally, the children's group showed increased pupil dilation during the response phase. Discussion Although adults and school-aged children produced similar behavioural scores, group differences in dilation patterns indicate that their underlying auditory processing differs. A second peak of pupil dilation among the children suggests that their cognitive effort during speech recognition in noise lasts longer than in adults, continuing past the first auditory processing peak. These findings provide evidence of effortful listening among children and highlight the need to identify and alleviate listening difficulties in school-aged children and to provide proper intervention strategies.
Affiliation(s)
- Avital Trau-Margalit
- Department of Communication Disorders, Speech Perception and Listening Effort Lab in the Name of Prof. Mordechai Himelfarb, Ariel University, Ariel, Israel
- Leah Fostick
- Department of Communication Disorders, Auditory Perception Lab in the Name of Laurent Levy, Ariel University, Ariel, Israel
- Tami Harel-Arbeli
- Department of Gerontology, University of Haifa, Haifa, Israel
- Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel
- Riki Taitelbaum-Swead
- Department of Communication Disorders, Speech Perception and Listening Effort Lab in the Name of Prof. Mordechai Himelfarb, Ariel University, Ariel, Israel
- Meuhedet Health Services, Tel Aviv, Israel
7
Lemel R, Shalev L, Nitsan G, Ben-David BM. Listen up! ADHD slows spoken-word processing in adverse listening conditions: Evidence from eye movements. Res Dev Disabil 2023; 133:104401. [PMID: 36577332] [DOI: 10.1016/j.ridd.2022.104401]
Abstract
BACKGROUND Cognitive skills such as sustained attention, inhibition and working memory are essential for speech processing, yet are often impaired in people with ADHD. Offline measures have indicated difficulties in speech recognition against a multi-talker babble (MTB) background for young adults with ADHD (yaADHD). However, to date no study has directly tested online speech processing in adverse conditions for yaADHD. AIMS To gauge the effects of ADHD on segregating the spoken target word from its sound-sharing competitor, under MTB and working-memory (WM) load. METHODS AND PROCEDURES Twenty-four yaADHD and 22 matched controls who differed in sustained attention (SA) but not in WM were asked to follow spoken instructions presented over MTB to touch a named object, while retaining one (low-load) or four (high-load) digits for later recall. Their eye fixations were tracked. OUTCOMES AND RESULTS In the high-load condition, speech processing was less accurate and slowed by 140 ms for yaADHD. In the low-load condition, the processing advantage shifted from early perceptual to later cognitive stages. Fixation transitions (hesitations) were inflated for yaADHD. CONCLUSIONS AND IMPLICATIONS ADHD slows speech processing in adverse listening conditions and increases hesitation as speech unfolds in time. These effects, detected only by online eyetracking, relate to attentional difficulties. We suggest online speech processing as a novel purview on ADHD.
WHAT THIS PAPER ADDS: We suggest speech processing in adverse listening conditions as a novel vantage point on ADHD. Successful speech recognition in noise is essential for performance across daily settings: academic, employment, and social interactions. It involves several executive functions, such as inhibition and sustained attention, and impaired performance in these functions is characteristic of ADHD. However, to date there is only scant research on speech processing in ADHD. The current study is the first to investigate online speech processing as the word unfolds in time using eyetracking for young adults with ADHD (yaADHD). This method uncovered slower speech processing in multi-talker babble noise for yaADHD compared to matched controls. The performance of yaADHD indicated increased hesitation between the spoken word and sound-sharing alternatives (e.g., CANdle-CANdy). These delays and hesitations, at the single-word level, could accumulate in continuous speech to significantly impair communication in ADHD, with severe implications for quality of life and academic success. Interestingly, whereas yaADHD and controls were matched on standardized WM tests, WM load appears to affect speech processing for yaADHD more than for controls. This suggests that ADHD may lead to inefficient deployment of WM resources that may not be detected when WM is tested alone. Note that these intricate differences could not be detected using traditional offline accuracy measures, further supporting the use of eyetracking in speech tasks. Finally, communication is vital for active living and wellbeing. We suggest paying attention to speech processing in ADHD in treatment and when considering accessibility and inclusion.
Affiliation(s)
- Rony Lemel
- Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Lilach Shalev
- Constantiner School of Education and Sagol School of Neuroscience, Tel-Aviv University, Tel-Aviv, Israel
- Gal Nitsan
- Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
- Boaz M Ben-David
- Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- Toronto Rehabilitation Institute, University Health Networks (UHN), ON, Canada
8
O’Leary RM, Neukam J, Hansen TA, Kinney AJ, Capach N, Svirsky MA, Wingfield A. Strategic Pauses Relieve Listeners from the Effort of Listening to Fast Speech: Data Limited and Resource Limited Processes in Narrative Recall by Adult Users of Cochlear Implants. Trends Hear 2023; 27:23312165231203514. [PMID: 37941344] [PMCID: PMC10637151] [DOI: 10.1177/23312165231203514]
Abstract
Speech that has been artificially accelerated through time compression produces a notable deficit in recall of the speech content. This is especially so for adults with cochlear implants (CI). At the perceptual level, this deficit may be due to the sharply degraded CI signal, combined with the reduced richness of compressed speech. At the cognitive level, the rapidity of time-compressed speech can deprive the listener of the ordinarily available processing time present when speech is delivered at a normal speech rate. Two experiments are reported. Experiment 1 was conducted with 27 normal-hearing young adults as a proof-of-concept demonstration that restoring lost processing time by inserting silent pauses at linguistically salient points within a time-compressed narrative ("time-restoration") returns recall accuracy to a level approximating that for a normal speech rate. Noise vocoder conditions with 10 and 6 channels reduced the effectiveness of time-restoration. Pupil dilation indicated that additional effort was expended by participants while attempting to process the time-compressed narratives, with the effortful demand on resources reduced with time restoration. In Experiment 2, 15 adult CI users tested with the same (unvocoded) materials showed a similar pattern of behavioral and pupillary responses, but with the notable exception that meaningful recovery of recall accuracy with time-restoration was limited to a subgroup of CI users identified by better working memory spans, and better word and sentence recognition scores. Results are discussed in terms of sensory-cognitive interactions in data-limited and resource-limited processes among adult users of cochlear implants.
Affiliation(s)
- Ryan M. O’Leary
- Department of Psychology, Brandeis University, Waltham, Massachusetts, USA
- Jonathan Neukam
- Department of Otolaryngology, NYU Langone Medical Center, New York, New York, USA
- Thomas A. Hansen
- Department of Psychology, Brandeis University, Waltham, Massachusetts, USA
- Nicole Capach
- Department of Otolaryngology, NYU Langone Medical Center, New York, New York, USA
- Mario A. Svirsky
- Department of Otolaryngology, NYU Langone Medical Center, New York, New York, USA
- Arthur Wingfield
- Department of Psychology, Brandeis University, Waltham, Massachusetts, USA
9
Nitsan G, Baharav S, Tal-Shir D, Shakuf V, Ben-David BM. Speech Processing as a Far-Transfer Gauge of Serious Games for Cognitive Training in Aging: Randomized Controlled Trial of Web-Based Effectivate Training. JMIR Serious Games 2022; 10:e32297. [PMID: 35900825] [PMCID: PMC9400949] [DOI: 10.2196/32297]
Abstract
BACKGROUND The number of serious games for cognitive training in aging (SGCTAs) is proliferating in the market, attempting to combat one of the most feared aspects of aging: cognitive decline. However, the efficacy of many SGCTAs is still questionable. Even the measures used to validate SGCTAs are up for debate, with most studies using cognitive measures that gauge improvement in trained tasks, also known as near transfer. This study takes a different approach, testing the efficacy of an SGCTA, Effectivate, in generating tangible far-transfer improvements in a nontrained task, the Eye tracking of Word Identification in Noise Under Memory Increased Load (E-WINDMIL), which tests speech processing in adverse conditions. OBJECTIVE This study aimed to validate the use of a real-time measure of speech processing as a gauge of the far-transfer efficacy of an SGCTA designed to train executive functions. METHODS In a randomized controlled trial that included 40 participants, we tested 20 (50%) older adults before and after self-administering the SGCTA Effectivate training and compared their performance with that of the control group of 20 (50%) older adults. The E-WINDMIL eye-tracking task was administered to all participants by blinded experimenters in 2 sessions separated by 2 to 8 weeks. RESULTS We tested the change between sessions in the efficiency of segregating the spoken target word from its sound-sharing alternative as the word unfolds in time. We found that training with the SGCTA Effectivate improved both early and late speech processing in adverse conditions, with higher discrimination scores in the training group than in the control group (early processing: F(1,38)=7.371; P=.01; ηp2=0.162; late processing: F(1,38)=9.003; P=.005; ηp2=0.192). CONCLUSIONS This study found the E-WINDMIL measure of speech processing to be a valid gauge of the far-transfer effects of executive function training. Because the SGCTA Effectivate does not train any auditory task or language processing, our results provide preliminary support for its ability to create a generalized cognitive improvement. Given the crucial role of speech processing in healthy and successful aging, we encourage researchers and developers to use speech processing measures, the E-WINDMIL in particular, to gauge the efficacy of SGCTAs, and we advocate for increased industry-wide adoption of far-transfer metrics.
Affiliation(s)
- Gal Nitsan
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
- Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Shai Baharav
- Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Dalith Tal-Shir
- Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Vered Shakuf
- Department of Communication Disorders, Achva Academic College, Arugot, Israel
- Boaz M Ben-David
- Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Toronto Rehabilitation Institute, University Health Networks, Toronto, ON, Canada
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
10
Saryazdi R, Nuque J, Chambers CG. Pragmatic inferences in aging and human-robot communication. Cognition 2022; 223:105017. [PMID: 35131577] [DOI: 10.1016/j.cognition.2022.105017]
Abstract
Despite the increase in research on older adults' communicative behavior, little work has explored patterns of age-related change in pragmatic inferencing and how these patterns are adapted depending on the situation-specific context. In two eye-tracking experiments, participants followed instructions like "Click on the greenhouse", which were either played over speakers or spoken live by a co-present robot partner. Implicit inferential processes were measured by exploring the extent to which listeners temporarily (mis)understood the unfolding noun to be a modified phrase referring to a competitor object in the display (green hat). This competitor was accompanied by either another member of the same category or an unrelated item (tan hat vs. dice). Experiment 1 (no robot) showed clear evidence of contrastive inferencing in both younger and older adults (more looks to the green hat when the tan hat was also present). Experiment 2 explored the ability to suppress these contrastive inferences when the robot talker was known to lack any color perception, making descriptions like "green hat" implausible. Younger but not older listeners were able to suppress contrastive inferences in this context, suggesting older adults could not keep the relevant limitations in mind and/or were more likely to spontaneously ascribe human attributes to the robot. Together, the findings enhance our understanding of pragmatic inferencing in aging.
Affiliation(s)
- Raheleh Saryazdi
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Department of Psychology, University of Toronto, Mississauga, Ontario, Canada
- Joanne Nuque
- Department of Psychology, University of Toronto, Mississauga, Ontario, Canada
- Craig G Chambers
- Department of Psychology, University of Toronto, Mississauga, Ontario, Canada
11
Nitsan G, Banai K, Ben-David BM. One Size Does Not Fit All: Examining the Effects of Working Memory Capacity on Spoken Word Recognition in Older Adults Using Eye Tracking. Front Psychol 2022; 13:841466. [PMID: 35478743] [PMCID: PMC9037998] [DOI: 10.3389/fpsyg.2022.841466]
Abstract
Difficulties understanding speech form one of the most prevalent complaints among older adults. Successful speech perception depends on top-down linguistic and cognitive processes that interact with the bottom-up sensory processing of the incoming acoustic information. The relative roles of these processes in age-related difficulties in speech perception, especially when listening conditions are not ideal, are still unclear. In the current study, we asked whether older adults with a larger working memory capacity process speech more efficiently than peers with lower capacity when speech is presented in noise and another task is performed in tandem. Using the Eye-tracking of Word Identification in Noise Under Memory Increased Load (E-WINDMIL), an adapted version of the "visual world" paradigm, 36 older listeners were asked to follow spoken instructions presented in background noise, while retaining digits for later recall under low (single-digit) or high (four-digit) memory load. In critical trials, instructions (e.g., "point at the candle") directed listeners' gaze to pictures of objects whose names shared onset or offset sounds with the name of a competitor displayed on the screen at the same time (e.g., candy or sandal). We compared listeners with different memory capacities on the time course of spoken word recognition under the two memory loads by testing eye fixations on a named object relative to fixations on an object whose name shared phonology with it. Results indicated two trends. (1) For older adults with lower working memory capacity, increased memory load did not affect online speech processing; however, it impaired offline word recognition accuracy. (2) The reverse pattern was observed for older adults with higher working memory capacity: increased task difficulty significantly decreased online speech processing efficiency but had no effect on offline word recognition accuracy.
Results suggest that in older adults, adaptation to adverse listening conditions is at least partially supported by cognitive reserve; additional cognitive capacity may therefore lead to greater resilience of older listeners to adverse listening conditions. The differential effects documented by eye movements and accuracy highlight the importance of using both online and offline measures of speech processing to explore age-related changes in speech perception.
Affiliation(s)
- Gal Nitsan
- Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
- Karen Banai
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
- Boaz M. Ben-David
- Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- Toronto Rehabilitation Institute, University Health Networks, Toronto, ON, Canada

12
Saryazdi R, Nuque J, Chambers CG. Linguistic Redundancy and its Effects on Younger and Older Adults' Real-Time Comprehension and Memory. Cogn Sci 2022; 46:e13123. PMID: 35377508; DOI: 10.1111/cogs.13123.
Abstract
Redundant modifiers can facilitate referential interpretation by narrowing attention to intended referents. This is intriguing because, on traditional accounts, redundancy should impair comprehension. Little is known, however, about the effects of redundancy on older adults' comprehension. Older adults may show different patterns due to age-related decline (e.g., processing speed and memory) or their greater proclivity for linguistic redundancy, as suggested in language production studies. The present study explores the effects of linguistic redundancy on younger and older listeners' incremental referential processing, judgments of informativity, and downstream memory performance. In an eye tracking task, gaze was monitored as listeners followed instructions from a social robot referring to a unique object within a multi-object display. Critical trials were varied in terms of modifier type ("…closed/purple/[NONE] umbrella") and whether displays contained another object matching target properties (closed purple notebook), making modifiers less effective at narrowing attention. Relative to unmodified descriptions, redundant color modifiers facilitated comprehension, particularly when they narrowed attention to a single referent. Descriptions with redundant state modifiers always impaired real-time comprehension. In contrast, memory measures showed faster recognition of objects previously described with redundant state modifiers. Although color and state descriptions had different effects on referential processing and memory, informativity judgments showed participants perceived them as informationally redundant to the same extent relative to unmodified descriptions. Importantly, the patterns did not differ by listener age. Together, the results show that the effects of linguistic redundancy are stable across adulthood but vary as a function of modifier type, visual context, and the measured phenomenon.
Affiliation(s)
- Raheleh Saryazdi
- Department of Psychology, University of Toronto; Department of Psychology, University of Toronto Mississauga
- Joanne Nuque
- Department of Psychology, University of Toronto Mississauga
- Craig G Chambers
- Department of Psychology, University of Toronto; Department of Psychology, University of Toronto Mississauga

13
Chapman LR, Hallowell B. The Unfolding of Cognitive Effort During Sentence Processing: Pupillometric Evidence From People With and Without Aphasia. J Speech Lang Hear Res 2021; 64:4900-4917. PMID: 34763522; PMCID: PMC9150667; DOI: 10.1044/2021_jslhr-21-00129.
Abstract
PURPOSE Arousal and cognitive effort are relevant yet often overlooked components of attention during language processing. Pupillometry can be used to provide a psychophysiological index of arousal and cognitive effort. Given that much is unknown regarding the relationship between cognition and language deficits seen in people with aphasia (PWA), pupillometry may be uniquely suited to explore those relationships. The purpose of this study was to examine arousal and the time course of the allocation of cognitive effort related to sentence processing in people with and without aphasia. METHOD Nineteen PWA and age- and education-matched control participants listened to relatively easy (subject-relative) and relatively difficult (object-relative) sentences and were required to answer occasional comprehension questions. Tonic and phasic pupillary responses were used to index arousal and the unfolding of cognitive effort, respectively, while sentences were processed. Group differences in tonic and phasic responses were examined. RESULTS Group differences were observed for both tonic and phasic responses. PWA exhibited greater overall arousal throughout the task compared with controls, as evidenced by larger tonic pupil responses. Controls exhibited more effort (greater phasic responses) for difficult compared with easy sentences; PWA did not. Group differences in phasic responses were apparent during end-of-sentence and postsentence time windows. CONCLUSIONS Results indicate that the attentional state of PWA in this study was not consistently supportive of adequate task engagement. PWA in our sample may have relatively limited attentional capacity or may have challenges with allocating existing capacity in ways that support adequate task engagement and performance. This work adds to the body of evidence supporting the validity of pupillometric tasks for the study of aphasia and contributes to a better understanding of the nature of language deficits in aphasia. 
Supplemental Material https://doi.org/10.23641/asha.16959376.
Affiliation(s)
- Laura Roche Chapman
- Department of Communication Sciences and Disorders, Appalachian State University, Boone, NC

14
Amichetti NM, Neukam J, Kinney AJ, Capach N, March SU, Svirsky MA, Wingfield A. Adults with cochlear implants can use prosody to determine the clausal structure of spoken sentences. J Acoust Soc Am 2021; 150:4315. PMID: 34972310; PMCID: PMC8674009; DOI: 10.1121/10.0008899.
Abstract
Speech prosody, including pitch contour, word stress, pauses, and vowel lengthening, can aid the detection of the clausal structure of a multi-clause sentence and this, in turn, can help listeners determine its meaning. However, for cochlear implant (CI) users, the reduced acoustic richness of the signal raises the question of whether CI users may have difficulty using sentence prosody to detect syntactic clause boundaries within sentences or whether this ability is rescued by the redundancy of the prosodic features that normally co-occur at clause boundaries. Twenty-two CI users, ranging in age from 19 to 77 years old, recalled three types of sentences: sentences in which the prosodic pattern was appropriate to the location of a clause boundary within the sentence (congruent prosody), sentences with reduced prosodic information, and sentences in which the location of the clause boundary and the prosodic marking of a clause boundary were placed in conflict. The results showed the presence of congruent prosody to be associated with superior sentence recall and reduced processing effort as indexed by pupil dilation. Individual differences in a standard test of word recognition (consonant-nucleus-consonant score) were related to both recall accuracy and processing effort. The outcomes are discussed in terms of the redundancy of the prosodic features that normally accompany a clause boundary, and in terms of processing effort.
Affiliation(s)
- Nicole M Amichetti
- Department of Psychology, Brandeis University, Waltham, Massachusetts 02453, USA
- Jonathan Neukam
- Department of Otolaryngology, New York University (NYU) Langone Medical Center, New York, New York 10016, USA
- Alexander J Kinney
- Department of Psychology, Brandeis University, Waltham, Massachusetts 02453, USA
- Nicole Capach
- Department of Otolaryngology, New York University (NYU) Langone Medical Center, New York, New York 10016, USA
- Samantha U March
- Department of Psychology, Brandeis University, Waltham, Massachusetts 02453, USA
- Mario A Svirsky
- Department of Otolaryngology, New York University (NYU) Langone Medical Center, New York, New York 10016, USA
- Arthur Wingfield
- Department of Psychology, Brandeis University, Waltham, Massachusetts 02453, USA

15
Colby S, McMurray B. Cognitive and Physiological Measures of Listening Effort During Degraded Speech Perception: Relating Dual-Task and Pupillometry Paradigms. J Speech Lang Hear Res 2021; 64:3627-3652. PMID: 34491779; PMCID: PMC8642090; DOI: 10.1044/2021_jslhr-20-00583.
Abstract
Purpose Listening effort is quickly becoming an important metric for assessing speech perception in less-than-ideal situations. However, the relationship between the construct of listening effort and the measures used to assess it remains unclear. We compared two measures of listening effort: a cognitive dual task and a physiological pupillometry task. We sought to investigate the relationship between these measures of effort and whether engaging effort impacts speech accuracy. Method In Experiment 1, 30 participants completed a dual task and a pupillometry task that were carefully matched in stimuli and design. The dual task consisted of a spoken word recognition task and a visual match-to-sample task. In the pupillometry task, pupil size was monitored while participants completed a spoken word recognition task. Both tasks presented words at three levels of listening difficulty (unmodified, eight-channel vocoding, and four-channel vocoding) and provided response feedback on every trial. We refined the pupillometry task in Experiment 2 (n = 31); crucially, participants no longer received response feedback. Finally, we ran a new group of subjects on both tasks in Experiment 3 (n = 30). Results In Experiment 1, accuracy in the visual task decreased with increased signal degradation in the dual task, but pupil size was sensitive to accuracy and not vocoding condition. After removing feedback in Experiment 2, changes in pupil size were predicted by listening condition, suggesting the task was now sensitive to engaged effort. Both tasks were sensitive to listening difficulty in Experiment 3, but there was no relationship between the tasks and neither task predicted speech accuracy. Conclusions Consistent with previous work, we found little evidence for a relationship between different measures of listening effort. We also found no evidence that effort predicts speech accuracy, suggesting that engaging more effort does not lead to improved speech recognition. 
Cognitive and physiological measures of listening effort are likely sensitive to different aspects of the construct of listening effort. Supplemental Material https://doi.org/10.23641/asha.16455900.
Affiliation(s)
- Sarah Colby
- Department of Psychological and Brain Sciences, The University of Iowa, Iowa City
- Bob McMurray
- Department of Psychological and Brain Sciences, The University of Iowa, Iowa City

16
Pupillometry reveals cognitive demands of lexical competition during spoken word recognition in young and older adults. Psychon Bull Rev 2021; 29:268-280. PMID: 34405386; DOI: 10.3758/s13423-021-01991-0.
Abstract
In most contemporary activation-competition frameworks for spoken word recognition, candidate words compete against phonological "neighbors" with similar acoustic properties (e.g., "cap" vs. "cat"). Thus, recognizing words with more competitors should come at a greater cognitive cost relative to recognizing words with fewer competitors, due to increased demands for selecting the correct item and inhibiting incorrect candidates. Importantly, these processes should operate even in the absence of differences in accuracy. In the present study, we tested this proposal by examining differences in processing costs associated with neighborhood density for highly intelligible items presented in quiet. A second goal was to examine whether the cognitive demands associated with increased neighborhood density were greater for older adults compared with young adults. Using pupillometry as an index of cognitive processing load, we compared the cognitive demands associated with spoken word recognition for words with many or fewer neighbors, presented in quiet, for young (n = 67) and older (n = 69) adult listeners. Growth curve analysis of the pupil data indicated that older adults showed a greater evoked pupil response for spoken words than did young adults, consistent with increased cognitive load during spoken word recognition. Words from dense neighborhoods were marginally more demanding to process than words from sparse neighborhoods. There was also an interaction between age and neighborhood density, indicating larger effects of density in young adult listeners. These results highlight the importance of assessing both cognitive demands and accuracy when investigating the mechanisms underlying spoken word recognition.
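Growth curve analyses like the one reported above model the evoked pupil response as a weighted sum of orthogonal polynomial time terms. Below is a minimal fixed-effects sketch in Python; published analyses typically use mixed-effects models with participant random effects, and the simulated curve, bin count, and polynomial degree here are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def orthogonal_time_polys(n_bins, degree=2):
    """Orthogonal polynomial time predictors, as used in growth curve analysis."""
    t = np.linspace(-1.0, 1.0, n_bins)
    X = np.vander(t, degree + 1, increasing=True)  # columns: 1, t, t^2, ...
    Q, _ = np.linalg.qr(X)                         # orthonormalize the columns
    return Q * np.sign(Q[-1])                      # fix sign: each term positive at the last bin

# Simulated mean pupil curve: rises then flattens over 50 time bins, plus noise
rng = np.random.default_rng(0)
n = 50
Q = orthogonal_time_polys(n, degree=2)
time = np.linspace(0.0, 1.0, n)
curve = 0.3 * np.sqrt(time) + rng.normal(0.0, 0.01, n)

# Least-squares weights: intercept, linear (overall growth), quadratic (curvature)
beta, *_ = np.linalg.lstsq(Q, curve, rcond=None)
fitted = Q @ beta
```

A positive linear term indicates overall growth of the response across the analysis window, while the quadratic term captures curvature such as a rise-then-plateau shape.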
17
Mesik J, Ray L, Wojtczak M. Effects of Age on Cortical Tracking of Word-Level Features of Continuous Competing Speech. Front Neurosci 2021; 15:635126. PMID: 33867920; PMCID: PMC8047075; DOI: 10.3389/fnins.2021.635126.
Abstract
Speech-in-noise comprehension difficulties are common among the elderly population, yet traditional objective measures of speech perception are largely insensitive to this deficit, particularly in the absence of clinical hearing loss. In recent years, a growing body of research in young normal-hearing adults has demonstrated that high-level features related to speech semantics and lexical predictability elicit strong centro-parietal negativity in the EEG signal around 400 ms following the word onset. Here we investigate effects of age on cortical tracking of these word-level features within a two-talker speech mixture, and their relationship with self-reported difficulties with speech-in-noise understanding. While undergoing EEG recordings, younger and older adult participants listened to a continuous narrative story in the presence of a distractor story. We then utilized forward encoding models to estimate cortical tracking of four speech features: (1) word onsets, (2) "semantic" dissimilarity of each word relative to the preceding context, (3) lexical surprisal for each word, and (4) overall word audibility. Our results revealed robust tracking of all features for attended speech, with surprisal and word audibility showing significantly stronger contributions to neural activity than dissimilarity. Additionally, older adults exhibited significantly stronger tracking of word-level features than younger adults, especially over frontal electrode sites, potentially reflecting increased listening effort. Finally, neuro-behavioral analyses revealed trends of a negative relationship between subjective speech-in-noise perception difficulties and the model goodness-of-fit for attended speech, as well as a positive relationship between task performance and the goodness-of-fit, indicating behavioral relevance of these measures. 
Together, our results demonstrate the utility of modeling cortical responses to multi-talker speech using complex, word-level features and the potential for their use to study changes in speech processing due to aging and hearing loss.
Affiliation(s)
- Juraj Mesik
- Department of Psychology, University of Minnesota, Minneapolis, MN, United States

18
Ayasse ND, Hodson AJ, Wingfield A. The Principle of Least Effort and Comprehension of Spoken Sentences by Younger and Older Adults. Front Psychol 2021; 12:629464. PMID: 33796047; PMCID: PMC8007979; DOI: 10.3389/fpsyg.2021.629464.
Abstract
There is considerable evidence that listeners' understanding of a spoken sentence need not always follow from a full analysis of the words and syntax of the utterance. Rather, listeners may instead conduct a superficial analysis, sampling some words and using presumed plausibility to arrive at an understanding of the sentence meaning. Because this latter strategy occurs more often for sentences with complex syntax that place a heavier processing burden on the listener than sentences with simpler syntax, shallow processing may represent a resource conserving strategy reflected in reduced processing effort. This factor may be even more important for older adults who as a group are known to have more limited working memory resources. In the present experiment, 40 older adults (M age = 75.5 years) and 20 younger adults (M age = 20.7) were tested for comprehension of plausible and implausible sentences with a simpler subject-relative embedded clause structure or a more complex object-relative embedded clause structure. Dilation of the pupil of the eye was recorded as an index of processing effort. Results confirmed greater comprehension accuracy for plausible than implausible sentences, and for sentences with simpler than more complex syntax, with both effects amplified for the older adults. Analysis of peak pupil dilations for implausible sentences revealed a complex three-way interaction between age, syntactic complexity, and plausibility. Results are discussed in terms of models of sentence comprehension, and pupillometry as an index of intentional task engagement.
Affiliation(s)
- Arthur Wingfield
- Volen National Center for Complex Systems, Brandeis University, Waltham, MA, United States

19
James CJ, Graham PL, Betances Reinoso FA, Breuning SN, Durko M, Huarte Irujo A, Royo López J, Müller L, Perenyi A, Jaramillo Saffon R, Salinas Garcia S, Schüssler M, Schwarz Langer MJ, Skarzynski PH, Mecklenburg DJ. The Listening Network and Cochlear Implant Benefits in Hearing-Impaired Adults. Front Aging Neurosci 2021; 13:589296. PMID: 33716706; PMCID: PMC7947658; DOI: 10.3389/fnagi.2021.589296.
Abstract
Older adults with mild or no hearing loss make more errors and expend more effort listening to speech. Cochlear implants (CI) restore hearing to deaf patients but with limited fidelity. We hypothesized that patient-reported hearing and health-related quality of life in CI patients may similarly vary according to age. Speech Spatial Qualities (SSQ) of hearing scale and Health Utilities Index Mark III (HUI) questionnaires were administered to 543 unilaterally implanted adults across Europe, South Africa, and South America. Data were acquired before surgery and at 1, 2, and 3 years post-surgery. Data were analyzed using linear mixed models with visit, age group (18–34, 35–44, 45–54, 55–64, and 65+), and side of implant as main factors and adjusted for other covariates. Tinnitus and dizziness prevalence did not vary with age, but older groups had more preoperative hearing. Preoperatively and postoperatively, SSQ scores were significantly higher (Δ0.75–0.82) for those aged <45 compared with those 55+. However, gains in SSQ scores were equivalent across age groups, although postoperative SSQ scores were higher in right-ear implanted subjects. All age groups benefited equally in terms of HUI gain (0.18), with no decrease in scores with age. Overall, younger adults appeared to cope better with degraded hearing before and after CI, leading to better subjective hearing performance.
Affiliation(s)
- Petra L Graham
- Department of Mathematics and Statistics, Macquarie University, North Ryde, NSW, Australia
- Marcin Durko
- Department of Otolaryngology, Head and Neck Oncology, Medical University of Lodz, Lodz, Poland
- Alicia Huarte Irujo
- Department of Otorhinolaryngology, Clínica Universidad de Navarra, Pamplona, Spain
- Juan Royo López
- Servicio de Otorrinolaringología, Hospital Clínico Universitario Lozano Blesa, Zaragoza, Spain
- Lida Müller
- Tygerberg Hospital-Stellenbosch University Cochlear Implant Unit, Tygerberg, South Africa
- Adam Perenyi
- Department of Otolaryngology and Head Neck Surgery, Albert Szent Györgyi Medical Center, University of Szeged, Szeged, Hungary
- Sandra Salinas Garcia
- Servicio de Otorrinolaringología y Patología Cérvico-Facial, Fundación Jiménez Díaz University Hospital, Madrid, Spain
- Mark Schüssler
- Deutsches HörZentrum Hannover der HNO-Klinik, Medizinische Hochschule Hannover, Hannover, Germany

20
Harel-Arbeli T, Wingfield A, Palgi Y, Ben-David BM. Age-Related Differences in the Online Processing of Spoken Semantic Context and the Effect of Semantic Competition: Evidence From Eye Gaze. J Speech Lang Hear Res 2021; 64:315-327. PMID: 33561353; DOI: 10.1044/2020_jslhr-20-00142.
Abstract
Purpose The study examined age-related differences in the use of semantic context and in the effect of semantic competition in spoken sentence processing. We used offline (response latency) and online (eye gaze) measures, using the "visual world" eye-tracking paradigm. Method Thirty younger and 30 older adults heard sentences related to one of four images presented on a computer monitor. They were asked to touch the image corresponding to the final word of the sentence (target word). Three conditions were used: a nonpredictive sentence, a predictive sentence suggesting one of the four images on the screen (semantic context), and a predictive sentence suggesting two possible images (semantic competition). Results Online eye gaze data showed no age-related differences with nonpredictive sentences, but revealed slowed processing for older adults when context was presented. With the addition of semantic competition to context, older adults were slower to look at the target word after it had been heard. In contrast, offline latency analysis did not show age-related differences in the effects of context and competition. As expected, older adults were generally slower to touch the image than younger adults. Conclusions Traditional offline measures were not able to reveal the complex effect of aging on spoken semantic context processing. Online eye gaze measures suggest that older adults were slower than younger adults to predict an indicated object based on semantic context. Semantic competition affected online processing for older adults more than for younger adults, with no accompanying age-related differences in latency. This supports an early age-related inhibition deficit, interfering with processing, and not necessarily with response execution.
Affiliation(s)
- Tami Harel-Arbeli
- Department of Gerontology, University of Haifa, Israel
- Baruch Ivcher School of Psychology, Interdisciplinary Center Herzliya, Israel
- Arthur Wingfield
- Volen National Center for Complex Systems, Brandeis University, Waltham, MA
- Yuval Palgi
- Department of Gerontology, University of Haifa, Israel
- Boaz M Ben-David
- Baruch Ivcher School of Psychology, Interdisciplinary Center Herzliya, Israel
- Department of Speech-Language Pathology, University of Toronto, Ontario, Canada
- Toronto Rehabilitation Institute, University Health Networks, Ontario, Canada

21
Książek P, Zekveld AA, Wendt D, Fiedler L, Lunner T, Kramer SE. Effect of Speech-to-Noise Ratio and Luminance on a Range of Current and Potential Pupil Response Measures to Assess Listening Effort. Trends Hear 2021; 25:23312165211009351. PMID: 33926329; PMCID: PMC8111552; DOI: 10.1177/23312165211009351.
Abstract
In hearing research, pupillometry is an established method of studying listening effort. The focus of this study was to evaluate several pupil measures extracted from the Task-Evoked Pupil Responses (TEPRs) in a speech-in-noise test. A range of analysis approaches was applied to extract these pupil measures, namely (a) pupil peak dilation (PPD); (b) mean pupil dilation (MPD); (c) index of pupillary activity; (d) growth curve analysis (GCA); and (e) principal component analysis (PCA). The effects of signal-to-noise ratio (SNR; Data Set A: -20 dB, -10 dB, +5 dB SNR) and luminance (Data Set B: 0.1 cd/m2, 360 cd/m2) on the TEPRs were investigated. Data Sets A and B were recorded during a speech-in-noise test and included TEPRs from 33 and 27 normal-hearing native Dutch speakers, respectively. The main results were as follows: (a) a significant effect of SNR was revealed for all pupil measures extracted in the time domain (PPD, MPD, GCA, PCA); (b) the two time-series analysis approaches provided modeled temporal profiles of TEPRs (GCA) and time windows spanning the subtasks performed in a speech-in-noise test (PCA); and (c) all pupil measures revealed a significant effect of luminance. In conclusion, multiple pupil measures showed similar effects of SNR, suggesting that effort may be reflected in multiple aspects of TEPR. Moreover, a direct analysis of the pupil time course seems to provide a more holistic view of TEPRs, yet further research is needed to understand and interpret its measures. Further research is also required to find pupil measures less sensitive to changes in luminance.
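For the PPD and MPD measures named in the abstract above, a minimal extraction sketch in Python, assuming a single baseline-corrected trial aligned to stimulus onset; the window bounds and synthetic trace are illustrative assumptions, not values from the study.

```python
import numpy as np

def pupil_measures(trace, t, baseline=(-1.0, 0.0), window=(0.0, 3.0)):
    """Peak (PPD) and mean (MPD) pupil dilation after subtractive baseline correction.

    trace : pupil size samples (e.g., in mm)
    t     : sample times in seconds, with stimulus onset at t = 0
    """
    base = trace[(t >= baseline[0]) & (t < baseline[1])].mean()
    corrected = trace - base                           # subtract pre-onset baseline
    in_win = corrected[(t >= window[0]) & (t <= window[1])]
    ppd = in_win.max()                                 # peak pupil dilation
    mpd = in_win.mean()                                # mean pupil dilation
    return ppd, mpd

# Synthetic trial: flat 4.0 mm baseline, then a slow dilation peaking mid-trial
t = np.linspace(-1.0, 3.0, 401)
trace = 4.0 + 0.4 * np.clip(np.sin(np.pi * t / 3.0), 0.0, None) * (t > 0)
ppd, mpd = pupil_measures(trace, t)
```

The mean dilation is necessarily smaller than the peak for any non-flat response, which is one reason the two measures can diverge across conditions.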
Affiliation(s)
- Patrycja Książek
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, Amsterdam Public Health Research Institute, Amsterdam, the Netherlands
- Eriksholm Research Centre, Snekkersten, Denmark
- Adriana A. Zekveld
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, Amsterdam Public Health Research Institute, Amsterdam, the Netherlands
- Dorothea Wendt
- Eriksholm Research Centre, Snekkersten, Denmark
- Department of Health Technology, Technical University of Denmark, Lyngby, Denmark
- Sophia E. Kramer
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, Amsterdam Public Health Research Institute, Amsterdam, the Netherlands

22
Abstract
OBJECTIVES Serial recall of digits is frequently used to measure short-term memory span in various listening conditions. However, the use of digits may mask the effect of low quality auditory input. Digits have high frequency and are phonologically distinct relative to one another, so they should be easy to identify even with low quality auditory input. In contrast, larger item sets reduce listener ability to strategically constrain their expectations, which should reduce identification accuracy and increase the time and/or cognitive resources needed for identification when auditory quality is low. This diminished accuracy and increased cognitive load should interfere with memory for sequences of items drawn from large sets. The goal of this work was to determine whether this predicted interaction between auditory quality and stimulus set in short-term memory exists, and if so, whether this interaction is associated with processing speed, vocabulary, or attention. DESIGN We compared immediate serial recall within young adults with normal hearing across unprocessed and vocoded listening conditions for multiple stimulus sets. Stimulus sets were lists of digits (1 to 9), consonant-vowel-consonant (CVC) words (chosen from a list of 60 words), and CVC nonwords (chosen from a list of 50 nonwords). Stimuli were unprocessed or vocoded with an eight-channel noise vocoder. To support interpretation of responses, words and nonwords were selected to minimize inclusion of multiple phonemes from within a confusion cluster. We also measured receptive vocabulary (Peabody Picture Vocabulary Test [PPVT-4]), sustained attention (test of variables of attention [TOVA]), and repetition speed for individual items from each stimulus set under both listening conditions. RESULTS Vocoding stimuli had no impact on serial recall of digits, but reduced memory span for words and nonwords. This reduction in memory span was attributed to an increase in phonological confusions for nonwords. 
However, memory span for vocoded word lists remained reduced even after accounting for common phonetic confusions, indicating that lexical status played an additional role across listening conditions. Principal components analysis found two components that explained 84% of the variance in memory span across conditions. Component one had similar load across all conditions, indicating that participants had an underlying memory capacity, which was common to all conditions. Component two was loaded by performance in the vocoded word and nonword conditions, representing the sensitivity of memory span to vocoding of these stimuli. The order in which participants completed listening conditions had a small effect on memory span that could not account for the effect of listening condition. Repetition speed was fastest for digits, slower for words, and slowest for nonwords. On average, vocoding slowed repetition speed for all stimuli, but repetition speed was not predictive of individual memory span. Vocabulary and attention showed no correlation with memory span. CONCLUSIONS Our results replicated previous findings that low quality auditory input can impair short-term memory, and demonstrated that this impairment is sensitive to stimulus set. Using multiple stimulus sets in degraded listening conditions can isolate memory capacity (in digit span) from impaired item identification (in word and nonword span), which may help characterize the relationship between memory and speech recognition in difficult listening conditions.
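The two-component structure described above (a shared capacity component plus a vocoding-sensitive component) can be illustrated with a small SVD-based PCA sketch; the simulated spans below are hypothetical, constructed only to show how such components separate, and are not the study's data.

```python
import numpy as np

def pca(X):
    """PCA via SVD on column-centered data: variance ratios, scores, loadings."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var_ratio = s ** 2 / np.sum(s ** 2)                # variance explained per component
    return var_ratio, Xc @ Vt.T, Vt

# Hypothetical memory spans: columns = stimulus set x listening condition
rng = np.random.default_rng(1)
n = 40
capacity = rng.normal(7.0, 1.0, n)                     # shared memory capacity
vocoder_cost = rng.normal(1.5, 0.5, n)                 # extra cost for degraded words
noise = lambda: rng.normal(0.0, 0.2, n)
X = np.column_stack([
    capacity + noise(),                                # digits, unprocessed
    capacity + noise(),                                # digits, vocoded
    capacity - 1.0 + noise(),                          # words, unprocessed
    capacity - 1.0 - vocoder_cost + noise(),           # words, vocoded
])
var_ratio, scores, loadings = pca(X)
```

In this construction the first component loads on all conditions (shared capacity) and the second is driven by the vocoded-word column, mirroring the interpretation in the abstract.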
Affiliation(s)
- Adam K. Bosen
- Boys Town National Research Hospital, Omaha, NE, USA
23
Ayasse ND, Wingfield A. The Two Sides of Linguistic Context: Eye-Tracking as a Measure of Semantic Competition in Spoken Word Recognition Among Younger and Older Adults. Front Hum Neurosci 2020; 14:132. [PMID: 32327987 PMCID: PMC7161414 DOI: 10.3389/fnhum.2020.00132] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2020] [Accepted: 03/20/2020] [Indexed: 12/17/2022] Open
Abstract
Studies of spoken word recognition have reliably shown that both younger and older adults' recognition of acoustically degraded words is facilitated by the presence of a linguistic context. Against this benefit, older adults' word recognition can be differentially hampered by interference from other words that could also fit the context. These prior studies have primarily used off-line response measures such as the signal-to-noise ratio needed for a target word to be correctly identified. Less clear is the locus of these effects; whether facilitation and interference have their influence primarily during response selection, or whether their effects begin to operate even before a sentence-final target word has been uttered. This question was addressed by tracking 20 younger and 20 older adults' eye fixations on a visually presented target word that corresponded to the final word of a contextually constraining or neutral sentence, accompanied by a second word on the computer screen that in some cases could also fit the sentence context. Growth curve analysis of the time-course of eye-gaze on a target word showed facilitation and inhibition effects begin to appear even as a spoken sentence is unfolding in time. Consistent with an age-related inhibition deficit, older adults' word recognition was slowed by the presence of a semantic competitor to a degree not observed for younger adults, with this effect operating early in the recognition process.
Affiliation(s)
- Nicolai D Ayasse
- Volen National Center for Complex Systems, Brandeis University, Waltham, MA, United States
- Arthur Wingfield
- Volen National Center for Complex Systems, Brandeis University, Waltham, MA, United States
24
Vanheusden FJ, Kegler M, Ireland K, Georga C, Simpson DM, Reichenbach T, Bell SL. Hearing Aids Do Not Alter Cortical Entrainment to Speech at Audible Levels in Mild-to-Moderately Hearing-Impaired Subjects. Front Hum Neurosci 2020; 14:109. [PMID: 32317951 PMCID: PMC7147120 DOI: 10.3389/fnhum.2020.00109] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2019] [Accepted: 03/11/2020] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND Cortical entrainment to speech correlates with speech intelligibility and attention to a speech stream in noisy environments. However, there is a lack of data on whether cortical entrainment can help in evaluating hearing aid fittings for subjects with mild to moderate hearing loss. One particular problem that may arise is that hearing aids may alter the speech stimulus during (pre-)processing steps, which might alter cortical entrainment to the speech. Here, the effect of hearing aid processing on cortical entrainment to running speech in hearing impaired subjects was investigated. METHODOLOGY Seventeen native English-speaking subjects with mild-to-moderate hearing loss participated in the study. Hearing function and hearing aid fitting were evaluated using standard clinical procedures. Participants then listened to a 25-min audiobook under aided and unaided conditions at 70 dBA sound pressure level (SPL) in quiet conditions. EEG data were collected using a 32-channel system. Cortical entrainment to speech was evaluated using decoders reconstructing the speech envelope from the EEG data. Null decoders, obtained from EEG and the time-reversed speech envelope, were used to assess the chance level reconstructions. Entrainment in the delta- (1-4 Hz) and theta- (4-8 Hz) band, as well as wideband (1-20 Hz) EEG data was investigated. RESULTS Significant cortical responses could be detected for all but one subject in all three frequency bands under both aided and unaided conditions. However, no significant differences could be found between the two conditions in the number of responses detected, nor in the strength of cortical entrainment. The results show that the relatively small change in speech input provided by the hearing aid was not sufficient to elicit a detectable change in cortical entrainment. CONCLUSION For subjects with mild to moderate hearing loss, cortical entrainment to speech in quiet at an audible level is not affected by hearing aids. 
These results clear the pathway for exploring the potential to use cortical entrainment to running speech for evaluating hearing aid fitting at lower speech intensities (which could be inaudible when unaided), or using speech in noise conditions.
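Envelope-reconstruction decoders of the kind described above are commonly implemented as time-lagged, ridge-regularized linear backward models. The sketch below illustrates that idea on simulated data, including a "null decoder" trained on the time-reversed envelope as a chance-level reference; it is not the study's actual pipeline, and the lag count, regularization strength, and simulated signals are arbitrary choices.

```python
import numpy as np

def lag_matrix(eeg, n_lags):
    """Stack lagged copies of each EEG channel: (samples, channels*n_lags)."""
    n, ch = eeg.shape
    X = np.zeros((n, ch * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * ch:(lag + 1) * ch] = eeg[:n - lag]
    return X

def train_decoder(eeg, envelope, n_lags, lam=1e2):
    """Ridge regression mapping lagged EEG to the speech envelope."""
    X = lag_matrix(eeg, n_lags)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ envelope)

def reconstruction_r(eeg, envelope, w, n_lags):
    """Pearson correlation between reconstructed and actual envelope."""
    pred = lag_matrix(eeg, n_lags) @ w
    return np.corrcoef(pred, envelope)[0, 1]

# Simulated data: 4 EEG channels carrying the envelope plus noise.
rng = np.random.default_rng(1)
env = rng.normal(size=2000)
gains = rng.normal(size=(1, 4))
eeg = env[:, None] * gains + 0.5 * rng.normal(size=(2000, 4))

w = train_decoder(eeg, env, n_lags=5)
r_true = reconstruction_r(eeg, env, w, n_lags=5)

# Null decoder: same procedure with the time-reversed envelope, giving
# a chance-level reconstruction score, as in the study's design.
w0 = train_decoder(eeg, env[::-1], n_lags=5)
r_null = reconstruction_r(eeg, env[::-1], w0, n_lags=5)
```

A detected response then corresponds to `r_true` exceeding the distribution of null scores; in practice cross-validation rather than training-set correlation would be used.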
Affiliation(s)
- Frederique J. Vanheusden
- Department of Engineering, School of Science and Technology, Nottingham Trent University, Nottingham, United Kingdom
- Institute of Sound and Vibration Research, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, United Kingdom
- Mikolaj Kegler
- Department of Bioengineering and Centre for Neurotechnology, Imperial College London, South Kensington Campus, London, United Kingdom
- Katie Ireland
- Audiology Department, Royal Berkshire NHS Foundation Trust, Reading, United Kingdom
- Constantina Georga
- Audiology Department, Royal Berkshire NHS Foundation Trust, Reading, United Kingdom
- David M. Simpson
- Institute of Sound and Vibration Research, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, United Kingdom
- Tobias Reichenbach
- Department of Bioengineering and Centre for Neurotechnology, Imperial College London, South Kensington Campus, London, United Kingdom
- Steven L. Bell
- Institute of Sound and Vibration Research, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, United Kingdom
25
Ayasse ND, Wingfield A. Anticipatory Baseline Pupil Diameter Is Sensitive to Differences in Hearing Thresholds. Front Psychol 2020; 10:2947. [PMID: 31998196 PMCID: PMC6965006 DOI: 10.3389/fpsyg.2019.02947] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2019] [Accepted: 12/12/2019] [Indexed: 12/23/2022] Open
Abstract
Task-evoked changes in pupil dilation have long been used as a physiological index of cognitive effort. Unlike this response, which is measured during or after an experimental trial, the baseline pupil dilation (BPD) is a measure taken prior to an experimental trial. As such, it is considered to reflect an individual’s arousal level in anticipation of an experimental trial. We report data for 68 participants, ages 18 to 89, whose hearing acuity ranged from normal hearing to a moderate hearing loss, tested over a series of 160 trials on an auditory sentence comprehension task. Results showed that BPDs progressively declined over the course of the experimental trials, with participants with poorer pure tone detection thresholds showing a steeper rate of decline than those with better thresholds. Data showed this slope difference to be due to participants with poorer hearing having larger BPDs than those with better hearing at the start of the experiment, but with their BPDs approaching those of the better-hearing participants by the end of the 160 trials. A finding of increasing response accuracy over trials was seen as inconsistent with a fatigue or reduced task engagement account of the diminishing BPDs. Rather, the present results imply BPD as reflecting a heightened arousal level in poorer-hearing participants in anticipation of a task that demands accurate speech perception, a concern that dissipates over trials with task success. These data taken with others suggest that the baseline pupillary response may not reflect a single construct.
Affiliation(s)
- Nicolai D Ayasse
- Volen National Center for Complex Systems, Brandeis University, Waltham, MA, United States
- Arthur Wingfield
- Volen National Center for Complex Systems, Brandeis University, Waltham, MA, United States
26
Abstract
Recent applications of eye tracking for diagnosis, prognosis and follow-up of therapy in age-related neurological or psychological deficits have been reviewed. The review is focused on active aging, neurodegeneration and cognitive impairments. The potential impacts and current limitations of using characterizing features of eye movements and pupillary responses (oculometrics) as objective biomarkers in the context of aging are discussed. A closer look into the findings, especially with respect to cognitive impairments, suggests that eye tracking is an invaluable technique to study hidden aspects of aging that have not been revealed using any other noninvasive tool. Future research should involve a wider variety of oculometrics, in addition to saccadic metrics and pupillary responses, including nonlinear and combinatorial features as well as blink- and fixation-related metrics to develop biomarkers to trace age-related irregularities associated with cognitive and neural deficits.
Affiliation(s)
- Ramtin Z Marandi
- Department of Health Science & Technology, Aalborg University, Aalborg E 9220, Denmark
- Parisa Gazerani
- Department of Health Science & Technology, Aalborg University, Aalborg E 9220, Denmark
27
Nitsan G, Wingfield A, Lavie L, Ben-David BM. Differences in Working Memory Capacity Affect Online Spoken Word Recognition: Evidence From Eye Movements. Trends Hear 2019; 23:2331216519839624. [PMID: 31010398 PMCID: PMC6480998 DOI: 10.1177/2331216519839624] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022] Open
Abstract
Individual differences in working memory capacity have been gaining recognition as playing an important role in speech comprehension, especially in noisy environments. Using the visual world eye-tracking paradigm, a recent study by Hadar and coworkers found that online spoken word recognition was slowed when listeners were required to retain in memory a list of four spoken digits (high load) compared with only one (low load). In the current study, we recognized that the influence of a digit preload might be greater for individuals who have a more limited memory span. We compared participants with higher and lower memory spans on the time course for spoken word recognition by testing eye-fixations on a named object, relative to fixations on an object whose name shared phonology with the named object. Results show that when a low load was imposed, differences in memory span had no effect on the time course of preferential fixations. However, with a high load, listeners with lower span were delayed by ∼550 ms in discriminating target from sound-sharing competitors, relative to higher span listeners. This follows an assumption that the interference effect of a memory preload is not a fixed value, but rather, its effect is greater for individuals with a smaller memory span. Interestingly, span differences affected the timeline for spoken word recognition in noise, but not offline accuracy. This highlights the significance of using eye-tracking as a measure for online speech processing. Results further emphasize the importance of considering differences in cognitive capacity, even when testing normal hearing young adults.
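The time-course comparison described above (a ~550 ms delay in discriminating the target from a sound-sharing competitor) amounts to finding when the proportion of target fixations first exceeds competitor fixations by some criterion. A minimal sketch, with invented fixation data, sampling rate, and a hypothetical 10% criterion:

```python
import numpy as np

def preference_onset(target_fix, competitor_fix, fs, criterion=0.10):
    """First time (s) at which the proportion of trials fixating the
    target exceeds the competitor proportion by `criterion`.

    target_fix, competitor_fix: boolean arrays (trials x samples).
    Returns None if the criterion is never reached."""
    advantage = target_fix.mean(axis=0) - competitor_fix.mean(axis=0)
    above = np.flatnonzero(advantage > criterion)
    return above[0] / fs if above.size else None

# Toy data at 50 Hz: one group begins preferring the target at sample
# 40 (0.8 s), a second group 28 samples (~550 ms) later, loosely
# mimicking the reported group difference.
fast = np.zeros((20, 100), dtype=bool); fast[:, 40:] = True
slow = np.zeros((20, 100), dtype=bool); slow[:, 68:] = True
comp = np.zeros((20, 100), dtype=bool)

onset_fast = preference_onset(fast, comp, fs=50)  # 0.8 s
onset_slow = preference_onset(slow, comp, fs=50)  # 1.36 s
```

Real analyses (such as the growth curve models used in these studies) fit the whole fixation curve rather than a single threshold crossing, but the onset measure conveys the same delay.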
Affiliation(s)
- Gal Nitsan
- Department of Communication Sciences and Disorders, University of Haifa, Israel
- Baruch Ivcher School of Psychology, Interdisciplinary Center Herzliya, Israel
- Arthur Wingfield
- Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
- Limor Lavie
- Department of Communication Sciences and Disorders, University of Haifa, Israel
- Boaz M Ben-David
- Baruch Ivcher School of Psychology, Interdisciplinary Center Herzliya, Israel
- Department of Speech-Language Pathology, University of Toronto, ON, Canada
- Toronto Rehabilitation Institute, University Health Networks, Toronto, ON, Canada
28
Ayasse ND, Penn LR, Wingfield A. Variations Within Normal Hearing Acuity and Speech Comprehension: An Exploratory Study. Am J Audiol 2019; 28:369-375. [PMID: 31091111 DOI: 10.1044/2019_aja-18-0173] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022] Open
Abstract
Purpose Many young adults with a mild hearing loss can appear unaware or unconcerned about their loss or its potential effects. A question that has not been raised in prior research is whether slight variability, even within the range of clinically normal hearing, may have a detrimental effect on comprehension of spoken sentences, especially when attempting to understand the meaning of sentences that offer an additional cognitive challenge. The purpose of this study was to address this question. Method An exploratory analysis was conducted on data from 3 published studies that included young adults, ages 18 to 29 years, with audiometrically normal hearing acuity (pure-tone average < 15 dB HL) tested for comprehension of sentences that conveyed the sentence meaning with simpler or more complex linguistic structures. A product-moment correlation was conducted between individuals' hearing acuity and their comprehension accuracy. Results A significant correlation appeared between hearing acuity and comprehension accuracy for syntactically complex sentences, but not for sentences with a simpler syntactic structure. Partial correlations confirmed this relationship to hold independent of participant age within this relatively narrow age range. Conclusion These findings suggest that slight elevations in hearing thresholds, even among young adults who pass a screen for normal hearing, can affect comprehension accuracy for spoken sentences when combined with cognitive demands imposed by sentences that convey their meaning with a complex linguistic structure. These findings support limited resource models of attentional allocation and argue for routine baseline hearing evaluations of young adults with current age-normal hearing acuity.
Affiliation(s)
- Nicole D. Ayasse
- Department of Psychology and Volen National Center for Complex Systems, Brandeis University, Waltham, MA
- Lana R. Penn
- Department of Psychology and Volen National Center for Complex Systems, Brandeis University, Waltham, MA
- Arthur Wingfield
- Department of Psychology and Volen National Center for Complex Systems, Brandeis University, Waltham, MA
29
Parthasarathy A, Bartlett EL, Kujawa SG. Age-related Changes in Neural Coding of Envelope Cues: Peripheral Declines and Central Compensation. Neuroscience 2019; 407:21-31. [DOI: 10.1016/j.neuroscience.2018.12.007] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2018] [Revised: 11/30/2018] [Accepted: 12/03/2018] [Indexed: 12/22/2022]
30
Koelewijn T, van Haastrecht JAP, Kramer SE. Pupil Responses of Adults With Traumatic Brain Injury During Processing of Speech in Noise. Trends Hear 2019; 22:2331216518811444. [PMID: 30482105 PMCID: PMC6277755 DOI: 10.1177/2331216518811444] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
Previous research has shown the effects of task demands on pupil responses in both normal hearing (NH) and hearing impaired (HI) adults. One consistent finding is that HI listeners have smaller pupil dilations at low levels of speech recognition performance (≤50%). This study aimed to examine the pupil dilation in adults with a normal pure-tone audiogram who experience serious difficulties when processing speech-in-noise. Hence, 20 adults, aged 26 to 62 years, with traumatic brain injury (TBI) or cerebrovascular accident (CVA) but with a normal audiogram participated. Their pupil size was recorded while they listened to sentences masked by fluctuating noise or interfering speech at 50% and 84% intelligibility. In each condition, participants rated their perceived performance, effort, and task persistence. In addition, participants performed the text reception threshold task—a visual sentence completion task—that measured language-related processing. Data were compared with those of age-matched NH and HI participants with no neurological problems obtained in earlier studies using the same setup and design. The TBI group had the same pure-tone audiogram and text reception threshold scores as the NH listeners, yet their speech reception thresholds were significantly worse. Although the pupil dilation responses on average did not differ between groups, self-rated effort scores were highest in the TBI group. Results of a correlation analysis showed that TBI participants with worse speech reception thresholds had a smaller pupil response. We speculate that increased distractibility or fatigue affected the ability of TBI participants to allocate effort during speech perception in noise.
Affiliation(s)
- Thomas Koelewijn
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology - Head and Neck Surgery, Ear & Hearing, Amsterdam Public Health Research Institute, Amsterdam, The Netherlands
- José A P van Haastrecht
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology - Head and Neck Surgery, Ear & Hearing, Amsterdam Public Health Research Institute, Amsterdam, The Netherlands
- Sophia E Kramer
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology - Head and Neck Surgery, Ear & Hearing, Amsterdam Public Health Research Institute, Amsterdam, The Netherlands
31
Ayasse ND, Wingfield A. A Tipping Point in Listening Effort: Effects of Linguistic Complexity and Age-Related Hearing Loss on Sentence Comprehension. Trends Hear 2019; 22:2331216518790907. [PMID: 30235973 PMCID: PMC6154259 DOI: 10.1177/2331216518790907] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022] Open
Abstract
In recent years, there has been a growing interest in the relationship between effort and performance. Early formulations implied that, as the challenge of a task increases, individuals will exert more effort, with resultant maintenance of stable performance. We report an experiment in which normal-hearing young adults, normal-hearing older adults, and older adults with age-related mild-to-moderate hearing loss were tested for comprehension of recorded sentences that varied the comprehension challenge in two ways. First, sentences were constructed that expressed their meaning either with a simpler subject-relative syntactic structure or a more computationally demanding object-relative structure. Second, for each sentence type, an adjectival phrase was inserted that created either a short or long gap in the sentence between the agent performing an action and the action being performed. The measurement of pupil dilation as an index of processing effort showed effort to increase with task difficulty until a difficulty tipping point was reached. Beyond this point, the measurement of pupil size revealed a commitment of effort by the two groups of older adults who failed to keep pace with task demands as evidenced by reduced comprehension accuracy. We take these pupillometry data as revealing a complex relationship between task difficulty, effort, and performance that might not otherwise appear from task performance alone.
Affiliation(s)
- Nicole D Ayasse
- Department of Psychology and Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
- Arthur Wingfield
- Department of Psychology and Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
32
Zekveld AA, Koelewijn T, Kramer SE. The Pupil Dilation Response to Auditory Stimuli: Current State of Knowledge. Trends Hear 2019; 22:2331216518777174. [PMID: 30249172 PMCID: PMC6156203 DOI: 10.1177/2331216518777174] [Citation(s) in RCA: 149] [Impact Index Per Article: 24.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023] Open
Abstract
The measurement of cognitive resource allocation during listening, or listening effort, provides valuable insight into the factors influencing auditory processing. In recent years, many studies inside and outside the field of hearing science have measured the pupil response evoked by auditory stimuli. The aim of the current review was to provide an exhaustive overview of these studies. The 146 studies included in this review originated from multiple domains, including hearing science and linguistics, but the review also covers research into motivation, memory, and emotion. The present review provides a unique overview of these studies and is organized according to the components of the Framework for Understanding Effortful Listening. A summary table presents the sample characteristics, an outline of the study design, stimuli, the pupil parameters analyzed, and the main findings of each study. The results indicate that the pupil response is sensitive to various task manipulations as well as interindividual differences. Many of the findings have been replicated. Frequent interactions between the independent factors affecting the pupil response have been reported, which indicates complex processes underlying cognitive resource allocation. This complexity should be taken into account in future studies that should focus more on interindividual differences, also including older participants. This review facilitates the careful design of new studies by indicating the factors that should be controlled for. In conclusion, measuring the pupil dilation response to auditory stimuli has been demonstrated to be a sensitive method applicable to numerous research questions. The sensitivity of the measure calls for carefully designed stimuli.
Affiliation(s)
- Adriana A Zekveld
- Section Ear & Hearing, Department of Otolaryngology-Head and Neck Surgery, Amsterdam Public Health Research Institute, VU University Medical Center, the Netherlands
- Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Sweden
- Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Thomas Koelewijn
- Section Ear & Hearing, Department of Otolaryngology-Head and Neck Surgery, Amsterdam Public Health Research Institute, VU University Medical Center, the Netherlands
- Sophia E Kramer
- Section Ear & Hearing, Department of Otolaryngology-Head and Neck Surgery, Amsterdam Public Health Research Institute, VU University Medical Center, the Netherlands
33
Wagner AE, Nagels L, Toffanin P, Opie JM, Başkent D. Individual Variations in Effort: Assessing Pupillometry for the Hearing Impaired. Trends Hear 2019; 23:2331216519845596. [PMID: 31131729 PMCID: PMC6537294 DOI: 10.1177/2331216519845596] [Citation(s) in RCA: 32] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2018] [Revised: 03/19/2019] [Accepted: 03/25/2019] [Indexed: 12/20/2022] Open
Abstract
Assessing effort in speech comprehension for hearing-impaired (HI) listeners is important, as effortful processing of speech can limit their hearing rehabilitation. We examined the measure of pupil dilation in its capacity to accommodate the heterogeneity that is present within clinical populations by studying lexical access in users with sensorineural hearing loss, who perceive speech via cochlear implants (CIs). We compared the pupillary responses of 15 experienced CI users and 14 age-matched normal-hearing (NH) controls during auditory lexical decision. A growth curve analysis was applied to compare the responses between the groups. NH listeners showed a coherent pattern of pupil dilation that reflects the task demands of the experimental manipulation and a homogenous time course of dilation. CI listeners showed more variability in the morphology of pupil dilation curves, potentially reflecting variable sources of effort across individuals. In follow-up analyses, we examined how speech perception, a task that relies on multiple stages of perceptual analyses, poses multiple sources of increased effort for HI listeners, wherefore we might not be measuring the same source of effort for HI as for NH listeners. We argue that interindividual variability among HI listeners can be clinically meaningful in attesting not only the magnitude but also the locus of increased effort. The understanding of individual variations in effort requires experimental paradigms that (a) differentiate the task demands during speech comprehension, (b) capture pupil dilation in its time course per individual listeners, and (c) investigate the range of individual variability present within clinical and NH populations.
Affiliation(s)
- Anita E. Wagner
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
- Graduate School of Medical Sciences, School of Behavioral and Cognitive Neuroscience, University of Groningen, the Netherlands
- Leanne Nagels
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
- Center for Language and Cognition Groningen, University of Groningen, the Netherlands
- Paolo Toffanin
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
- Graduate School of Medical Sciences, School of Behavioral and Cognitive Neuroscience, University of Groningen, the Netherlands
34
Differences in Hearing Acuity among "Normal-Hearing" Young Adults Modulate the Neural Basis for Speech Comprehension. eNeuro 2018; 5:eN-NWR-0263-17. [PMID: 29911176 PMCID: PMC6001266 DOI: 10.1523/eneuro.0263-17.2018] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2017] [Revised: 04/17/2018] [Accepted: 04/18/2018] [Indexed: 12/11/2022] Open
Abstract
In this paper, we investigate how subtle differences in hearing acuity affect the neural systems supporting speech processing in young adults. Auditory sentence comprehension requires perceiving a complex acoustic signal and performing linguistic operations to extract the correct meaning. We used functional MRI to monitor human brain activity while adults aged 18–41 years listened to spoken sentences. The sentences varied in their level of syntactic processing demands, containing either a subject-relative or object-relative center-embedded clause. All participants self-reported normal hearing, confirmed by audiometric testing, with some variation within a clinically normal range. We found that participants showed activity related to sentence processing in a left-lateralized frontotemporal network. Although accuracy was generally high, participants still made some errors, which were associated with increased activity in bilateral cingulo-opercular and frontoparietal attention networks. A whole-brain regression analysis revealed that activity in a right anterior middle frontal gyrus (aMFG) component of the frontoparietal attention network was related to individual differences in hearing acuity, such that listeners with poorer hearing showed greater recruitment of this region when successfully understanding a sentence. The activity in the right aMFG for listeners with poor hearing did not differ as a function of sentence type, suggesting a general mechanism that is independent of linguistic processing demands. Our results suggest that even modest variations in hearing ability impact the systems supporting auditory speech comprehension, and that auditory sentence comprehension entails the coordination of a left perisylvian network that is sensitive to linguistic variation with an executive attention network that responds to acoustic challenge.
35
Van Engen KJ, McLaughlin DJ. Eyes and ears: Using eye tracking and pupillometry to understand challenges to speech recognition. Hear Res 2018; 369:56-66. [PMID: 29801981 DOI: 10.1016/j.heares.2018.04.013] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/03/2017] [Revised: 04/12/2018] [Accepted: 04/25/2018] [Indexed: 11/16/2022]
Abstract
Although human speech recognition is often experienced as relatively effortless, a number of common challenges can render the task more difficult. Such challenges may originate in talkers (e.g., unfamiliar accents, varying speech styles), the environment (e.g. noise), or in listeners themselves (e.g., hearing loss, aging, different native language backgrounds). Each of these challenges can reduce the intelligibility of spoken language, but even when intelligibility remains high, they can place greater processing demands on listeners. Noisy conditions, for example, can lead to poorer recall for speech, even when it has been correctly understood. Speech intelligibility measures, memory tasks, and subjective reports of listener difficulty all provide critical information about the effects of such challenges on speech recognition. Eye tracking and pupillometry complement these methods by providing objective physiological measures of online cognitive processing during listening. Eye tracking records the moment-to-moment direction of listeners' visual attention, which is closely time-locked to unfolding speech signals, and pupillometry measures the moment-to-moment size of listeners' pupils, which dilate in response to increased cognitive load. In this paper, we review the uses of these two methods for studying challenges to speech recognition.
36
Winn MB, Wendt D, Koelewijn T, Kuchinsky SE. Best Practices and Advice for Using Pupillometry to Measure Listening Effort: An Introduction for Those Who Want to Get Started. Trends Hear 2018; 22:2331216518800869. [PMID: 30261825 PMCID: PMC6166306 DOI: 10.1177/2331216518800869] [Citation(s) in RCA: 136] [Impact Index Per Article: 19.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2018] [Revised: 08/07/2018] [Accepted: 08/14/2018] [Indexed: 01/12/2023] Open
Abstract
Within the field of hearing science, pupillometry is a widely used method for quantifying listening effort. Its use in research is growing exponentially, and many labs are applying, or considering applying, pupillometry for the first time. Hence, there is a growing need for a methods paper on pupillometry covering topics spanning from experiment logistics and timing to data cleaning and what parameters to analyze. This article contains the basic information and considerations needed to plan, set up, and interpret a pupillometry experiment, as well as commentary about how to interpret the response. Included are practicalities like minimal system requirements for recording a pupil response, specifications for peripheral equipment, experiment logistics and constraints, and different kinds of data processing. Additional details include participant inclusion and exclusion criteria and some methodological considerations that might not be necessary in other auditory experiments. We discuss what data should be recorded and how to monitor data quality during recording in order to minimize artifacts. Data processing and analysis are considered as well. Finally, we share insights from the collective experience of the authors and discuss some of the challenges that still lie ahead.
Affiliation(s)
- Matthew B. Winn - Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, MN, USA
- Dorothea Wendt - Eriksholm Research Centre, Snekkersten, Denmark; Hearing Systems, Department of Electrical Engineering, Technical University of Denmark, Kongens Lyngby, Denmark
- Thomas Koelewijn - Section Ear & Hearing, Department of Otolaryngology–Head and Neck Surgery, Amsterdam Public Health Research Institute, VU University Medical Center, the Netherlands
- Stefanie E. Kuchinsky - National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD, USA
|
37
|
Best V, Keidser G, Freeston K, Buchholz JM. Evaluation of the NAL Dynamic Conversations Test in older listeners with hearing loss. Int J Audiol 2017; 57:221-229. [PMID: 28826285 DOI: 10.1080/14992027.2017.1365275] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Indexed: 10/19/2022]
Abstract
OBJECTIVE The National Acoustic Laboratories Dynamic Conversations Test (NAL-DCT) is a new test of speech comprehension that incorporates a realistic environment and dynamic speech materials that capture certain features of everyday conversations. The goal of this study was to assess the suitability of the test for studying the consequences of hearing loss and amplification in older listeners. DESIGN Unaided and aided comprehension scores were measured for single-, two- and three-talker passages, along with unaided and aided sentence recall. To characterise the relevant cognitive abilities of the group, measures of short-term working memory, verbal information-processing speed and reading comprehension speed were collected. STUDY SAMPLE Participants were 41 older listeners with varying degrees of hearing loss. RESULTS Performance on both the NAL-DCT and the sentence test was strongly driven by hearing loss, but performance on the NAL-DCT was additionally related to a composite cognitive deficit score. Benefits of amplification were measurable but influenced by individual test SNRs. CONCLUSIONS The NAL-DCT is sensitive to the same factors as a traditional sentence recall test, but in addition is sensitive to the cognitive factors required for speech processing. The test shows promise as a tool for research concerned with real-world listening.
Affiliation(s)
- Virginia Best - Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA, USA
- Gitte Keidser - National Acoustic Laboratories, Sydney, NSW, Australia
- Katrina Freeston - National Acoustic Laboratories, Sydney, NSW, Australia
- Jörg M Buchholz - National Acoustic Laboratories, Sydney, NSW, Australia; Department of Audiology, Macquarie University, Sydney, NSW, Australia
|