51
Reduced Semantic Context and Signal-to-Noise Ratio Increase Listening Effort As Measured Using Functional Near-Infrared Spectroscopy. Ear Hear 2021; 43:836-848. [PMID: 34623112 DOI: 10.1097/aud.0000000000001137]
Abstract
OBJECTIVES Understanding speech-in-noise can be highly effortful. Decreasing the signal-to-noise ratio (SNR) of speech increases listening effort, but it is relatively unclear if decreasing the level of semantic context does as well. The current study used functional near-infrared spectroscopy to evaluate two primary hypotheses: (1) listening effort (operationalized as oxygenation of the left lateral prefrontal cortex [PFC]) increases as the SNR decreases and (2) listening effort increases as context decreases. DESIGN Twenty-eight younger adults with normal hearing completed the Revised Speech Perception in Noise Test, in which they listened to sentences and reported the final word. These sentences either had an easy SNR (+4 dB) or a hard SNR (-2 dB), and were either low in semantic context (e.g., "Tom could have thought about the sport") or high in context (e.g., "She had to vacuum the rug"). PFC oxygenation was measured throughout using functional near-infrared spectroscopy. RESULTS Accuracy on the Revised Speech Perception in Noise Test was worse when the SNR was hard than when it was easy, and worse for sentences low in semantic context than high in context. Similarly, oxygenation across the entire PFC (including the left lateral PFC) was greater when the SNR was hard, and left lateral PFC oxygenation was greater when context was low. CONCLUSIONS These results suggest that activation of the left lateral PFC (interpreted here as reflecting listening effort) increases to compensate for acoustic and linguistic challenges. This may reflect the increased engagement of domain-general and domain-specific processes subserved by the dorsolateral prefrontal cortex (e.g., cognitive control) and inferior frontal gyrus (e.g., predicting the sensory consequences of articulatory gestures), respectively.
52
Bhandari P, Demberg V, Kray J. Semantic Predictability Facilitates Comprehension of Degraded Speech in a Graded Manner. Front Psychol 2021; 12:714485. [PMID: 34566795 PMCID: PMC8459870 DOI: 10.3389/fpsyg.2021.714485]
Abstract
Previous studies have shown that at moderate levels of spectral degradation, semantic predictability facilitates language comprehension. It has been argued that when speech is degraded, listeners have narrowed expectations about sentence endings; i.e., semantic prediction may be limited to only the most highly predictable sentence completions. The main objectives of this study were to (i) examine whether listeners form narrowed expectations or whether they form predictions across a wide range of probable sentence endings, (ii) assess whether the facilitatory effect of semantic predictability is modulated by perceptual adaptation to degraded speech, and (iii) establish a sensitive metric for the measurement of language comprehension. To this end, we created 360 German subject-verb-object sentences that varied in the semantic predictability of a sentence-final target word in a graded manner (high, medium, and low) and in the level of spectral degradation (1, 4, 6, and 8 channels of noise-vocoding). These sentences were presented auditorily to two groups: one group (n = 48) performed a listening task in an unpredictable channel context in which the degraded speech levels were randomized, while the other group (n = 50) performed the task in a predictable channel context in which the degraded speech levels were blocked. The results showed that at 4-channel noise-vocoding, response accuracy was higher for high-predictability sentences than for medium-predictability sentences, which in turn yielded higher accuracy than low-predictability sentences. This suggests that, in contrast to the narrowed-expectations view, comprehension of moderately degraded speech is facilitated in a graded manner across low-, medium-, and high-predictability sentences; listeners probabilistically preactivate upcoming words from a wide semantic space rather than limiting prediction to highly probable sentence endings.
Additionally, in both channel contexts we did not observe learning effects; i.e., response accuracy did not increase over the course of the experiment, and response accuracy was higher in the predictable than in the unpredictable channel context. We speculate from these observations that when there is no trial-by-trial variation in the level of speech degradation, listeners adapt to speech quality over a long timescale; however, when there is trial-by-trial variation in a high-level semantic feature (e.g., sentence predictability), listeners do not adapt to a low-level perceptual property (e.g., speech quality) over a short timescale.
Affiliation(s)
- Pratik Bhandari
- Department of Psychology, Saarland University, Saarbrücken, Germany
- Department of Language Science and Technology, Saarland University, Saarbrücken, Germany
- Vera Demberg
- Department of Language Science and Technology, Saarland University, Saarbrücken, Germany
- Department of Computer Science, Saarland University, Saarbrücken, Germany
- Jutta Kray
- Department of Psychology, Saarland University, Saarbrücken, Germany
53
Abstract
Listening effort is a valuable and important notion to measure because it is among the primary complaints of people with hearing loss. It is tempting and intuitive to accept speech intelligibility scores as a proxy for listening effort, but this link is likely oversimplified and lacks actionable explanatory power. This study was conducted to explain the mechanisms of listening effort that are not captured by intelligibility scores, using sentence-repetition tasks where specific kinds of mistakes were prospectively planned or analyzed retrospectively. Effort was measured as changes in pupil size among 20 listeners with normal hearing and 19 listeners with cochlear implants. Experiment 1 demonstrates that mental correction of misperceived words increases effort even when responses are correct. Experiment 2 shows that for incorrect responses, listening effort is not a function of the proportion of words correct but is rather driven by the types of errors, position of errors within a sentence, and the need to resolve ambiguity, reflecting how easily the listener can make sense of a perception. A simple taxonomy of error types is provided that is both intuitive and consistent with data from these two experiments. The diversity of errors in these experiments implies that speech perception tasks can be designed prospectively to elicit the mistakes that are more closely linked with effort. Although mental corrective action and number of mistakes can scale together in many experiments, it is possible to dissociate them to advance toward a more explanatory (rather than correlational) account of listening effort.
Affiliation(s)
- Matthew B. Winn
- Matthew B. Winn, University of Minnesota, Twin Cities, 164 Pillsbury Dr SE, Minneapolis, MN 55455, United States
54
Jafari Z, Kolb BE, Mohajerani MH. Age-related hearing loss and cognitive decline: MRI and cellular evidence. Ann N Y Acad Sci 2021; 1500:17-33. [PMID: 34114212 DOI: 10.1111/nyas.14617]
Abstract
Extensive evidence supports an association between age-related hearing loss (ARHL) and cognitive decline. It is, however, unknown whether a causal relationship exists between the two or whether both result from shared mechanisms. This paper examines this relationship through a comprehensive review of MRI findings as well as evidence of cellular alterations. Our review of structural MRI studies demonstrates that ARHL is independently linked to accelerated atrophy of total and regional brain volumes and to reduced white matter integrity. Resting-state fMRI studies of ARHL show changes in spontaneous neural activity and brain functional connectivity, while task-based fMRI studies show age-independent alterations in brain areas supporting auditory, language, cognitive, and affective processing. Although MRI findings support a causal relationship between ARHL and cognitive decline, the contribution of potential shared mechanisms should also be considered. In this regard, the review of cellular evidence indicates that cellular alterations may act as common mechanisms underlying both age-related changes in hearing and cognition. Considering existing evidence, no single hypothesis can explain the link between ARHL and cognitive decline, and contributions from both causal (i.e., the sensory hypothesis) and shared (i.e., the common cause hypothesis) mechanisms are expected.
Affiliation(s)
- Zahra Jafari
- Department of Neuroscience, Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, Alberta, Canada
- Bryan E Kolb
- Department of Neuroscience, Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, Alberta, Canada
- Majid H Mohajerani
- Department of Neuroscience, Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, Alberta, Canada
55
Murai SA, Riquimaroux H. Neural correlates of subjective comprehension of noise-vocoded speech. Hear Res 2021; 405:108249. [PMID: 33894680 DOI: 10.1016/j.heares.2021.108249]
Abstract
Under an acoustically degraded condition, the degree of speech comprehension fluctuates within individuals. Understanding the relationship between such fluctuations in comprehension and neural responses might reveal perceptual processing for distorted speech. In this study, we investigated the cerebral activity associated with the degree of subjective comprehension of noise-vocoded speech sounds (NVSS) using functional magnetic resonance imaging. Our results indicate that higher comprehension of NVSS sentences was associated with greater activation in the right superior temporal cortex, and that activity in the left inferior frontal gyrus (Broca's area) was increased when a listener recognized words in a sentence they did not fully comprehend. In addition, results of laterality analysis demonstrated that recognition of words in an NVSS sentence led to less lateralized responses in the temporal cortex, though left lateralization was observed when no words were recognized. The data suggest that variation in comprehension within individuals can be associated with changes in lateralization in the temporal auditory cortex.
Affiliation(s)
- Shota A Murai
- Faculty of Life and Medical Sciences, Doshisha University, 1-3 Miyakodani, Tatara, Kyotanabe 610-0321, Kyoto, Japan
- Hiroshi Riquimaroux
- Faculty of Life and Medical Sciences, Doshisha University, 1-3 Miyakodani, Tatara, Kyotanabe 610-0321, Kyoto, Japan
56
Kadem M, Herrmann B, Rodd JM, Johnsrude IS. Pupil Dilation Is Sensitive to Semantic Ambiguity and Acoustic Degradation. Trends Hear 2021; 24:2331216520964068. [PMID: 33124518 PMCID: PMC7607724 DOI: 10.1177/2331216520964068]
Abstract
Speech comprehension is challenged by background noise, acoustic interference, and linguistic factors, such as the presence of words with more than one meaning (homonyms and homophones). Previous work suggests that homophony in spoken language increases cognitive demand. Here, we measured pupil dilation—a physiological index of cognitive demand—while listeners heard high-ambiguity sentences, containing words with more than one meaning, or well-matched low-ambiguity sentences without ambiguous words. This semantic-ambiguity manipulation was crossed with an acoustic manipulation in two experiments. In Experiment 1, sentences were masked with 30-talker babble at 0 and +6 dB signal-to-noise ratio (SNR), and in Experiment 2, sentences were heard with or without a pink noise masker at –2 dB SNR. Speech comprehension was measured by asking listeners to judge the semantic relatedness of a visual probe word to the previous sentence. In both experiments, comprehension was lower for high- than for low-ambiguity sentences when SNRs were low. Pupils dilated more when sentences included ambiguous words, even when no noise was added (Experiment 2). Pupils also dilated more when SNRs were low. The effect of masking was larger than the effect of ambiguity for performance and pupil responses. This work demonstrates that the presence of homophones, a condition that is ubiquitous in natural language, increases cognitive demand and reduces intelligibility of speech heard with a noisy background.
Affiliation(s)
- Mason Kadem
- Department of Psychology, The University of Western Ontario, London, Ontario, Canada; School of Biomedical Engineering, McMaster University, Hamilton, Ontario, Canada
- Björn Herrmann
- Department of Psychology, The University of Western Ontario, London, Ontario, Canada; Rotman Research Institute, Baycrest, Toronto, Ontario, Canada; Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Jennifer M Rodd
- Department of Experimental Psychology, University College London, London, United Kingdom
- Ingrid S Johnsrude
- Department of Psychology, The University of Western Ontario, London, Ontario, Canada; School of Communication and Speech Disorders, The University of Western Ontario, London, Ontario, Canada
57
Ayasse ND, Hodson AJ, Wingfield A. The Principle of Least Effort and Comprehension of Spoken Sentences by Younger and Older Adults. Front Psychol 2021; 12:629464. [PMID: 33796047 PMCID: PMC8007979 DOI: 10.3389/fpsyg.2021.629464]
Abstract
There is considerable evidence that listeners' understanding of a spoken sentence need not always follow from a full analysis of the words and syntax of the utterance. Rather, listeners may instead conduct a superficial analysis, sampling some words and using presumed plausibility to arrive at an understanding of the sentence meaning. Because this latter strategy occurs more often for sentences with complex syntax that place a heavier processing burden on the listener than sentences with simpler syntax, shallow processing may represent a resource conserving strategy reflected in reduced processing effort. This factor may be even more important for older adults who as a group are known to have more limited working memory resources. In the present experiment, 40 older adults (M age = 75.5 years) and 20 younger adults (M age = 20.7) were tested for comprehension of plausible and implausible sentences with a simpler subject-relative embedded clause structure or a more complex object-relative embedded clause structure. Dilation of the pupil of the eye was recorded as an index of processing effort. Results confirmed greater comprehension accuracy for plausible than implausible sentences, and for sentences with simpler than more complex syntax, with both effects amplified for the older adults. Analysis of peak pupil dilations for implausible sentences revealed a complex three-way interaction between age, syntactic complexity, and plausibility. Results are discussed in terms of models of sentence comprehension, and pupillometry as an index of intentional task engagement.
Affiliation(s)
- Arthur Wingfield
- Volen National Center for Complex Systems, Brandeis University, Waltham, MA, United States
58
Zhang Y, Lehmann A, Deroche M. Disentangling listening effort and memory load beyond behavioural evidence: Pupillary response to listening effort during a concurrent memory task. PLoS One 2021; 16:e0233251. [PMID: 33657100 PMCID: PMC7928507 DOI: 10.1371/journal.pone.0233251]
Abstract
Recent research has demonstrated that pupillometry is a robust measure for quantifying listening effort. However, pupillary responses in listening situations where multiple cognitive functions are engaged and sustained over a period of time remain hard to interpret. This limits our conceptualisation and understanding of listening effort in realistic situations, because rarely in everyday life are people challenged by one task at a time. Therefore, the purpose of this experiment was to reveal the dynamics of listening effort in a sustained listening condition using a word repeat-and-recall task. Words were presented in quiet and in speech-shaped noise at signal-to-noise ratios (SNRs) of 0, 7, and 14 dB. Participants were presented with lists of 10 words and required to repeat each word after its presentation. At the end of the list, participants either recalled as many words as possible or moved on to the next list. Pupil dilation was recorded throughout the experiment. When only word repetition was required, peak pupil dilation (PPD) was larger at 0 dB SNR than in the other conditions; when recall was also required, PPD showed no difference among SNR levels, and PPD at 0 dB SNR was smaller than in the repeat-only condition. Baseline pupil diameter and PPD followed different patterns across the 10 serial positions within a block in the conditions requiring recall: baseline pupil diameter built up progressively and plateaued in the later positions (but shot up when listeners were recalling the previously heard words from memory), while PPD decreased more quickly than in the repeat-only condition. The current findings demonstrate that additional cognitive load during a speech intelligibility task can disturb the well-established relation between pupillary response and listening effort. Both the magnitude and the temporal pattern of task-evoked pupillary responses differ greatly in complex listening conditions, urging more listening-effort studies in complex and realistic listening situations.
Affiliation(s)
- Yue Zhang
- Department of Otolaryngology, McGill University, Montreal, Canada
- Centre for Research on Brain, Language and Music, Montreal, Canada
- Laboratory for Brain, Music and Sound Research, Montreal, Canada
- Centre for Interdisciplinary Research in Music Media and Technology, Montreal, Canada
- Alexandre Lehmann
- Department of Otolaryngology, McGill University, Montreal, Canada
- Centre for Research on Brain, Language and Music, Montreal, Canada
- Laboratory for Brain, Music and Sound Research, Montreal, Canada
- Centre for Interdisciplinary Research in Music Media and Technology, Montreal, Canada
- Mickael Deroche
- Department of Otolaryngology, McGill University, Montreal, Canada
- Centre for Research on Brain, Language and Music, Montreal, Canada
- Laboratory for Brain, Music and Sound Research, Montreal, Canada
- Centre for Interdisciplinary Research in Music Media and Technology, Montreal, Canada
- Department of Psychology, Concordia University, Montreal, Canada
59
Holmes E, Zeidman P, Friston KJ, Griffiths TD. Difficulties with Speech-in-Noise Perception Related to Fundamental Grouping Processes in Auditory Cortex. Cereb Cortex 2021; 31:1582-1596. [PMID: 33136138 PMCID: PMC7869094 DOI: 10.1093/cercor/bhaa311]
Abstract
In our everyday lives, we are often required to follow a conversation when background noise is present ("speech-in-noise" [SPIN] perception). SPIN perception varies widely, and people who are worse at SPIN perception are also worse at fundamental auditory grouping, as assessed by figure-ground tasks. Here, we examined the cortical processes that link difficulties with SPIN perception to difficulties with figure-ground perception using functional magnetic resonance imaging. We found strong evidence that the earliest stages of the auditory cortical hierarchy (left core and belt areas) are similarly disinhibited when SPIN and figure-ground tasks are more difficult (i.e., at target-to-masker ratios corresponding to 60% rather than 90% performance), consistent with increased cortical gain at lower levels of the auditory hierarchy. Overall, our results reveal a common neural substrate for these basic (figure-ground) and naturally relevant (SPIN) tasks, which provides a common computational basis for the link between SPIN perception and fundamental auditory grouping.
Affiliation(s)
- Emma Holmes
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, UCL, London WC1N 3AR, UK
- Peter Zeidman
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, UCL, London WC1N 3AR, UK
- Karl J Friston
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, UCL, London WC1N 3AR, UK
- Timothy D Griffiths
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, UCL, London WC1N 3AR, UK
- Biosciences Institute, Faculty of Medical Sciences, Newcastle University, Newcastle upon Tyne NE2 4HH, UK
60
Holmes E, Utoomprurkporn N, Hoskote C, Warren JD, Bamiou DE, Griffiths TD. Simultaneous auditory agnosia: Systematic description of a new type of auditory segregation deficit following a right hemisphere lesion. Cortex 2021; 135:92-107. [PMID: 33360763 PMCID: PMC7856551 DOI: 10.1016/j.cortex.2020.10.023]
Abstract
We investigated auditory processing in a young patient who experienced a single embolus causing an infarct in the right middle cerebral artery territory. This led to damage to auditory cortex including planum temporale that spared medial Heschl's gyrus, and included damage to the posterior insula and inferior parietal lobule. She reported chronic difficulties with segregating speech from noise and segregating elements of music. Clinical tests showed no evidence for abnormal cochlear function. Follow-up tests confirmed difficulties with auditory segregation in her left ear that spanned multiple domains, including words-in-noise and music streaming. Testing with a stochastic figure-ground task (a way of estimating generic acoustic foreground and background segregation) demonstrated that this was also abnormal. This is the first demonstration of an acquired deficit in the segregation of complex acoustic patterns due to cortical damage, which we argue is a causal explanation for the symptomatic deficits in the segregation of speech and music. These symptoms are analogous to the visual symptom of simultaneous agnosia. Consistent with functional imaging studies on normal listeners, the work implicates non-primary auditory cortex. Further, the work demonstrates a (partial) lateralisation of the necessary anatomical substrate for segregation that has not been previously highlighted.
Affiliation(s)
- Emma Holmes
- Wellcome Centre for Human Neuroimaging, UCL, London, UK
- Nattawan Utoomprurkporn
- UCL Ear Institute, UCL, London, UK; NIHR University College London Hospitals Biomedical Research Centre, University College London Hospitals NHS Foundation Trust, UCL, London, UK; Faculty of Medicine, Chulalongkorn University, King Chulalongkorn Memorial Hospital, Bangkok, Thailand
- Chandrashekar Hoskote
- Lysholm Department of Neuroradiology, University College London Hospitals NHS Foundation Trust, UCL, London, UK
- Doris-Eva Bamiou
- UCL Ear Institute, UCL, London, UK; NIHR University College London Hospitals Biomedical Research Centre, University College London Hospitals NHS Foundation Trust, UCL, London, UK
- Timothy D Griffiths
- Wellcome Centre for Human Neuroimaging, UCL, London, UK; Biosciences Institute, Faculty of Medical Sciences, Newcastle University, Newcastle upon Tyne, UK
61
Choi HG, Hong SK, Lee HJ, Chang J. Acute Alcohol Intake Deteriorates Hearing Thresholds and Speech Perception in Noise. Audiol Neurootol 2020; 26:218-225. [PMID: 33341812 DOI: 10.1159/000510694]
Abstract
OBJECTIVES The hearing process involves complex peripheral and central auditory pathways and can be influenced by various situations or medications. To date, very little is known about the effects of alcohol on auditory performance. The purpose of the present study was to evaluate how acute alcohol administration affects various aspects of hearing performance in human subjects, from the auditory perceptive threshold to the speech-in-noise task, which is cognitively demanding. METHODS A total of 43 healthy volunteers were recruited, and each participant received a calculated amount of alcohol according to their body weight and sex, with a targeted blood alcohol content of 0.05% using the Widmark formula. Hearing was tested in alcohol-free conditions (no alcohol intake within the previous 24 h) and in acute alcohol conditions. A test battery composed of pure-tone audiometry, speech reception threshold (SRT), word recognition score (WRS), distortion product otoacoustic emission (DPOAE), the gaps-in-noise (GIN) test, and the Korean matrix sentence test (testing speech perception in noise) was performed in the 2 conditions. RESULTS Acute alcohol intake elevated pure-tone hearing thresholds and SRT but did not affect WRS. Neither otoacoustic emissions recorded with DPOAE nor the temporal resolution measured with the GIN test was influenced by alcohol intake. Hearing performance in a noisy environment in both easy (-2 dB signal-to-noise ratio [SNR]) and difficult (-8 dB SNR) conditions was decreased by alcohol. CONCLUSIONS Acute alcohol intake elevated auditory perceptive thresholds and affected performance in complex and difficult auditory tasks rather than simple tasks.
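The weight- and sex-based dosing rule named in the abstract (the Widmark formula) can be sketched as follows. This is a minimal illustration, not the authors' actual procedure: the distribution factors (r ≈ 0.68 for men, 0.55 for women) are standard textbook values, a 0.05% target is treated as roughly 0.5 g of ethanol per kg of body weight scaled by r, and elimination during drinking is ignored.

```python
def widmark_alcohol_dose(weight_kg: float, sex: str,
                         target_bac_g_per_kg: float = 0.5) -> float:
    """Estimate grams of pure ethanol to reach a target blood alcohol
    concentration via the classic Widmark relation A = c * r * W.

    Assumptions (not from the study): r = 0.68 (male) / 0.55 (female),
    target concentration c in g/kg (0.05% BAC taken as ~0.5 g/kg),
    and no alcohol metabolism during intake.
    """
    r = 0.68 if sex == "male" else 0.55  # Widmark distribution factor
    return target_bac_g_per_kg * r * weight_kg

# Example: a 70 kg male targeting ~0.05% BAC needs roughly 24 g of ethanol
dose_g = widmark_alcohol_dose(70, "male")
```

To convert the estimated grams to a beverage volume, divide by ethanol's density (0.789 g/mL) times the drink's alcohol fraction (e.g., 0.4 for a 40% spirit).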
Affiliation(s)
- Hyo Geun Choi
- Department of Otorhinolaryngology-Head & Neck Surgery, Hallym University College of Medicine, Chuncheon, Republic of Korea; Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Republic of Korea
- Sung Kwang Hong
- Department of Otorhinolaryngology-Head & Neck Surgery, Hallym University College of Medicine, Chuncheon, Republic of Korea; Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Republic of Korea
- Hyo-Jeong Lee
- Department of Otorhinolaryngology-Head & Neck Surgery, Hallym University College of Medicine, Chuncheon, Republic of Korea; Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Republic of Korea
- Jiwon Chang
- Department of Otorhinolaryngology-Head & Neck Surgery, Hallym University College of Medicine, Chuncheon, Republic of Korea
62
Griffiths TD, Lad M, Kumar S, Holmes E, McMurray B, Maguire EA, Billig AJ, Sedley W. How Can Hearing Loss Cause Dementia? Neuron 2020; 108:401-412. [PMID: 32871106 PMCID: PMC7664986 DOI: 10.1016/j.neuron.2020.08.003]
Abstract
Epidemiological studies identify midlife hearing loss as an independent risk factor for dementia, estimated to account for 9% of cases. We evaluate candidate brain bases for this relationship. These bases include a common pathology affecting the ascending auditory pathway and multimodal cortex, depletion of cognitive reserve due to an impoverished listening environment, and the occupation of cognitive resources when listening in difficult conditions. We also put forward an alternate mechanism, drawing on new insights into the role of the medial temporal lobe in auditory cognition. In particular, we consider how aberrant activity in the service of auditory pattern analysis, working memory, and object processing may interact with dementia pathology in people with hearing loss. We highlight how the effect of hearing interventions on dementia depends on the specific mechanism and suggest avenues for work at the molecular, neuronal, and systems levels to pin this down.
Affiliation(s)
- Timothy D Griffiths
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne NE2 4HH, UK; Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK; Human Brain Research Laboratory, Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
- Meher Lad
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne NE2 4HH, UK
- Sukhbinder Kumar
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne NE2 4HH, UK
- Emma Holmes
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Bob McMurray
- Departments of Psychological and Brain Sciences, Communication Sciences and Disorders, Otolaryngology, University of Iowa, Iowa City, IA 52242, USA
- Eleanor A Maguire
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- William Sedley
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne NE2 4HH, UK
63
Herrmann B, Johnsrude IS. Absorption and Enjoyment During Listening to Acoustically Masked Stories. Trends Hear 2020; 24:2331216520967850. [PMID: 33143565 DOI: 10.1177/2331216520967850]
Abstract
Comprehension of speech masked by background sound requires increased cognitive processing, which makes listening effortful. Research in hearing has focused on such challenging listening experiences, in part because they are thought to contribute to social withdrawal in people with hearing impairment. Research has focused less on positive listening experiences, such as enjoyment, despite their potential importance in motivating effortful listening. Moreover, the artificial speech materials (such as disconnected, brief sentences) commonly used to investigate speech intelligibility and listening effort may be ill-suited to capture positive experiences when listening is challenging. Here, we investigate how listening to naturalistic spoken stories under acoustic challenges influences the quality of listening experiences. We assess absorption (the feeling of being immersed/engaged in a story), enjoyment, and listening effort and show that (a) story absorption and enjoyment are only minimally affected by moderate speech masking although listening effort increases, (b) thematic knowledge increases absorption and enjoyment and reduces listening effort when listening to a story presented in multitalker babble, and (c) absorption and enjoyment increase and effort decreases over time as individuals listen to several stories successively in multitalker babble. Our research indicates that naturalistic, spoken stories can reveal several concurrent listening experiences and that expertise in a topic can increase engagement and reduce effort. Our work also demonstrates that, although listening effort may increase with speech masking, listeners may still find the experience both absorbing and enjoyable.
Affiliation(s)
- Björn Herrmann
- Rotman Research Institute, Baycrest, Toronto, Ontario, Canada; Department of Psychology, University of Toronto, Toronto, Ontario, Canada; Department of Psychology, University of Western Ontario, London, Canada
- Ingrid S Johnsrude
- Department of Psychology, University of Western Ontario, London, Canada; School of Communication Sciences & Disorders, University of Western Ontario, London, Canada
64
Herrmann B, Johnsrude IS. A model of listening engagement (MoLE). Hear Res 2020; 397:108016. [DOI: 10.1016/j.heares.2020.108016]
65
Nawaz MZ, Ain QU, Zahid S, Zulfiqar T, Attique SA, Bilal M, Alghamdi HA, Yan W, Iqbal HMN. Physicochemical features and structural analysis of xanthine oxidase as a potential therapeutic target to prevent gout. J Radiat Res Appl Sci 2020. [DOI: 10.1080/16878507.2020.1812807]
Affiliation(s)
- Qurat-ul Ain
- Department of Computer Science, University of Agriculture, Faisalabad, Pakistan
- Sara Zahid
- Department of Computer Science, University of Agriculture, Faisalabad, Pakistan
- Tooba Zulfiqar
- Department of Computer Science, University of Agriculture, Faisalabad, Pakistan
- Syed Awais Attique
- Department of Computer Science, University of Agriculture, Faisalabad, Pakistan
- Muhammad Bilal
- School of Life Science and Food Engineering, Huaiyin Institute of Technology, Huaian, China
- Huda Ahmed Alghamdi
- Department of Biology, College of Sciences, King Khalid University, Abha, Saudi Arabia
- Wei Yan
- Department of Marine Science, College of Marine Science and Technology, China University of Geosciences, Wuhan, China
- Hafiz M. N. Iqbal
- School of Engineering and Sciences, Tecnologico De Monterrey, Monterrey, Mexico
66
Slade K, Plack CJ, Nuttall HE. The Effects of Age-Related Hearing Loss on the Brain and Cognitive Function. Trends Neurosci 2020; 43:810-821. [PMID: 32826080] [DOI: 10.1016/j.tins.2020.07.005]
Abstract
Age-related hearing loss (ARHL) is a common problem for older adults, leading to communication difficulties, isolation, and cognitive decline. Recently, hearing loss has been identified as potentially the most modifiable risk factor for dementia. Listening in challenging situations, or when the auditory system is damaged, strains cortical resources, and this may change how the brain responds to cognitively demanding situations more generally. We review the effects of ARHL on brain areas involved in speech perception, from the auditory cortex, through attentional networks, to the motor system. We explore current perspectives on the possible causal relationship between hearing loss, neural reorganisation, and cognitive impairment. Through this synthesis we aim to inspire innovative research and novel interventions for alleviating hearing loss and cognitive decline.
Affiliation(s)
- Kate Slade
- Department of Psychology, Lancaster University, Lancaster, UK
- Christopher J Plack
- Department of Psychology, Lancaster University, Lancaster, UK; Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, Manchester, UK
- Helen E Nuttall
- Department of Psychology, Lancaster University, Lancaster, UK
67
Greenlaw KM, Puschmann S, Coffey EBJ. Decoding of Envelope vs. Fundamental Frequency During Complex Auditory Stream Segregation. Neurobiol Lang 2020; 1:268-287. [PMID: 37215227] [PMCID: PMC10158587] [DOI: 10.1162/nol_a_00013]
Abstract
Hearing-in-noise perception is a challenging task that is critical to human function, but how the brain accomplishes it is not well understood. A candidate mechanism proposes that the neural representation of an attended auditory stream is enhanced relative to background sound via a combination of bottom-up and top-down mechanisms. To date, few studies have compared neural representation and its task-related enhancement across frequency bands that carry different auditory information, such as a sound's amplitude envelope (i.e., syllabic rate or rhythm; 1-9 Hz), and the fundamental frequency of periodic stimuli (i.e., pitch; >40 Hz). Furthermore, hearing-in-noise in the real world is frequently both messier and richer than the majority of tasks used in its study. In the present study, we use continuous sound excerpts that simultaneously offer predictive, visual, and spatial cues to help listeners separate the target from four acoustically similar simultaneously presented sound streams. We show that while both lower and higher frequency information about the entire sound stream is represented in the brain's response, the to-be-attended sound stream is strongly enhanced only in the slower, lower frequency sound representations. These results are consistent with the hypothesis that attended sound representations are strengthened progressively at higher level, later processing stages, and that the interaction of multiple brain systems can aid in this process. Our findings contribute to our understanding of auditory stream separation in difficult, naturalistic listening conditions and demonstrate that pitch and envelope information can be decoded from single-channel EEG data.
Affiliation(s)
- Keelin M. Greenlaw
- Department of Psychology, Concordia University, Montreal, QC, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS)
- The Centre for Research on Brain, Language and Music (CRBLM)
68
Vaden KI, Eckert MA, Dubno JR, Harris KC. Cingulo-opercular adaptive control for younger and older adults during a challenging gap detection task. J Neurosci Res 2020; 98:680-691. [PMID: 31385349] [PMCID: PMC7000297] [DOI: 10.1002/jnr.24506]
Abstract
Cingulo-opercular activity is hypothesized to reflect an adaptive control function that optimizes task performance through adjustments in attention and behavior, and outcome monitoring. While auditory perceptual task performance appears to benefit from elevated activity in cingulo-opercular regions of frontal cortex before stimuli are presented, this association appears reduced for older adults compared to younger adults. However, adaptive control function may be limited by difficult task conditions for older adults. An fMRI study was used to characterize adaptive control differences while 15 younger (average age = 24 years) and 15 older adults (average age = 68 years) performed a gap detection in noise task designed to limit age-related differences. During the fMRI study, participants listened to a noise recording and indicated with a button-press whether it contained a gap. Stimuli were presented between sparse fMRI scans (TR = 8.6 s) and BOLD measurements were collected during separate listening and behavioral response intervals. Age-related performance differences were limited by presenting gaps in noise with durations calibrated at or above each participant's detection threshold. Cingulo-opercular BOLD increased significantly throughout listening and behavioral response intervals, relative to a resting baseline. Correct behavioral responses were significantly more likely on trials with elevated pre-stimulus cingulo-opercular BOLD, consistent with an adaptive control framework. Cingulo-opercular adaptive control estimates appeared higher for participants with better gap sensitivity and lower response bias, irrespective of age, which suggests that this mechanism can benefit performance across the lifespan under conditions that limit age-related performance differences.
Affiliation(s)
- Kenneth I Vaden
- Hearing Research Program, Department of Otolaryngology - Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina
- Mark A Eckert
- Hearing Research Program, Department of Otolaryngology - Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina
- Judy R Dubno
- Hearing Research Program, Department of Otolaryngology - Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina
- Kelly C Harris
- Hearing Research Program, Department of Otolaryngology - Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina
69
Zekveld AA, Kramer SE, Rönnberg J, Rudner M. In a Concurrent Memory and Auditory Perception Task, the Pupil Dilation Response Is More Sensitive to Memory Load Than to Auditory Stimulus Characteristics. Ear Hear 2019; 40:272-286. [PMID: 29923867] [PMCID: PMC6400496] [DOI: 10.1097/aud.0000000000000612]
Abstract
OBJECTIVES Speech understanding may be cognitively demanding, but it can be enhanced when semantically related text cues precede auditory sentences. The present study aimed to determine whether (a) providing text cues reduces pupil dilation, a measure of cognitive load, during listening to sentences, (b) repeating the sentences aloud affects recall accuracy and pupil dilation during recall of cue words, and (c) semantic relatedness between cues and sentences affects recall accuracy and pupil dilation during recall of cue words. DESIGN Sentence repetition following text cues and recall of the text cues were tested. Twenty-six participants (mean age, 22 years) with normal hearing listened to masked sentences. On each trial, a set of four-word cues was presented visually as text preceding the auditory presentation of a sentence whose meaning was either related or unrelated to the cues. On each trial, participants first read the cue words, then listened to a sentence. Following this they spoke aloud either the cue words or the sentence, according to instruction, and finally on all trials orally recalled the cues. Peak pupil dilation was measured throughout listening and recall on each trial. Additionally, participants completed a test measuring the ability to perceive degraded verbal text information and three working memory tests (a reading span test, a size-comparison span test, and a test of memory updating). RESULTS Cue words that were semantically related to the sentence facilitated sentence repetition but did not reduce pupil dilation. Recall was poorer and there were more intrusion errors when the cue words were related to the sentences. Recall was also poorer when sentences were repeated aloud. Both behavioral effects were associated with greater pupil dilation. Larger reading span capacity and smaller size-comparison span were associated with larger peak pupil dilation during listening. 
Furthermore, larger reading span and greater memory updating ability were both associated with better cue recall overall. CONCLUSIONS Although sentence-related word cues facilitate sentence repetition, our results indicate that they do not reduce cognitive load during listening in noise with a concurrent memory load. As expected, higher working memory capacity was associated with better recall of the cues. Unexpectedly, however, semantic relatedness with the sentence reduced word cue recall accuracy and increased intrusion errors, suggesting an effect of semantic confusion. Further, speaking the sentence aloud also reduced word cue recall accuracy, probably due to articulatory suppression. Importantly, imposing a memory load during listening to sentences resulted in the absence of formerly established strong effects of speech intelligibility on the pupil dilation response. This nullified intelligibility effect demonstrates that the pupil dilation response to a cognitive (memory) task can completely overshadow the effect of perceptual factors on the pupil dilation response. This highlights the importance of taking cognitive task load into account during auditory testing.
Affiliation(s)
- Adriana A. Zekveld
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping and Örebro Universities, Linköping, Sweden
- Section Ear & Hearing, Department of Otolaryngology-Head and Neck Surgery and Amsterdam Public Health research institute, VU University Medical Center, Amsterdam, The Netherlands
- Sophia E. Kramer
- Section Ear & Hearing, Department of Otolaryngology-Head and Neck Surgery and Amsterdam Public Health research institute, VU University Medical Center, Amsterdam, The Netherlands
- Jerker Rönnberg
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping and Örebro Universities, Linköping, Sweden
- Mary Rudner
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping and Örebro Universities, Linköping, Sweden
70
Francis AL, Love J. Listening effort: Are we measuring cognition or affect, or both? Wiley Interdiscip Rev Cogn Sci 2019; 11:e1514. [PMID: 31381275] [DOI: 10.1002/wcs.1514]
Abstract
Listening effort is increasingly recognized as a factor in communication, particularly for and with nonnative speakers, for the elderly, for individuals with hearing impairment and/or for those working in noise. However, as highlighted by McGarrigle et al., International Journal of Audiology, 2014, 53, 433-445, the term "listening effort" encompasses a wide variety of concepts, including the engagement and control of multiple possibly distinct neural systems for information processing, and the affective response to the expenditure of those resources in a given context. Thus, experimental or clinical methods intended to objectively quantify listening effort may ultimately reflect a complex interaction between the operations of one or more of those information processing systems, and/or the affective and motivational response to the demand on those systems. Here we examine theoretical, behavioral, and psychophysiological factors related to resolving the question of what we are measuring, and why, when we measure "listening effort." This article is categorized under: Linguistics > Language in Mind and Brain; Psychology > Theory and Methods; Psychology > Attention; Psychology > Emotion and Motivation.
Affiliation(s)
- Alexander L Francis
- Department of Speech, Language and Hearing Sciences, Purdue University, West Lafayette, Indiana
- Jordan Love
- Department of Speech, Language and Hearing Sciences, Purdue University, West Lafayette, Indiana
71
McLaughlin SA, Thorne JC, Jirikowic T, Waddington T, Lee AKC, Astley Hemingway SJ. Listening Difficulties in Children With Fetal Alcohol Spectrum Disorders: More Than a Problem of Audibility. J Speech Lang Hear Res 2019; 62:1532-1548. [PMID: 31039324] [DOI: 10.1044/2018_jslhr-h-18-0359]
Abstract
Purpose Data from standardized caregiver questionnaires indicate that children with fetal alcohol spectrum disorders (FASDs) frequently exhibit atypical auditory behaviors, including reduced responsivity to spoken stimuli. Another body of evidence suggests that prenatal alcohol exposure may result in auditory dysfunction involving loss of audibility (i.e., hearing loss) and/or impaired processing of clearly audible, "suprathreshold" sounds necessary for sound-in-noise listening. Yet, the nexus between atypical auditory behavior and underlying auditory dysfunction in children with FASDs remains largely unexplored. Method To investigate atypical auditory behaviors in FASDs and explore their potential physiological bases, we examined clinical data from 325 children diagnosed with FASDs at the University of Washington using the FASD 4-Digit Diagnostic Code. Atypical behaviors reported on the "auditory filtering" domain of the Short Sensory Profile were assessed to document their prevalence across FASD diagnoses and explore their relationship to reported hearing loss and/or central nervous system measures of cognition, attention, and language function that may indicate suprathreshold processing deficits. Results Atypical auditory behavior was reported among 80% of children with FASDs, a prevalence that did not vary by FASD diagnostic severity or hearing status but was positively correlated with attention-deficit/hyperactivity disorder. In contrast, hearing loss was documented in the clinical records of 40% of children with fetal alcohol syndrome (FAS; a diagnosis on the fetal alcohol spectrum characterized by central nervous system dysfunction, facial dysmorphia, and growth deficiency), 16-fold more prevalent than for those with less severe FASDs (2.4%). Reported hearing loss was significantly associated with physical features characteristic of FAS. Conclusion Children with FAS but not other FASDs may be at a particular risk for hearing loss. 
However, listening difficulties in the absence of hearing loss (presumably related to suprathreshold processing deficits) are prevalent across the entire fetal alcohol spectrum. The nature and impact of both listening difficulties and hearing loss in FASDs warrant further investigation.
Affiliation(s)
- Susan A McLaughlin
- Institute for Learning & Brain Sciences, University of Washington, Seattle
- John C Thorne
- Department of Speech & Hearing Sciences, University of Washington, Seattle
- Tracy Jirikowic
- Division of Occupational Therapy, Department of Rehabilitation Medicine, School of Medicine, University of Washington, Seattle
- Tiffany Waddington
- Institute for Learning & Brain Sciences, University of Washington, Seattle
- Adrian K C Lee
- Institute for Learning & Brain Sciences, University of Washington, Seattle
- Department of Speech & Hearing Sciences, University of Washington, Seattle
- Susan J Astley Hemingway
- Department of Epidemiology, University of Washington, Seattle
- Department of Pediatrics, University of Washington, Seattle
72
Parthasarathy A, Bartlett EL, Kujawa SG. Age-related Changes in Neural Coding of Envelope Cues: Peripheral Declines and Central Compensation. Neuroscience 2019; 407:21-31. [DOI: 10.1016/j.neuroscience.2018.12.007]
73
Rudner M, Seeto M, Keidser G, Johnson B, Rönnberg J. Poorer Speech Reception Threshold in Noise Is Associated With Lower Brain Volume in Auditory and Cognitive Processing Regions. J Speech Lang Hear Res 2019; 62:1117-1130. [PMID: 31026199] [DOI: 10.1044/2018_jslhr-h-ascc7-18-0142]
Abstract
Purpose Hearing loss is associated with changes in brain volume in regions supporting auditory and cognitive processing. The purpose of this study was to determine whether there is a systematic association between hearing ability and brain volume in cross-sectional data from a large nonclinical cohort of middle-aged adults available from the UK Biobank Resource ( http://www.ukbiobank.ac.uk ). Method We performed a set of regression analyses to determine the association between speech reception threshold in noise (SRTn) and global brain volume as well as predefined regions of interest (ROIs) based on T1-weighted structural images, controlling for hearing-related comorbidities and cognition as well as demographic factors. In a 2nd set of analyses, we additionally controlled for hearing aid (HA) use. We predicted statistically significant associations globally and in ROIs including auditory and cognitive processing regions, possibly modulated by HA use. Results Whole-brain gray matter volume was significantly lower for individuals with poorer SRTn. Furthermore, the volume of 9 predicted ROIs including both auditory and cognitive processing regions was lower for individuals with poorer SRTn. The greatest percentage difference (-0.57%) in ROI volume relating to a 1 SD worsening of SRTn was found in the left superior temporal gyrus. HA use did not substantially modulate the pattern of association between brain volume and SRTn. Conclusions In a large middle-aged nonclinical population, poorer hearing ability is associated with lower brain volume globally as well as in cortical and subcortical regions involved in auditory and cognitive processing, but there was no conclusive evidence that this effect is moderated by HA use. This pattern of results supports the notion that poor hearing leads to reduced volume in brain regions recruited during speech understanding under challenging conditions. These findings should be tested in future longitudinal, experimental studies. 
Supplemental Material https://doi.org/10.23641/asha.7949357.
Affiliation(s)
- Mary Rudner
- Linnaeus Centre HEAD, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Mark Seeto
- National Acoustic Laboratories and the HEARing CRC, Sydney, New South Wales, Australia
- Gitte Keidser
- National Acoustic Laboratories and the HEARing CRC, Sydney, New South Wales, Australia
- Blake Johnson
- Department of Cognitive Science, Macquarie University, Sydney, New South Wales, Australia
- Jerker Rönnberg
- Linnaeus Centre HEAD, Department of Behavioural Sciences and Learning, Linköping University, Sweden
74
Peelle JE. Listening Effort: How the Cognitive Consequences of Acoustic Challenge Are Reflected in Brain and Behavior. Ear Hear 2018; 39:204-214. [PMID: 28938250] [PMCID: PMC5821557] [DOI: 10.1097/aud.0000000000000494]
Abstract
Everyday conversation frequently includes challenges to the clarity of the acoustic speech signal, including hearing impairment, background noise, and foreign accents. Although an obvious problem is the increased risk of making word identification errors, extracting meaning from a degraded acoustic signal is also cognitively demanding, which contributes to increased listening effort. The concepts of cognitive demand and listening effort are critical in understanding the challenges listeners face in comprehension, which are not fully predicted by audiometric measures. In this article, the authors review converging behavioral, pupillometric, and neuroimaging evidence that understanding acoustically degraded speech requires additional cognitive support and that this cognitive load can interfere with other operations such as language processing and memory for what has been heard. Behaviorally, acoustic challenge is associated with increased errors in speech understanding, poorer performance on concurrent secondary tasks, more difficulty processing linguistically complex sentences, and reduced memory for verbal material. Measures of pupil dilation support the challenge associated with processing a degraded acoustic signal, indirectly reflecting an increase in neural activity. Finally, functional brain imaging reveals that the neural resources required to understand degraded speech extend beyond traditional perisylvian language networks, most commonly including regions of prefrontal cortex, premotor cortex, and the cingulo-opercular network. Far from being exclusively an auditory problem, acoustic degradation presents listeners with a systems-level challenge that requires the allocation of executive cognitive resources. An important point is that a number of dissociable processes can be engaged to understand degraded speech, including verbal working memory and attention-based performance monitoring. 
The specific resources required likely differ as a function of the acoustic, linguistic, and cognitive demands of the task, as well as individual differences in listeners' abilities. A greater appreciation of cognitive contributions to processing degraded speech is critical in understanding individual differences in comprehension ability, variability in the efficacy of assistive devices, and guiding rehabilitation approaches to reducing listening effort and facilitating communication.
Affiliation(s)
- Jonathan E Peelle
- Department of Otolaryngology, Washington University in Saint Louis, Saint Louis, Missouri, USA
75
Neural Switch Asymmetry in Feature-Based Auditory Attention Tasks. J Assoc Res Otolaryngol 2019; 20:205-215. [PMID: 30675674] [DOI: 10.1007/s10162-018-00713-z]
Abstract
Active listening involves dynamically switching attention between competing talkers and is essential to following conversations in everyday environments. Previous investigations in human listeners have examined the neural mechanisms that support switching auditory attention within the acoustic featural cues of pitch and auditory space. Here, we explored the cortical circuitry underlying endogenous switching of auditory attention between pitch and spatial cues necessary to discern target from masker words. Because these tasks are of unequal difficulty, we expected an asymmetry in behavioral switch costs for hard-to-easy versus easy-to-hard switches, mirroring prior evidence from vision-based cognitive task-switching paradigms. We investigated the neural correlates of this behavioral switch asymmetry and associated cognitive control operations in the present auditory paradigm. Behaviorally, we observed no switch-cost asymmetry, i.e., no performance difference for switching from the more difficult attend-pitch to the easier attend-space condition (P→S) versus switching from easy-to-hard (S→P). However, left lateral prefrontal cortex activity, correlated with improved performance, was observed during a silent gap period when listeners switched attention from P→S, relative to switching within pitch cues. No such differential activity was seen for the analogous easy-to-hard switch. We hypothesize that this neural switch asymmetry reflects proactive cognitive control mechanisms that successfully reconfigured neurally-specified task parameters and resolved competition from other such "task sets," thereby obviating the expected behavioral switch-cost asymmetry. The neural switch activity observed was generally consistent with that seen in cognitive paradigms, suggesting that established cognitive models of attention switching may be productively applied to better understand similar processes in audition.
76
Rovetti J, Goy H, Pichora-Fuller MK, Russo FA. Functional Near-Infrared Spectroscopy as a Measure of Listening Effort in Older Adults Who Use Hearing Aids. Trends Hear 2019; 23:2331216519886722. [PMID: 31722613] [PMCID: PMC6856975] [DOI: 10.1177/2331216519886722]
Abstract
Listening effort may be reduced when hearing aids improve access to the acoustic signal. However, this possibility is difficult to evaluate because many neuroimaging methods used to measure listening effort are incompatible with hearing aid use. Functional near-infrared spectroscopy (fNIRS), which can be used to measure the concentration of oxygen in the prefrontal cortex (PFC), appears to be well-suited to this application. The first aim of this study was to establish whether fNIRS could measure cognitive effort during listening in older adults who use hearing aids. The second aim was to use fNIRS to determine if listening effort, a form of cognitive effort, differed depending on whether or not hearing aids were used when listening to sound presented at 35 dB SL (flat gain). Sixteen older adults who were experienced hearing aid users completed an auditory n-back task and a visual n-back task; both tasks were completed with and without hearing aids. We found that PFC oxygenation increased with n-back working memory demand in both modalities, supporting the use of fNIRS to measure cognitive effort during listening in this population. PFC oxygenation was weakly and nonsignificantly correlated with self-reported listening effort and reaction time, respectively, suggesting that PFC oxygenation assesses a dimension of listening effort that differs from these other measures. Furthermore, the extent to which hearing aids reduced PFC oxygenation in the left lateral PFC was positively correlated with age and pure-tone average thresholds. The implications of these findings as well as future directions are discussed.
Affiliation(s)
- Joseph Rovetti, Department of Psychology, Ryerson University, Toronto, ON, Canada
- Huiwen Goy, Department of Psychology, Ryerson University, Toronto, ON, Canada
- Frank A. Russo, Department of Psychology, Ryerson University, Toronto, ON, Canada; Toronto Rehabilitation Institute, ON, Canada
77
Payne BR, Silcox JW. Aging, context processing, and comprehension. Psychology of Learning and Motivation 2019. [DOI: 10.1016/bs.plm.2019.07.001]
78
Bortfeld H. Functional near-infrared spectroscopy as a tool for assessing speech and spoken language processing in pediatric and adult cochlear implant users. Dev Psychobiol 2018; 61:430-443. [PMID: 30588618] [DOI: 10.1002/dev.21818]
Abstract
Much of what is known about the course of auditory learning following cochlear implantation is based on behavioral indicators that users are able to perceive sound. Both prelingually deafened children and postlingually deafened adults who receive cochlear implants display highly variable speech and language processing outcomes, although the basis for this is poorly understood. To date, measuring neural activity within the auditory cortex of implant recipients of all ages has been challenging, primarily because the use of traditional neuroimaging techniques is limited by the implant itself. Functional near-infrared spectroscopy (fNIRS) is an imaging technology that works with implant users of all ages because it is non-invasive, compatible with implant devices, and not subject to electrical artifacts. Thus, fNIRS can provide insight into processing factors that contribute to variations in spoken language outcomes in implant users, both children and adults. There are important considerations to be made when using fNIRS, particularly with children, to maximize the signal-to-noise ratio and to best identify and interpret cortical responses. This review considers these issues, recent data, and future directions for using fNIRS as a tool to understand spoken language processing in children and adults who hear through a cochlear implant.
Affiliation(s)
- Heather Bortfeld, Psychological Sciences, University of California, Merced, Merced, California
79
Psychophysiological measurement of affective responses during speech perception. Hear Res 2018; 369:103-119. [PMID: 30135023] [DOI: 10.1016/j.heares.2018.07.007]
Abstract
When people make decisions about listening, such as whether to continue attending to a particular conversation or whether to wear their hearing aids to a particular restaurant, they do so on the basis of more than just their estimated performance. Recent research has highlighted the vital role of more subjective qualities such as effort, motivation, and fatigue. Here, we argue that the importance of these factors is largely mediated by a listener's emotional response to the listening challenge, and suggest that emotional responses to communication challenges may provide a crucial link between day-to-day communication stress and long-term health. We start by introducing some basic concepts from the study of emotion and affect. We then develop a conceptual framework to guide future research on this topic through examination of a variety of autonomic and peripheral physiological responses that have been employed to investigate both cognitive and affective phenomena related to challenging communication. We conclude by suggesting the need for further investigation of the links between communication difficulties, emotional response, and long-term health, and make some recommendations intended to guide future research on affective psychophysiology in speech communication.
80
Francis AL, Tigchelaar LJ, Zhang R, Zekveld AA. Effects of Second Language Proficiency and Linguistic Uncertainty on Recognition of Speech in Native and Nonnative Competing Speech. J Speech Lang Hear Res 2018; 61:1815-1830. [PMID: 29971338] [DOI: 10.1044/2018_jslhr-h-17-0254]
Abstract
PURPOSE The purpose of this study was to investigate the effects of 2nd language proficiency and linguistic uncertainty on performance and listening effort in mixed language contexts. METHOD Thirteen native speakers of Dutch with varying degrees of fluency in English listened to and repeated sentences produced in both Dutch and English and presented in the presence of single-talker competing speech in both Dutch and English. Target and masker language combinations were presented in both blocked and mixed (unpredictable) conditions. In the blocked condition, in each block of trials the target-masker language combination remained constant, and the listeners were informed of both prior to beginning the block. In the mixed condition, target and masker language varied randomly from trial to trial. All listeners participated in all conditions. Performance was assessed in terms of speech reception thresholds, whereas listening effort was quantified in terms of pupil dilation. RESULTS Performance (speech reception thresholds) and listening effort (pupil dilation) were both affected by 2nd language proficiency (English test score) and target and masker language: Performance was better in blocked as compared to mixed conditions, with Dutch as compared to English targets, and with English as compared to Dutch maskers. English proficiency was correlated with listening performance. Listeners also exhibited greater peak pupil dilation in mixed as compared to blocked conditions for trials with Dutch maskers, whereas pupil dilation during preparation for speaking was higher for English targets as compared to Dutch ones in almost all conditions. CONCLUSIONS Both listener's proficiency in a 2nd language and uncertainty about the target language on a given trial play a significant role in how bilingual listeners attend to speech in the presence of competing speech in different languages, but precise effects also depend on which language is serving as target and which as masker.
Affiliation(s)
- Alexander L Francis, Department of Speech, Language & Hearing Sciences, Purdue University, West Lafayette, IN
- Rongrong Zhang, Department of Statistics, Purdue University, West Lafayette, IN
- Adriana A Zekveld, VU University Medical Center, Amsterdam, the Netherlands; Linnaeus Centre, Linköping University, Sweden
81
Panouillères MTN, Boyles R, Chesters J, Watkins KE, Möttönen R. Facilitation of motor excitability during listening to spoken sentences is not modulated by noise or semantic coherence. Cortex 2018; 103:44-54. [PMID: 29554541] [PMCID: PMC6002609] [DOI: 10.1016/j.cortex.2018.02.007]
Abstract
Comprehending speech can be particularly challenging in a noisy environment and in the absence of semantic context. It has been proposed that the articulatory motor system would be recruited especially in difficult listening conditions. However, it remains unknown how signal-to-noise ratio (SNR) and semantic context affect the recruitment of the articulatory motor system when listening to continuous speech. The aim of the present study was to address the hypothesis that involvement of the articulatory motor cortex increases when the intelligibility and clarity of the spoken sentences decreases, because of noise and the lack of semantic context. We applied Transcranial Magnetic Stimulation (TMS) to the lip and hand representations in the primary motor cortex and measured motor evoked potentials from the lip and hand muscles, respectively, to evaluate motor excitability when young adults listened to sentences. In Experiment 1, we found that the excitability of the lip motor cortex was facilitated during listening to both semantically anomalous and coherent sentences in noise relative to non-speech baselines, but neither SNR nor semantic context modulated the facilitation. In Experiment 2, we replicated these findings and found no difference in the excitability of the lip motor cortex between sentences in noise and clear sentences without noise. Thus, our results show that the articulatory motor cortex is involved in speech processing even in optimal and ecologically valid listening conditions and that its involvement is not modulated by the intelligibility and clarity of speech.
Affiliation(s)
- Rowan Boyles, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Jennifer Chesters, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Kate E Watkins, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Riikka Möttönen, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom; School of Psychology, University of Nottingham, Nottingham, United Kingdom
82
Chiarello C, Vaden KI, Eckert MA. Orthographic influence on spoken word identification: Behavioral and fMRI evidence. Neuropsychologia 2018; 111:103-111. [PMID: 29371094] [PMCID: PMC5866781] [DOI: 10.1016/j.neuropsychologia.2018.01.032]
Abstract
The current study investigated behavioral and neuroimaging evidence for orthographic influences on auditory word identification. To assess such influences, the proportion of similar sounding words (i.e., phonological neighbors) that were also spelled similarly (i.e., orthographic neighbors) was computed for each auditorily presented word as the Orthographic-to-Phonological Overlap Ratio (OPOR). Speech intelligibility was manipulated by presenting monosyllabic words in multi-talker babble at two signal-to-noise ratios: +3 and +10 dB SNR. Identification rates were lower for high overlap words in the challenging +3 dB SNR condition. In addition, BOLD contrast increased with OPOR at the more difficult SNR, and decreased with OPOR under more favorable SNR conditions. Both voxel-based and region of interest analyses demonstrated robust effects of OPOR in several cingulo-opercular regions. However, contrary to prior theoretical accounts, no task-related activity was observed in posterior regions associated with phonological or orthographic processing. We suggest that, when processing is difficult, orthographic-to-phonological feature overlap increases the availability of competing responses, which then requires additional support from domain general performance systems in order to produce a single response.
Affiliation(s)
- Christine Chiarello, Department of Psychology, University of California, Riverside, CA 92521, United States
83
Abstract
Fatigue is common in individuals with a variety of chronic health conditions and can have significant negative effects on quality of life. Although limited in scope, recent work suggests persons with hearing loss may be at increased risk for fatigue, in part due to effortful listening that is exacerbated by their hearing impairment. However, the mechanisms responsible for hearing loss-related fatigue, and the efficacy of audiologic interventions for reducing fatigue, remain unclear. To improve our understanding of hearing loss-related fatigue, as a field it is important to develop a common conceptual understanding of this construct. In this article, the broader fatigue literature is reviewed to identify and describe core constructs, consequences, and methods for assessing fatigue and related constructs. Finally, the current knowledge linking hearing loss and fatigue is described and may be summarized as follows: Hearing impairment may increase the risk of subjective fatigue and vigor deficits; adults with hearing loss require more time to recover from fatigue after work and have more work absences; sustained, effortful listening can be fatiguing; optimal methods for eliciting and measuring fatigue in persons with hearing loss remain unclear and may vary with listening condition; and amplification may minimize decrements in cognitive processing speed during sustained effortful listening. Future research is needed to develop reliable measurement methods to quantify hearing loss-related fatigue, explore factors responsible for modulating fatigue in people with hearing loss, and identify and evaluate potential interventions for reducing hearing loss-related fatigue.
84
Rowland SC, Hartley DEH, Wiggins IM. Listening in Naturalistic Scenes: What Can Functional Near-Infrared Spectroscopy and Intersubject Correlation Analysis Tell Us About the Underlying Brain Activity? Trends Hear 2018; 22:2331216518804116. [PMID: 30345888] [PMCID: PMC6198387] [DOI: 10.1177/2331216518804116]
Abstract
Listening to speech in the noisy conditions of everyday life can be effortful, reflecting the increased cognitive workload involved in extracting meaning from a degraded acoustic signal. Studying the underlying neural processes has the potential to provide mechanistic insight into why listening is effortful under certain conditions. In a move toward studying listening effort under ecologically relevant conditions, we used the silent and flexible neuroimaging technique functional near-infrared spectroscopy (fNIRS) to examine brain activity during attentive listening to speech in naturalistic scenes. Thirty normally hearing participants listened to a series of narratives continuously varying in acoustic difficulty while undergoing fNIRS imaging. Participants then listened to another set of closely matched narratives and rated perceived effort and intelligibility for each scene. As expected, self-reported effort generally increased with worsening signal-to-noise ratio. After controlling for better-ear signal-to-noise ratio, perceived effort was greater in scenes that contained competing speech than in those that did not, potentially reflecting an additional cognitive cost of overcoming informational masking. We analyzed the fNIRS data using intersubject correlation, a data-driven approach suitable for analyzing data collected under naturalistic conditions. Significant intersubject correlation was seen in the bilateral auditory cortices and in a range of channels across the prefrontal cortex. The involvement of prefrontal regions is consistent with the notion that higher order cognitive processes are engaged during attentive listening to speech in complex real-world conditions. However, further research is needed to elucidate the relationship between perceived listening effort and activity in these extended cortical networks.
Affiliation(s)
- Stephen C. Rowland, National Institute for Health Research Nottingham Biomedical Research Centre, UK; Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, UK
- Douglas E. H. Hartley, National Institute for Health Research Nottingham Biomedical Research Centre, UK; Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, UK; Medical Research Council Institute of Hearing Research, School of Medicine, University of Nottingham, UK; Nottingham University Hospitals NHS Trust, Queens Medical Centre, UK
- Ian M. Wiggins, National Institute for Health Research Nottingham Biomedical Research Centre, UK; Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, UK; Medical Research Council Institute of Hearing Research, School of Medicine, University of Nottingham, UK
85
Winn MB, Wendt D, Koelewijn T, Kuchinsky SE. Best Practices and Advice for Using Pupillometry to Measure Listening Effort: An Introduction for Those Who Want to Get Started. Trends Hear 2018; 22:2331216518800869. [PMID: 30261825] [PMCID: PMC6166306] [DOI: 10.1177/2331216518800869]
Abstract
Within the field of hearing science, pupillometry is a widely used method for quantifying listening effort. Its use in research is growing exponentially, and many labs are (considering) applying pupillometry for the first time. Hence, there is a growing need for a methods paper on pupillometry covering topics spanning from experiment logistics and timing to data cleaning and what parameters to analyze. This article contains the basic information and considerations needed to plan, set up, and interpret a pupillometry experiment, as well as commentary about how to interpret the response. Included are practicalities like minimal system requirements for recording a pupil response and specifications for peripherals, equipment, experiment logistics and constraints, and different kinds of data processing. Additional details include participant inclusion and exclusion criteria and some methodological considerations that might not be necessary in other auditory experiments. We discuss what data should be recorded and how to monitor the data quality during recording in order to minimize artifacts. Data processing and analysis are considered as well. Finally, we share insights from the collective experience of the authors and discuss some of the challenges that still lie ahead.
Affiliation(s)
- Matthew B. Winn, Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, MN, USA
- Dorothea Wendt, Eriksholm Research Centre, Snekkersten, Denmark; Hearing Systems, Department of Electrical Engineering, Technical University of Denmark, Kongens Lyngby, Denmark
- Thomas Koelewijn, Section Ear & Hearing, Department of Otolaryngology–Head and Neck Surgery, Amsterdam Public Health Research Institute, VU University Medical Center, the Netherlands
- Stefanie E. Kuchinsky, National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD, USA
86
A Novel Communication Value Task Demonstrates Evidence of Response Bias in Cases with Presbyacusis. Sci Rep 2017; 7:16512. [PMID: 29184188] [PMCID: PMC5705661] [DOI: 10.1038/s41598-017-16673-y]
Abstract
Decision-making about the expected value of an experience or behavior can explain hearing health behaviors in older adults with hearing loss. Forty-four middle-aged to older adults (68.45 ± 7.73 years) performed a task in which they were asked to decide whether information from a surgeon or an administrative assistant would be important to their health in hypothetical communication scenarios across visual signal-to-noise ratios (SNR). Participants also could choose to view the briefly presented sentences multiple times. The number of these effortful attempts to read the stimuli served as a measure of demand for information to make a health importance decision. Participants with poorer high frequency hearing more frequently decided that information was important to their health compared to participants with better high frequency hearing. This appeared to reflect a response bias because participants with high frequency hearing loss demonstrated shorter response latencies when they rated the sentences as important to their health. However, elevated high frequency hearing thresholds did not predict demand for information to make a health importance decision. The results highlight the utility of a performance-based measure to characterize effort and expected value from performing tasks in older adults with hearing loss.
87
Investigating the role of temporal lobe activation in speech perception accuracy with normal hearing adults: An event-related fNIRS study. Neuropsychologia 2017; 106:31-41. [PMID: 28888891] [DOI: 10.1016/j.neuropsychologia.2017.09.004]
Abstract
Functional near infrared spectroscopy (fNIRS) is a safe, non-invasive, relatively quiet imaging technique that is tolerant of movement artifact, making it uniquely ideal for the assessment of hearing mechanisms. Previous research demonstrates the capacity for fNIRS to detect cortical changes to varying speech intelligibility, revealing a positive relationship between cortical activation amplitude and speech perception score. In the present study, we use an event-related design to investigate the hemodynamic response in the temporal lobe across different listening conditions. We presented participants with a speech recognition task using sentences in quiet, sentences in noise, and vocoded sentences. Hemodynamic responses were examined across conditions and then compared when speech perception was accurate compared to when speech perception was inaccurate in the context of noisy speech. Repeated-measures, two-way ANOVAs revealed that the speech in noise condition (-2.8 dB signal-to-noise ratio/SNR) demonstrated significantly greater activation than the easier listening conditions on multiple channels bilaterally. Further analyses comparing correct recognition trials to incorrect recognition trials (during the presentation phase of the trial) revealed that activation was significantly greater during correct trials. Lastly, during the repetition phase of the trial, where participants correctly repeated the sentence, the hemodynamic response demonstrated significantly higher deoxyhemoglobin than oxyhemoglobin, indicating a difference between the effects of perception and production on the cortical response. Using fNIRS, the present study adds meaningful evidence to the body of knowledge that describes the brain/behavior relationship related to speech perception.
88
Wisniewski MG, Thompson ER, Iyer N. Theta- and alpha-power enhancements in the electroencephalogram as an auditory delayed match-to-sample task becomes impossibly difficult. Psychophysiology 2017; 54:1916-1928. [PMID: 28792606] [DOI: 10.1111/psyp.12968]
Abstract
Recent studies have related enhancements of theta- (∼4-8 Hz) and alpha-power (∼8-13 Hz) to listening effort based on parallels between enhancement and task difficulty. In contrast, nonauditory works demonstrate that, although increases in difficulty are initially accompanied by increases in effort, effort decreases when a task becomes so difficult as to exceed one's ability. Given the latter, we examined whether theta- and alpha-power enhancements thought to reflect effortful listening show a quadratic trend across levels of listening difficulty from impossible to easy. Listeners (n = 14) performed an auditory delayed match-to-sample task with frequency-modulated tonal sweeps under impossible, difficult (at ∼70.7% correct threshold), and easy (well above threshold) conditions. Frontal midline theta-power and posterior alpha-power enhancements were observed during the retention interval, with greatest enhancement in the difficult condition. Independent component-based analyses of data suggest that theta-power enhancements stemmed from medial frontal sources at or near the anterior cingulate cortex, whereas alpha-power effects stemmed from occipital cortices. Results support the notion that theta- and alpha-power enhancements reflect effortful cognitive processes during listening, related to auditory working memory and the inhibition of task-irrelevant cortical processing regions, respectively. Theta- and alpha-power dynamics can be used to characterize the cognitive processes that make up effortful listening, including qualitatively different types of listening effort.
Affiliation(s)
- Eric R Thompson, U.S. Air Force Research Laboratory, Wright-Patterson Air Force Base, Ohio, USA
- Nandini Iyer, U.S. Air Force Research Laboratory, Wright-Patterson Air Force Base, Ohio, USA
89
Wijayasiri P, Hartley DE, Wiggins IM. Brain activity underlying the recovery of meaning from degraded speech: A functional near-infrared spectroscopy (fNIRS) study. Hear Res 2017; 351:55-67. [DOI: 10.1016/j.heares.2017.05.010]
90
Ohlenforst B, Zekveld AA, Lunner T, Wendt D, Naylor G, Wang Y, Versfeld NJ, Kramer SE. Impact of stimulus-related factors and hearing impairment on listening effort as indicated by pupil dilation. Hear Res 2017. [DOI: 10.1016/j.heares.2017.05.012]
91
Strauss DJ, Francis AL. Toward a taxonomic model of attention in effortful listening. Cogn Affect Behav Neurosci 2017; 17:809-825. [PMID: 28567568] [PMCID: PMC5548861] [DOI: 10.3758/s13415-017-0513-0]
Abstract
In recent years, there has been increasing interest in studying listening effort. Research on listening effort intersects with the development of active theories of speech perception and contributes to the broader endeavor of understanding speech perception within the context of neuroscientific theories of perception, attention, and effort. Due to the multidisciplinary nature of the problem, researchers vary widely in their precise conceptualization of the catch-all term listening effort. Very recent consensus work stresses the relationship between listening effort and the allocation of cognitive resources, providing a conceptual link to current cognitive neuropsychological theories associating effort with the allocation of selective attention. By linking listening effort to attentional effort, we enable the application of a taxonomy of external and internal attention to the characterization of effortful listening. More specifically, we use a vectorial model to decompose the demand causing listening effort into its mutually orthogonal external and internal components and map the relationship between demanded and exerted effort by means of a resource-limiting term that can represent the influence of motivation as well as vigilance and arousal. Due to its quantitative nature and easy graphical interpretation, this model can be applied to a broad range of problems dealing with listening effort. As such, we conclude that the model provides a good starting point for further research on effortful listening within a more differentiated neuropsychological framework.
Affiliation(s)
- Daniel J Strauss, Systems Neuroscience and Neurotechnology Unit, Neurocenter, Faculty of Medicine, Saarland University & School of Engineering, Building 90.5, 66421, htw saar, Homburg/Saar, Germany; Leibniz-Institute for New Materials, Saarbruecken, Germany; Key Numerics GmbH - Neurocognitive Technologies, Saarbruecken, Germany
- Alexander L Francis, Speech Perception and Cognitive Effort Laboratory, Department of Speech, Language & Hearing Sciences, Purdue University, West Lafayette, IN, USA
92
Vaden KI, Teubner-Rhodes S, Ahlstrom JB, Dubno JR, Eckert MA. Cingulo-opercular activity affects incidental memory encoding for speech in noise. Neuroimage 2017. [PMID: 28624645] [DOI: 10.1016/j.neuroimage.2017.06.028]
Abstract
Correctly understood speech in difficult listening conditions is often difficult to remember. A long-standing hypothesis for this observation is that the engagement of cognitive resources to aid speech understanding can limit resources available for memory encoding. This hypothesis is consistent with evidence that speech presented in difficult conditions typically elicits greater activity throughout cingulo-opercular regions of frontal cortex that are proposed to optimize task performance through adaptive control of behavior and tonic attention. However, successful memory encoding of items for delayed recognition memory tasks is consistently associated with increased cingulo-opercular activity when perceptual difficulty is minimized. The current study used a delayed recognition memory task to test competing predictions that memory encoding for words is enhanced or limited by the engagement of cingulo-opercular activity during challenging listening conditions. An fMRI experiment was conducted with twenty healthy adult participants who performed a word identification in noise task that was immediately followed by a delayed recognition memory task. Consistent with previous findings, word identification trials in the poorer signal-to-noise ratio condition were associated with increased cingulo-opercular activity and poorer recognition memory scores on average. However, cingulo-opercular activity decreased for correctly identified words in noise that were not recognized in the delayed memory test. These results suggest that memory encoding in difficult listening conditions is poorer when elevated cingulo-opercular activity is not sustained. Although increased attention to speech when presented in difficult conditions may detract from more active forms of memory maintenance (e.g., sub-vocal rehearsal), we conclude that task performance monitoring and/or elevated tonic attention supports incidental memory encoding in challenging listening conditions.
Affiliation(s): Kenneth I Vaden, Susan Teubner-Rhodes, Jayne B Ahlstrom, Judy R Dubno, and Mark A Eckert, Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, United States.
|
93
|
Cognitive persistence: Development and validation of a novel measure from the Wisconsin Card Sorting Test. Neuropsychologia 2017; 102:95-108. [PMID: 28552783 DOI: 10.1016/j.neuropsychologia.2017.05.027] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2017] [Revised: 05/23/2017] [Accepted: 05/25/2017] [Indexed: 12/30/2022]
Abstract
The Wisconsin Card Sorting Test (WCST) has long been used as a neuropsychological assessment of executive function abilities, in particular, cognitive flexibility or "set-shifting". Recent advances in scoring the task have helped to isolate specific WCST performance metrics that index set-shifting abilities and have improved our understanding of how prefrontal and parietal cortex contribute to set-shifting. We present evidence that the ability to overcome task difficulty to achieve a goal, or "cognitive persistence", is another important prefrontal function that is characterized by the WCST and that can be differentiated from efficient set-shifting. This novel measure of cognitive persistence was developed using the WCST-64 in an adult lifespan sample of 230 participants. The measure was validated using individual variation in cingulo-opercular cortex function in a sub-sample of older adults who had completed a challenging speech-recognition-in-noise fMRI task. Specifically, older adults with higher cognitive persistence were more likely to demonstrate word recognition benefit from cingulo-opercular activity. The WCST-derived cognitive persistence measure can be used to disentangle neural processes involved in set-shifting from those involved in persistence.
|
94
|
|
95
|
|
96
|
|
97
|
Hearing Impairment and Cognitive Energy: The Framework for Understanding Effortful Listening (FUEL). Ear Hear 2016; 37 Suppl 1:5S-27S. [DOI: 10.1097/aud.0000000000000312] [Citation(s) in RCA: 541] [Impact Index Per Article: 60.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/26/2023]
|