1
Derawi H, Reinisch E, Gabay Y. Internal Cognitive Load Differentially Influences Acoustic and Lexical Context Effects in Speech Perception: Evidence From a Population With Attention-Deficit/Hyperactivity Disorder. J Speech Lang Hear Res 2023;66:3721-3734. [PMID: 37696049] [DOI: 10.1044/2023_jslhr-23-00188]
Abstract
BACKGROUND To overcome variability in spoken language, listeners use various types of context information to disambiguate speech sounds. Context effects have been shown to be affected by cognitive load. However, previous results are mixed regarding the influence of cognitive load on the use of context information in speech perception. PURPOSE We tested a population characterized by attention-deficit/hyperactivity disorder (ADHD) to better understand the relationship between attention (or internal cognitive load) and context effects. METHOD The use of acoustic versus lexical properties of the surrounding signal to disambiguate speech sounds was examined in listeners with ADHD and neurotypical listeners. RESULTS Compared to neurotypical listeners, individuals with ADHD relied more strongly on lexical context for speech perception; however, reliance on acoustic context information from speech rate did not differ between groups. CONCLUSION These findings confirm that cognitive load affects the use of high-level but not low-level context information in speech and imply that speech recognition deficits in ADHD likely arise from impaired higher-order cognitive processes.
Affiliation(s)
- Hadeer Derawi
- Department of Special Education, University of Haifa, Israel
- Edmond J. Safra Brain Research Center for the Study of Learning Disabilities, University of Haifa, Israel
- Eva Reinisch
- Acoustics Research Institute, Austrian Academy of Sciences, Vienna, Austria
- Yafit Gabay
- Department of Special Education, University of Haifa, Israel
- Edmond J. Safra Brain Research Center for the Study of Learning Disabilities, University of Haifa, Israel
2
DSLchild-Algorithm-Based Hearing Aid Fitting Can Improve Speech Comprehension in Mildly Distressed Patients with Chronic Tinnitus and Mild-to-Moderate Hearing Loss. J Clin Med 2022;11:jcm11175244. [PMID: 36079176] [PMCID: PMC9457182] [DOI: 10.3390/jcm11175244]
Abstract
Background: Patients with chronic tinnitus and mild-to-moderate hearing loss (HL) can experience difficulties with speech comprehension (SC). The present study investigated the SC benefits of a two-component hearing therapy. Methods: One hundred seventy-seven gender-stratified patients underwent binaural DSLchild-algorithm-based hearing aid (HA) fitting and completed auditory training exercises. SC was measured at four timepoints, each under three noise interference conditions (0, 55, and 65 dB): after screening (t0; without HAs), after HA fitting (t1), after additional auditory training (t2), and at 70-day follow-up (t3). Repeated-measures analyses of covariance investigated the effects of HAs (t0–t1), auditory training (t1–t2), and the stability of the combined effect (t2–t3) on SC per noise interference level and HL subgroup. Correlational analyses examined associations between SC, age, and psychological indices. Results: Patients showed mildly elevated tinnitus-related distress, which was negatively associated with SC in patients with mild but not moderate HL. At 0 dB, the intervention lastingly improved SC for patients with mild and moderate HL; at 55 dB, for patients with mild HL only. These effects were mainly driven by the HAs. Conclusions: The treatment investigated here demonstrates some SC benefit under conditions of no or little noise interference. The auditory training component warrants further investigation regarding non-audiological treatment outcomes.
3
Roberts B, Summers RJ, Bailey PJ. Effects of stimulus naturalness and contralateral interferers on lexical bias in consonant identification. J Acoust Soc Am 2022;151:3369. [PMID: 35649936] [DOI: 10.1121/10.0011395]
Abstract
Lexical bias is the tendency to perceive an ambiguous speech sound as a phoneme completing a word; more ambiguity typically causes greater reliance on lexical knowledge. A speech sound ambiguous between /g/ and /k/ is more likely to be perceived as /g/ before /ɪft/ and as /k/ before /ɪs/. The magnitude of this difference, known as the Ganong shift, increases when high cognitive load limits available processing resources. The effects of stimulus naturalness and informational masking on Ganong shifts and reaction times were explored. Tokens between /gɪ/ and /kɪ/ were generated using morphing software, from which two continua were created ("giss"–"kiss" and "gift"–"kift"). In experiment 1, Ganong shifts were considerably larger for sine-vocoded than for noise-vocoded versions of these continua, presumably because the spectral sparsity and unnatural timbre of the former increased cognitive load. In experiment 2, noise-vocoded stimuli were presented alone or accompanied by contralateral interferers with a constant within-band amplitude envelope, or with within-band envelope variation that was the same or different across bands. The latter, with its implied spectro-temporal variation, was predicted to cause the greatest cognitive load. Reaction-time measures matched this prediction; Ganong shifts showed some evidence of greater lexical bias for frequency-varying interferers, but were influenced by context effects and diminished over time.
Affiliation(s)
- Brian Roberts
- School of Psychology, Aston University, Birmingham B4 7ET, United Kingdom
- Robert J Summers
- School of Psychology, Aston University, Birmingham B4 7ET, United Kingdom
- Peter J Bailey
- Department of Psychology, University of York, Heslington, York YO10 5DD, United Kingdom
4
Abstract
The human brain exhibits the remarkable ability to categorize speech sounds into distinct, meaningful percepts, even in challenging tasks like learning non-native speech categories in adulthood and hearing speech in noisy listening conditions. In these scenarios, there is substantial variability in perception and behavior, both across individual listeners and across individual trials. While there has been extensive work characterizing stimulus-related and contextual factors that contribute to variability, recent advances in neuroscience are beginning to shed light on another potential source of variability that has not been explored in speech processing. Specifically, there are task-independent, moment-to-moment variations in neural activity in broadly distributed cortical and subcortical networks that affect how a stimulus is perceived on a trial-by-trial basis. In this review, we discuss factors that affect speech sound learning and moment-to-moment variability in perception, particularly arousal states (neurotransmitter-dependent modulations of cortical activity). We propose that a more complete model of speech perception and learning should incorporate subcortically mediated arousal states that alter behavior in ways that are distinct from, yet complementary to, top-down cognitive modulations. Finally, we discuss a novel neuromodulation technique, transcutaneous auricular vagus nerve stimulation (taVNS), which is particularly well suited to investigating causal relationships between arousal mechanisms and performance in a variety of perceptual tasks. Together, these approaches provide novel testable hypotheses for explaining variability in classically challenging tasks, including non-native speech sound learning.
5
Thompson L, White B. Neuropsychological correlates of evocative multimodal speech: The combined roles of fearful prosody, visuospatial attention, cortisol response, and anxiety. Behav Brain Res 2022;416:113560. [PMID: 34461163] [DOI: 10.1016/j.bbr.2021.113560]
Abstract
Past research reveals left-hemisphere dominance for linguistic processing and right-hemisphere dominance for emotional prosody processing during auditory language comprehension, a pattern also found in visuospatial attention studies where listeners are presented with a view of the talker's face. Is this lateralization pattern for visuospatial attention and language processing upheld when listeners are experiencing a stress response? To investigate this question, participants completed the Trier Social Stress Test (TSST) between administrations of a visuospatial attention and language comprehension dual-task paradigm. Subjective anxiety, cardiovascular, and saliva cortisol measures were taken before and after the TSST. Higher language comprehension scores in the post-TSST neutral prosody condition were associated with lower cortisol responses, differences in blood pressure, and less subjective anxiety. In this challenging task, visuospatial attention was most focused at the mouth region, both prior to and after stress induction. Greater visuospatial attention on the left side of the face image, compared to the right side, indicated greater right hemisphere activation. In the Fear, but not the Neutral, prosody condition, greater cortisol response was associated with greater visuospatial attention to the left side of the face image. Results are placed into theoretical context, and can be applied to situations where stressed listeners must interpret emotionally evocative language.
Affiliation(s)
- Laura Thompson
- Clinical Psychology Program, Fielding Graduate University, United States
- Bryan White
- Department of Psychology, New Mexico State University, United States
6
Effects of state anxiety on gait: a 7.5% carbon dioxide challenge study. Psychol Res 2020;85:2444-2452. [PMID: 32737585] [PMCID: PMC8357656] [DOI: 10.1007/s00426-020-01393-2]
Abstract
We used the 7.5% carbon dioxide (CO2) model of anxiety induction to investigate the effects of state anxiety on normal gait and gait when navigating an obstacle. Healthy volunteers (n = 22) completed a walking task during inhalations of 7.5% CO2 and medical air (placebo) in a within-subjects design. The order of inhalation was counterbalanced across participants and the gas was administered double-blind. Over a series of trials, participants walked the length of the laboratory, with each trial requiring participants to navigate through an aperture (width adjusted to participant size), with gait parameters measured via a motion capture system. The main findings were that walking speed was slower, but the adjustment in body orientation was greater, during 7.5% CO2 inhalation compared to air. These findings indicate changes in locomotor behaviour during heightened state anxiety that may reflect greater caution when moving in an agitated state. Advances in sensing technology offer the opportunity to monitor locomotor behaviour, and these findings suggest that in doing so, we may be able to infer emotional states from movement in naturalistic settings.
7
Abstract
We used the 7.5% carbon dioxide model of anxiety induction to investigate the effects of state anxiety on simple information processing. In both high- and low-anxious states, participants (n = 36) completed an auditory–visual matching task and a visual binary categorization task. The stimuli were either degraded or clear, so as to investigate whether the effects of anxiety are greater when signal clarity is compromised. Accuracy in the matching task was lower during CO2 inhalation and for degraded stimuli. In the categorization task, response times and indecision (measured using mouse trajectories) were greater during CO2 inhalation and for degraded stimuli. For most measures, we found no evidence of Gas × Clarity interactions. These data indicate that state anxiety negatively impacts simple information processing and do not support claims that anxiety may benefit performance in low-cognitively-demanding tasks. These findings have important implications for understanding the impact of state anxiety in real-world situations.
8
Lam BPW, Xie Z, Tessmer R, Chandrasekaran B. The Downside of Greater Lexical Influences: Selectively Poorer Speech Perception in Noise. J Speech Lang Hear Res 2017;60:1662-1673. [PMID: 28586824] [PMCID: PMC5544416] [DOI: 10.1044/2017_jslhr-h-16-0133]
Abstract
PURPOSE Although lexical information influences phoneme perception, the extent to which reliance on lexical information enhances speech processing in challenging listening environments is unclear. We examined the extent to which individual differences in lexical influences on phonemic processing impact speech processing in maskers containing varying degrees of linguistic information (2-talker babble or pink noise). METHOD Twenty-nine monolingual English speakers were instructed to ignore the lexical status of spoken syllables (e.g., gift vs. kift) and to only categorize the initial phonemes (/g/ vs. /k/). The same participants then performed speech recognition tasks in the presence of 2-talker babble or pink noise in audio-only and audiovisual conditions. RESULTS Individuals who demonstrated greater lexical influences on phonemic processing experienced greater speech processing difficulties in 2-talker babble than in pink noise. These selective difficulties were present across audio-only and audiovisual conditions. CONCLUSION Individuals with greater reliance on lexical processes during speech perception exhibit impaired speech recognition in listening conditions in which competing talkers introduce audible linguistic interferences. Future studies should examine the locus of lexical influences/interferences on phonemic processing and speech-in-speech processing.
Affiliation(s)
- Boji P. W. Lam
- Department of Communication Sciences & Disorders, Moody College of Communication, The University of Texas at Austin
- Zilong Xie
- Department of Communication Sciences & Disorders, Moody College of Communication, The University of Texas at Austin
- Rachel Tessmer
- Department of Communication Sciences & Disorders, Moody College of Communication, The University of Texas at Austin
- Bharath Chandrasekaran
- Department of Communication Sciences & Disorders, Moody College of Communication, The University of Texas at Austin
- Department of Psychology, College of Liberal Arts, The University of Texas at Austin
- Institute for Mental Health Research, College of Liberal Arts, The University of Texas at Austin
- Department of Linguistics, College of Liberal Arts, The University of Texas at Austin
- Institute for Neuroscience, The University of Texas at Austin
9
Button KS, Karwatowska L, Kounali D, Munafò MR, Attwood AS. Acute anxiety and social inference: An experimental manipulation with 7.5% carbon dioxide inhalation. J Psychopharmacol 2016;30:1036-1046. [PMID: 27380750] [PMCID: PMC5036074] [DOI: 10.1177/0269881116653105]
Abstract
BACKGROUND Positive self-bias is thought to be protective for mental health. We previously found that the degree of positive bias when learning self-referential social evaluation decreases with increasing social anxiety. It is unclear whether this reduction is driven by differences in state or trait anxiety, as both are elevated in social anxiety; we therefore examined the effects of state anxiety, induced with the 7.5% carbon dioxide (CO2) inhalation model of generalised anxiety disorder (GAD), on social evaluation learning. METHODS Forty-eight healthy volunteers (24 female) took two inhalations (medical air and 7.5% CO2, counterbalanced) whilst learning social rules (self-like, self-dislike, other-like and other-dislike) in an instrumental social evaluation learning task. We analysed the outcomes (number of positive responses and errors to criterion) using random-effects Poisson regression. RESULTS Participants made fewer and more positive responses when breathing 7.5% CO2 in the other-like and other-dislike rules, respectively (gas × condition × rule interaction p = 0.03). Individuals made fewer errors learning self-like than self-dislike, and this positive self-bias was unaffected by CO2. Breathing 7.5% CO2 increased errors, but only in the other-referential rules (gas × condition × rule interaction p = 0.003). CONCLUSIONS Positive self-bias (i.e. fewer errors learning self-like than self-dislike) appeared robust to changes in state anxiety. In contrast, learning other-referential evaluation was impaired as state anxiety increased. This suggests that the previously observed variations in self-bias arise from trait, rather than state, characteristics.
Affiliation(s)
- Lucy Karwatowska
- UK Centre for Tobacco and Alcohol Studies, School of Experimental Psychology, University of Bristol, Bristol, UK
- Daphne Kounali
- School of Social and Community Medicine, University of Bristol, Bristol, UK
- Marcus R Munafò
- UK Centre for Tobacco and Alcohol Studies, School of Experimental Psychology, University of Bristol, Bristol, UK; Integrative Epidemiology Unit, Medical Research Council, University of Bristol, Bristol, UK
- Angela S Attwood
- UK Centre for Tobacco and Alcohol Studies, School of Experimental Psychology, University of Bristol, Bristol, UK; Integrative Epidemiology Unit, Medical Research Council, University of Bristol, Bristol, UK
10
Francis AL, MacPherson MK, Chandrasekaran B, Alvar AM. Autonomic Nervous System Responses During Perception of Masked Speech may Reflect Constructs other than Subjective Listening Effort. Front Psychol 2016;7:263. [PMID: 26973564] [PMCID: PMC4772584] [DOI: 10.3389/fpsyg.2016.00263]
Abstract
Typically, understanding speech seems effortless and automatic. However, a variety of factors may, independently or interactively, make listening more effortful. Physiological measures may help to distinguish between the application of different cognitive mechanisms whose operation is perceived as effortful. In the present study, physiological and behavioral measures associated with task demand were collected, along with behavioral measures of performance, while participants listened to and repeated sentences. The goal was to measure psychophysiological reactivity associated with three degraded listening conditions, each of which differed in the source of the difficulty (distortion, energetic masking, or informational masking) and was therefore expected to engage different cognitive mechanisms. These conditions were chosen to be matched for overall performance (keywords correct) and were compared to listening to unmasked speech produced by a natural voice. The three degraded conditions were: (1) unmasked speech produced by a computer speech synthesizer, (2) speech produced by a natural voice and masked by speech-shaped noise, and (3) speech produced by a natural voice and masked by two-talker babble. Both masked conditions were presented at a -8 dB signal-to-noise ratio (SNR), a level shown in previous research to result in comparable levels of performance for these stimuli and maskers. Performance was measured as the proportion of keywords identified correctly, and task demand or effort was quantified subjectively by self-report. Measures of psychophysiological reactivity included electrodermal (skin conductance) response frequency and amplitude, blood pulse amplitude, and pulse rate.
Results suggest that the two masked conditions evoked stronger psychophysiological reactivity than did the two unmasked conditions even when behavioral measures of listening performance and listeners’ subjective perception of task demand were comparable across the three degraded conditions.
Affiliation(s)
- Alexander L Francis
- Department of Speech, Language and Hearing Sciences, Purdue University, West Lafayette, IN, USA
- Megan K MacPherson
- School of Communication Science and Disorders, Florida State University, Tallahassee, FL, USA
- Bharath Chandrasekaran
- Department of Communication Sciences and Disorders, The University of Texas at Austin, Austin, TX, USA
- Ann M Alvar
- Department of Speech, Language and Hearing Sciences, Purdue University, West Lafayette, IN, USA
11
Fluharty ME, Attwood AS, Munafò MR. Anxiety sensitivity and trait anxiety are associated with response to 7.5% carbon dioxide challenge. J Psychopharmacol 2016;30:182-187. [PMID: 26561530] [PMCID: PMC4724859] [DOI: 10.1177/0269881115615105]
Abstract
The 7.5% carbon dioxide (CO2) inhalation model is used to provoke acute anxiety, for example to investigate the effects of anxiety on cognitive processes or the efficacy of novel anxiolytic agents. However, little is known about the relationship between baseline anxiety sensitivity or trait anxiety (i.e., anxiety proneness) and an individual's response to the 7.5% CO2 challenge. We examined data from a number of 7.5% CO2 challenge studies to determine whether anxiety proneness was related to subjective or physiological response. Our findings indicate that anxiety proneness is associated with greater subjective and physiological responses. However, anxiety-prone individuals also have a greater subjective response to the placebo (medical air) condition. This suggests that anxiety-prone individuals respond more strongly not only to the 7.5% CO2 challenge but also to medical air. Implications for the design and conduct of 7.5% CO2 challenge studies are discussed.
Affiliation(s)
- Meg E Fluharty
- MRC Integrative Epidemiology Unit (IEU) at the University of Bristol, Bristol, UK; UK Centre for Tobacco and Alcohol Studies, School of Experimental Psychology, University of Bristol, Bristol, UK
- Angela S Attwood
- MRC Integrative Epidemiology Unit (IEU) at the University of Bristol, Bristol, UK; UK Centre for Tobacco and Alcohol Studies, School of Experimental Psychology, University of Bristol, Bristol, UK
- Marcus R Munafò
- MRC Integrative Epidemiology Unit (IEU) at the University of Bristol, Bristol, UK; UK Centre for Tobacco and Alcohol Studies, School of Experimental Psychology, University of Bristol, Bristol, UK
12
The role of attentional abilities in lexically guided perceptual learning by older listeners. Atten Percept Psychophys 2015;77:493-507. [PMID: 25373441] [DOI: 10.3758/s13414-014-0792-2]
Abstract
This study investigates two variables that may modify lexically guided perceptual learning: individual hearing sensitivity and attentional abilities. Older Dutch listeners (aged 60+ years, varying from good hearing to mild-to-moderate high-frequency hearing loss) were tested on a lexically guided perceptual learning task using the contrast [f]-[s]. This contrast mainly differentiates between the two consonants in the higher frequencies, and thus is supposedly challenging for listeners with hearing loss. The analyses showed that older listeners generally engage in lexically guided perceptual learning. Hearing loss and selective attention did not modify perceptual learning in our participant sample, while attention-switching control did: listeners with poorer attention-switching control showed a stronger perceptual learning effect. We postulate that listeners with better attention-switching control may, in general, rely more strongly on bottom-up acoustic information compared to listeners with poorer attention-switching control, making them in turn less susceptible to lexically guided perceptual learning. Our results, moreover, clearly show that lexically guided perceptual learning is not lost when acoustic processing is less accurate.
13
Borrie SA. Visual speech information: a help or hindrance in perceptual processing of dysarthric speech. J Acoust Soc Am 2015;137:1473-1480. [PMID: 25786958] [DOI: 10.1121/1.4913770]
Abstract
This study investigated the influence of visual speech information on perceptual processing of neurologically degraded speech. Fifty listeners identified spastic dysarthric speech under both audio (A) and audiovisual (AV) conditions. Condition comparisons revealed that the addition of visual speech information enhanced processing of the neurologically degraded input in terms of (a) acuity (percent phonemes correct) of vowels and consonants and (b) recognition (percent words correct) of predictive and nonpredictive phrases. Listeners exploited stress-based segmentation strategies more readily in AV conditions, suggesting that the perceptual benefit associated with adding visual speech information to the auditory signal (the AV advantage) has both segmental and suprasegmental origins. Results also revealed that the magnitude of the AV advantage can be predicted, to some degree, by the extent to which an individual utilizes syllabic stress cues to inform word recognition in AV conditions. Findings inform the development of a listener-specific model of speech perception that applies to processing of dysarthric speech in everyday communication contexts.
Affiliation(s)
- Stephanie A Borrie
- Department of Communicative Disorders and Deaf Education, Utah State University, Logan, Utah 84322
14
Pinkney V, Wickens R, Bamford S, Baldwin DS, Garner M. Defensive eye-blink startle responses in a human experimental model of anxiety. J Psychopharmacol 2014;28:874-880. [PMID: 24899597] [PMCID: PMC4876426] [DOI: 10.1177/0269881114532858]
Abstract
Inhalation of low concentrations of carbon dioxide (CO2) triggers anxious behaviours in rodents via chemosensors in the amygdala, and increases anxiety, autonomic arousal and hypervigilance in healthy humans. However, it is not known whether CO2 inhalation modulates defensive behaviours coordinated by this network in humans. We examined the effect of 7.5% CO2 challenge on the defensive eye-blink startle response. A total of 27 healthy volunteers completed an affective startle task during inhalation of 7.5% CO2 and air. The magnitude and latency of startle eye-blinks were recorded whilst participants viewed aversive and neutral pictures. We found that 7.5% CO2 increased state anxiety and raised concurrent measures of skin conductance and heart rate (HR). CO2 challenge did not increase startle magnitude, but slowed the onset of startle eye-blinks. The effect of CO2 challenge on HR covaried with its effects on both subjective anxiety and startle latency. Our findings are discussed with reference to startle profiles during conditions of interoceptive threat, increased cognitive load and in populations characterised by anxiety, compared with acute fear and panic.
Affiliation(s)
- Robin Wickens
- Department of Pharmacy and Pharmacology, University of Bath, Bath, UK
- Susan Bamford
- Psychology, University of Southampton, Southampton, UK
- David S Baldwin
- Clinical and Experimental Sciences, University of Southampton, Southampton, UK
- Matthew Garner
- Psychology, University of Southampton, Southampton, UK; Clinical and Experimental Sciences, University of Southampton, Southampton, UK
15
Chandrasekaran B, Van Engen K, Xie Z, Beevers CG, Maddox WT. Influence of depressive symptoms on speech perception in adverse listening conditions. Cogn Emot 2014;29:900-909. [PMID: 25090306] [DOI: 10.1080/02699931.2014.944106]
Abstract
It is widely acknowledged that individuals with elevated depressive symptoms exhibit deficits in interpersonal communication. Research has primarily focused on speech production in individuals with elevated depressive symptoms; little is known about speech perception in this population, especially in challenging listening conditions. Here, we examined speech perception in young adults with low-depressive (LD) or high-depressive (HD) symptoms in the presence of a range of maskers. Maskers were selected to reflect various levels of informational masking (IM), which refers to cognitive interference due to signal and masker similarity, and energetic masking (EM), which refers to peripheral interference due to signal degradation by the masker. Speech intelligibility data revealed that individuals with HD symptoms did not differ from those with LD symptoms under EM, but they exhibited a selective deficit under IM. Since IM is a common occurrence in real-world social settings, this listening deficit may exacerbate communicative difficulties.
Affiliation(s)
- Bharath Chandrasekaran
- Department of Communication Sciences & Disorders, The University of Texas at Austin, Austin, TX, USA