1
Kurteff GL, Lester-Smith RA, Martinez A, Currens N, Holder J, Villarreal C, Mercado VR, Truong C, Huber C, Pokharel P, Hamilton LS. Speaker-induced Suppression in EEG during a Naturalistic Reading and Listening Task. J Cogn Neurosci 2023; 35:1538-1556. [PMID: 37584593] [DOI: 10.1162/jocn_a_02037]
Abstract
Speaking elicits a suppressed neural response compared with listening to others' speech, a phenomenon known as speaker-induced suppression (SIS). Previous research has investigated SIS at constrained levels of linguistic representation, such as the individual phoneme and word levels. Here, we present scalp EEG data from a dual speech perception and production task in which participants read sentences aloud and then listened to playback of themselves reading those sentences. Playback was separated into immediate repetition of the previous trial and randomized repetition of an earlier trial to investigate whether forward modeling of responses during passive listening suppresses the neural response. Concurrent EMG was recorded to control for movement artifact during speech production. In line with previous research, ERP analyses at the sentence level demonstrated suppression of early auditory components of the EEG for production compared with perception. To evaluate whether linguistic abstractions (in the form of phonological feature tuning) are suppressed during speech production alongside lower-level acoustic information, we fit linear encoding models that predicted scalp EEG from phonological features, EMG activity, and task condition. We found that phonological features were encoded similarly between production and perception; however, this similarity emerged only when movement was controlled for by including the EMG response as an additional regressor. Our results suggest that SIS operates at a sensory representational level and is dissociated from the higher-order cognitive and linguistic processing that takes place during speech perception and production. We also detail some important considerations for analyzing EEG during continuous speech production.
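Linear encoding models of the kind described in this abstract (predicting scalp EEG from stimulus features plus nuisance regressors such as EMG) are commonly implemented as time-lagged ridge regression. A minimal sketch with NumPy and synthetic data; the dimensions, lag count, and regularization value here are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def lagged_design(features, n_lags):
    """Stack time-lagged copies of each feature column (delays 0..n_lags-1 samples)."""
    n_t, n_f = features.shape
    X = np.zeros((n_t, n_f * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * n_f:(lag + 1) * n_f] = features[:n_t - lag]
    return X

def fit_ridge(X, y, alpha=1.0):
    """Closed-form ridge regression: w = (X^T X + alpha*I)^-1 X^T y."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n), X.T @ y)

# Synthetic example: 3 feature channels (stand-ins for phonological features
# and EMG), a known 5-lag filter, and one "EEG" channel generated from it.
rng = np.random.default_rng(0)
stim = rng.standard_normal((2000, 3))
X = lagged_design(stim, n_lags=5)
w_true = rng.standard_normal(X.shape[1])
eeg = X @ w_true + 0.1 * rng.standard_normal(2000)

w_hat = fit_ridge(X, eeg, alpha=1.0)
pred = X @ w_hat
r = np.corrcoef(pred, eeg)[0, 1]  # model fit as prediction correlation
```

Including a nuisance signal (here, EMG would be one of the feature columns) lets the model absorb movement-related variance, which is why the abstract's perception/production comparison changes once EMG is added as a regressor.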
2
Encoding category-level and context-specific phonological information at different stages: An EEG study of Mandarin third-tone sandhi word production. Neuropsychologia 2022; 175:108367. [PMID: 36084698] [DOI: 10.1016/j.neuropsychologia.2022.108367]
Abstract
Pronunciation of words or morphemes may vary systematically across phonological contexts, but it remains unclear how different levels of phonological information are encoded in speech production. In this study, we investigated the online planning of Mandarin Tone 3 (T3) sandhi, a phonological alternation whereby a low-dipping tone (T3) changes to a Tone 2 (T2)-like rising tone when followed by another T3. To examine the time course of encoding of the abstract category-level form (underlying form) and the context-specific phonological form (surface form) of T3, we conducted an electroencephalographic (EEG) study using a phonologically primed picture-naming task and examined event-related potentials (ERPs) time-locked to stimulus onset as well as to speech response onset. Behaviorally, targets primed by T3 or T2 primes yielded shorter naming latencies than those primed by control primes. Importantly, the EEG data revealed that T3 primes elicited larger positive amplitude over broad frontocentral regions in roughly the 320-550 ms window of the stimulus-locked ERP and the -500 to -400 ms window of the response-locked ERP, whereas T2 primes elicited larger negative amplitude over left frontocentral regions in roughly the -240 to -100 ms window of the response-locked ERP. These results indicate that the underlying and surface forms are encoded at different processing stages: the former presumably during the earlier phonological encoding stage, the latter probably during the later phonetic encoding or motor preparation stage. The current study offers important implications for understanding the processing of phonological alternations and tonal encoding in Chinese word production.
3
Key AP, Yan Y, Metelko M, Chang C, Kang H, Pilkington J, Corbett BA. Greater Social Competence Is Associated With Higher Interpersonal Neural Synchrony in Adolescents With Autism. Front Hum Neurosci 2022; 15:790085. [PMID: 35069156] [PMCID: PMC8770262] [DOI: 10.3389/fnhum.2021.790085]
Abstract
Difficulty engaging in reciprocal social interactions is a core characteristic of autism spectrum disorder. The mechanisms supporting effective dynamic real-time social exchanges are not yet well understood. This proof-of-concept hyperscanning electroencephalography study examined neural synchrony as a mechanism supporting interpersonal social interaction in 34 adolescents with autism spectrum disorder (50% female), aged 10-16 years, paired with neurotypical confederates of similar age. The degree of brain-to-brain neural synchrony was quantified at temporo-parietal scalp locations as the circular correlation of oscillatory amplitudes in the theta, alpha, and beta frequency bands while the participants engaged in a friendly conversation. In line with the hypotheses, interpersonal neural synchrony was significantly greater during the social interaction than during baseline. Lower levels of synchrony were associated with greater behavioral symptoms of social difficulty. With regard to sex differences, interpersonal neural synchrony was stronger during conversation than baseline in females with autism, whereas in males this condition difference did not reach statistical significance. This study established the feasibility of hyperscanning during real-time social interaction as an informative approach to examining social competence in autism, demonstrated that coordination of neural activity between interacting brains may contribute to social behavior, and offered new insights into sex-related variability in social functioning in individuals with autism spectrum disorder.
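The circular correlation used here to quantify brain-to-brain synchrony has a standard closed form (the Jammalamadaka-SenGupta coefficient, widely used in EEG hyperscanning). A minimal sketch with NumPy; note that the coefficient is defined on angular data, so the inputs below are assumed to be angles in radians (e.g., instantaneous phase or an angle-mapped amplitude series), and the variable names are illustrative rather than taken from the paper:

```python
import numpy as np

def circ_corr(a, b):
    """Circular correlation coefficient (Jammalamadaka & SenGupta)
    between two equal-length arrays of angles in radians."""
    a_bar = np.angle(np.mean(np.exp(1j * a)))  # circular mean of a
    b_bar = np.angle(np.mean(np.exp(1j * b)))
    sa, sb = np.sin(a - a_bar), np.sin(b - b_bar)
    return np.sum(sa * sb) / np.sqrt(np.sum(sa ** 2) * np.sum(sb ** 2))

# Example: angle series from two "participants". Identical series give
# a coefficient of +1; sign-flipped series give -1.
rng = np.random.default_rng(0)
phase_a = rng.uniform(-np.pi, np.pi, 1000)
rho_same = circ_corr(phase_a, phase_a)
rho_flip = circ_corr(phase_a, -phase_a)
```

In a hyperscanning analysis this coefficient would be computed per frequency band and electrode pair across the two participants' recordings, then compared between the conversation and baseline conditions.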
Affiliation(s)
- Alexandra P. Key: Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, United States; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, United States (corresponding author)
- Yan Yan: Vanderbilt University, Nashville, TN, United States
- Mary Metelko: Institute for Software Integrated Systems, Vanderbilt University, Nashville, TN, United States
- Catie Chang: Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, United States
- Hakmook Kang: Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, United States; Department of Biostatistics, Vanderbilt University Medical Center, Nashville, TN, United States
- Jennifer Pilkington: Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, United States
- Blythe A. Corbett: Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, United States; Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, United States