1
Pi Y, Yan J, Pscherer C, Gao S, Mückschel M, Colzato L, Hommel B, Beste C. Interindividual aperiodic resting-state EEG activity predicts cognitive-control styles. Psychophysiology 2024:e14576. [PMID: 38556626] [DOI: 10.1111/psyp.14576]
Abstract
The ability to find the right balance between more persistent and more flexible cognitive-control styles is known as "metacontrol." Recent findings suggest that aperiodic EEG activity is relevant under task conditions likely to elicit a specific metacontrol style. Here we investigated whether individual differences in aperiodic EEG activity obtained off-task (during resting state) predict individual cognitive-control styles under task conditions that pose different demands on metacontrol. We analyzed resting-state EEG, task EEG, and behavioral outcomes from a sample of N = 65 healthy participants performing a Go/Nogo task. We examined aperiodic activity as an indicator of "neural noise" in the EEG power spectrum, and participants were assigned to a high-noise or a low-noise group according to a median split of the exponents obtained at rest. We found that off-task aperiodic exponents predicted different cognitive-control styles in the Go and Nogo conditions: overall, aperiodic exponents were higher (i.e., noise was lower) in the low-noise group, which, however, showed no difference between Go and Nogo trials, whereas the high-noise group exhibited a significant noise reduction in the more persistence-heavy Nogo condition. This suggests that trait-like biases determine the default cognitive-control style, which, however, can be overwritten or compensated for under challenging task demands. We suggest that aperiodic activity in EEG signals is a valid indicator of the highly dynamic arbitration between metacontrol styles, reflecting the brain's capability to reorganize itself and adapt its neural activity patterns to changing environmental conditions.
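For concreteness, a minimal sketch of the exponent estimation and median split described above, assuming a plain linear fit of the power spectrum in log-log space (the authors' pipeline, e.g., specparam/FOOOF, additionally models oscillatory peaks); the simulated `recordings` array is a placeholder for real resting-state EEG:

```python
import numpy as np
from scipy.signal import welch

def aperiodic_exponent(eeg, fs, fmin=1.0, fmax=40.0):
    """Estimate the 1/f exponent of an EEG power spectrum via a linear
    fit of log-power on log-frequency; larger exponents (steeper
    spectra) are conventionally read as lower 'neural noise'."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))
    mask = (freqs >= fmin) & (freqs <= fmax)
    slope, _intercept = np.polyfit(np.log10(freqs[mask]),
                                   np.log10(psd[mask]), 1)
    return -slope

# Placeholder data: one simulated resting-state channel per participant.
rng = np.random.default_rng(0)
recordings = rng.standard_normal((65, 500 * 300))  # 65 subjects, 300 s at 500 Hz

# Median split into low-noise (high-exponent) vs. high-noise groups,
# mirroring the grouping described in the abstract.
exponents = np.array([aperiodic_exponent(r, fs=500.0) for r in recordings])
low_noise_group = exponents >= np.median(exponents)
```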
Affiliation(s)
- Yu Pi: Department of Psychology, Shandong Normal University, Jinan, China
- Jimin Yan: Department of Psychology, Shandong Normal University, Jinan, China
- Charlotte Pscherer: Cognitive Neurophysiology, Department of Child and Adolescent Psychiatry, Faculty of Medicine, TU Dresden, Dresden, Germany
- Shudan Gao: Department of Psychology, Shandong Normal University, Jinan, China
- Moritz Mückschel: Cognitive Neurophysiology, Department of Child and Adolescent Psychiatry, Faculty of Medicine, TU Dresden, Dresden, Germany
- Lorenza Colzato: Department of Psychology, Shandong Normal University, Jinan, China; Cognitive Neurophysiology, Department of Child and Adolescent Psychiatry, Faculty of Medicine, TU Dresden, Dresden, Germany
- Bernhard Hommel: Department of Psychology, Shandong Normal University, Jinan, China
- Christian Beste: Department of Psychology, Shandong Normal University, Jinan, China; Cognitive Neurophysiology, Department of Child and Adolescent Psychiatry, Faculty of Medicine, TU Dresden, Dresden, Germany
2
Liu J, Hilton CB, Bergelson E, Mehr SA. Language experience predicts music processing in a half-million speakers of fifty-four languages. Curr Biol 2023; 33:1916-1925.e4. [PMID: 37105166] [PMCID: PMC10306420] [DOI: 10.1016/j.cub.2023.03.067]
Abstract
Tonal languages differ from other languages in their use of pitch (tones) to distinguish words. Lifelong experience speaking and hearing tonal languages has been argued to shape auditory processing in ways that generalize beyond the perception of linguistic pitch to the perception of pitch in other domains like music. We conducted a meta-analysis of prior studies testing this idea, finding moderate evidence supporting it. But prior studies were limited by mostly small sample sizes representing a small number of languages and countries, making it challenging to disentangle the effects of linguistic experience from variability in music training, cultural differences, and other potential confounds. To address these issues, we used web-based citizen science to assess music perception skill on a global scale in 34,034 native speakers of 19 tonal languages (e.g., Mandarin, Yoruba). We compared their performance to 459,066 native speakers of other languages, including 6 pitch-accented (e.g., Japanese) and 29 non-tonal languages (e.g., Hungarian). Whether or not participants had taken music lessons, native speakers of all 19 tonal languages had an improved ability to discriminate musical melodies on average, relative to speakers of non-tonal languages. But this improvement came with a trade-off: tonal language speakers were also worse at processing the musical beat. The results, which held across native speakers of many diverse languages and were robust to geographic and demographic variation, demonstrate that linguistic experience shapes music perception, with implications for relations between music, language, and culture in the human mind.
Affiliation(s)
- Jingxuan Liu: Columbia Business School, Columbia University, 665 W 130th Street, New York, NY 10027, USA; Department of Psychology & Neuroscience, Duke University, 417 Chapel Drive, Durham, NC 27708, USA
- Courtney B Hilton: Yale Child Study Center, Yale University, 300 George Street #900, New Haven, CT 06511, USA; School of Psychology, University of Auckland, 23 Symonds Street, Auckland 1010, New Zealand
- Elika Bergelson: Department of Psychology & Neuroscience, Duke University, 417 Chapel Drive, Durham, NC 27708, USA
- Samuel A Mehr: Yale Child Study Center, Yale University, 300 George Street #900, New Haven, CT 06511, USA; School of Psychology, University of Auckland, 23 Symonds Street, Auckland 1010, New Zealand
3
Rimmele JM, Sun Y, Michalareas G, Ghitza O, Poeppel D. Dynamics of functional networks for syllable and word-level processing. Neurobiol Lang 2023; 4:120-144. [PMID: 37229144] [PMCID: PMC10205074] [DOI: 10.1162/nol_a_00089]
Abstract
Speech comprehension requires the ability to temporally segment the acoustic input for higher-level linguistic analysis. Oscillation-based approaches suggest that low-frequency auditory cortex oscillations track syllable-sized acoustic information and therefore emphasize the relevance of syllabic-level acoustic processing for speech segmentation. How syllabic processing interacts with higher levels of speech processing beyond segmentation, including the anatomical and neurophysiological characteristics of the networks involved, is debated. In two MEG experiments, we investigated lexical and sublexical word-level processing and its interactions with (acoustic) syllable processing using a frequency-tagging paradigm. Participants listened to disyllabic words presented at a rate of 4 syllables/s. Lexical content (native language), sublexical syllable-to-syllable transitions (foreign language), or mere syllabic information (pseudo-words) was presented. Two conjectures were evaluated: (i) syllable-to-syllable transitions contribute to word-level processing; and (ii) the processing of words activates brain areas that interact with acoustic syllable processing. We show that syllable-to-syllable transition information, compared to mere syllable information, activated a bilateral superior and middle temporal and inferior frontal network. Lexical content additionally resulted in increased neural activity. Evidence for an interaction of word-level and acoustic syllable-level processing was inconclusive. Decreases in syllable tracking (cerebroacoustic coherence) in auditory cortex and increases in cross-frequency coupling between right superior and middle temporal and frontal areas were found when lexical content was present compared to all other conditions, but not when conditions were compared separately. The data provide experimental insight into how subtle and sensitive syllable-to-syllable transition information is for word-level processing.
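A rough sketch of the cerebroacoustic-coherence measure mentioned above, assuming a single neural channel and an audio signal already resampled to a common rate (the published pipeline works in source space and is considerably more involved):

```python
import numpy as np
from scipy.signal import coherence, hilbert

def cerebroacoustic_coherence(neural, audio, fs, fmin=3.5, fmax=4.5):
    """Magnitude-squared coherence between a neural time series and the
    speech amplitude envelope, averaged around the 4 syllables/s
    stimulation rate used in the experiments."""
    envelope = np.abs(hilbert(audio))  # broadband amplitude envelope
    f, coh = coherence(neural, envelope, fs=fs, nperseg=int(2 * fs))
    band = (f >= fmin) & (f <= fmax)
    return coh[band].mean()
```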
Affiliation(s)
- Johanna M. Rimmele: Departments of Neuroscience and Cognitive Neuropsychology, Max-Planck-Institute for Empirical Aesthetics, Frankfurt am Main, Germany; Max Planck NYU Center for Language, Music and Emotion, Frankfurt am Main, Germany, and New York, NY, USA
- Yue Sun: Departments of Neuroscience and Cognitive Neuropsychology, Max-Planck-Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Georgios Michalareas: Departments of Neuroscience and Cognitive Neuropsychology, Max-Planck-Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Oded Ghitza: Departments of Neuroscience and Cognitive Neuropsychology, Max-Planck-Institute for Empirical Aesthetics, Frankfurt am Main, Germany; College of Biomedical Engineering & Hearing Research Center, Boston University, Boston, MA, USA
- David Poeppel: Departments of Neuroscience and Cognitive Neuropsychology, Max-Planck-Institute for Empirical Aesthetics, Frankfurt am Main, Germany; Department of Psychology and Center for Neural Science, New York University, New York, NY, USA; Max Planck NYU Center for Language, Music and Emotion, Frankfurt am Main, Germany, and New York, NY, USA; Ernst Strüngmann Institute for Neuroscience, Frankfurt am Main, Germany
4
Beck J, Konieczny L. What a difference a syllable makes-Rhythmic reading of poetry. Front Psychol 2023; 14:1043651. [PMID: 36865353] [PMCID: PMC9973453] [DOI: 10.3389/fpsyg.2023.1043651]
Abstract
In reading conventional poems aloud, the rhythmic experience is coupled with the projection of meter, enabling the prediction of subsequent input. However, it is unclear how top-down and bottom-up processes interact. If the rhythmicity in reading aloud is governed by the top-down prediction of metric patterns of weak and strong stress, these patterns should also be projected onto a randomly included, lexically meaningless syllable. If bottom-up information such as the phonetic quality of consecutive syllables plays a functional role in establishing a structured rhythm, the occurrence of the lexically meaningless syllable should affect reading, and the number of these syllables in a metrical line should modulate this effect. To investigate this, we manipulated poems by replacing regular syllables at random positions with the syllable "tack". Participants were instructed to read the poems aloud, and their voices were recorded during the reading. At the syllable level, we calculated the syllable onset interval (SOI) as a measure of articulation duration, as well as the mean syllable intensity. Both measures were intended to operationalize how strongly a syllable was stressed. Results show that the average articulation duration of metrically strong regular syllables was longer than that of weak syllables. This effect disappeared for "tacks". Syllable intensities, on the other hand, captured the metrical stress of "tacks" as well, but only for musically active participants. Additionally, we calculated the normalized pairwise variability index (nPVI) for each line as an indicator of rhythmic contrast, i.e., the alternation between long and short as well as louder and quieter syllables, to estimate the influence of "tacks" on reading rhythm. For SOI, the nPVI revealed a clear negative effect: when "tacks" occurred, lines were read with less alternation, and this effect was proportional to the number of "tacks" per line. For intensity, however, the nPVI did not capture significant effects. The results suggest that top-down prediction does not always suffice to maintain a rhythmic gestalt across a series of syllables that carry little bottom-up prosodic information. Instead, the constant integration of sufficiently varying bottom-up information appears necessary to maintain a stable metrical-pattern prediction.
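The nPVI used here has a standard definition; a small sketch computing it for one line of syllable measures (SOIs or intensities), with illustrative values:

```python
import numpy as np

def npvi(values):
    """Normalized pairwise variability index for a sequence of syllable
    measures (e.g., SOIs in ms, or intensities). Higher values indicate
    stronger contrast between successive syllables."""
    d = np.asarray(values, dtype=float)
    pairs = np.abs(np.diff(d)) / ((d[:-1] + d[1:]) / 2)
    return 100 * pairs.mean()

# Example: a strongly alternating strong/weak line vs. a nearly flat one.
print(npvi([220, 140, 230, 150, 210, 145]))  # high rhythmic contrast (~41)
print(npvi([180, 182, 179, 181, 180, 183]))  # close to zero
```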
Affiliation(s)
- Judith Beck: Center for Cognitive Science, Institute of Psychology, University of Freiburg, Freiburg, Germany
5
Luo L, Lu L. Studying rhythm processing in speech through the lens of auditory-motor synchronization. Front Neurosci 2023; 17:1146298. [PMID: 36937684] [PMCID: PMC10017839] [DOI: 10.3389/fnins.2023.1146298]
Abstract
Continuous speech is organized into a hierarchy of rhythms. Accurate processing of this rhythmic hierarchy through the interactions of auditory and motor systems is fundamental to speech perception and production. In this mini-review, we aim to evaluate the implementation of behavioral auditory-motor synchronization paradigms when studying rhythm processing in speech. First, we present an overview of the classic finger-tapping paradigm and its application in revealing differences in auditory-motor synchronization between the typical and clinical populations. Next, we highlight key findings on rhythm hierarchy processing in speech and non-speech stimuli from finger-tapping studies. Following this, we discuss the potential caveats of the finger-tapping paradigm and propose the speech-speech synchronization (SSS) task as a promising tool for future studies. Overall, we seek to raise interest in developing new methods to shed light on the neural mechanisms of speech processing.
Affiliation(s)
- Lu Luo: School of Psychology, Beijing Sport University, Beijing, China; Laboratory of Sports Stress and Adaptation of General Administration of Sport, Beijing, China
- Lingxi Lu (corresponding author): Center for the Cognitive Science of Language, Beijing Language and Culture University, Beijing, China
6
Lau JCY, Fyshe A, Waxman SR. Rhythm May Be Key to Linking Language and Cognition in Young Infants: Evidence From Machine Learning. Front Psychol 2022; 13:894405. [PMID: 35693512] [PMCID: PMC9178268] [DOI: 10.3389/fpsyg.2022.894405]
Abstract
Rhythm is key to language acquisition. Across languages, rhythmic features highlight fundamental linguistic elements of the sound stream and structural relations among them. A sensitivity to rhythmic features, which begins in utero, is evident at birth. What is less clear is whether rhythm supports infants' earliest links between language and cognition. Prior evidence has documented that for infants as young as 3 and 4 months, listening to their native language (English) supports the core cognitive capacity of object categorization. This precocious link is initially part of a broader template: listening to a non-native language from the same rhythmic class as the native language (e.g., German, but not Cantonese) and to vocalizations of non-human primates (e.g., the lemur Eulemur macaco flavifrons, but not of birds such as the zebra finch, Taeniopygia guttata) provides English-acquiring infants the same cognitive advantage as does listening to their native language. Here, we implement a machine-learning (ML) approach to ask whether there are acoustic properties, available on the surface of these vocalizations, that permit infants to identify which vocalizations are candidate links to cognition. We provided the model with a robust sample of vocalizations that, from the vantage point of English-acquiring 4-month-olds, either support object categorization (English, German, lemur vocalizations) or fail to do so (Cantonese, zebra-finch vocalizations). We assess (a) whether supervised ML classification models can distinguish those vocalizations that support cognition from those that do not, and (b) which class(es) of acoustic features (including rhythmic, spectral envelope, and pitch features) best support that classification. Our analysis reveals that principal components derived from rhythm-relevant acoustic features were among the most robust in supporting the classification. Classifications performed using temporal envelope components were also robust. These new findings provide in-principle evidence that infants' earliest links between vocalizations and cognition may be subserved by their perceptual sensitivity to rhythmic and spectral elements available on the surface of these vocalizations, and that these may guide infants' identification of candidate links to cognition.
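As an illustration of the analysis logic, a sketch of a supervised pipeline over acoustic descriptors with a PCA step, assuming logistic regression as the classifier (the abstract does not name the model family) and random placeholder data in place of the study's feature table:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# X: one row of acoustic descriptors (rhythmic, spectral envelope, pitch)
# per vocalization; y: 1 if the vocalization supports categorization
# (English, German, lemur), 0 otherwise (Cantonese, zebra finch).
# Both are placeholders for the study's real feature table.
rng = np.random.default_rng(0)
X = rng.random((200, 40))
y = rng.integers(0, 2, 200)

clf = make_pipeline(StandardScaler(),
                    PCA(n_components=10),
                    LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```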
Affiliation(s)
- Joseph C. Y. Lau: Department of Psychology, Northwestern University, Evanston, IL, United States; Institute for Policy Research, Northwestern University, Evanston, IL, United States; Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States
- Alona Fyshe: Department of Computing Science and Psychology, University of Alberta, Edmonton, AB, Canada
- Sandra R. Waxman: Department of Psychology, Northwestern University, Evanston, IL, United States; Institute for Policy Research, Northwestern University, Evanston, IL, United States
7
Acoustically Driven Cortical δ Oscillations Underpin Prosodic Chunking. eNeuro 2021; 8:ENEURO.0562-20.2021. [PMID: 34083380] [PMCID: PMC8272402] [DOI: 10.1523/eneuro.0562-20.2021]
Abstract
Oscillation-based models of speech perception postulate a cortical computational principle by which decoding is performed within a window structure derived by a segmentation process. Segmentation of syllable-size chunks is realized by a θ oscillator. We provide evidence for an analogous role of a δ oscillator in the segmentation of phrase-sized chunks. We recorded magnetoencephalography (MEG) in humans while participants performed a target identification task. Random-digit strings, with phrase-long chunks of two digits, were presented at chunk rates of 1.8 or 2.6 Hz, inside or outside the δ frequency band (defined here as 0.5–2 Hz). Strong periodicities were elicited by chunk rates inside δ in superior and middle temporal areas and speech-motor integration areas. Periodicities were diminished or absent for chunk rates outside δ, in line with behavioral performance. Our findings show that prosodic chunking of phrase-sized acoustic segments is correlated with acoustically driven δ oscillations expressing anatomically specific patterns of neuronal periodicities.
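A minimal sketch of the frequency-tagging logic behind this result: power at the chunk rate relative to neighboring frequencies, computed on a single channel (the study's actual periodicity measures on MEG source data are more elaborate; `meg_trial` and `fs` are placeholders):

```python
import numpy as np

def tagged_power(signal, fs, rate, bw=0.1):
    """Spectral power at the stimulation (chunk) rate, normalized by
    power at neighboring frequencies; a ratio > 1 indicates a
    periodicity at `rate`."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    at_rate = power[np.abs(freqs - rate) <= bw].mean()
    neighbors = power[(np.abs(freqs - rate) > bw)
                      & (np.abs(freqs - rate) <= 5 * bw)].mean()
    return at_rate / neighbors

# Compare the two chunk rates used in the study on one channel:
# for rate in (1.8, 2.6):
#     print(rate, tagged_power(meg_trial, fs, rate))
```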