1. te Rietmolen N, Strijkers K, Morillon B. Moving rhythmically can facilitate naturalistic speech perception in a noisy environment. Proc Biol Sci 2025;292:20250354. PMID: 40199360; PMCID: PMC11978457; DOI: 10.1098/rspb.2025.0354.
Abstract
The motor system is known to process temporal information, and moving rhythmically while listening to a melody can improve auditory processing. In three interrelated behavioural experiments, we demonstrate that this effect translates to speech processing. Motor priming improves the efficiency of subsequent naturalistic speech-in-noise processing under specific conditions. (i) Moving rhythmically at the lexical rate (~1.8 Hz) significantly improves subsequent speech processing compared to moving at other rates, such as the phrasal or syllabic rates. (ii) The impact of such rhythmic motor priming is not influenced by whether it is self-generated or triggered by an auditory beat. (iii) Overt lexical vocalization, regardless of its semantic content, also enhances the efficiency of subsequent speech processing. These findings provide evidence for the functional role of the motor system in processing the temporal dynamics of naturalistic speech.
Affiliation(s)
- Noémie te Rietmolen
- Institute for Language, Communication, and the Brain (ILCB), Aix-Marseille Université, Marseille, France
- Kristof Strijkers
- Laboratoire Parole et Langage (LPL), Aix-Marseille Université & CNRS, Aix-en-Provence, France
- Benjamin Morillon
- INSERM, Institut de Neurosciences des Systèmes (INS), Aix Marseille Université, Marseille, France
2. Zhu M, Chen F, Chen W, Zhang Y. The Impact of Executive Functions and Musicality on Speech Auditory-Motor Synchronization in Adults Who Stutter. J Speech Lang Hear Res 2025;68:54-68. PMID: 39680799; DOI: 10.1044/2024_jslhr-24-00141.
Abstract
PURPOSE: Stuttering is a neurodevelopmental disorder that disrupts the timing and rhythmic flow of speech production. Growing evidence indicates that abnormal interactions between the auditory and motor cortices contribute to the development of stuttering. The present study investigated speech auditory-motor synchronization in adults who stutter, and the factors that influence it, in comparison with fluent speakers. METHOD: Sixteen Mandarin-speaking adults who stutter and 19 fluent controls, matched for age, gender, and years of musical training, participated in the study. Their ability to synchronize vocal speech production with accelerating auditory sequences was assessed using the spontaneous speech-to-speech synchronization test (SSS test). All participants also completed a series of standardized behavioral tests of musicality and executive function. RESULTS: Stutterers achieved significantly lower phase-locking values in the SSS test than nonstuttering controls, indicating a potential rhythmic processing deficit in developmental stuttering. Moreover, the strength of speech auditory-motor synchronization in stutterers was significantly associated with performance on tasks such as digit span and nonword repetition, underscoring the link between rhythmic processing and working memory. CONCLUSIONS: By probing auditory-motor processes, this study provides compelling evidence for a speech rhythmic deficit in individuals who stutter. It offers insight into the relationship between language and the brain and points to the potential benefits of cognitive training in speech intervention for people who stutter. SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.27984362.
Affiliation(s)
- Min Zhu
- School of Foreign Languages, Hunan University, Changsha, China
- Fei Chen
- School of Foreign Languages, Hunan University, Changsha, China
- Weiping Chen
- School of Foreign Languages, Hunan University, Changsha, China
- Yang Zhang
- Department of Speech-Language-Hearing Sciences and Masonic Institute for the Developing Brain, University of Minnesota, Twin Cities, MN, USA
3. Oderbolz C, Stark E, Sauppe S, Meyer M. Concurrent processing of the prosodic hierarchy is supported by cortical entrainment and phase-amplitude coupling. Cereb Cortex 2024;34:bhae479. PMID: 39704246; DOI: 10.1093/cercor/bhae479.
Abstract
Models of phonology posit a hierarchy of prosodic units that is relatively independent of syntactic structure and therefore requires its own parsing. How this prosodic hierarchy is represented in the brain remains unexplored. We investigated this foundational question in an electroencephalography (EEG) study. Thirty young adults listened to German sentences containing manipulations at different levels of the prosodic hierarchy. Evaluating speech-to-brain cortical entrainment and phase-amplitude coupling revealed that prosody's hierarchical structure is maintained at the neural level during spoken language comprehension. The faithfulness of this tracking varied with the degree to which the hierarchy was left intact, as well as with systematic interindividual differences in audio-motor synchronization abilities. The results underscore the role of complex oscillatory mechanisms in parsing the continuous, hierarchically organized speech signal and situate prosody as a structure indispensable to theoretical accounts of spoken language comprehension in the brain.
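For readers unfamiliar with the measure, the phase-amplitude coupling analysis named here can be sketched in a few lines of Python. This is a generic, illustrative implementation with assumed frequency bands and a mean-vector-length estimator, not the authors' pipeline:

```python
# Illustrative sketch only (assumed bands and estimator), not the authors'
# analysis pipeline: phase-amplitude coupling quantified as the normalized
# mean vector length between the phase of a slow band and the amplitude
# envelope of a faster band.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase band-pass filter using stable second-order sections."""
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def pac_mvl(x, fs, phase_band=(0.5, 2.0), amp_band=(4.0, 8.0)):
    """Mean-vector-length PAC; the bands here are illustrative choices."""
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    return np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)

# Toy signal: a 6 Hz "syllabic" rhythm whose amplitude rides on a 1 Hz
# "phrasal" cycle, plus noise; PAC should come out clearly above zero.
fs = 250
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
slow = np.sin(2 * np.pi * 1.0 * t)
x = (1 + 0.8 * slow) * np.sin(2 * np.pi * 6.0 * t) + 0.1 * rng.standard_normal(t.size)
print(f"PAC (mean vector length): {pac_mvl(x, fs):.3f}")
```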
Affiliation(s)
- Chantal Oderbolz
- Institute for the Interdisciplinary Study of Language Evolution, University of Zurich, Affolternstrasse 56, 8050 Zürich, Switzerland
- Department of Neuroscience, Georgetown University Medical Center, 3970 Reservoir Rd NW, Washington D.C. 20057, United States
- Elisabeth Stark
- Zurich Center for Linguistics, University of Zurich, Andreasstrasse 15, 8050 Zürich, Switzerland
- Institute of Romance Studies, University of Zurich, Zürichbergstrasse 8, 8032 Zürich, Switzerland
- Sebastian Sauppe
- Department of Psychology, University of Zurich, Binzmühlestrasse 14, 8050 Zürich, Switzerland
- Martin Meyer
- Institute for the Interdisciplinary Study of Language Evolution, University of Zurich, Affolternstrasse 56, 8050 Zürich, Switzerland
4. Zhu M, Chen F, Shi C, Zhang Y. Amplitude envelope onset characteristics modulate phase locking for speech auditory-motor synchronization. Psychon Bull Rev 2024;31:1661-1669. PMID: 38227125; DOI: 10.3758/s13423-023-02446-4.
Abstract
The spontaneous speech-to-speech synchronization (SSS) test has been shown to be an effective behavioral method for estimating cortical speech auditory-motor coupling strength via the phase-locking value (PLV) between auditory input and motor output. This study investigated how variations in the amplitude envelope onsets of the auditory speech signal influence speech auditory-motor synchronization. Sixty Mandarin-speaking adults listened to a stream of randomly presented syllables at an increasing speed while concurrently whispering in synchrony with the rhythm of the auditory stimuli, whose onset consistency was manipulated across aspirated, unaspirated, and mixed conditions. Participants' PLVs in the three conditions of the SSS test were derived and compared. Results showed that syllable rise time affected speech auditory-motor synchronization in a bifurcated fashion: PLVs were significantly higher in the temporally more consistent conditions (aspirated or unaspirated) than in the less consistent (mixed) condition for high synchronizers, whereas low synchronizers tended to be immune to onset consistency. Overall, these results show how consistency in the rise time of the amplitude envelope modulates the strength of speech auditory-motor coupling. The study supports the application of the SSS test for examining individual differences in the integration of the perception and production systems, with implications for speech and language disorders that involve difficulty processing onset characteristics such as rise time.
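As a concrete illustration of the PLV metric at the heart of the SSS test, the following Python sketch reduces two signals to syllabic-rate envelope phases and takes the circular mean of their phase difference. The band limits, toy stimuli, and whisper surrogate are assumptions for illustration, not the published analysis code:

```python
# Minimal sketch under stated assumptions -- not the published SSS code.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def envelope_phase(audio, fs, band=(3.5, 5.5)):
    """Phase of the amplitude envelope near the ~4.5 Hz syllabic rate."""
    env = np.abs(hilbert(audio))                      # broadband envelope
    sos = butter(2, band, btype="band", fs=fs, output="sos")
    return np.angle(hilbert(sosfiltfilt(sos, env)))

def plv(phase_a, phase_b):
    """Phase-locking value: near 1 = consistent lag, near 0 = none."""
    return np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))

# Toy signals: noise carriers amplitude-modulated at 4.5 Hz. A fixed
# phase lag between the two modulations should yield a high PLV; an
# off-rate (3.1 Hz) modulation should yield a low one.
fs = 1000
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(1)
stim = (1 + np.sin(2 * np.pi * 4.5 * t)) * rng.standard_normal(t.size)
sync = (1 + np.sin(2 * np.pi * 4.5 * t - 0.8)) * rng.standard_normal(t.size)
off = (1 + np.sin(2 * np.pi * 3.1 * t)) * rng.standard_normal(t.size)
p = envelope_phase(stim, fs)
print(f"synchronized whisper: PLV = {plv(p, envelope_phase(sync, fs)):.2f}")
print(f"off-rate whisper:     PLV = {plv(p, envelope_phase(off, fs)):.2f}")
```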
Affiliation(s)
- Min Zhu
- School of Foreign Languages, Hunan University, Changsha, China
- Fei Chen
- School of Foreign Languages, Hunan University, Changsha, China
- Chenxin Shi
- School of Foreign Languages, Hunan University, Changsha, China
- Yang Zhang
- Department of Speech-Language-Hearing Sciences and Masonic Institute for the Developing Brain, University of Minnesota, Twin Cities, MN, USA
5. Lamekina Y, Titone L, Maess B, Meyer L. Speech Prosody Serves Temporal Prediction of Language via Contextual Entrainment. J Neurosci 2024;44:e1041232024. PMID: 38839302; PMCID: PMC11236583; DOI: 10.1523/jneurosci.1041-23.2024.
Abstract
Temporal prediction assists language comprehension. In a series of recent behavioral studies, we have shown that listeners specifically employ rhythmic modulations of prosody to estimate the duration of upcoming sentences, thereby speeding up comprehension. In the current human magnetoencephalography (MEG) study on participants of either sex, we show that the brain achieves this function through a mechanism termed entrainment: electrophysiological brain activity maintains and continues contextual rhythms beyond their offset. Our experiment combined exposure to repetitive prosodic contours with the subsequent presentation of visual sentences that either matched or mismatched the duration of the preceding contour. During exposure to the prosodic contours, we observed MEG coherence with the contours, which was source-localized to right-hemispheric auditory areas. During the processing of the visual targets, activity at the frequency of the preceding contour was still detectable in the MEG, yet its sources had shifted to (left) frontal cortex, in line with a functional inheritance of the rhythmic acoustic context for prediction. Strikingly, when the target sentence was shorter than expected from the preceding contour, an omission response appeared in the evoked response record. We conclude that prosodic entrainment is a functional mechanism of temporal prediction in language comprehension. More generally, acoustic rhythms appear to give language access to the brain's electrophysiological mechanisms of temporal prediction.
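The coherence analysis used to index entrainment during exposure can be illustrated with simulated data; the ~0.7 Hz contour rate and the signal-to-noise ratio below are invented for the example, not taken from the study:

```python
# Toy illustration of spectral coherence (simulated data, assumed rates).
import numpy as np
from scipy.signal import coherence

fs = 250
t = np.arange(0, 120, 1 / fs)
rng = np.random.default_rng(2)
contour = np.sin(2 * np.pi * 0.7 * t)                 # repetitive prosodic contour (assumed ~0.7 Hz)
sensor = 0.4 * contour + rng.standard_normal(t.size)  # "entrained" channel: contour + noise
f, coh = coherence(contour, sensor, fs=fs, nperseg=fs * 20)
idx = np.argmin(np.abs(f - 0.7))                      # frequency bin nearest the contour rate
print(f"coherence at {f[idx]:.2f} Hz: {coh[idx]:.2f}")  # expect a clear peak here
```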
Affiliation(s)
- Yulia Lamekina
- Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Lorenzo Titone
- Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Burkhard Maess
- Methods and Development Group Brain Networks, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Lars Meyer
- Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- University Clinic Münster, Münster 48149, Germany
6. Berthault E, Chen S, Falk S, Morillon B, Schön D. Auditory and motor priming of metric structure improves understanding of degraded speech. Cognition 2024;248:105793. PMID: 38636164; DOI: 10.1016/j.cognition.2024.105793.
Abstract
Speech comprehension is enhanced when preceded (or accompanied) by a congruent rhythmic prime reflecting the metrical structure of the sentence. Although these phenomena have been described for auditory and motor primes separately, their respective and combined contributions have not been addressed. In this experiment, participants performed a comprehension task on degraded speech signals that were preceded by a rhythmic prime that could be auditory, motor, or audiomotor. Both auditory and audiomotor rhythmic primes speeded speech comprehension. While a purely motor prime (unpaced tapping) conferred no global benefit, comprehension accuracy scaled with the regularity of motor tapping. To investigate interindividual variability, participants also performed a spontaneous speech synchronization test. The strength of the estimated perception-production coupling correlated positively with overall speech comprehension scores. These findings are discussed in the framework of dynamic attending and active sensing theories.
Affiliation(s)
- Emma Berthault
- Aix Marseille Université, INSERM, INS, Institut de Neurosciences des Systèmes, Marseille, France
- Sophie Chen
- Aix Marseille Université, INSERM, INS, Institut de Neurosciences des Systèmes, Marseille, France
- Simone Falk
- Department of Linguistics and Translation, University of Montreal, Canada
- International Laboratory for Brain, Music and Sound Research, Montreal, Canada
- Benjamin Morillon
- Aix Marseille Université, INSERM, INS, Institut de Neurosciences des Systèmes, Marseille, France
- Daniele Schön
- Aix Marseille Université, INSERM, INS, Institut de Neurosciences des Systèmes, Marseille, France
7. Barchet AV, Henry MJ, Pelofi C, Rimmele JM. Auditory-motor synchronization and perception suggest partially distinct time scales in speech and music. Commun Psychol 2024;2:2. PMID: 39242963; PMCID: PMC11332030; DOI: 10.1038/s44271-023-00053-6.
Abstract
Speech and music might involve specific cognitive rhythmic timing mechanisms related to differences in their dominant rhythmic structure. We investigated the influence of different motor effectors on rate-specific processing in both domains. A perception and a synchronization task, involving syllable and piano-tone sequences and motor effectors typically associated with speech (whispering) and music (finger-tapping), were tested at slow (~2 Hz) and fast (~4.5 Hz) rates. Although synchronization performance was generally better at slow rates, the motor effectors exhibited specific rate preferences: finger-tapping outperformed whispering at slow but not at faster rates, and synchronization was effector-dependent at slow rates but highly correlated across effectors at faster rates. Perception of speech and music was best at different rates and was predicted by a fast general synchronization component and a slow finger-tapping component. Our data suggest partially independent rhythmic timing mechanisms for speech and music, possibly related to differential recruitment of cortical motor circuitry.
Affiliation(s)
- Alice Vivien Barchet
- Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Molly J Henry
- Research Group 'Neural and Environmental Rhythms', Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Department of Psychology, Toronto Metropolitan University, Toronto, Canada
- Claire Pelofi
- Music and Audio Research Laboratory, New York University, New York, NY, USA
- Max Planck NYU Center for Language, Music, and Emotion, New York, NY, USA
- Johanna M Rimmele
- Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Max Planck NYU Center for Language, Music, and Emotion, New York, NY, USA
8. Sjuls GS, Vulchanova MD, Assaneo MF. Replication of population-level differences in auditory-motor synchronization ability in a Norwegian-speaking population. Commun Psychol 2023;1:47. PMID: 39242904; PMCID: PMC11332004; DOI: 10.1038/s44271-023-00049-2.
Abstract
The Speech-to-Speech Synchronization test is a powerful tool for assessing individuals' auditory-motor synchronization ability, namely the ability to synchronize one's own utterances to the rhythm of an external speech signal. Recent studies using the test have revealed that participants fall into two distinct groups (high and low synchronizers) with significant differences in their structural and functional neural underpinnings and in their outcomes on several behavioral tasks. It is therefore critical to assess whether this population-level distribution (two groups rather than a single normal distribution) is universal across populations of speakers. Here we demonstrate that the previous results replicate in a Norwegian-speaking population, indicating that the test generalizes beyond the previously tested populations of native English and German speakers.
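A simple way to probe such a two-group claim, sketched here on simulated data (the group means, spreads, and sample sizes are invented for illustration, and the paper's own statistical approach may differ), is to ask whether a two-component Gaussian mixture fits participants' phase-locking values better than a single Gaussian, for example by BIC:

```python
# Illustrative bimodality check on simulated phase-locking values (PLVs).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Hypothetical sample: a low- and a high-synchronizer group.
plvs = np.concatenate([rng.normal(0.25, 0.07, 60),
                       rng.normal(0.65, 0.08, 40)]).reshape(-1, 1)

# Lower BIC for k=2 favors two groups over a single normal distribution.
for k in (1, 2):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(plvs)
    print(f"k={k}: BIC = {gmm.bic(plvs):.1f}")
```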
Affiliation(s)
- Guro S Sjuls
- Language Acquisition and Language Processing Lab, Norwegian University of Science and Technology, Department of Language and Literature, Trondheim, Norway
- Mila D Vulchanova
- Language Acquisition and Language Processing Lab, Norwegian University of Science and Technology, Department of Language and Literature, Trondheim, Norway
- M Florencia Assaneo
- Institute of Neurobiology, National Autonomous University of Mexico, Santiago de Querétaro, México
9. Mares C, Echavarría Solana R, Assaneo MF. Auditory-motor synchronization varies among individuals and is critically shaped by acoustic features. Commun Biol 2023;6:658. PMID: 37344562; DOI: 10.1038/s42003-023-04976-y.
Abstract
The ability to synchronize body movements with quasi-regular auditory stimuli is a fundamental human trait at the core of speech and music. Despite the long history of research on this ability, little attention has been paid to how acoustic features of the stimuli and individual differences modulate auditory-motor synchrony. Here, by exploring auditory-motor synchronization across different effectors and types of stimuli, we reveal that this capability is more restricted than previously assumed. While the general population can synchronize to sequences composed of repetitions of the same acoustic unit, synchrony in a subgroup of participants is impaired when the unit's identity varies across the sequence. In addition, synchronization in this group can be temporarily restored by priming with a facilitator stimulus. Auditory-motor integration is stable across effectors, supporting the hypothesis of a central clock mechanism subserving the different articulators, but it is critically shaped by the acoustic features of the stimulus and by individual abilities.
Affiliation(s)
- Cecilia Mares
- Institute of Neurobiology, National Autonomous University of Mexico, Juriquilla, Querétaro, Mexico
- M Florencia Assaneo
- Institute of Neurobiology, National Autonomous University of Mexico, Juriquilla, Querétaro, Mexico
10. Lubinus C, Keitel A, Obleser J, Poeppel D, Rimmele JM. Explaining flexible continuous speech comprehension from individual motor rhythms. Proc Biol Sci 2023;290:20222410. PMID: 36855868; PMCID: PMC9975658; DOI: 10.1098/rspb.2022.2410.
Abstract
When speech is too fast, the tracking of the acoustic signal along the auditory pathway deteriorates, leading to suboptimal speech segmentation and decoding of speech information. Speech comprehension is thus limited by the temporal constraints of the auditory system. Here we ask whether individual differences in auditory-motor coupling strength in part shape these temporal constraints. In two behavioural experiments, we characterize individual differences in the comprehension of naturalistic speech as a function of the individual synchronization between the auditory and motor systems and of each system's preferred frequency. As expected, speech comprehension declined at higher speech rates. Importantly, however, both higher auditory-motor synchronization and higher spontaneous speech motor production rates were predictive of better speech-comprehension performance. Furthermore, performance increased with higher working memory capacity (digit span) and higher linguistic, model-based sentence predictability, particularly so at higher speech rates and for individuals with high auditory-motor synchronization. The data provide evidence for a model of speech comprehension in which individual flexibility of not only the motor system but also auditory-motor synchronization may play a modulatory role.
Affiliation(s)
- Christina Lubinus
- Department of Neuroscience and Department of Cognitive Neuropsychology, Max-Planck-Institute for Empirical Aesthetics, 60322 Frankfurt am Main, Germany
- Anne Keitel
- Psychology, University of Dundee, Dundee DD1 4HN, UK
- Jonas Obleser
- Department of Psychology, University of Lübeck, Lübeck, Germany
- Center for Brain, Behavior, and Metabolism, University of Lübeck, Lübeck, Germany
- David Poeppel
- Department of Psychology, New York University, New York, NY, USA
- Max Planck NYU Center for Language, Music, and Emotion, New York, NY, USA
- Ernst Strüngmann Institute for Neuroscience (in Cooperation with Max Planck Society), Frankfurt am Main, Germany
- Johanna M. Rimmele
- Department of Neuroscience and Department of Cognitive Neuropsychology, Max-Planck-Institute for Empirical Aesthetics, 60322 Frankfurt am Main, Germany
- Max Planck NYU Center for Language, Music, and Emotion, New York, NY, USA
11. Luo L, Lu L. Studying rhythm processing in speech through the lens of auditory-motor synchronization. Front Neurosci 2023;17:1146298. PMID: 36937684; PMCID: PMC10017839; DOI: 10.3389/fnins.2023.1146298.
Abstract
Continuous speech is organized into a hierarchy of rhythms. Accurate processing of this rhythmic hierarchy through interactions between the auditory and motor systems is fundamental to speech perception and production. In this mini-review, we evaluate the use of behavioral auditory-motor synchronization paradigms for studying rhythm processing in speech. First, we present an overview of the classic finger-tapping paradigm and its application in revealing differences in auditory-motor synchronization between typical and clinical populations. Next, we highlight key findings on the processing of the rhythm hierarchy in speech and non-speech stimuli from finger-tapping studies. We then discuss potential caveats of the finger-tapping paradigm and propose the speech-speech synchronization (SSS) task as a promising tool for future studies. Overall, we seek to raise interest in developing new methods to shed light on the neural mechanisms of speech processing.
Affiliation(s)
- Lu Luo
- School of Psychology, Beijing Sport University, Beijing, China
- Laboratory of Sports Stress and Adaptation of General Administration of Sport, Beijing, China
- Lingxi Lu
- Center for the Cognitive Science of Language, Beijing Language and Culture University, Beijing, China
12. Lizcano-Cortés F, Gómez-Varela I, Mares C, Wallisch P, Orpella J, Poeppel D, Ripollés P, Assaneo MF. Speech-to-Speech Synchronization protocol to classify human participants as high or low auditory-motor synchronizers. STAR Protoc 2022;3:101248. PMID: 35310080; PMCID: PMC8931471; DOI: 10.1016/j.xpro.2022.101248.
Abstract
The ability to synchronize a motor action to a rhythmic auditory stimulus is often considered an innate human skill. However, some individuals lack the ability to synchronize their speech to a perceived syllabic rate. Here, we describe a simple and fast protocol to classify a single native English speaker as a high or low speech synchronizer. The protocol consists of four parts: pretest instructions and volume adjustment, the training procedure, execution of the main task, and data analysis. For complete details on the use and execution of this protocol, please refer to Assaneo et al. (2019a).
Affiliation(s)
- Cecilia Mares
- Institute of Neurobiology, UNAM, Querétaro 76230, México
- Pascal Wallisch
- Department of Psychology, New York University, New York, NY 10003, USA
- Joan Orpella
- Department of Psychology, New York University, New York, NY 10003, USA
- David Poeppel
- Department of Psychology, New York University, New York, NY 10003, USA
- Ernst Strüngmann Institute for Neuroscience, 60528 Frankfurt, Germany
- Center for Language, Music and Emotion (CLaME), New York University, New York, NY, USA
- Max Planck Institute for Empirical Aesthetics, 60322 Frankfurt, Germany
- Pablo Ripollés
- Department of Psychology, New York University, New York, NY 10003, USA
- Center for Language, Music and Emotion (CLaME), New York University, New York, NY, USA
- Max Planck Institute for Empirical Aesthetics, 60322 Frankfurt, Germany
- Music and Audio Research Laboratory (MARL), New York University, New York, NY 11201, USA
13. Assaneo MF, Ripollés P, Tichenor SE, Yaruss JS, Jackson ES. The Relationship Between Auditory-Motor Integration, Interoceptive Awareness, and Self-Reported Stuttering Severity. Front Integr Neurosci 2022;16:869571. PMID: 35600224; PMCID: PMC9120354; DOI: 10.3389/fnint.2022.869571.
Abstract
Stuttering is a neurodevelopmental speech disorder associated with motor timing that differs from that of non-stutterers. While neurodevelopmental disorders impacted by timing are associated with compromised auditory-motor integration and interoception, the interplay between these abilities and stuttering remains unexplored. Here, we studied the relationships between speech auditory-motor synchronization (a proxy for auditory-motor integration), interoceptive awareness, and self-reported stuttering severity using remotely delivered assessments. Results indicate that, in general, stutterers and non-stutterers exhibit similar auditory-motor integration and interoceptive abilities. However, while speech auditory-motor synchrony (i.e., integration) and interoceptive awareness were unrelated to each other, speech synchrony was inversely related to speakers' perception of how severe their stuttering appears to others, and interoceptive awareness was inversely related to self-reported stuttering impact. These findings support claims that stuttering is a heterogeneous, multi-faceted disorder: the uncorrelated auditory-motor integration and interoception measures predicted different aspects of stuttering, suggesting two unrelated sources of timing differences associated with the disorder.
Affiliation(s)
- M. Florencia Assaneo
- Institute of Neurobiology, National Autonomous University of Mexico, Querétaro, Mexico
- Pablo Ripollés
- Department of Psychology, New York University, New York, NY, United States
- Music and Audio Research Lab, New York University, New York, NY, United States
- Center for Music, Language and Emotion, New York University, New York, NY, United States
- Seth E. Tichenor
- Department of Speech-Language Pathology, Duquesne University, Pittsburgh, PA, United States
- J. Scott Yaruss
- Department of Communicative Sciences and Disorders, Michigan State University, East Lansing, MI, United States
- Eric S. Jackson
- Department of Communicative Sciences and Disorders, New York University, New York, NY, United States
14. Rimmele JM, Kern P, Lubinus C, Frieler K, Poeppel D, Assaneo MF. Musical Sophistication and Speech Auditory-Motor Coupling: Easy Tests for Quick Answers. Front Neurosci 2022;15:764342. PMID: 35058741; PMCID: PMC8763673; DOI: 10.3389/fnins.2021.764342.
Abstract
Musical training enhances auditory-motor cortex coupling, which in turn facilitates music and speech perception. How tightly the temporal processing of music and speech is intertwined is a topic of current research. We investigated the relationship between musical sophistication (Goldsmiths Musical Sophistication Index, Gold-MSI) and spontaneous speech-to-speech synchronization behavior as an indirect measure of speech auditory-motor cortex coupling strength. In a group of participants (n = 196), we tested whether the outcome of the spontaneous speech-to-speech synchronization test (SSS-test) can be inferred from self-reported musical sophistication. Participants were classified as high (HIGHs) or low (LOWs) synchronizers according to the SSS-test. HIGHs scored higher than LOWs on all Gold-MSI subscales (General Score, Active Engagement, Musical Perception, Musical Training, Singing Skills) except the Emotional Attachment scale. More specifically, compared to a previously reported German-speaking sample, HIGHs overall scored higher and LOWs lower. Compared to an estimated distribution of the English-speaking general population, our sample overall scored lower, with the scores of LOWs differing significantly from the normal distribution, falling around the ~30th percentile. While HIGHs more often reported musical training than LOWs, the distribution of training instruments did not vary across groups. Importantly, even after the highly correlated Gold-MSI subscores were decorrelated, the Musical Perception and Musical Training subscales in particular allowed inference of speech-to-speech synchronization behavior. Differential effects of musical perception and training were observed: training predicted audio-motor synchronization in both groups, whereas perception did so only in the HIGHs. Our findings indicate that speech auditory-motor cortex coupling strength can be inferred from the training and perceptual aspects of musical sophistication, suggesting shared mechanisms involved in speech and music perception.
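The inference step described here can be illustrated with a toy classifier. The subscore distributions and effect sizes below are invented, and the paper's own statistical approach may well differ; this only shows the general idea of predicting synchronizer group from Gold-MSI subscores:

```python
# Illustrative sketch (simulated scores, invented effect sizes): inferring
# high/low synchronizer group from Musical Training and Musical Perception
# subscores with a logistic classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n = 196
group = rng.integers(0, 2, n)              # 0 = LOW, 1 = HIGH synchronizer
training = rng.normal(26 + 8 * group, 6)   # hypothetical Musical Training scores
perception = rng.normal(45 + 6 * group, 7) # hypothetical Musical Perception scores
X = np.column_stack([training, perception])

acc = cross_val_score(LogisticRegression(), X, group, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")  # above chance if scores carry group information
```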
Affiliation(s)
- Johanna M. Rimmele
- Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Frankfurt, Germany
- Max Planck NYU Center for Language, Music and Emotion, New York, NY, United States
- Pius Kern
- Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Frankfurt, Germany
- Christina Lubinus
- Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Frankfurt, Germany
- Klaus Frieler
- Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Frankfurt, Germany
- David Poeppel
- Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Frankfurt, Germany
- Max Planck NYU Center for Language, Music and Emotion, New York, NY, United States
- Department of Psychology, New York University, New York, NY, United States
- Ernst Strüngmann Institute for Neuroscience, Frankfurt, Germany
- M. Florencia Assaneo
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Querétaro, México