1. Patel SP, Cole J, Lau JCY, Fragnito G, Losh M. Verbal entrainment in autism spectrum disorder and first-degree relatives. Sci Rep 2022; 12:11496. PMID: 35798758; PMCID: PMC9262979; DOI: 10.1038/s41598-022-12945-4.
Abstract
Entrainment, the unconscious process leading to coordination between communication partners, is an important dynamic human behavior that helps us connect with one another. Difficulty developing and sustaining social connections is a hallmark of autism spectrum disorder (ASD). Subtle differences in social behaviors have also been noted in first-degree relatives of autistic individuals and may express underlying genetic liability to ASD. An in-depth examination of verbal entrainment was conducted to assess disruptions to entrainment as a contributing factor to the language phenotype in ASD. Results revealed distinct patterns of prosodic and lexical entrainment in individuals with ASD. Notably, subtler differences in prosodic and syntactic entrainment were identified in parents of autistic individuals. Findings point towards entrainment, particularly prosodic entrainment, as a key process linked to social communication difficulties in ASD and reflective of genetic liability to ASD.
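The study's own entrainment measures are specific to its protocol, but the general idea can be sketched with two toy metrics (an illustrative sketch only, not the authors' method): correlation of turn-level pitch across partners as a stand-in for prosodic entrainment, and word-type overlap as a stand-in for lexical entrainment.

```python
# Illustrative sketch of dyadic entrainment metrics (not the paper's method).
# Assumes pitch lists are aligned by exchange: A's turn i and B's reply i.
from math import sqrt

def pearson(x, y):
    """Pearson correlation; assumes non-constant inputs of equal length."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def prosodic_entrainment(pitch_a, pitch_b):
    """Correlation of the two speakers' turn-level mean pitch values."""
    return pearson(pitch_a, pitch_b)

def lexical_entrainment(words_a, words_b):
    """Jaccard overlap of the word types each speaker used."""
    sa, sb = set(words_a), set(words_b)
    return len(sa & sb) / len(sa | sb)
```

A pair whose pitch rises and falls together scores near 1.0 on the prosodic measure; a pair sharing half of a four-word joint vocabulary scores 0.5 on the lexical one.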
Affiliation(s)
- Shivani P Patel
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Jennifer Cole
- Department of Linguistics, Northwestern University, Evanston, IL, USA
- Joseph C Y Lau
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Gabrielle Fragnito
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Molly Losh
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
2. Gambi C, Van de Cavey J, Pickering MJ. Representation of others' synchronous and asynchronous sentences interferes with sentence production. Q J Exp Psychol (Hove) 2022; 76:180-195. PMID: 35102784; DOI: 10.1177/17470218221080766.
Abstract
In dialogue, people represent each other's utterances in order to take turns and communicate successfully. In previous work [Gambi, C., Van de Cavey, J., & Pickering, M. J. (2015). Interference in joint picture naming. Journal of Experimental Psychology: Learning, Memory, and Cognition, 41(1), 1-21.], speakers who were naming single pictures or picture pairs represented whether another speaker was engaged in the same task (versus a different or no task) concurrently, but did not represent in detail the content of the other speaker's utterance. Here, we investigate co-representation of whole sentences. In three experiments, pairs of speakers imagined each other producing active or passive descriptions of transitive events. Speakers took longer to begin speaking when they believed their partner was also preparing to speak, compared to when they did not. Interference occurred when speakers believed their partners were preparing to speak at the same time as them (synchronous production and co-representation; Experiment 1), and also when speakers believed that their partner would speak only after them (asynchronous production and co-representation; Experiments 2a and 2b). However, interference was generally no greater when speakers believed their partner was preparing a different rather than a similar utterance, providing no consistent evidence that speakers represented what their partners were preparing to say. Taken together, these findings indicate that speakers can represent another's intention to speak even as they are themselves preparing to speak, but that such representation tends to lack detail.
Affiliation(s)
- Chiara Gambi
- University of Edinburgh and Cardiff University
3. Theimann A, Kuzmina E, Hansen P. Verb-mediated prediction in bilingual toddlers. Front Psychol 2021; 12:719447. PMID: 34858259; PMCID: PMC8631997; DOI: 10.3389/fpsyg.2021.719447.
Abstract
Prediction is an important mechanism for efficient language processing. It has been shown that, as part of sentence processing, both children and adults predict nouns based on semantically constraining verbs. Language proficiency is said to modulate prediction: the higher the proficiency, the better the predictive skill. Children growing up acquiring two languages are often more proficient in one of them, and as such, investigation of the predictive ability in young bilingual children can shed light on the role of language proficiency. Furthermore, according to production-based models, the language production system drives the predictive ability. The present study investigates whether bilingual toddlers predict upcoming nouns based on verb meanings in both their languages, and whether this ability is associated with expressive vocabulary. Seventeen Norwegian-English bilingual toddlers (aged 2;5-3;3), dominant in Norwegian, participated in the study. Verb-mediated predictive ability was measured via a visual world paradigm (VWP) experiment, including sentences with semantically constraining and neutral verbs. Expressive vocabulary was measured by the MacArthur-Bates CDI II. The results suggested that the toddler group predicted upcoming noun arguments in both their dominant and non-dominant languages, but were faster in their dominant language. This finding highlights the importance of language dominance for predictive processing. There was no significant relationship between predictive ability and expressive vocabulary in either language.
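As a rough illustration of how verb-mediated prediction is indexed in a visual world paradigm (a simplified sketch with made-up gaze data, not the study's analysis pipeline): anticipatory prediction shows up as a higher proportion of fixations to the target object during the verb window, before the noun is heard.

```python
# Illustrative sketch: proportion of gaze samples on the target object
# within a time window that ends before noun onset.
def target_fixation_proportion(samples, window):
    """samples: list of (time_ms, looked_at_object) gaze samples;
    window: (start_ms, end_ms) covering the verb, before noun onset."""
    start, end = window
    in_win = [obj for t, obj in samples if start <= t < end]
    if not in_win:
        return 0.0
    return sum(obj == "target" for obj in in_win) / len(in_win)

# Anticipation predicts more target looks after a constraining verb
# (e.g. "eats" -> cake) than after a neutral verb (e.g. "sees").
constraining = [(100, "target"), (150, "target"), (200, "distractor"), (250, "target")]
neutral      = [(100, "distractor"), (150, "target"), (200, "distractor"), (250, "distractor")]
window = (0, 300)
assert target_fixation_proportion(constraining, window) > target_fixation_proportion(neutral, window)
```

In the real study, such proportions would be computed per trial and language, then compared statistically across verb conditions and against noun-onset timing.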
Affiliation(s)
- Ane Theimann
- Department of Linguistics and Scandinavian Studies, University of Oslo, Oslo, Norway
- Ekaterina Kuzmina
- Center for Multilingualism in Society Across the Lifespan, University of Oslo, Oslo, Norway
- Pernille Hansen
- Center for Multilingualism in Society Across the Lifespan, University of Oslo, Oslo, Norway
- Department of Humanities, Inland Norway University of Applied Sciences, Hamar, Norway
4. Michon M, Boncompte G, López V. Electrophysiological dynamics of visual speech processing and the role of orofacial effectors for cross-modal predictions. Front Hum Neurosci 2020; 14:538619. PMID: 33192386; PMCID: PMC7653187; DOI: 10.3389/fnhum.2020.538619.
Abstract
The human brain generates predictions about future events. During face-to-face conversations, visemic information is used to predict upcoming auditory input. Recent studies suggest that the speech motor system plays a role in these cross-modal predictions; however, such studies usually employ audio-visual paradigms. Here we tested whether speech sounds can be predicted on the basis of visemic information alone, and to what extent interfering with orofacial articulatory effectors can affect these predictions. We recorded EEG and employed the N400 as an index of such predictions. Our results show that the N400's amplitude was strongly modulated by visemic salience, consistent with cross-modal speech predictions. Additionally, the N400 ceased to be evoked when syllables' visemes were presented backwards, suggesting that predictions occur only when the observed viseme matches an existing articuleme in the observer's speech motor system (i.e., the articulatory neural sequence required to produce a particular phoneme/viseme). Importantly, we found that interfering with the motor articulatory system strongly disrupted cross-modal predictions. We also observed a late P1000 that was evoked only for syllable-related visual stimuli, but whose amplitude was not modulated by interference with the motor system. The present study provides further evidence of the importance of the speech production system for pre-lexical prediction of speech sounds from visemic information. The implications of these results are discussed in the context of a hypothesized trimodal repertoire for speech, in which speech perception is conceived as a highly interactive process that involves not only the ears but also the eyes, lips and tongue.
Affiliation(s)
- Maëva Michon
- Laboratorio de Neurociencia Cognitiva y Evolutiva, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
- Laboratorio de Neurociencia Cognitiva y Social, Facultad de Psicología, Universidad Diego Portales, Santiago, Chile
- Gonzalo Boncompte
- Laboratorio de Neurodinámicas de la Cognición, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
- Vladimir López
- Laboratorio de Psicología Experimental, Escuela de Psicología, Pontificia Universidad Católica de Chile, Santiago, Chile
5. Mukherjee S, Badino L, Hilt PM, Tomassini A, Inuggi A, Fadiga L, Nguyen N, D'Ausilio A. The neural oscillatory markers of phonetic convergence during verbal interaction. Hum Brain Mapp 2018; 40:187-201. PMID: 30240542; DOI: 10.1002/hbm.24364.
Abstract
During a conversation, the neural processes supporting speech production and perception overlap in time and, depending on context, expectations and the dynamics of the interaction, are continuously modulated in real time. Recently, growing interest in the neural dynamics underlying interactive tasks, particularly in the language domain, has mainly tackled the temporal aspects of turn-taking in dialogue. Beyond temporal coordination, an under-investigated phenomenon is the implicit convergence of speakers toward a shared phonetic space. Here, we used dual electroencephalography (dual-EEG) to record brain signals from subjects involved in a relatively constrained interactive task in which they took turns chaining words according to a phonetic rhyming rule. We quantified participants' initial phonetic fingerprints and tracked their phonetic convergence during the interaction via a robust and automatic speaker-verification technique. Results show that phonetic convergence is associated with left frontal alpha/low-beta desynchronization during speech preparation and with high-beta suppression before and during listening to speech in right centro-parietal and left frontal sectors, respectively. This work provides evidence that mutual adaptation of phonetic targets correlates with specific alpha and beta oscillatory dynamics, which may index the coordination of the "when" as well as the "how" of speech interaction, reinforcing the suggestion that perception and production processes are highly interdependent and co-constructed during a conversation.
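The speaker-verification technique is specific to the study, but the underlying idea, per-turn acoustic features drifting toward the partner's baseline fingerprint, can be sketched as follows (hypothetical feature vectors; cosine similarity and a least-squares trend stand in for the authors' method):

```python
# Illustrative sketch of tracking phonetic convergence (not the authors'
# speaker-verification pipeline). Each turn is summarized as an acoustic
# feature vector; convergence appears as rising similarity to the
# partner's baseline "phonetic fingerprint" over the interaction.
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two non-zero feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def convergence_trend(turn_features, partner_fingerprint):
    """Least-squares slope of per-turn similarity to the partner;
    a positive slope indicates convergence, negative indicates divergence."""
    sims = [cosine(f, partner_fingerprint) for f in turn_features]
    n = len(sims)
    xbar = (n - 1) / 2
    ybar = sum(sims) / n
    num = sum((i - xbar) * (s - ybar) for i, s in enumerate(sims))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

# A speaker whose features move toward the partner's fingerprint over
# four turns shows a positive trend.
partner = [1.0, 0.0]
turns = [[0.0, 1.0], [0.5, 1.0], [1.0, 1.0], [1.0, 0.5]]
assert convergence_trend(turns, partner) > 0
```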
Affiliation(s)
- Sankar Mukherjee
- Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy
- Leonardo Badino
- Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy
- Pauline M Hilt
- Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy
- Alice Tomassini
- Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy
- Alberto Inuggi
- Center for Human Technologies, Istituto Italiano di Tecnologia, Genoa, Italy
- Luciano Fadiga
- Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy
- Section of Human Physiology, University of Ferrara, Ferrara, Italy
- Noël Nguyen
- CNRS, LPL, Aix Marseille University, Aix-en-Provence, France
- Alessandro D'Ausilio
- Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy
- Section of Human Physiology, University of Ferrara, Ferrara, Italy
6. Variation in the speech signal as a window into the cognitive architecture of language production. Psychon Bull Rev 2018; 25:1973-2004. PMID: 29383571; DOI: 10.3758/s13423-017-1423-4.
Abstract
The pronunciation of words is highly variable. This variation provides crucial information about the cognitive architecture of the language production system. This review summarizes key empirical findings about variation phenomena, integrating corpus, acoustic, articulatory, and chronometric data from phonetic and psycholinguistic studies. It examines how these data constrain our current understanding of word production processes and highlights major challenges and open issues that should be addressed in future research.
7. Ruch H, Zürcher Y, Burkart JM. The function and mechanism of vocal accommodation in humans and other primates. Biol Rev Camb Philos Soc 2017; 93:996-1013. PMID: 29111610; DOI: 10.1111/brv.12382.
Abstract
The study of non-human animals, in particular primates, can provide essential insights into language evolution. A critical element of language is vocal production learning, i.e. learning how to produce calls. In contrast to other lineages such as songbirds, vocal production learning of completely new signals is strikingly rare in non-human primates. An increasing body of research, however, suggests that various species of non-human primates engage in vocal accommodation and adjust the structure of their calls in response to environmental noise or conspecific vocalizations. To date it is unclear what role vocal accommodation may have played in language evolution, in particular because the term covers a variety of heterogeneous phenomena that are potentially achieved by different mechanisms. In contrast to non-human primates, accommodation research in humans has a long tradition in psychology and linguistics. Based on theoretical models from these research traditions, we provide a new framework that allows instances of accommodation to be compared across species and studied according to their underlying mechanism and ultimate biological function. We found that, at the mechanistic level, many cases of accommodation can be explained by an automatic perception-production link, but some instances arguably require higher levels of vocal control. Functionally, both human and non-human primates use social accommodation to signal social closeness or social distance to a partner or social group. Together, this indicates that not only some vocal control, but also the communicative function of vocal accommodation to signal social closeness and distance, must have evolved prior to the emergence of language, rather than being the result of it. Vocal accommodation as found in other primates has thus endowed our ancestors with pre-adaptations that may have paved the way for language evolution.
Affiliation(s)
- Hanna Ruch
- University Research Priority Program Language and Space, University of Zurich, 8032 Zürich, Switzerland
- Yvonne Zürcher
- Department of Anthropology, University of Zurich, 8057 Zürich, Switzerland
- Judith M Burkart
- Department of Anthropology, University of Zurich, 8057 Zürich, Switzerland
8. Skipper JI, Devlin JT, Lametti DR. The hearing ear is always found close to the speaking tongue: review of the role of the motor system in speech perception. Brain Lang 2017; 164:77-105. PMID: 27821280; DOI: 10.1016/j.bandl.2016.10.004.
Abstract
Does "the motor system" play "a role" in speech perception? If so, where, how, and when? We conducted a systematic review that addresses these questions using both qualitative and quantitative methods. The qualitative review of behavioural, computational modelling, non-human animal, brain damage/disorder, electrical stimulation/recording, and neuroimaging research suggests that distributed brain regions involved in producing speech play specific, dynamic, and contextually determined roles in speech perception. The quantitative review employed region- and network-based neuroimaging meta-analyses and a novel text-mining method to describe the relative contributions of nodes in distributed brain networks. Supporting the qualitative review, results show a specific functional correspondence between regions involved in non-linguistic movement of the articulators, covertly and overtly producing speech, and the perception of both nonword and word sounds. This distributed set of cortical and subcortical speech production regions is ubiquitously active and forms multiple networks whose topologies dynamically change with listening context. Results are inconsistent with motor-only and acoustic-only models of speech perception, and with classical and contemporary dual-stream models of the organization of language and the brain. Instead, results are more consistent with complex network models in which multiple speech-production-related networks and subnetworks dynamically self-organize to constrain interpretation of indeterminate acoustic patterns as listening context requires.
Affiliation(s)
- Jeremy I Skipper
- Experimental Psychology, University College London, United Kingdom
- Joseph T Devlin
- Experimental Psychology, University College London, United Kingdom
- Daniel R Lametti
- Experimental Psychology, University College London, United Kingdom
- Department of Experimental Psychology, University of Oxford, United Kingdom
9. Nuttall HE, Kennedy-Higgins D, Devlin JT, Adank P. The role of hearing ability and speech distortion in the facilitation of articulatory motor cortex. Neuropsychologia 2016; 94:13-22. PMID: 27884757; DOI: 10.1016/j.neuropsychologia.2016.11.016.
Abstract
Excitability of articulatory motor cortex is facilitated when listening to speech in challenging conditions. Beyond this, however, we have little knowledge of what listener-specific and speech-specific factors engage articulatory facilitation during speech perception. For example, it is unknown whether speech motor activity is independent of, or dependent on, the form of distortion in the speech signal. It is also unknown whether speech motor facilitation is moderated by hearing ability. We investigated these questions in two experiments. We applied transcranial magnetic stimulation (TMS) to the lip area of primary motor cortex (M1) in young, normally hearing participants to test whether lip M1 is sensitive to the quality (Experiment 1) or quantity (Experiment 2) of distortion in the speech signal, and whether lip M1 facilitation relates to the hearing ability of the listener. Experiment 1 found that lip motor evoked potentials (MEPs) were larger during perception of motor-distorted speech that had been produced using a tongue depressor, and during perception of speech presented in background noise, relative to natural speech in quiet. Experiment 2 did not find evidence of motor system facilitation when speech was presented in noise at signal-to-noise ratios where speech intelligibility was at 50% or 75%, noise levels significantly less severe than those used in Experiment 1. However, there was a significant interaction between noise condition and hearing ability, indicating that when speech stimuli were correctly classified at 50%, speech motor facilitation was observed in individuals with better hearing, whereas individuals with relatively worse but still normal hearing showed more activation during perception of clear speech. These findings indicate that the motor system may be sensitive to the quantity, but not the quality, of degradation in the speech signal. The data support the notion that motor cortex complements auditory cortex during speech perception, and point to a role for the motor cortex in compensating for differences in hearing ability.
Affiliation(s)
- Helen E Nuttall
- Department of Psychology, Lancaster University, Lancaster LA1 4YW, UK
- Department of Speech, Hearing and Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London WC1N 1PF, UK
- Daniel Kennedy-Higgins
- Department of Speech, Hearing and Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London WC1N 1PF, UK
- Joseph T Devlin
- Department of Experimental Psychology, University College London, 26 Bedford Way, London WC1H 0AP, UK
- Patti Adank
- Department of Speech, Hearing and Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London WC1N 1PF, UK
11. Martin AE. Language processing as cue integration: grounding the psychology of language in perception and neurophysiology. Front Psychol 2016; 7:120. PMID: 26909051; PMCID: PMC4754405; DOI: 10.3389/fpsyg.2016.00120.
Abstract
I argue that cue integration, a psychophysiological mechanism from vision and multisensory perception, offers a computational linking hypothesis between psycholinguistic theory and neurobiological models of language. I propose that this mechanism, which incorporates probabilistic estimates of a cue's reliability, might function in language processing from the perception of a phoneme to the comprehension of a phrase structure. I briefly consider the implications of the cue integration hypothesis for an integrated theory of language that includes acquisition, production, dialogue and bilingualism, while grounding the hypothesis in canonical neural computation.
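The cue-integration mechanism Martin imports from psychophysics has a standard closed form for independent Gaussian cues: each cue's estimate is weighted by its reliability (inverse variance), and the fused estimate is more reliable than any single cue. A minimal sketch of that textbook rule (my gloss of the imported mechanism, not the paper's own model):

```python
# Illustrative sketch of reliability-weighted cue integration
# (maximum-likelihood fusion of independent Gaussian cues).
def integrate_cues(estimates, variances):
    """Combine cue estimates weighted by inverse variance; returns the
    fused estimate and its (lower) variance."""
    reliabilities = [1.0 / v for v in variances]
    total = sum(reliabilities)
    combined = sum(r * e for r, e in zip(reliabilities, estimates)) / total
    combined_var = 1.0 / total
    return combined, combined_var

# Two cues to the same quantity: a reliable one (variance 1) and a
# noisy one (variance 4).
est, var = integrate_cues([10.0, 14.0], [1.0, 4.0])
# The fused estimate (10.8) lies closer to the reliable cue, and its
# variance (0.8) is below either cue's variance.
```

In Martin's proposal, the same weighting logic would apply at each level of linguistic analysis, with the reliability of a cue (acoustic, lexical, syntactic) estimated probabilistically rather than given.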
Affiliation(s)
- Andrea E. Martin
- Department of Psychology, School of Philosophy, Psychology and Language Sciences, University of Edinburgh, Edinburgh, UK