1. Gu B, Sun X, Beltrán D, de Vega M. Faces of different socio-cultural identities impact emotional meaning learning for L2 words. Sci Rep 2025;15:616. PMID: 39753658; PMCID: PMC11699134; DOI: 10.1038/s41598-024-84347-7.
Abstract
This study investigated how exposure to Caucasian and Chinese faces influences native Mandarin-Chinese speakers' learning of emotional meanings for English L2 words. Participants were presented with English pseudowords repeatedly paired with either Caucasian faces or Chinese faces showing emotions of disgust, sadness, or neutrality as a control baseline. Participants' learning was evaluated through both within-modality (i.e., testing participants with new sets of faces) and cross-modality (i.e., testing participants with sentences expressing the learned emotions) generalization tests. When matching newly learned L2 words with new faces, participants from both groups were more accurate under the neutral condition than under the sad condition. The advantage of neutrality extended to sentences, as participants matched newly learned L2 words with neutral sentences more accurately than with both disgusting and sad ones. Differences between the two groups were also found in the cross-modality generalization test, in which the Caucasian-face Group outperformed the Chinese-face Group in accuracy on sad trials. However, the Chinese-face Group was more accurate on neutral trials in the same test. We thus conclude that faces of diverse socio-cultural identities exert different impacts on emotional meaning learning for L2 words.
Affiliation(s)
- Beixian Gu: School of Foreign Languages, Institute for Language and Cognition, Dalian University of Technology, Dalian, China.
- Xiaobing Sun: National Research Centre for Foreign Language Education, Beijing Foreign Studies University, Beijing, China.
- David Beltrán: Psychology Department, Universidad Nacional de Educación a Distancia (UNED), Madrid, Spain; Instituto Universitario de Neurociencia (IUNE), Universidad de La Laguna, La Laguna, Spain.
- Manuel de Vega: Instituto Universitario de Neurociencia (IUNE), Universidad de La Laguna, La Laguna, Spain.
2. Becker C, Conduit R, Chouinard PA, Laycock R. EEG correlates of static and dynamic face perception: The role of naturalistic motion. Neuropsychologia 2024;205:108986. PMID: 39218391; DOI: 10.1016/j.neuropsychologia.2024.108986.
Abstract
Much of our understanding of how the brain processes dynamic faces comes from research that compares static photographs to dynamic morphs, which exhibit simplified, computer-generated motion. By comparing static, video-recorded, and dynamically morphed expressions, we aim to identify the neural correlates of naturalistic facial dynamism, using time-domain and time-frequency analysis. Dynamic morphs were made from the neutral and peak frames of video-recorded transitions of happy and fearful expressions, which retained expression change but removed asynchronous and non-linear features of naturalistic facial motion. We found that dynamic morphs elicited larger N400 amplitudes and smaller LPP amplitudes than the other stimulus types. Video recordings elicited larger LPP amplitudes and greater frontal delta activity than the other stimuli. Thematic analysis of participant interviews using a large language model revealed that participants found it difficult to assess the genuineness of morphed expressions, and easier to assess the genuineness of happy compared to fearful expressions. Our findings suggest that animating real faces with artificial motion may violate expectations (N400) and reduce the social salience (LPP) of dynamic morphs. Results also suggest that delta oscillations in the frontal region may be involved in the perception of naturalistic facial motion in happy and fearful expressions. Overall, our findings highlight the sensitivity of the neural mechanisms required for face perception to subtle changes in facial motion characteristics, which has important implications for neuroimaging research using faces with simplified motion.
Affiliation(s)
- Casey Becker: RMIT University, School of Health & Biomedical Sciences, STEM College, 225-254 Plenty Rd, Bundoora, Victoria, 3083, Australia.
- Russell Conduit: RMIT University, School of Health & Biomedical Sciences, STEM College, 225-254 Plenty Rd, Bundoora, Victoria, 3083, Australia.
- Philippe A Chouinard: La Trobe University, Department of Psychology, Counselling, & Therapy, 75 Kingsbury Drive, Bundoora, Victoria, 3086, Australia.
- Robin Laycock: RMIT University, School of Health & Biomedical Sciences, STEM College, 225-254 Plenty Rd, Bundoora, Victoria, 3083, Australia.
3. Jiménez-Ortega L, Casado-Palacios M, Rubianes M, Martínez-Mejias M, Casado P, Fondevila S, Hernández-Gutiérrez D, Muñoz F, Sánchez-García J, Martín-Loeches M. The bigger your pupils, the better my comprehension: an ERP study of how pupil size and gaze of the speaker affect syntactic processing. Soc Cogn Affect Neurosci 2024;19:nsae047. PMID: 38918898; PMCID: PMC11246839; DOI: 10.1093/scan/nsae047.
Abstract
Gaze direction and pupil dilation play a critical role in communication and social interaction due to their ability to redirect and capture our attention and their relevance for emotional information. The present study aimed to explore whether the pupil size and gaze direction of the speaker affect language comprehension. Participants listened to sentences that could be correct or contain a syntactic anomaly, while the static face of a speaker was manipulated in terms of gaze direction (direct, averted) and pupil size (mydriasis, miosis). Left anterior negativity (LAN) and P600 linguistic event-related potential components were observed in response to syntactic anomalies across all conditions. The speaker's gaze did not impact syntactic comprehension. However, the amplitude of the LAN component was larger for the mydriasis (dilated pupil) than for the miosis (constricted pupil) condition. Larger pupils are generally associated with care, trust, interest, and attention, which might facilitate syntactic processing at early automatic stages. This result also supports the permeable and context-dependent nature of syntax. Previous studies likewise support the automatic (fast and efficient) nature of syntax, which, combined with its permeability to relevant sources of communicative information such as pupil size and emotion, is highly adaptive for language comprehension and social interaction.
Affiliation(s)
- Laura Jiménez-Ortega: Cognitive Neuroscience Section, UCM-ISCIII Center for Human Evolution and Behavior, Madrid 28029, Spain; Department of Psychobiology & Behavioral Sciences Methods, Complutense University of Madrid, Madrid 28040, Spain.
- María Casado-Palacios: Cognitive Neuroscience Section, UCM-ISCIII Center for Human Evolution and Behavior, Madrid 28029, Spain; DIBRIS, University of Genoa, Genoa 16145, Italy; UVIP – Unit for Visually Impaired People, Italian Institute of Technology, Genova 16164, Italy.
- Miguel Rubianes: Cognitive Neuroscience Section, UCM-ISCIII Center for Human Evolution and Behavior, Madrid 28029, Spain; Department of Psychobiology & Behavioral Sciences Methods, Complutense University of Madrid, Madrid 28040, Spain; Facultad de Ciencias de la Salud, UNIE Universidad, Madrid 28015, Spain.
- Mario Martínez-Mejias: Cognitive Neuroscience Section, UCM-ISCIII Center for Human Evolution and Behavior, Madrid 28029, Spain.
- Pilar Casado: Cognitive Neuroscience Section, UCM-ISCIII Center for Human Evolution and Behavior, Madrid 28029, Spain; Department of Psychobiology & Behavioral Sciences Methods, Complutense University of Madrid, Madrid 28040, Spain.
- Sabela Fondevila: Cognitive Neuroscience Section, UCM-ISCIII Center for Human Evolution and Behavior, Madrid 28029, Spain; Department of Psychobiology & Behavioral Sciences Methods, Complutense University of Madrid, Madrid 28040, Spain.
- David Hernández-Gutiérrez: Cognitive Neuroscience Section, UCM-ISCIII Center for Human Evolution and Behavior, Madrid 28029, Spain; BCBL, Basque Center on Cognition, Brain and Language, Donostia/San Sebastián 20009, Spain.
- Francisco Muñoz: Cognitive Neuroscience Section, UCM-ISCIII Center for Human Evolution and Behavior, Madrid 28029, Spain; Department of Psychobiology & Behavioral Sciences Methods, Complutense University of Madrid, Madrid 28040, Spain.
- José Sánchez-García: Cognitive Neuroscience Section, UCM-ISCIII Center for Human Evolution and Behavior, Madrid 28029, Spain; Facultad de Psicología, Universidad Internacional de la Rioja UNIR, Oviedo, Asturias 33003, Spain.
- Manuel Martín-Loeches: Cognitive Neuroscience Section, UCM-ISCIII Center for Human Evolution and Behavior, Madrid 28029, Spain; Department of Psychobiology & Behavioral Sciences Methods, Complutense University of Madrid, Madrid 28040, Spain.
4. Gastaldon S, Bonfiglio N, Vespignani F, Peressotti F. Predictive language processing: integrating comprehension and production, and what atypical populations can tell us. Front Psychol 2024;15:1369177. PMID: 38836235; PMCID: PMC11148270; DOI: 10.3389/fpsyg.2024.1369177.
Abstract
Predictive processing, a crucial aspect of human cognition, is also relevant for language comprehension. In everyday situations, we exploit various sources of information to anticipate, and therefore facilitate, the processing of upcoming linguistic input. In the literature, a variety of models aim to account for this ability. One group of models proposes a strict relationship between prediction and language production mechanisms. In this review, we first briefly introduce the concept of predictive processing during language comprehension. Second, we focus on models that attribute a prominent role to language production and sensorimotor processing in language prediction ("prediction-by-production" models). In this context, we summarize studies that investigated the role of speech production and auditory perception in language comprehension/prediction tasks in healthy, typical participants. We then provide an overview of the limited existing literature on specific atypical/clinical populations that may represent a suitable testing ground for such models, i.e., populations with impaired speech production and auditory perception mechanisms. Finally, we call for wider and more in-depth testing of prediction-by-production accounts, and for the involvement of atypical populations both in model testing and as targets for possible novel speech/language treatment approaches.
Affiliation(s)
- Simone Gastaldon: Dipartimento di Psicologia dello Sviluppo e della Socializzazione, University of Padua, Padua, Italy; Padova Neuroscience Center, University of Padua, Padua, Italy.
- Noemi Bonfiglio: Dipartimento di Psicologia dello Sviluppo e della Socializzazione, University of Padua, Padua, Italy; BCBL-Basque Center on Cognition, Brain and Language, Donostia-San Sebastián, Spain.
- Francesco Vespignani: Dipartimento di Psicologia dello Sviluppo e della Socializzazione, University of Padua, Padua, Italy; Centro Interdipartimentale di Ricerca "I-APPROVE-International Auditory Processing Project in Venice", University of Padua, Padua, Italy.
- Francesca Peressotti: Dipartimento di Psicologia dello Sviluppo e della Socializzazione, University of Padua, Padua, Italy; Padova Neuroscience Center, University of Padua, Padua, Italy; Centro Interdipartimentale di Ricerca "I-APPROVE-International Auditory Processing Project in Venice", University of Padua, Padua, Italy.
5. Galazka MA, Hadjikhani N, Sundqvist M, Åsberg Johnels J. Facial speech processing in children with and without dyslexia. Ann Dyslexia 2021;71:501-524. PMID: 34115279; PMCID: PMC8458188; DOI: 10.1007/s11881-021-00231-3.
Abstract
What role does the presence of facial speech play for children with dyslexia? The current literature offers two distinct claims. One claim states that children with dyslexia make less use of visual information from the mouth during speech processing due to a deficit in the recruitment of audiovisual areas. An opposing claim suggests that children with dyslexia in fact rely on such information to compensate for auditory/phonological impairments. The current paper aims to directly test these contrasting hypotheses (here referred to as "mouth insensitivity" versus "mouth reliance") in school-age children with and without dyslexia, matched on age and listening comprehension. Using eye tracking, in Study 1 we examined how children look at the mouth across conditions varying in speech processing demands. The results did not indicate significant group differences in looking at the mouth. However, correlation analyses suggest potentially important distinctions within the dyslexia group: those children with dyslexia who are better readers attended more to the mouth when presented with a person's face in a phonologically demanding condition. In Study 2, we examined whether the presence of facial speech cues is functionally beneficial when a child is encoding written words. The results indicated a lack of overall group differences on the task, although those with less severe reading problems in the dyslexia group were more accurate when reading words presented with articulatory facial speech cues. Collectively, our results suggest that children with dyslexia differ in their "mouth reliance" versus "mouth insensitivity," a profile that seems to be related to the severity of their reading problems.
Affiliation(s)
- Martyna A Galazka: Gillberg Neuropsychiatry Center, Institute of Neuroscience and Physiology, University of Gothenburg, Gothenburg, Sweden.
- Nouchine Hadjikhani: Gillberg Neuropsychiatry Center, Institute of Neuroscience and Physiology, University of Gothenburg, Gothenburg, Sweden; Harvard Medical School/MGH/MIT, Athinoula A. Martinos Center for Biomedical Imaging, Boston, MA, USA.
- Maria Sundqvist: Department of Education and Special Education, University of Gothenburg, Gothenburg, Sweden.
- Jakob Åsberg Johnels: Gillberg Neuropsychiatry Center, Institute of Neuroscience and Physiology, University of Gothenburg, Gothenburg, Sweden; Section of Speech and Language Pathology, Institute of Neuroscience and Physiology, University of Gothenburg, Gothenburg, Sweden.
6. Zhang Y, Frassinelli D, Tuomainen J, Skipper JI, Vigliocco G. More than words: word predictability, prosody, gesture and mouth movements in natural language comprehension. Proc Biol Sci 2021;288:20210500. PMID: 34284631; PMCID: PMC8292779; DOI: 10.1098/rspb.2021.0500.
Abstract
The ecology of human language is face-to-face interaction, comprising cues such as prosody, co-speech gestures and mouth movements. Yet, the multimodal context is usually stripped away in experiments, as dominant paradigms focus on linguistic processing only. In two studies, we presented participants with video clips of an actress producing naturalistic passages while recording their electroencephalogram. We quantified multimodal cues (prosody, gestures, mouth movements) and measured their effect on a well-established electroencephalographic marker of processing load in comprehension (N400). We found that brain responses to words were affected by the informativeness of co-occurring multimodal cues, indicating that comprehension relies on linguistic and non-linguistic cues. Moreover, they were affected by interactions between the multimodal cues, indicating that the impact of each cue dynamically changes based on the informativeness of other cues. Thus, the results show that multimodal cues are integral to comprehension; hence, our theories must move beyond a limited focus on speech and linguistic processing.
Affiliation(s)
- Ye Zhang: Experimental Psychology, University College London, London, UK
- Diego Frassinelli: Department of Linguistics, University of Konstanz, Konstanz, Germany
- Jyrki Tuomainen: Experimental Psychology, Speech, Hearing and Phonetic Sciences, University College London, London, UK
7. Hernández-Gutiérrez D, Muñoz F, Sánchez-García J, Sommer W, Abdel Rahman R, Casado P, Jiménez-Ortega L, Espuny J, Fondevila S, Martín-Loeches M. Situating language in a minimal social context: how seeing a picture of the speaker's face affects language comprehension. Soc Cogn Affect Neurosci 2021;16:502-511. PMID: 33470410; PMCID: PMC8094999; DOI: 10.1093/scan/nsab009.
Abstract
Natural use of language involves at least two individuals. Some studies have focused on the interaction between senders in communicative situations and how knowledge about the speaker can bias language comprehension. However, the mere effect of a face as a social context on language processing remains unknown. In the present study, we used event-related potentials to investigate the semantic and morphosyntactic processing of speech in the presence of a photographic portrait of the speaker. In Experiment 1, we show that the N400, a component related to semantic comprehension, increased in amplitude when speech was processed within this minimal social context compared to a scrambled-face control condition. Hence, the semantic neural processing of speech is sensitive to the concomitant perception of a picture of the speaker's face, even if irrelevant to the content of the sentences. Moreover, a late posterior negativity effect was found in response to the presentation of the speaker's face compared to control stimuli. In contrast, in Experiment 2, we found that morphosyntactic processing, as reflected in left anterior negativity and P600 effects, is not notably affected by the presence of the speaker's portrait. Overall, the present findings suggest that the mere presence of the speaker's image triggers a minimal communicative context, increasing processing resources for language comprehension at the semantic level.
Affiliation(s)
- David Hernández-Gutiérrez: Cognitive Neuroscience Section, Center UCM-ISCIII for Human Evolution and Behaviour, Madrid 28029, Spain.
- Francisco Muñoz: Cognitive Neuroscience Section, Center UCM-ISCIII for Human Evolution and Behaviour, Madrid 28029, Spain; Department of Psychobiology & Methods for the Behavioural Sciences, Complutense University of Madrid, Madrid 28040, Spain.
- Jose Sánchez-García: Cognitive Neuroscience Section, Center UCM-ISCIII for Human Evolution and Behaviour, Madrid 28029, Spain.
- Werner Sommer: Department of Psychology, Humboldt Universität zu Berlin, Berlin 10117, Germany.
- Rasha Abdel Rahman: Department of Psychology, Humboldt Universität zu Berlin, Berlin 10117, Germany.
- Pilar Casado: Cognitive Neuroscience Section, Center UCM-ISCIII for Human Evolution and Behaviour, Madrid 28029, Spain; Department of Psychobiology & Methods for the Behavioural Sciences, Complutense University of Madrid, Madrid 28040, Spain.
- Laura Jiménez-Ortega: Cognitive Neuroscience Section, Center UCM-ISCIII for Human Evolution and Behaviour, Madrid 28029, Spain; Department of Psychobiology & Methods for the Behavioural Sciences, Complutense University of Madrid, Madrid 28040, Spain.
- Javier Espuny: Cognitive Neuroscience Section, Center UCM-ISCIII for Human Evolution and Behaviour, Madrid 28029, Spain.
- Sabela Fondevila: Cognitive Neuroscience Section, Center UCM-ISCIII for Human Evolution and Behaviour, Madrid 28029, Spain; Department of Psychobiology & Methods for the Behavioural Sciences, Complutense University of Madrid, Madrid 28040, Spain.
- Manuel Martín-Loeches: Cognitive Neuroscience Section, Center UCM-ISCIII for Human Evolution and Behaviour, Madrid 28029, Spain; Department of Psychobiology & Methods for the Behavioural Sciences, Complutense University of Madrid, Madrid 28040, Spain.