1. Zhang Y, Ding R, Frassinelli D, Tuomainen J, Klavinskis-Whiting S, Vigliocco G. The role of multimodal cues in second language comprehension. Sci Rep 2023; 13:20824. PMID: 38012193; PMCID: PMC10682458; DOI: 10.1038/s41598-023-47643-2.
Abstract
In face-to-face communication, multimodal cues such as prosody, gestures, and mouth movements can play a crucial role in language processing. While several studies have addressed how these cues contribute to native (L1) language processing, their impact on non-native (L2) comprehension is largely unknown. Comprehension of naturalistic language by L2 comprehenders may be supported by the presence of (at least some) multimodal cues, as these provide correlated and convergent information that may aid linguistic processing. However, it is also the case that multimodal cues may be less used by L2 comprehenders because linguistic processing is more demanding than for L1 comprehenders, leaving more limited resources for the processing of multimodal cues. In this study, we investigated how L2 comprehenders use multimodal cues in naturalistic stimuli (while participants watched videos of a speaker), as measured by electrophysiological responses (N400) to words, and whether there are differences between L1 and L2 comprehenders. We found that prosody, gestures, and informative mouth movements each reduced the N400 in L2, indexing easier comprehension. Nevertheless, L2 participants showed weaker effects for each cue compared to L1 comprehenders, with the exception of meaningful gestures and informative mouth movements. These results show that L2 comprehenders focus on specific multimodal cues - meaningful gestures that support meaningful interpretation and mouth movements that enhance the acoustic signal - while using multimodal cues to a lesser extent than L1 comprehenders overall.
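As a rough illustration of the kind of word-locked N400 measurement such naturalistic EEG studies rely on, the sketch below epochs continuous EEG around word onsets and averages a conventional 300-500 ms centro-parietal window with MNE-Python. The file name, event construction, channel picks, and window are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of word-locked N400 extraction with MNE-Python.
# File name, word onsets, channel picks, and the 300-500 ms window
# are illustrative assumptions, not taken from Zhang et al. (2023).
import mne
import numpy as np

raw = mne.io.read_raw_fif("sub-01_task-video_eeg.fif", preload=True)  # hypothetical file
raw.filter(l_freq=0.1, h_freq=30.0)  # typical band-pass for ERP work

# Suppose word onsets (in seconds) come from annotations of the video stimuli.
word_onsets = np.array([12.4, 13.1, 14.0])  # placeholder values
events = np.column_stack([
    (word_onsets * raw.info["sfreq"]).astype(int),
    np.zeros(len(word_onsets), dtype=int),
    np.ones(len(word_onsets), dtype=int),  # event code 1 = word onset
])

epochs = mne.Epochs(raw, events, event_id={"word": 1},
                    tmin=-0.2, tmax=0.8, baseline=(-0.2, 0.0),
                    preload=True)

# Mean amplitude in a conventional N400 window over centro-parietal sites.
n400 = epochs.copy().pick(["Cz", "CPz", "Pz"]).crop(0.3, 0.5)
n400_amp = n400.get_data().mean(axis=(1, 2))  # one value per word
```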
Affiliation(s)
- Ye Zhang, Experimental Psychology, University College London, London, UK
- Rong Ding, Language and Computation in Neural Systems, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Diego Frassinelli, Department of Linguistics, University of Konstanz, Konstanz, Germany
- Jyrki Tuomainen, Speech, Hearing and Phonetic Sciences, University College London, London, UK
2. Asalıoğlu EN, Göksun T. The role of hand gestures in emotion communication: Do type and size of gestures matter? Psychol Res 2023; 87:1880-1898. PMID: 36436110; DOI: 10.1007/s00426-022-01774-9.
Abstract
We communicate emotions in a multimodal way, yet non-verbal emotion communication is a relatively understudied area of research. In three experiments, we investigated the role of gesture characteristics (e.g., type, size in space) in individuals' processing of emotional content. In Experiment 1, participants rated the emotional intensity of emotional narratives in video clips that contained either iconic or beat gestures. Participants in the iconic gesture condition rated the emotional intensity higher than participants in the beat gesture condition. In Experiment 2, gesture size and its interaction with gesture type were investigated in a within-subjects design. Participants again rated the emotional intensity of emotional narratives in the video clips. Although individuals overall rated narrow gestures as more emotionally intense than wider gestures, no effects of gesture type, or of the interaction between gesture size and type, were found. Experiment 3 was conducted to check whether the findings of Experiment 2 were due to participants viewing gestures in all video clips. We compared the gesture and no-gesture (i.e., speech-only) conditions and found no difference between them in emotional intensity ratings. However, we could not replicate the gesture-size findings of Experiment 2. Overall, these findings underscore the importance of examining the role of gesture in emotional contexts and suggest that gesture characteristics such as size should be considered in nonverbal communication.
Affiliation(s)
- Esma Nur Asalıoğlu, Department of Psychology, Koç University, Rumelifeneri Yolu, Sariyer, 34450, Istanbul, Turkey
- Tilbe Göksun, Department of Psychology, Koç University, Rumelifeneri Yolu, Sariyer, 34450, Istanbul, Turkey
3. Fares M, Pelachaud C, Obin N. Zero-shot style transfer for gesture animation driven by text and speech using adversarial disentanglement of multimodal style encoding. Front Artif Intell 2023; 6:1142997. PMID: 37377638; PMCID: PMC10291316; DOI: 10.3389/frai.2023.1142997.
Abstract
Modeling virtual agents with behavior style is one factor in personalizing human-agent interaction. We propose an efficient yet effective machine learning approach to synthesize gestures driven by prosodic features and text in the style of different speakers, including those unseen during training. Our model performs zero-shot multimodal style transfer driven by multimodal data from the PATS database, which contains videos of various speakers. We view style as pervasive: while speaking, it colors the expressivity of communicative behaviors, while speech content is carried by multimodal signals and text. This disentanglement of content and style allows us to directly infer the style embedding even of a speaker whose data are not part of the training phase, without requiring any further training or fine-tuning. The first goal of our model is to generate the gestures of a source speaker based on the content of two input modalities: Mel spectrogram and text semantics. The second goal is to condition the source speaker's predicted gestures on the multimodal behavior-style embedding of a target speaker. The third goal is to allow zero-shot style transfer for speakers unseen during training without re-training the model. Our system consists of two main components: (1) a speaker style encoder network that learns to generate a fixed-dimensional speaker style embedding from a target speaker's multimodal data (mel spectrogram, pose, and text) and (2) a sequence-to-sequence synthesis network that synthesizes gestures based on the content of the input modalities (text and mel spectrogram) of a source speaker, conditioned on the speaker style embedding. We show that our model is able to synthesize gestures of a source speaker given the two input modalities and to transfer the knowledge of target-speaker style variability learned by the speaker style encoder to the gesture generation task in a zero-shot setup, indicating that the model has learned a high-quality speaker representation. We conduct objective and subjective evaluations to validate our approach and compare it with baselines.
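The two-component design the abstract describes (a multimodal speaker-style encoder plus a sequence-to-sequence gesture generator conditioned on the style embedding) can be sketched schematically in PyTorch. All dimensions, layer types, and fusion choices below are assumptions for illustration, not the authors' implementation.

```python
# Schematic PyTorch sketch of the two-component architecture described in
# the abstract. All dimensions and layer choices are illustrative
# assumptions; the published model differs in detail.
import torch
import torch.nn as nn

class StyleEncoder(nn.Module):
    """Maps target-speaker multimodal input (mel, pose, text) to a fixed-size style embedding."""
    def __init__(self, mel_dim=80, pose_dim=57, text_dim=300, style_dim=128):
        super().__init__()
        self.rnn = nn.GRU(mel_dim + pose_dim + text_dim, style_dim, batch_first=True)

    def forward(self, mel, pose, text):
        x = torch.cat([mel, pose, text], dim=-1)  # (batch, time, feat)
        _, h = self.rnn(x)
        return h[-1]                              # (batch, style_dim)

class GestureGenerator(nn.Module):
    """Seq2seq content encoder + decoder; the style embedding conditions every step."""
    def __init__(self, mel_dim=80, text_dim=300, style_dim=128, pose_dim=57, hid=256):
        super().__init__()
        self.encoder = nn.GRU(mel_dim + text_dim, hid, batch_first=True)
        self.decoder = nn.GRU(hid + style_dim, hid, batch_first=True)
        self.out = nn.Linear(hid, pose_dim)

    def forward(self, mel, text, style):
        content, _ = self.encoder(torch.cat([mel, text], dim=-1))
        style_seq = style.unsqueeze(1).expand(-1, content.size(1), -1)
        dec, _ = self.decoder(torch.cat([content, style_seq], dim=-1))
        return self.out(dec)                      # predicted pose sequence
```

At inference, a style embedding for an unseen speaker is obtained with a single forward pass through `StyleEncoder`, which is what makes zero-shot transfer possible without re-training.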
Affiliation(s)
- Mireille Fares, The Institute of Intelligent Systems and Robotics (ISIR), Sciences et Technologies de la Musique et du Son (STMS), Sorbonne University, Paris, France
- Catherine Pelachaud, Centre National de la Recherche Scientifique (CNRS), The Institute of Intelligent Systems and Robotics (ISIR), Sorbonne University, Paris, France
- Nicolas Obin, Sciences et Technologies de la Musique et du Son (STMS), Sorbonne University, Paris, France
4. Electrophysiological evidence for the enhancement of gesture-speech integration by linguistic predictability during multimodal discourse comprehension. Cogn Affect Behav Neurosci 2023; 23:340-353. PMID: 36823247; PMCID: PMC9949912; DOI: 10.3758/s13415-023-01074-8.
Abstract
In face-to-face discourse, listeners exploit cues in the input to generate predictions about upcoming words. Moreover, in addition to speech, speakers produce a multitude of visual signals, such as iconic gestures, which listeners readily integrate with incoming words. Previous studies have shown that processing of target words is facilitated when these are embedded in predictable compared to non-predictable discourses and when accompanied by iconic compared to meaningless gestures. In the present study, we investigated the interaction of both factors. We recorded the electroencephalogram of 60 Dutch adults while they were watching videos of an actress producing short discourses. The stimuli consisted of an introductory and a target sentence; the latter contained a target noun. Depending on the preceding discourse, the target noun was either predictable or not. Each target noun was paired with an iconic gesture and a gesture that did not convey meaning. In both conditions, gesture presentation in the video was timed such that the gesture stroke preceded the onset of the spoken target by 130 ms. Our ERP analyses revealed independent facilitatory effects of predictable discourses and iconic gestures. However, the interaction of both factors demonstrated that target processing (i.e., gesture-speech integration) was facilitated most when targets were part of predictable discourses and accompanied by an iconic gesture. Our results thus suggest a strong intertwinement of linguistic predictability and non-verbal gesture processing whereby listeners exploit predictive discourse cues to pre-activate verbal and non-verbal representations of upcoming target words.
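A 2 x 2 within-subjects design like this one (discourse predictability x gesture type) is commonly tested with a repeated-measures ANOVA over per-condition N400 mean amplitudes. The sketch below shows one way to do so in Python; the data file and column names are hypothetical, and this is an approximation of the kind of analysis described, not the authors' pipeline.

```python
# Illustrative 2 x 2 within-subjects test (predictability x gesture type)
# on per-condition N400 mean amplitudes. File and column names are
# hypothetical; AnovaRM expects one row per participant per condition.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Expected columns: subject, predictability {predictable, unpredictable},
# gesture {iconic, meaningless}, n400_amp (microvolts)
df = pd.read_csv("n400_condition_means.csv")  # hypothetical file

res = AnovaRM(df, depvar="n400_amp", subject="subject",
              within=["predictability", "gesture"]).fit()
print(res)  # main effects and the predictability x gesture interaction
```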
5. Perniss P, Vinson D, Vigliocco G. Making Sense of the Hands and Mouth: The Role of "Secondary" Cues to Meaning in British Sign Language and English. Cogn Sci 2021; 44:e12868. PMID: 32619055; DOI: 10.1111/cogs.12868.
Abstract
Successful face-to-face communication involves multiple channels, notably hand gestures in addition to speech for spoken language, and mouth patterns in addition to manual signs for sign language. In four experiments, we assess the extent to which comprehenders of British Sign Language (BSL) and English rely, respectively, on cues from the hands and the mouth in accessing meaning. We created congruent and incongruent combinations of BSL manual signs and mouthings and English speech and gesture by video manipulation and asked participants to carry out a picture-matching task. When participants were instructed to pay attention only to the primary channel, incongruent "secondary" cues still affected performance, showing that these are reliably used for comprehension. When both cues were relevant, the languages diverged: Hand gestures continued to be used in English, but mouth movements did not in BSL. Moreover, non-fluent speakers and signers varied in the use of these cues: Gestures were found to be more important for non-native than native speakers; mouth movements were found to be less important for non-fluent signers. We discuss the results in terms of the information provided by different communicative channels, which combine to provide meaningful information.
Affiliation(s)
- David Vinson, Division of Psychology and Language Sciences, University College London
6. Zhang Y, Frassinelli D, Tuomainen J, Skipper JI, Vigliocco G. More than words: word predictability, prosody, gesture and mouth movements in natural language comprehension. Proc Biol Sci 2021; 288:20210500. PMID: 34284631; PMCID: PMC8292779; DOI: 10.1098/rspb.2021.0500.
Abstract
The ecology of human language is face-to-face interaction, comprising cues such as prosody, co-speech gestures and mouth movements. Yet, this multimodal context is usually stripped away in experiments, as dominant paradigms focus on linguistic processing only. In two studies we presented video clips of an actress producing naturalistic passages to participants while recording their electroencephalogram. We quantified multimodal cues (prosody, gestures, mouth movements) and measured their effect on a well-established electroencephalographic marker of processing load in comprehension (N400). We found that brain responses to words were affected by the informativeness of co-occurring multimodal cues, indicating that comprehension relies on linguistic and non-linguistic cues. Moreover, they were affected by interactions between the multimodal cues, indicating that the impact of each cue dynamically changes based on the informativeness of other cues. The results thus show that multimodal cues are integral to comprehension, and our theories must therefore move beyond a limited focus on speech and linguistic processing.
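Quantifying multimodal cues from naturalistic stimuli typically starts from the raw signals. As one hedged example, word-level prosody can be approximated as mean F0 over each word's time span, as in this librosa sketch; the audio file and word boundaries are placeholders, and this is only one possible operationalization, not the authors' exact pipeline.

```python
# Rough sketch of quantifying a prosodic cue: mean F0 per word span.
# Audio file and word boundaries are placeholder assumptions.
import librosa
import numpy as np

y, sr = librosa.load("stimulus_audio.wav", sr=16000)  # hypothetical file
f0, voiced, _ = librosa.pyin(y, fmin=75, fmax=400, sr=sr)
times = librosa.times_like(f0, sr=sr)

def mean_f0(t_start, t_end):
    """Mean F0 (Hz) over voiced frames within [t_start, t_end]."""
    mask = (times >= t_start) & (times <= t_end) & voiced
    return float(np.nanmean(f0[mask])) if mask.any() else np.nan

word_f0 = mean_f0(12.40, 12.78)  # placeholder word boundaries (seconds)
```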
Affiliation(s)
- Ye Zhang, Experimental Psychology, University College London, London, UK
- Diego Frassinelli, Department of Linguistics, University of Konstanz, Konstanz, Germany
- Jyrki Tuomainen, Experimental Psychology, Speech, Hearing and Phonetic Sciences, University College London, London, UK
7. Momsen J, Gordon J, Wu YC, Coulson S. Event related spectral perturbations of gesture congruity: Visuospatial resources are recruited for multimodal discourse comprehension. Brain Lang 2021; 216:104916. PMID: 33652372; DOI: 10.1016/j.bandl.2021.104916.
Abstract
Here we examine the role of visuospatial working memory (WM) during the comprehension of multimodal discourse with co-speech iconic gestures. EEG was recorded as healthy adults encoded either a sequence of one (low load) or four (high load) dot locations on a grid and rehearsed them until a free recall response was collected later in the trial. During the rehearsal period of the WM task, participants observed videos of a speaker describing objects in which half of the trials included semantically related co-speech gestures (congruent), and the other half included semantically unrelated gestures (incongruent). Discourse processing was indexed by oscillatory EEG activity in the alpha and beta bands during the videos. Across all participants, effects of speech and gesture incongruity were more evident in low load trials than in high load trials. Effects were also modulated by individual differences in visuospatial WM capacity. These data suggest visuospatial WM resources are recruited in the comprehension of multimodal discourse.
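Oscillatory alpha/beta analyses of this kind are usually computed via wavelet-based time-frequency decomposition of the video-period epochs. A minimal MNE-Python sketch follows; file names, baseline, and exact band edges are assumptions (8-12 Hz alpha and 13-30 Hz beta are conventional choices, not necessarily the study's).

```python
# Minimal sketch of Morlet-wavelet time-frequency power in the alpha and
# beta bands with MNE-Python. Epochs file, baseline window, and band
# edges are illustrative assumptions.
import numpy as np
import mne
from mne.time_frequency import tfr_morlet

epochs = mne.read_epochs("video_epochs-epo.fif")  # hypothetical epochs file

freqs = np.arange(4, 31, 1)  # 4-30 Hz
power = tfr_morlet(epochs, freqs=freqs, n_cycles=freqs / 2.0,
                   return_itc=False, average=True)
power.apply_baseline(baseline=(-0.5, 0.0), mode="percent")

# Restrict to conventional band definitions for band-wise comparison.
alpha = power.copy().crop(fmin=8, fmax=12)
beta = power.copy().crop(fmin=13, fmax=30)
```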
Affiliation(s)
- Jacob Momsen, Joint Doctoral Program in Language and Communicative Disorders, San Diego State University and UC San Diego, United States
- Jared Gordon, Cognitive Science Department, UC San Diego, United States
- Ying Choon Wu, Swartz Center for Computational Neuroscience, UC San Diego, United States
- Seana Coulson, Joint Doctoral Program in Language and Communicative Disorders, San Diego State University and UC San Diego; Cognitive Science Department, UC San Diego, United States
8. He Y, Luell S, Muralikrishnan R, Straube B, Nagels A. Gesture's body orientation modulates the N400 for visual sentences primed by gestures. Hum Brain Mapp 2020; 41:4901-4911. PMID: 32808721; PMCID: PMC7643362; DOI: 10.1002/hbm.25166.
Abstract
The body orientation of a gesture conveys social-communicative intention, and may thus influence how gestures are perceived and comprehended together with auditory speech during face-to-face communication. To date, despite the emergence of neuroscientific literature on the role of body orientation in hand-action perception, few studies have directly investigated the role of body orientation in the interaction between gesture and language. To address this research question, we carried out an electroencephalography (EEG) experiment presenting participants (n = 21) with 5-s videos of frontal and lateral communicative hand gestures (e.g., raising a hand), followed by visually presented sentences that were either congruent or incongruent with the gesture (e.g., "the mountain is high/low…"). Participants underwent a semantic probe task, judging whether a target word was related or unrelated to the gesture-sentence event. EEG results suggest that, during the perception phase of hand gestures, while both frontal and lateral gestures elicited a power decrease in both the alpha (8-12 Hz) and the beta (16-24 Hz) bands, lateral versus frontal gestures elicited a reduced power decrease in the beta band, source-located to the medial prefrontal cortex. For sentence comprehension, at the critical word whose meaning was congruent/incongruent with the gesture prime, frontal gestures elicited an N400 effect for gesture-sentence incongruency. More importantly, this incongruency effect was significantly reduced for lateral gestures. These findings suggest that body orientation plays an important role in gesture perception, and that its inferred social-communicative intention may influence gesture-language interaction at the semantic level.
Affiliation(s)
- Yifei He, Department of Psychiatry and Psychotherapy, Philipps-University Marburg, Marburg, Germany
- Svenja Luell, Department of General Linguistics, Johannes Gutenberg University Mainz, Mainz, Germany
- R. Muralikrishnan, Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Benjamin Straube, Department of Psychiatry and Psychotherapy, Philipps-University Marburg, Marburg, Germany
- Arne Nagels, Department of General Linguistics, Johannes Gutenberg University Mainz, Mainz, Germany
9. Rohrer PL, Delais-Roussarie E, Prieto P. Beat Gestures for Comprehension and Recall: Differential Effects of Language Learners and Native Listeners. Front Psychol 2020; 11:575929. PMID: 33192882; PMCID: PMC7605175; DOI: 10.3389/fpsyg.2020.575929.
Abstract
Previous work has shown how native listeners benefit from observing iconic gestures during speech comprehension tasks with both degraded and non-degraded speech. By contrast, effects of gesture in non-native listener populations are less clear, and studies have mostly involved iconic gestures. The current study complements these findings by testing the potential beneficial effects of beat gestures (non-referential gestures often used for information and discourse marking) on language recall and discourse comprehension in native and non-native listeners. Using a within-subject design, 51 French intermediate learners of English participated in a narrative-drawing task. Each participant was assigned 8 videos to watch, in which a native speaker describes the events of a short comic strip. Videos were presented in random order, in four conditions: native listening with frequent, naturally modeled beat gestures; native listening without any gesture; non-native listening with frequent, naturally modeled beat gestures; and non-native listening without any gesture. Participants watched each video twice and then immediately recreated the comic strip through their own drawings. The drawings were then evaluated for discourse comprehension (the ability to convey the main goals of the narrative) and recall (the number of gesturally marked elements in the narration that were included). Results showed that in native listening conditions, beat gestures had no significant effect on either recall or comprehension. In non-native listening conditions, however, beat gestures led to significantly lower comprehension and recall scores. These results suggest that frequent, naturally modeled beat gestures in longer discourses may increase cognitive load for language learners, with negative effects on both memory and language understanding. These findings add to the growing body of literature suggesting that gesture benefits are not a "one-size-fits-all" solution, but may be contingent on factors such as language proficiency and gesture rate, particularly because, when beat gestures are used repeatedly in discourse, they lose their salience as markers of important information.
Affiliation(s)
- Patrick Louis Rohrer, Université de Nantes, UMR 6310, Laboratoire de Linguistique de Nantes (LLING), Nantes, France; Grup d’Estudis de Prosòdia, Department of Translation and Language Sciences, Pompeu Fabra University, Barcelona, Spain
- Pilar Prieto, Grup d’Estudis de Prosòdia, Department of Translation and Language Sciences, Pompeu Fabra University, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats, Barcelona, Spain
10. Gesture style can affect the integration of gestures and speech: the evidence from Chinese ERP research. Neuroreport 2020; 31:885-890. PMID: 32427803; DOI: 10.1097/wnr.0000000000001458.
Abstract
People often produce gestures while speaking, but individuals differ in their gesture styles. The present study used the ambiguity resolution paradigm to explore the influence of two gesture styles on gesture-speech comprehension. The study manipulated gesture style and the meaning type of target words, and recorded N400 amplitude. We found that (1) in the non-grooming condition, a smaller N400 appeared when gesture and semantics were consistent than when they were inconsistent; and (2) in the grooming condition, grooming gestures reduced the effect of iconic gestures on speech understanding: N400 amplitude increased only when a dominant-meaning gesture was paired with a subordinate-meaning target word. These results suggest that speakers' gesture styles affect how well listeners integrate gestures and language during speech comprehension.
11. Alviar C, Dale R, Galati A. Complex Communication Dynamics: Exploring the Structure of an Academic Talk. Cogn Sci 2019; 43:e12718. PMID: 30900289; DOI: 10.1111/cogs.12718.
Abstract
Communication is a multimodal phenomenon. The cognitive mechanisms supporting it are still understudied. We explored a natural dataset of academic lectures to determine how communication modalities are used and coordinated during the presentation of complex information. Using automated and semi-automated techniques, we extracted and analyzed, from the videos of 30 speakers, measures capturing the dynamics of their body movement, their slide change rate, and various aspects of their speech (speech rate, articulation rate, fundamental frequency, and intensity). There were consistent but statistically subtle patterns in the use of speech rate, articulation rate, intensity, and body motion across the presentation. Principal component analysis also revealed patterns of system-like covariation among modalities. These findings, although tentative, do suggest that the cognitive system is integrating body, slides, and speech in a coordinated manner during natural language use. Further research is needed to clarify the specific coordination patterns that occur between the different modalities.
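The covariation analysis the abstract mentions can be approximated as a principal component analysis over z-scored per-window measures. The sketch below shows the idea; the CSV layout and column names are hypothetical placeholders, not the authors' data.

```python
# Illustrative PCA over per-window multimodal measures of the kind
# described above. Column names and the CSV file are hypothetical.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# One row per time window per speaker, with columns such as:
# body_motion, slide_change_rate, speech_rate, articulation_rate, f0, intensity
df = pd.read_csv("lecture_measures.csv")
features = ["body_motion", "slide_change_rate", "speech_rate",
            "articulation_rate", "f0", "intensity"]

X = StandardScaler().fit_transform(df[features])  # z-score each measure
pca = PCA(n_components=3).fit(X)

print(pca.explained_variance_ratio_)  # covariation captured per component
print(pd.DataFrame(pca.components_, columns=features))  # loadings per modality
```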
Affiliation(s)
- Camila Alviar, Cognitive and Information Sciences, University of California, Merced; Department of Communication, University of California, Los Angeles
- Rick Dale, Cognitive and Information Sciences, University of California, Merced; Department of Communication, University of California, Los Angeles
- Alexia Galati, Cognitive and Information Sciences, University of California, Merced; Department of Communication, University of California, Los Angeles; Department of Psychology, University of Cyprus
12. Regel S, Gunter TC. Don't Get Me Wrong: ERP Evidence from Cueing Communicative Intentions. Front Psychol 2017; 8:1465. PMID: 28955258; PMCID: PMC5600996; DOI: 10.3389/fpsyg.2017.01465.
Abstract
How can one make sure that one's utterances are understood as intended when not facing each other? To convey communicative intentions in digital communication, emoticons and pragmatic cues are frequently used. Such cueing becomes even more crucial for implied interpretations (e.g., irony) that cannot be understood literally but require extra information. Sentences such as ‘That’s fantastic’ may achieve either a literal or an ironic meaning depending on the contextual constraints. In two experiments using event-related brain potentials (ERPs), we examined the effects of cueing communicative intentions (i.e., by means of quotation marks) on ironic and literal language comprehension. An impact of cueing on language processing was seen as early as 200 ms post-stimulus onset, with the emergence of a P300 preceding a sustained positivity for cued irony relative to literal language, while uncued irony elicited a P200-P600 pattern. In the presence of additional information about ironic intentions, pragmatic reanalysis allowing inferences at the message level may have occurred immediately. Moreover, an examination of the manner of cueing (i.e., ambiguous vs. unambiguous cueing) showed that this type of information about communicative intentions was effective only when the cues were unambiguous, i.e., matched pragmatic conventions. The findings suggest that cueing communicative intentions may immediately affect language comprehension, albeit depending on the pragmatic conventions of the cues’ usage.
Affiliation(s)
- Stefanie Regel, Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Department of Neurocognitive Psychology, Humboldt University of Berlin, Berlin, Germany
- Thomas C Gunter, Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany