1. Exploring empathic engagement in immersive media: An EEG study on mu rhythm suppression in VR. PLoS One 2024; 19:e0303553. PMID: 38758939; PMCID: PMC11101072; DOI: 10.1371/journal.pone.0303553
Abstract
This study investigates the influence of immersive media, particularly virtual reality (VR), on empathic responses compared with traditional television (TV), using electroencephalography (EEG). We used mu rhythm suppression as a measurable neural marker of empathic engagement, as greater suppression generally signifies heightened empathic responses. Our findings show greater mu rhythm suppression in VR conditions than in TV conditions, suggesting that VR may enhance empathic responses. Furthermore, the strength of empathic responses was not confined to the specific actions depicted in the video clips, underscoring the possibility of broader implications. This research contributes to the ongoing discourse on the effects of different media environments on empathic engagement, emphasizing the unique role of immersive technologies such as VR, and it invites further investigation into how such technologies can shape and potentially enhance the empathic experience.
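The mu suppression index used in studies like this one is conventionally the log ratio of mu-band (8-13 Hz) power during observation to power at baseline, with negative values indicating suppression. A minimal sketch with made-up power values (not data from the study):

```python
import numpy as np

def mu_suppression_index(power_obs, power_base):
    """Log-ratio of mu-band power during observation vs. baseline.

    Negative values indicate suppression (desynchronization), the neural
    marker of sensorimotor/empathic engagement used in such EEG studies.
    """
    return np.log(power_obs / power_base)

# Illustrative numbers only (not from the paper): stronger suppression in VR.
baseline = 10.0                 # mu power at rest (arbitrary units)
tv_power, vr_power = 8.0, 6.0   # mu power while viewing TV vs. VR clips

tv_index = mu_suppression_index(tv_power, baseline)
vr_index = mu_suppression_index(vr_power, baseline)
```

A more negative index in the VR condition corresponds to the paper's finding of greater mu suppression under immersive viewing.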
2. Language and gesture neural correlates: A meta-analysis of functional magnetic resonance imaging studies. Int J Lang Commun Disord 2024; 59:902-912. PMID: 37971416; DOI: 10.1111/1460-6984.12987
Abstract
BACKGROUND: Humans often use co-speech gestures to promote effective communication, and attention has been paid to the cortical areas engaged in their processing.
AIMS: To investigate the neural network underpinning the processing of co-speech gestures and to observe whether there is a relationship between the areas involved in language and gesture processing.
METHODS & PROCEDURES: We included studies with neurotypical and/or stroke participants who underwent a bimodal task (processing of co-speech gestures with accompanying speech) and a unimodal task (speech or gesture alone) during a functional magnetic resonance imaging (fMRI) session. After a database search, abstract and full-text screening were conducted. Qualitative and quantitative data were extracted, and a meta-analysis was performed with the software GingerALE 3.0.2, with contrast analyses of uni- and bimodal tasks.
MAIN CONTRIBUTION: The database search produced 1024 records. After the screening process, 27 studies were included in the review, and data from 15 studies were analysed quantitatively through meta-analysis. The meta-analysis found three clusters with significant activation: the left middle frontal gyrus and inferior frontal gyrus, and the bilateral middle occipital gyrus and inferior temporal gyrus.
CONCLUSIONS: There is a close link at the neural level for the semantic processing of auditory and visual information during communication. These findings encourage integrating co-speech gestures into aphasia treatment as a strategy to foster effective communication for people with aphasia.
What is already known on this subject: Gestures are an integral part of human communication, and they may be related at the neural level to speech processing.
What this paper adds to the existing knowledge: During processing of bi- and unimodal communication, areas related to semantic and multimodal processing are activated, suggesting a close link between co-speech gestures and spoken language at a neural level.
What are the potential or actual clinical implications of this work? Knowledge of the neural networks underlying gesture and speech processing will allow the adoption of model-based neurorehabilitation programs that foster recovery from aphasia by strengthening the specific functions of these brain networks.
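The ALE method behind GingerALE models each reported activation focus as a 3D Gaussian, combines the foci of each study into one modeled-activation (MA) map, and takes the voxelwise union across studies. A toy sketch of that core computation (the grid, coordinates, and smoothing width below are invented for illustration, not taken from the meta-analysis):

```python
import numpy as np

def gaussian_map(grid, focus, sigma):
    """3D Gaussian 'modeled activation' centered on one reported focus."""
    d2 = sum((g - f) ** 2 for g, f in zip(grid, focus))
    return np.exp(-d2 / (2 * sigma ** 2))

# A 10x10x10 voxel grid with 2 mm spacing (illustrative only).
axes = np.meshgrid(*[np.arange(0, 20, 2.0)] * 3, indexing="ij")

# Two hypothetical studies, each reporting one or more peak coordinates.
studies = [[(6.0, 8.0, 10.0)],
           [(8.0, 8.0, 10.0), (14.0, 2.0, 4.0)]]

# One MA map per study: max over that study's foci.
ma_maps = [np.max([gaussian_map(axes, f, sigma=4.0) for f in foci], axis=0)
           for foci in studies]

# ALE: probability that at least one study activates each voxel.
ale = 1.0 - np.prod([1.0 - ma for ma in ma_maps], axis=0)
```

Significant clusters, like the frontal and occipito-temporal ones reported here, are then identified by thresholding this ALE map against a null distribution.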
3. The visuo-sensorimotor substrate of co-speech gesture processing. Neuropsychologia 2023; 190:108697. PMID: 37827428; DOI: 10.1016/j.neuropsychologia.2023.108697
Abstract
Co-speech gestures are integral to human communication and exhibit diverse forms, each serving a distinct communicative function. However, the existing literature has focused on individual gesture types, leaving a gap in understanding the comparative neural processing of these diverse forms. To address this, our study investigated the neural processing of iconic gestures representing attributes or event knowledge of entity concepts, beat gestures enacting rhythmic manual movements without semantic information, and self-adaptors. During functional magnetic resonance imaging, systematic randomization and attentive observation of video stimuli revealed a general neural substrate for co-speech gesture processing, primarily in the bilateral middle temporal and inferior parietal cortices, reflecting visuospatial attention, semantic integration of cross-modal information, and multisensory processing of manual and audiovisual inputs. Specific types of gestures and grooming movements elicited distinct neural responses. Greater activity in the right supramarginal and inferior frontal regions was specific to self-adaptors and is relevant to the spatiomotor and integrative processing of speech and gestures. The semantic and sensorimotor regions were least active for beat gestures. The processing of attribute gestures was most pronounced in the left posterior middle temporal gyrus upon access to knowledge of entity concepts. This fMRI study illuminates the neural underpinnings of gesture-speech integration and highlights the differential processing pathways for various co-speech gestures.
4. Embodying Language through Gestures: Residuals of Motor Memories Modulate Motor Cortex Excitability during Abstract Words Comprehension. Sensors (Basel) 2022; 22:7734. PMID: 36298083; PMCID: PMC9610064; DOI: 10.3390/s22207734
Abstract
There is debate about whether abstract semantics could be represented in the motor domain, as concrete language is. A contextual association with a motor schema (action or gesture) seems crucial for highlighting the involvement of the motor system. The present transcranial magnetic stimulation study aimed to assess motor cortex excitability changes during abstract word comprehension after conditioning word reading on execution of a gesture with congruent or incongruent meaning. Twelve healthy volunteers performed a lexical-decision task, responding to abstract words or meaningless verbal stimuli. Motor cortex (M1) excitability was measured at different post-stimulus intervals (100, 250, or 500 ms) before and after an associative-learning training in which execution of the gesture followed word processing. Results showed a significant post-training decrease in hand motor evoked potentials at an early processing stage (100 ms) for words congruent with the gestures presented during the training. We hypothesize that traces of individual semantic memory, combined with training effects, induced M1 inhibition due to the redundancy of the evoked motor representation. No modulation of cortical excitability was found for meaningless stimuli or incongruent words. We discuss the possible implications of these data for understanding the neural basis of language development and for language rehabilitation protocols.
5. Grounding abstract concepts and beliefs into experience: The embodied perspective. Front Psychol 2022; 13:943765. PMID: 35941951; PMCID: PMC9356303; DOI: 10.3389/fpsyg.2022.943765
6. Different Neural Activities for Actions and Language within the Shared Brain Regions: Evidence from Action and Verb Generation. Behav Sci (Basel) 2022; 12:243. PMID: 35877314; PMCID: PMC9312291; DOI: 10.3390/bs12070243
Abstract
The inferior frontal gyrus (IFG), premotor cortex (PMC), and inferior parietal lobe (IPL) have been suggested to be involved in both action and language processing. However, the patterns of neural activity within these shared regions are still unclear. This study used an fMRI experiment to analyze the associations between neural activity for action generation and verb generation in response to object nouns. Using noun reading as a control task, we compared the differences and similarities of brain regions activated by action and verb generation. The results showed that the action generation task activated the dorsal PMC, parts of the midline of the PMC, and the left IPL more strongly than the verb generation task, whereas subregions in the bilateral supplementary motor area (SMA) and the left IFG were shared by both tasks. Mean activation level analysis and multi-voxel pattern analysis (MVPA) were then performed on the regions where the two generation tasks overlapped. All the shared regions showed different activation patterns across tasks, and the mean activation levels in the bilateral SMA were significantly higher for action generation. Based on the functions of these brain regions, it can be inferred that the shared regions in the bilateral SMA and the left IFG process action and language generation in a task-specific and intention-specific manner, respectively.
7. Different Neural Information Flows Affected by Activity Patterns for Action and Verb Generation. Front Psychol 2022; 13:802756. PMID: 35401310; PMCID: PMC8987928; DOI: 10.3389/fpsyg.2022.802756
Abstract
Shared brain regions have been found for processing action and language, including the left inferior frontal gyrus (IFG), premotor cortex (PMC), and inferior parietal lobule (IPL). However, when action and language generation share the same action semantics, it is unclear whether the activity patterns within the overlapping brain regions are the same, or how effective connectivity changes with these activity patterns. In this fMRI study, participants performed hand action and verb generation tasks in response to object pictures. We identified shared and task-specific brain regions in the left PMC, IFG, and IPL. Mean activation level and multi-voxel pattern analyses revealed that the activity patterns in the shared sub-regions were distinct for the two tasks, and dynamic causal modeling demonstrated that the information flows for the two tasks differed across the shared sub-regions. These results provide the first neuroimaging evidence that action and verb generation are driven by task context in the shared regions, and that the distinct patterns of neural information flow across the PMC-IFG-IPL network are affected by polymodal processing in the shared regions.
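The MVPA logic invoked here - asking whether a classifier can distinguish the two tasks from multi-voxel patterns even when mean activation is matched - can be sketched on simulated data (all numbers below are synthetic, not from the study):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 50

# Simulated trial-wise beta patterns for one shared region: the two tasks
# have the same overall activation level but opposite spatial patterns.
pattern = rng.normal(0, 1, n_voxels)
action = rng.normal(0, 1, (n_trials, n_voxels)) + pattern
verb = rng.normal(0, 1, (n_trials, n_voxels)) - pattern

X = np.vstack([action, verb])
y = np.array([0] * n_trials + [1] * n_trials)

# Above-chance cross-validated accuracy indicates distinct activity
# patterns in the shared region, as the study reports.
acc = cross_val_score(LinearSVC(), X, y, cv=5).mean()
```

In real analyses X would hold per-trial GLM estimates from the voxels of a shared sub-region, and chance level would be established with permutation tests rather than assumed to be 0.5.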
8. The Inhibition Effect of Affordances in Action Picture Naming: An ERP Study. J Cogn Neurosci 2022; 34:951-966. PMID: 35303083; DOI: 10.1162/jocn_a_01847
Abstract
How quickly are different kinds of conceptual knowledge activated in action picture naming? Using a masked priming paradigm, we manipulated the prime category type (artificial vs. natural), prime action type (precision, power, or neutral grip), and target action type (precision vs. power grip) in action picture naming while electrophysiological signals were recorded concurrently. Naming latencies showed an inhibition effect in the congruent action type condition compared with the neutral condition. ERP results showed that, for both artificial and natural category primes, precision and power action primes induced smaller waveforms than neutral primes in the 100-200 msec time window. Time-frequency results consistently showed a power desynchronization of the mu rhythm in the 0-210 msec time window for artificial-object primes with a precision action type compared with neutral primes, localized to the supplementary motor, precentral, and postcentral areas of the left hemisphere. These findings suggest an inhibitory effect of affordances arising at conceptual preparation in action picture naming and provide evidence for embodied cognition.
9. Evidence for the Concreteness of Abstract Language: A Meta-Analysis of Neuroimaging Studies. Brain Sci 2021; 12:32. PMID: 35053776; PMCID: PMC8773921; DOI: 10.3390/brainsci12010032
Abstract
The neural mechanisms subserving the processing of abstract concepts remain largely debated. Even within the embodiment framework, most authors suggest that abstract concepts are coded in a linguistic propositional format, although they do not completely deny the role of sensorimotor and emotional experiences in coding them. To our knowledge, only one recent proposal holds that the processing of concrete and abstract concepts relies on the same mechanisms, differing only in the complexity of the underlying experiences. In this paper, we performed a meta-analysis using the Activation Likelihood Estimation (ALE) method on 33 functional neuroimaging studies reporting activations related to abstract and concrete concepts. The results suggest that (1) concrete and abstract concepts share the recruitment of the temporo-fronto-parietal circuits normally involved in interactions with the physical world, (2) processing concrete concepts recruits fronto-parietal areas more strongly than abstract concepts, and (3) abstract concepts recruit Broca's region more strongly than concrete ones. Based on anatomical and physiological evidence, Broca's region is not only a linguistic region mainly devoted to speech production, but is also endowed with complex motor representations of different biological effectors. Hence, we propose that the stronger recruitment of this region for abstract concepts is an expression of the complex sensorimotor experiences underlying them, rather than evidence of a purely linguistic format of processing.
10. What modulates the Mirror Neuron System during action observation? Multiple factors involving the action, the actor, the observer, the relationship between actor and observer, and the context. Prog Neurobiol 2021; 205:102128. PMID: 34343630; DOI: 10.1016/j.pneurobio.2021.102128
Abstract
Seeing an agent perform an action typically triggers a motor simulation of that action in the observer's Mirror Neuron System (MNS). Over the past few years, it has become increasingly clear that during action observation the patterns and strengths of responses in the MNS are modulated by multiple factors. The first aim of this paper is therefore to provide the most comprehensive survey to date of these factors. To that end, 22 distinct factors are described, broken down into the following sets: six involving the action; two involving the actor; nine involving the observer; four involving the relationship between actor and observer; and one involving the context. The second aim is to consider the implications of these findings for four prominent theoretical models of the MNS: the Direct Matching Model; the Predictive Coding Model; the Value-Driven Model; and the Associative Model. These assessments suggest that although each model is supported by a wide range of findings, each one is also challenged by other findings and relatively unaffected by still others. Hence, there is now a pressing need for a richer, more inclusive model that is better able to account for all of the modulatory factors that have been identified so far.
11. The unique role of parietal cortex in action observation: Functional organization for communicative and manipulative actions. Neuroimage 2021; 237:118220. PMID: 34058335; PMCID: PMC8285591; DOI: 10.1016/j.neuroimage.2021.118220
Abstract
Action observation is supported by a network of regions in occipito-temporal, parietal, and premotor cortex in primates. Recent research suggests that the parietal node has regions dedicated to different action classes, including manipulation, interpersonal interactions, skin displacement, locomotion, and climbing. The goals of the current study were (1) to extend this work with new classes of actions that are communicative and specific to humans, and (2) to investigate how parietal cortex differs from occipito-temporal and premotor cortex in representing action classes. Human subjects underwent fMRI scanning while observing three action classes (indirect communication, direct communication, and manipulation) plus two types of control stimuli: static controls, consisting of static frames from the video clips, and dynamic controls, consisting of temporally scrambled optic flow information. Using univariate analysis, MVPA, and representational similarity analysis, our study presents several novel findings. First, we provide further evidence for the anatomical segregation of different action classes in parietal cortex: we found a new site specific for representing human-specific indirect communicative actions in cytoarchitectonic parietal area PFt. Second, discriminability between action classes was higher in parietal cortex than at the other two levels, suggesting the coding of action identity information at this level. Finally, our results advocate the use of control stimuli not just for univariate analysis of complex action videos but also when using multivariate techniques.
12. Event related spectral perturbations of gesture congruity: Visuospatial resources are recruited for multimodal discourse comprehension. Brain Lang 2021; 216:104916. PMID: 33652372; DOI: 10.1016/j.bandl.2021.104916
Abstract
Here we examine the role of visuospatial working memory (WM) during the comprehension of multimodal discourse with co-speech iconic gestures. EEG was recorded as healthy adults encoded either a sequence of one (low load) or four (high load) dot locations on a grid and rehearsed them until a free recall response was collected later in the trial. During the rehearsal period of the WM task, participants observed videos of a speaker describing objects in which half of the trials included semantically related co-speech gestures (congruent), and the other half included semantically unrelated gestures (incongruent). Discourse processing was indexed by oscillatory EEG activity in the alpha and beta bands during the videos. Across all participants, effects of speech and gesture incongruity were more evident in low load trials than in high load trials. Effects were also modulated by individual differences in visuospatial WM capacity. These data suggest visuospatial WM resources are recruited in the comprehension of multimodal discourse.
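Oscillatory EEG activity of the kind indexed here is typically quantified as an event-related spectral perturbation: band-limited power during the stimulus expressed in dB relative to a baseline period, with negative values marking desynchronization. A hedged sketch on synthetic signals (the sampling rate, band limits, and amplitudes are illustrative, not the study's parameters):

```python
import numpy as np
from scipy.signal import welch

fs = 250  # sampling rate in Hz (assumed for this example)
rng = np.random.default_rng(1)
t = np.arange(0, 2.0, 1 / fs)

# Synthetic alpha (10 Hz) oscillation: weaker during the video period.
baseline = np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)
video = 0.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)

def band_power(x, fs, lo, hi):
    """Mean Welch PSD within [lo, hi] Hz (e.g., 8-12 Hz for alpha)."""
    f, pxx = welch(x, fs=fs, nperseg=fs)
    return pxx[(f >= lo) & (f <= hi)].mean()

# Negative dB = alpha desynchronization during the video period.
ersp_db = 10 * np.log10(band_power(video, fs, 8, 12) /
                        band_power(baseline, fs, 8, 12))
```

In the study, such band-power changes in the alpha and beta ranges during the videos served as the index of discourse processing under low vs. high working-memory load.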
13. From Observed Action Identity to Social Affordances. Trends Cogn Sci 2021; 25:493-505. PMID: 33745819; DOI: 10.1016/j.tics.2021.02.012
Abstract
Others' observed actions cause continuously changing retinal images, making it challenging to build neural representations of action identity. The monkey anterior intraparietal area (AIP) and its putative human homologue (phAIP) host neurons selective for observed manipulative actions (OMAs). The neuronal activity of both AIP and phAIP allows a stable readout of OMA identity across visual formats, but human neurons exhibit greater invariance and generalize from observed actions to action verbs. These properties stem from the convergence in AIP of superior temporal signals concerning: (i) observed body movements; and (ii) the changes in the body-object relationship. We propose that evolutionarily preserved mechanisms underlie the specification of observed-actions identity and the selection of motor responses afforded by them, thereby promoting social behavior.
14. Functional near-infrared spectroscopy in toddlers: Neural differentiation of communicative cues and relation to future language abilities. Dev Sci 2020; 23:e12948. PMID: 32048419; PMCID: PMC7685129; DOI: 10.1111/desc.12948
Abstract
The toddler and preschool years are a time of significant development in both expressive and receptive communication abilities. However, little is known about the neurobiological underpinnings of language development during this period, likely due to difficulties acquiring functional neuroimaging data. Functional near‐infrared spectroscopy (fNIRS) is a motion‐tolerant neuroimaging technique that assesses cortical brain activity and can be used in very young children. Here, we use fNIRS during perception of communicative and noncommunicative speech and gestures in typically developing 2‐ and 3‐year‐olds (Study 1, n = 15, n = 12 respectively) and in a sample of 2‐year‐olds with both fNIRS data collected at age 2 and language outcome data at age 3 (Study 2, n = 18). In Study 1, 2‐ and 3‐year‐olds differentiated between communicative and noncommunicative stimuli as well as between speech and gestures in the left lateral frontal region. However, 2‐year‐olds showed different patterns of activation from 3‐year‐olds in right medial frontal regions. In Study 2, which included two toddlers identified with early language delays along with 16 typically developing toddlers, neural differentiation of communicative stimuli in the right medial frontal region at age 2 predicted receptive language at age 3. Specifically, after accounting for variance related to verbal ability at age 2, increased neural activation for communicative gestures (vs. both communicative speech and noncommunicative gestures) at age 2 predicted higher receptive language scores at age 3. These results are discussed in the context of the underlying mechanisms of toddler language development and use of fNIRS in prediction of language outcomes.
15.
Abstract
Recent years have witnessed a growing interest in behavioral and neuroimaging studies on the processing of symbolic communicative gestures, such as pantomimes and emblems, but well-controlled stimuli have been scarce. This study describes a dataset of more than 200 video clips of an actress performing pantomimes (gestures that mimic object-directed/object-use actions; e.g., playing guitar), emblems (conventional gestures; e.g., thumbs up), and meaningless gestures. Gestures were divided into four lists. For each of these four lists, 50 Italian and 50 American raters judged the meaningfulness of the gestures and provided names and descriptions for them. The results of these rating and norming measures are reported separately for the Italian and American raters, offering the first normed set of meaningful and meaningless gestures for experimental studies. The stimuli are available for download via the Figshare database.
16. Language, Gesture, and Emotional Communication: An Embodied View of Social Interaction. Front Psychol 2019; 10:2063. PMID: 31607974; PMCID: PMC6769117; DOI: 10.3389/fpsyg.2019.02063
Abstract
Spoken language is an innate ability of the human being and represents the most widespread mode of social communication. The ability to share concepts, intentions, and feelings, and to respond to what others are feeling or saying, is crucial during social interactions. A growing body of evidence suggests that language evolved from manual gestures, gradually incorporating motor acts with vocal elements. In this evolutionary context, the human mirror mechanism (MM) would permit the passage from “doing something” to “communicating it to someone else.” In this perspective, the MM would mediate semantic processes, being involved both in the execution and in the understanding of messages expressed by words or gestures. Thus, the recognition of action-related words would activate somatosensory regions, reflecting the semantic grounding of these symbols in action information. Here, the role of the sensorimotor cortex, and of the human MM more generally, in both language perception and understanding is addressed, focusing on recent studies on the integration between symbolic gestures and speech. We conclude by documenting evidence that the MM also codes the emotional aspects conveyed by manual, facial, and body signals during communication, and that these act in concert with language to modulate comprehension of others’ messages and behavior, in line with an “embodied” and integrated view of social interaction.
17. Speech-accompanying gestures are not processed by the language-processing mechanisms. Neuropsychologia 2019; 132:107132. PMID: 31276684; PMCID: PMC6708375; DOI: 10.1016/j.neuropsychologia.2019.107132
Abstract
Speech-accompanying gestures constitute one information channel during communication. Some have argued that processing gestures engages the brain regions that support language comprehension. However, studies that have been used as evidence for shared mechanisms suffer from one or more of the following limitations: they (a) have not directly compared activations for gesture and language processing in the same study and relied on the fallacious reverse inference (Poldrack, 2006) for interpretation, (b) relied on traditional group analyses, which are bound to overestimate overlap (e.g., Nieto-Castañon and Fedorenko, 2012), (c) failed to directly compare the magnitudes of response (e.g., Chen et al., 2017), and (d) focused on gestures that may have activated the corresponding linguistic representations (e.g., "emblems"). To circumvent these limitations, we used fMRI to examine responses to gesture processing in language regions defined functionally in individual participants (e.g., Fedorenko et al., 2010), including directly comparing effect sizes, and covering a broad range of spontaneously generated co-speech gestures. Whenever speech was present, language regions responded robustly (and to a similar degree regardless of whether the video contained gestures or grooming movements). In contrast, and critically, responses in the language regions were low - at or slightly above the fixation baseline - when silent videos were processed (again, regardless of whether they contained gestures or grooming movements). Brain regions outside of the language network, including some in close proximity to its regions, differentiated between gestures and grooming movements, ruling out the possibility that the gesture/grooming manipulation was too subtle. Behavioral studies on the critical video materials further showed robust differentiation between the gesture and grooming conditions. 
In summary, contra prior claims, language-processing regions do not respond to co-speech gestures in the absence of speech, suggesting that these regions are selectively driven by linguistic input (e.g., Fedorenko et al., 2011). Although co-speech gestures are uncontroversially important in communication, they appear to be processed in brain regions distinct from those that support language comprehension, similar to other extra-linguistic communicative signals, like facial expressions and prosody.
18. The Large-Scale Organization of Gestures and Words in the Middle Temporal Gyrus. J Neurosci 2019; 39:5966-5974. PMID: 31126999; DOI: 10.1523/jneurosci.2668-18.2019
Abstract
The middle temporal gyrus (MTG) has been shown to be recruited during the processing of words, but also during the observation of actions. Here we investigated how information related to words and gestures is organized along the MTG. To this aim, we measured the BOLD response in the MTG to video clips of gestures and spoken words in 17 healthy human adults (male and female). Gestures consisted of videos of an actress performing object-use pantomimes (iconic representations of object-directed actions; e.g., playing guitar), emblems (conventional gestures; e.g., thumbs up), and meaningless gestures. Word stimuli (verbs, nouns) consisted of video clips of the same actress pronouncing words. We found a stronger response to meaningful compared with meaningless gestures along the whole left and large portions of the right MTG. Importantly, we observed a gradient, with posterior regions responding more strongly to gestures (pantomimes and emblems) than words and anterior regions showing a stronger response to words than gestures. In an intermediate region in the left hemisphere, the response was significantly higher to words and emblems (i.e., items with a greater arbitrariness of the sign-to-meaning mapping) than to pantomimes. These results show that the large-scale organization of information in the MTG is driven by the input modality and may also reflect the arbitrariness of the relationship between sign and meaning.
SIGNIFICANCE STATEMENT: Here we investigated the organizing principle of information in the middle temporal gyrus, taking into consideration the input modality and the arbitrariness of the relationship between a sign and its meaning. We compared the middle temporal gyrus response during the processing of pantomimes, emblems, and spoken words. We found that posterior regions responded more strongly to pantomimes and emblems than to words, whereas anterior regions responded more strongly to words than to pantomimes and emblems. In an intermediate region, only in the left hemisphere, words and emblems evoked a stronger response than pantomimes. Our results identify two organizing principles of neural representation: the modality of communication (gestural or verbal) and the (arbitrariness of the) relationship between sign and meaning.
Collapse
|
19
|
The concreteness of abstract language: an ancient issue and a new perspective. Brain Struct Funct 2019; 224:1385-1401. [PMID: 30830283 DOI: 10.1007/s00429-019-01851-7] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2018] [Accepted: 02/20/2019] [Indexed: 12/19/2022]
Abstract
This paper addresses the debated issue of abstract language within the framework of embodiment. First, we discuss the notion of abstractness in the light of Western philosophical thought, with a focus on the English empiricist tradition. Second, we review the most relevant psychological models and neuroscientific empirical findings on abstract language. It turns out that abstract words are abstract not because their meaning is "far from experience" but because of the high complexity of the attached experiential clusters. Finally, we spell out the consequences of this understanding of abstractness for the neural mechanisms subserving abstract language processing. If abstract words, compared to concrete ones, imply greater complexity of the associated experiential clusters, then the processing of abstract language relies on the recruitment of several neural substrates coding for those experiences. We propose that, at the neural level, this complexity is coded by three main mechanisms: (1) the recruitment of the motor representations of different biological effectors (abstract meaning as effector-unspecific); (2) the recruitment of different systems, including sensory, motor, and emotional ones (abstract meaning as multi-systemic); (3) the recruitment of neural substrates coding for social contexts and levels of self-relatedness (abstract meaning as dynamic). Compared with current approaches in the literature that combine embodiment with some a-modal aspects, our proposal is fully embodied and rules out additional aspects. It may spur future empirical research on abstract language within the embodied approach.
Collapse
|
20
|
Abstract
Despite the frequent suggestion in the literature that Broca's area is a common link between vocal and gestural models of the origins of language, this has never been established within a single motor-production study. In the present functional MRI experiment, participants were asked to describe the spatial properties of objects (e.g., a motorcycle) using speech, pantomime, and drawing. Pairwise conjunction analyses revealed that the left inferior frontal gyrus, in combination with the left basal ganglia and ventral anterior thalamus, was jointly activated for the production of speech and pantomime but not for the conjunctions with drawing. Drawing and pantomime instead showed strong overlap in the intraparietal sulcus and superior parietal region bilaterally. These results provide the first demonstration in a production study that Broca's area is jointly activated by speech and gesture when depicting the same semantic content.
Collapse
|
21
|
Does TMS Disruption of the Left Primary Motor Cortex Affect Verb Retrieval Following Exposure to Pantomimed Gestures? Front Neurosci 2019; 12:920. [PMID: 30618552 PMCID: PMC6299802 DOI: 10.3389/fnins.2018.00920] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2018] [Accepted: 11/23/2018] [Indexed: 11/17/2022] Open
Abstract
Previous research suggests that meaning-laden gestures, even when produced in the absence of language (i.e., pantomimed gestures), influence lexical retrieval. Yet little is known about the neural mechanisms that underlie this process. Based on embodied cognition theories, many studies have demonstrated motor cortex involvement in the representation of action verbs and in the understanding of actions. The present study investigated whether the motor system plays a critical role in the behavioral influence of pantomimed gestures on action naming. Continuous theta burst stimulation (cTBS) was applied over the hand area of the left primary motor cortex and over a control site (occipital cortex), followed by an action-picture naming task. In the naming task, participants named action pictures that were preceded by videos of congruent pantomimed gestures, unrelated pantomimed gestures, or a control video with no movement (a neutral, non-gestural condition). In addition to behavioral measures of performance, cTBS-induced changes in corticospinal activity were assessed. We replicated the previous finding that exposure to congruent pantomimed gestures facilitates word production compared with unrelated or neutral primes. However, we found no evidence that the left primary motor area is crucially involved in the mechanism underlying the behavioral facilitation effects of gesture on verb production. Although cTBS induced motor cortex suppression at the group level, at the individual level we found remarkable variability of cTBS effects: cTBS induced both inhibition of corticospinal activity (with slower behavioral responses) and enhancement (with faster behavioral responses). Our findings cast doubt on the assumption that the motor cortex is causally involved in the impact of gestures on action-word processing. They also highlight the importance of carefully considering interindividual variability when interpreting cTBS effects.
Collapse
|
22
|
Natural reference: A phylo- and ontogenetic perspective on the comprehension of iconic gestures and vocalizations. Dev Sci 2018; 22:e12757. [PMID: 30267557 DOI: 10.1111/desc.12757] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2018] [Accepted: 09/20/2018] [Indexed: 11/27/2022]
Abstract
The recognition of iconic correspondence between signal and referent has been argued to bootstrap the acquisition and emergence of language. Here, we study the ontogeny, and to some extent the phylogeny, of the ability to spontaneously relate iconic signals, gestures and/or vocalizations, to previous experience. Children at 18, 24, and 36 months of age (N = 216) and great apes (N = 13) interacted with two apparatuses, each comprising a distinct action and sound. Subsequently, an experimenter mimicked either the action, the sound, or both in combination to refer to one of the apparatuses. Experiments 1 and 2 found no spontaneous comprehension in great apes or in 18-month-old children. At 24 months of age, children were successful with a composite vocalization-gesture signal but not with either vocalization or gesture alone. At 36 months, children succeeded both with a composite vocalization-gesture signal and with gesture alone, but not with vocalization alone. In general, gestures were understood better than vocalizations. Experiment 4 showed that gestures were understood irrespective of how children learned about the corresponding action (through observation or self-experience). This pattern of results demonstrates that iconic signals can be a powerful way to establish reference in the absence of language, but they are not trivial for children to comprehend, and not all iconic signals are created equal.
Collapse
|
23
|
High-level language processing regions are not engaged in action observation or imitation. J Neurophysiol 2018; 120:2555-2570. [PMID: 30156457 DOI: 10.1152/jn.00222.2018] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022] Open
Abstract
A set of left frontal, temporal, and parietal brain regions respond robustly during language comprehension and production (e.g., Fedorenko E, Hsieh PJ, Nieto-Castañón A, Whitfield-Gabrieli S, Kanwisher N. J Neurophysiol 104: 1177-1194, 2010; Menenti L, Gierhan SM, Segaert K, Hagoort P. Psychol Sci 22: 1173-1182, 2011). These regions have been further shown to be selective for language relative to other cognitive processes, including arithmetic, aspects of executive function, and music perception (e.g., Fedorenko E, Behr MK, Kanwisher N. Proc Natl Acad Sci USA 108: 16428-16433, 2011; Monti MM, Osherson DN. Brain Res 1428: 33-42, 2012). However, one claim about overlap between language and nonlinguistic cognition remains prominent. In particular, some have argued that language processing shares computational demands with action observation and/or execution (e.g., Rizzolatti G, Arbib MA. Trends Neurosci 21: 188-194, 1998; Koechlin E, Jubault T. Neuron 50: 963-974, 2006; Tettamanti M, Weniger D. Cortex 42: 491-494, 2006). Yet the evidence for these claims is indirect, based on observing activation for language and action tasks within the same broad anatomical areas (e.g., on the lateral surface of the left frontal lobe). To test whether language indeed shares machinery with action observation/execution, we examined the responses of language brain regions, defined functionally in each individual participant (Fedorenko E, Hsieh PJ, Nieto-Castañón A, Whitfield-Gabrieli S, Kanwisher N. J Neurophysiol 104: 1177-1194, 2010), to action observation (experiments 1, 2, and 3a) and action imitation (experiment 3b). With the exception of the language region in the angular gyrus, all language regions, including those in the inferior frontal gyrus (within "Broca's area"), showed little or no response during action observation/imitation. These results add to the growing body of literature suggesting that high-level language regions are highly selective for language processing (see Fedorenko E, Varley R. Ann NY Acad Sci 1369: 132-153, 2016 for a review).
NEW & NOTEWORTHY Many have argued for overlap in the machinery used to interpret language and others' actions, either because action observation was a precursor to linguistic communication or because both require interpreting hierarchically structured stimuli. However, existing evidence is indirect, relying on group analyses or reverse inference. We examined responses to action observation in language regions defined functionally in individual participants and found no response. Thus language comprehension and action observation recruit distinct circuits in the modern brain.
Collapse
|
24
|
Interpretation of Social Interactions: Functional Imaging of Cognitive-Semiotic Categories During Naturalistic Viewing. Front Hum Neurosci 2018; 12:296. [PMID: 30154703 PMCID: PMC6102316 DOI: 10.3389/fnhum.2018.00296] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2018] [Accepted: 07/06/2018] [Indexed: 01/01/2023] Open
Abstract
Social interactions arise from patterns of communicative signs, whose perception and interpretation require a multitude of cognitive functions. The semiotic framework of Peirce's Universal Categories (UCs) laid the ground for a novel cognitive-semiotic typology of social interactions. During functional magnetic resonance imaging (fMRI), 16 volunteers watched a movie narrative encompassing verbal and non-verbal social interactions. Three types of non-verbal interactions were coded ("unresolved," "non-habitual," and "habitual") based on a typology reflecting Peirce's UCs. As expected, the auditory cortex responded to verbal interactions, but non-verbal interactions modulated temporal areas as well. Conceivably, when speech was lacking, ambiguous visual information (unresolved interactions) primed auditory processing in contrast to learned behavioral patterns (habitual interactions). The latter recruited a parahippocampal-occipital network supporting conceptual processing and associative memory retrieval. Non-habitual interactions, which require semiotic contextualization, activated visuo-spatial and contextual rule-learning areas such as the temporo-parietal junction and right lateral prefrontal cortex. In summary, the cognitive-semiotic typology reflected distinct sensory and association networks underlying the interpretation of observed non-verbal social interactions.
Collapse
|
25
|
|
26
|
Storytelling Is Intrinsically Mentalistic: A Functional Magnetic Resonance Imaging Study of Narrative Production across Modalities. J Cogn Neurosci 2018; 30:1298-1314. [PMID: 29916789 DOI: 10.1162/jocn_a_01294] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
People utilize multiple expressive modalities for communicating narrative ideas about past events. The three major ones are speech, pantomime, and drawing. The current study used functional magnetic resonance imaging to identify common brain areas that mediate narrative communication across these three sensorimotor mechanisms. In the scanner, participants were presented with short narrative prompts akin to newspaper headlines (e.g., "Surgeon finds scissors inside of patient"). The task was to generate a representation of the event, either by describing it verbally through speech, by pantomiming it gesturally, or by drawing it on a tablet. In a control condition designed to remove sensorimotor activations, participants described the spatial properties of individual objects (e.g., "binoculars"). Each of the three modality-specific subtractions produced similar results, with activations in key components of the mentalizing network, including the TPJ, posterior STS, and posterior cingulate cortex. Conjunction analysis revealed that these areas constitute a cross-modal "narrative hub" that transcends the three modalities of communication. The involvement of these areas in narrative production suggests that people adopt an intrinsically mentalistic and character-oriented perspective when engaging in storytelling, whether using speech, pantomime, or drawing.
Collapse
|
27
|
The effect of motor context on semantic processing: A TMS study. Neuropsychologia 2018; 114:243-250. [DOI: 10.1016/j.neuropsychologia.2018.05.003] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2017] [Revised: 03/19/2018] [Accepted: 05/02/2018] [Indexed: 11/26/2022]
|
28
|
Action-Related Speech Modulates Beta Oscillations During Observation of Tool-Use Gestures. Brain Topogr 2018; 31:838-847. [DOI: 10.1007/s10548-018-0641-z] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2017] [Accepted: 02/27/2018] [Indexed: 10/17/2022]
|
29
|
Perceived Conventionality in Co-speech Gestures Involves the Fronto-Temporal Language Network. Front Hum Neurosci 2017; 11:573. [PMID: 29249945 PMCID: PMC5714878 DOI: 10.3389/fnhum.2017.00573] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2017] [Accepted: 11/13/2017] [Indexed: 11/16/2022] Open
Abstract
Face-to-face communication is multimodal; it encompasses spoken words, facial expressions, gaze, and co-speech gestures. In contrast to linguistic symbols (e.g., spoken words or signs in sign language), which rely on mostly explicit conventions, gestures vary in their degree of conventionality. Bodily signs may have a generally accepted or conventionalized meaning (e.g., a head shake) or less so (e.g., self-grooming). We hypothesized that the subjective perception of conventionality in co-speech gestures relies on the classical language network, i.e., the left hemispheric inferior frontal gyrus (IFG, Broca's area) and the posterior superior temporal gyrus (pSTG, Wernicke's area), and studied 36 subjects watching video-recorded story retellings during a behavioral and a functional magnetic resonance imaging (fMRI) experiment. It is well documented that neural correlates of such naturalistic videos emerge as intersubject covariance (ISC) in fMRI even without modeling the stimulus (model-free analysis). The subjects attended either to perceived conventionality or to a control condition (any hand movements or gesture-speech relations). Such tasks modulate ISC in contributing neural structures, and thus we studied ISC changes under task demands in language networks. Indeed, the conventionality task significantly increased covariance of the button-press time series and neuronal synchronization in the left IFG compared with the other tasks. In the left IFG, synchronous activity was observed during the conventionality task only. In contrast, the left pSTG exhibited correlated activation patterns during all conditions, with an increase in the conventionality task at the trend level only. Conceivably, the left IFG can be considered a core region for the processing of perceived conventionality in co-speech gestures, similar to spoken language. In general, the interpretation of conventionalized signs may rely on neural mechanisms that engage during language comprehension.
Collapse
|
30
|
Motor activation during action perception depends on action interpretation. Neuropsychologia 2017; 105:84-91. [PMID: 28189494 PMCID: PMC5447367 DOI: 10.1016/j.neuropsychologia.2017.01.032] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2016] [Revised: 01/26/2017] [Accepted: 01/30/2017] [Indexed: 10/20/2022]
Abstract
Since the discovery of motor mirroring, the involvement of the motor system in action interpretation has been widely discussed. While some theories proposed that motor mirroring underlies human action understanding, others suggested that it is a corollary of action interpretation. We put these two accounts to the test by employing superficially similar actions that invite radically different interpretations of the underlying intentions. Using an action-observation task, we assessed motor activation (as indexed by the suppression of the EEG mu rhythm) in response to actions typically interpreted as instrumental (e.g., grasping) or referential (e.g., pointing) towards an object. Only the observation of instrumental actions resulted in enhanced mu suppression. In addition, the exposure to grasping actions failed to elicit mu suppression when they were preceded by speech, suggesting that the presence of communicative signals modulated the interpretation of the observed actions. These results suggest that the involvement of sensorimotor cortices during action processing is conditional on a particular (instrumental) action interpretation, and that action interpretation relies on inferential processes and top-down mechanisms that are implemented outside of the motor system.
Collapse
|
31
|
Time-Frequency Analysis of Mu Rhythm Activity during Picture and Video Action Naming Tasks. Brain Sci 2017; 7:brainsci7090114. [PMID: 28878193 PMCID: PMC5615255 DOI: 10.3390/brainsci7090114] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2017] [Revised: 08/24/2017] [Accepted: 08/30/2017] [Indexed: 11/25/2022] Open
Abstract
This study used whole-head 64-channel electroencephalography to measure changes in sensorimotor activity, as indexed by the mu rhythm, in neurologically healthy adults during subvocal confrontation naming tasks. Independent component analyses revealed sensorimotor mu component clusters in the right and left hemispheres. Event-related spectral perturbation analyses indicated significantly stronger patterns of mu rhythm activity (pFDR < 0.05) during the video condition as compared to the picture condition, specifically in the left hemisphere. Mu activity is hypothesized to reflect typical patterns of sensorimotor activation during action verb naming tasks. These results support further investigation into sensorimotor cortical activity during action verb naming in clinical populations.
Collapse
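Several of the EEG entries above (including this one and the mu-suppression studies) quantify sensorimotor engagement as a drop in 8-13 Hz mu-band power relative to a baseline. None of these papers publishes its analysis code; the following is only a minimal illustrative sketch (function names and the synthetic signals are hypothetical, not the authors' pipeline) of the log power ratio convention, where negative values indicate suppression:

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, fmin=8.0, fmax=13.0):
    """Mean power spectral density in a frequency band (Welch estimate)."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs)  # 1-second windows
    mask = (freqs >= fmin) & (freqs <= fmax)
    return psd[mask].mean()

def mu_suppression_index(baseline, condition, fs):
    """log(condition / baseline) mu power; negative = suppression."""
    return np.log(band_power(condition, fs) / band_power(baseline, fs))

# Synthetic demo: a 10 Hz oscillation whose amplitude drops during "observation"
fs = 250
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
baseline = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)
condition = 0.4 * np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)
print(mu_suppression_index(baseline, condition, fs))  # negative -> suppression
```

A real pipeline would additionally epoch the continuous EEG, reject artifacts, and average the log ratio across trials and sensorimotor channels, but the sign convention shown here matches the usage in these abstracts.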
|
32
|
Graph theoretical analysis of functional network for comprehension of sign language. Brain Res 2017; 1671:55-66. [PMID: 28690129 DOI: 10.1016/j.brainres.2017.06.031] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2017] [Revised: 06/29/2017] [Accepted: 06/30/2017] [Indexed: 12/14/2022]
Abstract
Signed languages are natural human languages using the visual-motor modality. Previous neuroimaging studies based on univariate activation analysis show that a largely overlapping cortical network is recruited regardless of whether the sign language is comprehended (for signers) or not (for non-signers). Here we move beyond previous studies by examining whether the functional connectivity profiles and the underlying organizational structure of this overlapping neural network differ between signers and non-signers when watching sign language. Using graph theoretical analysis (GTA) and fMRI, we compared the large-scale functional network organization in hearing signers with non-signers during the observation of sentences in Chinese Sign Language. We found that signed sentences elicited highly similar cortical activations in the two groups of participants, with slightly larger responses within the left frontal and left temporal gyrus in signers than in non-signers. Crucially, further GTA revealed substantial group differences in the topologies of this activation network. Globally, the network engaged by signers showed higher local efficiency (t(24)=2.379, p=0.026), small-worldness (t(24)=2.604, p=0.016) and modularity (t(24)=3.513, p=0.002), and exhibited different modular structures, compared to the network engaged by non-signers. Locally, the left ventral pars opercularis served as a network hub in the signer group but not in the non-signer group. These findings suggest that, despite overlap in cortical activation, the neural substrates underlying sign language comprehension are distinguishable at the network level from those for the processing of gestural action.
Collapse
|
33
|
Modulating the assessment of semantic speech–gesture relatedness via transcranial direct current stimulation of the left frontal cortex. Brain Stimul 2017; 10:223-230. [DOI: 10.1016/j.brs.2016.10.012] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2016] [Revised: 10/23/2016] [Accepted: 10/24/2016] [Indexed: 11/23/2022] Open
|
34
|
Different activity patterns for action and language within their shared neural areas: An fMRI study on action observation and language phonology. Neuropsychologia 2017; 99:112-120. [PMID: 28259773 DOI: 10.1016/j.neuropsychologia.2017.02.025] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2016] [Revised: 02/23/2017] [Accepted: 02/28/2017] [Indexed: 11/26/2022]
Abstract
The neural processes for action and language activate shared brain regions, including the left inferior frontal, parietal and temporal-occipital cortices. However, it remains unclear how action and language are related and what neural activity patterns are elicited within these shared cortical regions. In this study we examined the neural activation for action observation and language phonology in their shared cortical regions by conducting three experiments in a single fMRI session: a mixed-task experiment involving both action and language phonological processing, and two independent experiments involving language phonology and action observation, respectively. To control for differences in visual processing and to enable a direct comparison between the tasks, the same visual stimuli were used for the mixed tasks. Common neural areas for action observation and language phonology were located in the junction of the left inferior frontal/precentral gyrus, the left intraparietal sulcus and the left temporal-occipital cortex. Nevertheless, multi-voxel pattern analysis on the shared neural areas revealed that different patterns of neural activity were elicited for the action and language phonological tasks. Our results provide the first neuroimaging evidence that these common neural structures are engaged differently by action and language phonological processing.
Collapse
|
35
|
Neurophysiology of Grasping Actions: Evidence from ERPs. Front Psychol 2016; 7:1996. [PMID: 28066310 PMCID: PMC5177652 DOI: 10.3389/fpsyg.2016.01996] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2016] [Accepted: 12/08/2016] [Indexed: 11/25/2022] Open
Abstract
We use our hands very frequently to interact with our environment. Neuropsychology, together with lesion models, intracranial recordings, and imaging work, has yielded important insights into the functional neuroanatomical correlates of grasping, one important function of our hands, pointing toward a functional parietofrontal brain network. Event-related potentials (ERPs) register electrical brain activity directly and offer high temporal resolution, but have long been assumed to be susceptible to movement artifacts. Recent work has shown that reliable ERPs can be obtained during movement execution. Here, we review the available ERP work on (uni)manual grasping actions. We discuss various ERP components and how they may be related to functional components of grasping according to traditional distinctions of manual actions, such as planning and control phases. The ERP results are largely in line with the assumption of a parietofrontal network, but other questions remain, in particular regarding the temporal succession of frontal and parietal ERP effects. Given the low number of ERP studies on grasping, not all ERP effects appear to be coherent with one another. Understanding the control of our hands may help to develop further neurocognitive theories of grasping and to make progress in prosthetics, rehabilitation, and the development of technical systems that support human actions.
Collapse
|
36
|
Distinct Contributions of Dorsal and Ventral Streams to Imitation of Tool-Use and Communicative Gestures. Cereb Cortex 2016; 28:474-492. [DOI: 10.1093/cercor/bhw383] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2016] [Accepted: 11/16/2016] [Indexed: 12/12/2022] Open
|
37
|
Repeated movie viewings produce similar local activity patterns but different network configurations. Neuroimage 2016; 142:613-627. [DOI: 10.1016/j.neuroimage.2016.07.061] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2016] [Revised: 07/17/2016] [Accepted: 07/29/2016] [Indexed: 11/30/2022] Open
|
38
|
Perceived communicative intent in gesture and language modulates the superior temporal sulcus. Hum Brain Mapp 2016; 37:3444-61. [PMID: 27238550 PMCID: PMC6867447 DOI: 10.1002/hbm.23251] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2015] [Revised: 03/25/2016] [Accepted: 04/27/2016] [Indexed: 11/08/2022] Open
Abstract
Behavioral evidence and theory suggest gesture and language processing may be part of a shared cognitive system for communication. While much research demonstrates both gesture and language recruit regions along perisylvian cortex, relatively less work has tested functional segregation within these regions on an individual level. Additionally, while most work has focused on a shared semantic network, less has examined shared regions for processing communicative intent. To address these questions, functional and structural MRI data were collected from 24 adult participants while viewing videos of an experimenter producing communicative, Participant-Directed Gestures (PDG) (e.g., "Hello, come here"), noncommunicative Self-adaptor Gestures (SG) (e.g., smoothing hair), and three written text conditions: (1) Participant-Directed Sentences (PDS), matched in content to PDG, (2) Third-person Sentences (3PS), describing a character's actions from a third-person perspective, and (3) meaningless sentences, Jabberwocky (JW). Surface-based conjunction and individual functional region of interest analyses identified shared neural activation between gesture (PDGvsSG) and language processing using two different language contrasts. Conjunction analyses of gesture (PDGvsSG) and Third-person Sentences versus Jabberwocky revealed overlap within left anterior and posterior superior temporal sulcus (STS). Conjunction analyses of gesture and Participant-Directed Sentences to Third-person Sentences revealed regions sensitive to communicative intent, including the left middle and posterior STS and left inferior frontal gyrus. Further, parametric modulation using participants' ratings of stimuli revealed sensitivity of left posterior STS to individual perceptions of communicative intent in gesture. These data highlight an important role of the STS in processing participant-directed communicative intent through gesture and language.
Collapse
|
39
|
Language, gesture, and handedness: Evidence for independent lateralized networks. Cortex 2016; 82:72-85. [DOI: 10.1016/j.cortex.2016.06.003] [Citation(s) in RCA: 60] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2015] [Revised: 02/25/2016] [Accepted: 06/06/2016] [Indexed: 12/16/2022]
|
40
|
|
41
|
Neural basis of understanding communicative actions: Changes associated with knowing the actor's intention and the meanings of the actions. Neuropsychologia 2016; 81:230-237. [PMID: 26752450 PMCID: PMC4749541 DOI: 10.1016/j.neuropsychologia.2016.01.002] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2015] [Revised: 12/14/2015] [Accepted: 01/01/2016] [Indexed: 11/16/2022]
Abstract
People can communicate by using hand actions, e.g., signs. Understanding communicative actions requires that the observer knows that the actor has an intention to communicate and the meanings of the actions. Here, we investigated how this prior knowledge affects processing of observed actions. We used functional MRI to determine changes in action processing when non-signers were told that the observed actions are communicative (i.e., signs) and learned the meanings of half of the actions. Processing of hand actions activated the left and right inferior frontal gyrus (IFG, BA 44 and 45) when the communicative intention of the actor was known, even when the meanings of the actions remained unknown. These regions were not active when the observers did not know about the communicative nature of the hand actions. These findings suggest that the left and right IFG play a role in understanding the intention of the actor, but do not process visuospatial features of the communicative actions. Knowing the meanings of the hand actions further enhanced activity in the anterior part of the IFG (BA 45), the inferior parietal lobule and posterior inferior and middle temporal gyri in the left hemisphere. These left-hemisphere language regions could provide a link between meanings and observed actions. In sum, the findings provide evidence for the segregation of the networks involved in the neural processing of visuospatial features of communicative hand actions and those involved in understanding the actor's intention and the meanings of the actions.
Highlights:
- Participants observed hand actions before and after learning that they are signs.
- Learning-induced changes in brain activity were measured using fMRI.
- No activity in the mirror neuron system when actions were not known to be communicative.
- Knowing the actor's intention to communicate activated the IFG and IPL.
- Knowing the meanings of the actions increased activity in the left IFG (BA 45), IPL and MTG.
Collapse
|
42
|
The neural basis of hand gesture comprehension: A meta-analysis of functional magnetic resonance imaging studies. Neurosci Biobehav Rev 2015; 57:88-104. [DOI: 10.1016/j.neubiorev.2015.08.006] [Citation(s) in RCA: 65] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2015] [Revised: 07/13/2015] [Accepted: 08/06/2015] [Indexed: 11/18/2022]
|
43
|
Gesture and word analysis: the same or different processes? Neuroimage 2015; 117:375-85. [DOI: 10.1016/j.neuroimage.2015.05.080] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2014] [Revised: 04/22/2015] [Accepted: 05/27/2015] [Indexed: 11/25/2022] Open
|
44
|
Semantic brain areas are involved in gesture comprehension: An electrical neuroimaging study. BRAIN AND LANGUAGE 2015; 147:30-40. [PMID: 26011745 DOI: 10.1016/j.bandl.2015.05.002] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/04/2014] [Revised: 04/13/2015] [Accepted: 05/02/2015] [Indexed: 06/04/2023]
Abstract
While the mechanism of sign language comprehension in deaf people has been widely investigated, little is known about the neural underpinnings of spontaneous gesture comprehension in healthy speakers. Bioelectrical responses to 800 pictures of actors showing common Italian gestures (e.g., emblems, deictic or iconic gestures) were recorded from 14 participants. Stimuli were selected from a wider corpus of 1122 gestures. Half of the pictures were preceded by an incongruent description. ERPs were recorded from 128 sites while participants decided whether the stimulus was congruent. Congruent pictures elicited a posterior P300 followed by late positivity, while incongruent gestures elicited an anterior N400 response. N400 generators were investigated with swLORETA reconstruction. Processing of congruent gestures activated face- and body-related visual areas (e.g., BA19, BA37, BA22), the left angular gyrus, and mirror fronto-parietal areas. The incongruent-congruent contrast particularly stimulated linguistic and semantic brain areas, such as the left medial and superior temporal lobe.
Collapse
|
45
|
Modulation of Gestural-verbal Semantic Integration by tDCS. Brain Stimul 2015; 8:493-8. [DOI: 10.1016/j.brs.2014.12.001] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2014] [Revised: 09/28/2014] [Accepted: 12/04/2014] [Indexed: 11/20/2022] Open
|
46
|
Forelimb preferences in human beings and other species: multiple models for testing hypotheses on lateralization. Front Psychol 2015; 6:233. [PMID: 25798121 PMCID: PMC4351643 DOI: 10.3389/fpsyg.2015.00233] [Citation(s) in RCA: 87] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2014] [Accepted: 02/15/2015] [Indexed: 12/16/2022] Open
Abstract
Functional preferences in the use of the right or left forelimb are not exclusive to humans but have been widely documented in a variety of vertebrate and invertebrate species. A matter of debate is whether non-human species exhibit a degree and consistency of functional forelimb asymmetries comparable to human handedness. The comparison is made difficult by the variability in hand use in humans and the few comparable studies conducted on other species. In spite of this, interesting continuities appear in functions such as feeding, object manipulation and communicative gestures. Studies on invertebrates show how widespread forelimb preferences are among animals and how important experience is for the development of forelimb asymmetries. Vertebrate species have been extensively investigated to clarify the origins of forelimb functional asymmetries: comparative evidence shows that selective pressures for different functions have likely driven the evolution of human handedness. Evidence of a complex genetic architecture of human handedness is in line with the idea of multiple evolutionary origins of this trait.
Collapse
|
47
|
Representation of visual symbols in the visual word processing network. Neuropsychologia 2015; 69:232-41. [DOI: 10.1016/j.neuropsychologia.2015.01.045] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2014] [Revised: 01/13/2015] [Accepted: 01/30/2015] [Indexed: 11/26/2022]
|
49
|
Interaction Between Words and Symbolic Gestures as Revealed By N400. Brain Topogr 2014; 28:591-605. [DOI: 10.1007/s10548-014-0392-4] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2014] [Accepted: 08/08/2014] [Indexed: 11/25/2022]
|
50
|
Options to enhance recovery from aphasia by means of non-invasive brain stimulation and action observation therapy. Expert Rev Neurother 2013; 14:75-91. [PMID: 24308276 DOI: 10.1586/14737175.2014.864555] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Aphasia is a highly disabling language disorder usually caused by left-lateralized brain damage. Even though traditional linguistically based therapies have been shown to induce adequate clinical improvement, a large percentage of patients are left with some degree of language impairment. Therefore, new approaches to common speech therapies are urgently needed in order to maximize recovery from aphasia. The recent application of non-invasive neurostimulation techniques to language rehabilitation has already provided promising results, particularly for the recovery of word-retrieval deficits in chronic stroke aphasic patients. Positive outcomes also come from action observation therapy. Indeed, some very recent studies have shown that the observation and/or execution of gestures positively influences language recovery, especially for words related to human actions. This article gives an overview of the most important results achieved using these two approaches and discusses how the application of these treatments might potentiate aphasia recovery.
Collapse
|