1. Siestrup S, Schubotz RI. Minor Changes Change Memories: Functional Magnetic Resonance Imaging and Behavioral Reflections of Episodic Prediction Errors. J Cogn Neurosci 2023;35:1823-1845. PMID: 37677059. DOI: 10.1162/jocn_a_02047.
Abstract
Episodic memories can be modified, a process that is potentially driven by mnemonic prediction errors. In the present study, we used modified cues to induce prediction errors of different episodic relevance. Participants encoded episodes in the form of short toy stories and then returned for an fMRI session on the subsequent day. Here, participants were presented with either original episodes or slightly modified versions thereof. Modifications consisted of replacing a single object within the episode and either challenged the gist of an episode (gist modifications) or left it intact (surface modifications). On the next day, participants completed a post-fMRI memory test that probed memories for originally encoded episodes. Both types of modifications triggered brain activation in regions we previously found to be involved in the processing of content-based mnemonic prediction errors (i.e., the exchange of an object). Specifically, these were the ventrolateral pFC, intraparietal cortex, and lateral occipitotemporal cortex. In addition, gist modifications triggered pronounced brain responses, whereas responses to surface modifications were only significant in the right inferior frontal sulcus. Processing of gist modifications also involved the posterior temporal cortex and the precuneus. Interestingly, our findings confirmed the role of the posterior hippocampus in processing episodic detail, as evidenced by increased posterior hippocampal activity for surface modifications compared with gist modifications. In the post-fMRI memory test, previous experience with surface-modified, but not gist-modified, episodes increased erroneous acceptance of the same modified versions as originally encoded. Whereas surface-level prediction errors might increase uncertainty and facilitate confusion of alternative episode representations, gist-level prediction errors seem to trigger a clear distinction between independent episodes.
Affiliation(s)
- Sophie Siestrup
- University of Münster, Germany
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Münster, Germany
- Ricarda I Schubotz
- University of Münster, Germany
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Münster, Germany

2. D'Argenio G, Finisguerra A, Urgesi C. Experience-dependent reshaping of body gender perception. Psychol Res 2022;86:1184-1202. PMID: 34387745. PMCID: PMC9090903. DOI: 10.1007/s00426-021-01569-4.
Abstract
Protracted exposure to specific stimuli causes biased visual aftereffects at both low- and high-level dimensions of a stimulus. Recently, it has been proposed that alterations of these aftereffects could play a role in body misperceptions. However, since previous studies have mainly addressed manipulations of body size, the relative contribution of low-level retinotopic and/or high-level object-based mechanisms is yet to be understood. In three experiments, we investigated visual aftereffects for body-gender perception, testing for the tuning of visual aftereffects across different characters and orientations. We found that exposure to a distinctively female (or male) body makes androgynous bodies appear more masculine (or feminine) and that these aftereffects were not specific to the individual characteristics of the adapting body (Exp. 1). Furthermore, exposure to only upright bodies (Exp. 2) biased the perception of upright, but not of inverted, bodies, while exposure to both upright and inverted bodies (Exp. 3) biased perception for both. Finally, participants' sensitivity to body aftereffects was lower in individuals with greater communication deficits and deeper internalization of a male gender role. Overall, our data reveal the orientation-, but not identity-, tuning of body-gender aftereffects and point to an association between alterations in the malleability of body-gender perception and social deficits.
Affiliation(s)
- Giulia D'Argenio
- Department of Life Sciences, University of Trieste, Trieste, Italy.
- Laboratory of Cognitive Neuroscience, Department of Languages and Literatures, Communication, Education and Society, University of Udine, via Margreth, 3, 33100, Udine, Italy.
- Cosimo Urgesi
- Laboratory of Cognitive Neuroscience, Department of Languages and Literatures, Communication, Education and Society, University of Udine, via Margreth, 3, 33100, Udine, Italy.
- Scientific Institute, IRCCS E. Medea, Pasian di Prato, Udine, Italy.

3. Siestrup S, Jainta B, El-Sourani N, Trempler I, Wurm MF, Wolf OT, Cheng S, Schubotz RI. What Happened When? Cerebral Processing of Modified Structure and Content in Episodic Cueing. J Cogn Neurosci 2022;34:1287-1305. PMID: 35552744. DOI: 10.1162/jocn_a_01862.
Abstract
Episodic memories are not static but can change on the basis of new experiences, potentially allowing us to make valid predictions in the face of an ever-changing environment. Recent research has identified prediction errors during memory retrieval as a possible trigger for such changes. In this study, we used modified episodic cues to investigate whether different types of mnemonic prediction errors modulate brain activity and subsequent memory performance. Participants encoded episodes that consisted of short toy stories. During a subsequent fMRI session, participants were presented with videos showing either the original episodes or slightly modified versions thereof. In modified videos, either the order of two subsequent action steps was changed or an object was exchanged for another. Content modifications recruited parietal, temporo-occipital, and parahippocampal areas, reflecting the processing of the new object information. In contrast, structure modifications elicited activation in right dorsal premotor, posterior temporal, and parietal areas, reflecting the processing of new sequence information. In a post-fMRI memory test, the participants' tendency to accept modified episodes as originally encoded increased significantly when they had already been presented with the modified versions during the fMRI session. After experiencing modifications, especially those of an episode's structure, the recognition of originally encoded episodes was impaired as well. Our study sheds light on the neural processing of different types of episodic prediction errors and their influence on subsequent memory recall.

4. Casiraghi L, Alahmadi AAS, Monteverdi A, Palesi F, Castellazzi G, Savini G, Friston K, Gandini Wheeler-Kingshott CAM, D'Angelo E. I See Your Effort: Force-Related BOLD Effects in an Extended Action Execution-Observation Network Involving the Cerebellum. Cereb Cortex 2019;29:1351-1368. PMID: 30615116. PMCID: PMC6373696. DOI: 10.1093/cercor/bhy322.
Abstract
Action observation (AO) is crucial for motor planning, imitation learning, and social interaction, but it is not clear whether and how an action execution-observation network (AEON) processes the effort of others engaged in performing actions. In this functional magnetic resonance imaging (fMRI) study, we used a "squeeze ball" task involving different grip forces to investigate whether AEON activation showed similar patterns when executing the task or observing others performing it. In both action execution (AE; subjects performed the visuomotor task) and action observation (AO; subjects watched a video of the task being performed by someone else), fMRI signals were detected in cerebral and cerebellar regions. These responses showed various relationships with force, mapping onto specific areas of the sensorimotor and cognitive systems. Conjunction analysis of AE and AO was repeated for the zeroth-order, linear, and nonlinear responses and revealed multiple AEON nodes remapping the detection of actions, and also of effort, of another person onto the observer's own cerebrocerebellar system. This result implies that the AEON exploits the cerebellum, which is known to process sensorimotor predictions and simulations, performing an internal assessment of forces and integrating information into high-level schemes, thereby providing a crucial substrate for action imitation.
Affiliation(s)
- Letizia Casiraghi
- Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy; Brain Connectivity Center, IRCCS Mondino Foundation, Pavia, Italy
- Adnan A S Alahmadi
- Diagnostic Radiography Technology Department, Faculty of Applied Medical Science, King Abdulaziz University (KAU), Jeddah 80200-21589, Saudi Arabia; NMR Research Unit, Queen Square Multiple Sclerosis (MS) Centre, Department of Neuroinflammation, Institute of Neurology, University College London (UCL), London, UK
- Anita Monteverdi
- Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
- Fulvia Palesi
- Brain MRI 3T Center, Neuroradiology Unit, IRCCS Mondino Foundation, Pavia, PV, Italy
- Gloria Castellazzi
- NMR Research Unit, Queen Square Multiple Sclerosis (MS) Centre, Department of Neuroinflammation, Institute of Neurology, University College London (UCL), London, UK; Department of Electrical, Computer and Biomedical Engineering, University of Pavia, Pavia, Italy
- Giovanni Savini
- Brain Connectivity Center, IRCCS Mondino Foundation, Pavia, Italy; Department of Physics, University of Milan, Milan, Italy
- Karl Friston
- Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London (UCL), London, UK
- Claudia A M Gandini Wheeler-Kingshott
- Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy; NMR Research Unit, Queen Square Multiple Sclerosis (MS) Centre, Department of Neuroinflammation, Institute of Neurology, University College London (UCL), London, UK; Brain MRI 3T Mondino Research Center, IRCCS Mondino Foundation, Pavia, Italy
- Egidio D'Angelo
- Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy; Brain Connectivity Center, IRCCS Mondino Foundation, Pavia, Italy

5. El-Sourani N, Trempler I, Wurm MF, Fink GR, Schubotz RI. Predictive Impact of Contextual Objects during Action Observation: Evidence from Functional Magnetic Resonance Imaging. J Cogn Neurosci 2020;32:326-337. DOI: 10.1162/jocn_a_01480.
Abstract
The processing of congruent stimuli, such as an object or action in its typical location, is usually associated with reduced neural activity, probably due to facilitated recognition. However, in some situations congruency increases neural activity, for example when objects next to observed actions are likely versus unlikely to be involved in forthcoming action steps. Here, we used fMRI to investigate whether the processing of contextual cues during action perception is driven by their (in)congruency and, thus, their informative value for making sense of an observed scene. Specifically, we tested whether both highly congruent contextual objects (COs), which strongly indicate a future action step, and highly incongruent COs, which require updating predictions about possible forthcoming action steps, provide more anticipatory information about the action course than moderately congruent COs. In line with our hypothesis that especially the inferior frontal gyrus (IFG) subserves the integration of the additional information into the predictive model of the action, we found highly congruent and incongruent COs to increase bilateral activity in action observation nodes, that is, the IFG, the occipitotemporal cortex, and the intraparietal sulcus. Intriguingly, BA 47 was significantly more strongly engaged for incongruent COs, reflecting the updating of predictions in response to conflicting information. Our findings imply that the IFG reflects the informative impact of COs on observed actions by using contextual information to supply and update the currently operating predictive model. In the case of an incongruent CO, this model has to be reconsidered and extended toward a new overarching action goal.
Affiliation(s)
- Nadiya El-Sourani
- Westfälische Wilhelms-Universität, Münster
- University Hospital Cologne and University of Cologne
- Gereon R. Fink
- University Hospital Cologne and University of Cologne
- Institute of Neuroscience and Medicine (INM3: Cognitive Neuroscience), Research Centre Jülich
- Ricarda I. Schubotz
- Westfälische Wilhelms-Universität, Münster
- University Hospital Cologne and University of Cologne

6. Okamoto Y, Kitada R, Miyahara M, Kochiyama T, Naruse H, Sadato N, Okazawa H, Kosaka H. Altered perspective-dependent brain activation while viewing hands and associated imitation difficulties in individuals with autism spectrum disorder. Neuroimage Clin 2018;19:384-395. PMID: 30035023. PMCID: PMC6051493. DOI: 10.1016/j.nicl.2018.04.030.
Abstract
Background: Individuals with autism spectrum disorder (ASD) appear to have a unique awareness of their own body, which may be associated with difficulties of gestural interaction. In typically developing (TD) individuals, the perception of body parts is processed in various brain regions. For instance, activation of the lateral occipito-temporal cortex (LOTC) is known to depend on perspective (i.e., first- or third-person perspective) and identity (i.e., own vs. another person's body). In the present study, we examined how perspective and identity affect brain activation in individuals with ASD, and how perspective- and identity-dependent brain activation is associated with gestural imitation abilities.
Methods: Eighteen young adults with ASD and 18 TD individuals participated in an fMRI study in which the participants observed their own or another person's hands from the first- and third-person perspectives. We examined whether the brain activation associated with perspective and identity was altered in individuals with ASD. Furthermore, we identified the brain regions whose activity correlated with gestural imitation difficulties in individuals with ASD.
Results: In the TD group, the left LOTC was more strongly activated by viewing a hand from the third-person perspective compared with the first-person perspective. This perspective effect in the left LOTC was significantly attenuated in the ASD group. We also observed significant group differences in the perspective effect in the medial prefrontal cortex (mPFC). Correlation analysis revealed that the perspective effect in the inferior parietal lobule (IPL) and cerebellum was associated with the gestural imitation ability in individuals with ASD.
Conclusions: Our study suggests that atypical visual self-body recognition in individuals with ASD is associated with an altered perspective effect in the LOTC and mPFC, which are thought to be involved in the physical and core selves, respectively. Furthermore, the gestural imitation difficulty in individuals with ASD might be associated with altered activation in the IPL and cerebellum, but not in the LOTC. These findings shed light on common and divergent neural mechanisms underlying atypical visual self-body awareness and gestural interaction in ASD.
Key Words
- ACC, anterior cingulate cortex
- AQ, autism spectrum quotient
- ASD, autism spectrum disorder
- Autism spectrum disorder
- CMS, cortical midline structure
- Cerebellum
- DISCO, Diagnostic Interview for Social and Communication Disorders
- EBA, extrastriate body area
- FISQ, full-scale intelligence quotient
- Functional magnetic resonance imaging
- IOG, inferior occipital gyrus
- IPL, inferior parietal lobule
- IQ, intelligence quotient
- Imitation
- Inferior parietal lobule
- LOTC, lateral occipito-temporal cortex
- Lateral occipito-temporal cortex
- MFG, middle frontal gyrus
- MNS, mirror neuron system
- MOG, middle occipital gyrus
- SRS, social responsiveness scale
- TD, typically developing
- ULS, upper limb sensitive
- Visual self-body recognition
- mPFC, medial prefrontal cortex
Affiliation(s)
- Yuko Okamoto
- ATR-Promotions, Brain Activity Imaging Center, Kyoto, Japan; Research Center for Child Mental Development, University of Fukui, Fukui, Japan.
- Ryo Kitada
- Division of Psychology, School of Social Sciences, Nanyang Technological University, Singapore.
- Motohide Miyahara
- School of Physical Education, Sport and Exercise Sciences, University of Otago, Dunedin, New Zealand.
- Takanori Kochiyama
- Research Center for Child Mental Development, University of Fukui, Fukui, Japan.
- Hiroaki Naruse
- Division of Physical Therapy and Rehabilitation, University of Fukui Hospital, Fukui, Japan.
- Norihiro Sadato
- Department of Cerebral Research, Division of Cerebral Integration, National Institute for Physiological Sciences, Aichi, Japan; Department of Physiological Sciences, School of Life Science, The Graduate University for Advanced Studies, Kanagawa, Japan.
- Hidehiko Okazawa
- Biomedical Imaging Research Center, University of Fukui, Fukui, Japan.
- Hirotaka Kosaka
- Research Center for Child Mental Development, University of Fukui, Fukui, Japan; Department of Neuropsychiatry, University of Fukui, Fukui, Japan.

7. El-Sourani N, Wurm MF, Trempler I, Fink GR, Schubotz RI. Making sense of objects lying around: How contextual objects shape brain activity during action observation. Neuroimage 2017;167:429-437. PMID: 29175612. DOI: 10.1016/j.neuroimage.2017.11.047.
Abstract
Action recognition involves not only the readout of body movements and involved objects but also the integration of contextual information, e.g. the environment in which an action takes place. Notably, inferring superordinate goals and generating predictions about forthcoming action steps should benefit from screening the actor's immediate environment, in particular objects located in the actor's peripersonal space and thus potentially used in following action steps. Critically, if such contextual objects (COs) afford actions that are semantically related to the observed action, they may trigger or facilitate the inference of goals and the prediction of following actions. This fMRI study investigated the neural mechanisms underlying the integration of COs in semantic and spatial relation to observed actions. Specifically, we tested the hypothesis that the inferior frontal gyrus (IFG) subserves this integration. Participants observed action videos in which COs and observed actions had common overarching goals or not (goal affinity) and varied in their location relative to the actor. High goal affinity increased bilateral activity in action observation network nodes, i.e. the occipitotemporal cortex and the intraparietal sulcus, but also in the precuneus and middle frontal gyri. This finding suggests that the semantic relation between COs and actions is considered during action observation and triggers (rather than facilitates) processes beyond those usually involved in action observation. Moreover, COs with high goal affinity located close to the actor's dominant hand additionally engaged bilateral IFG, corroborating the view that IFG is critically involved in the integration of action steps under a common overarching goal.
Affiliation(s)
- Nadiya El-Sourani
- Department of Psychology, Westfälische Wilhelms-Universität, 48149 Münster, Germany; Institute of Neuroscience and Medicine (INM3), Cognitive Neuroscience, Research Centre Jülich, 52425 Jülich, Germany.
- Moritz F Wurm
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Rovereto, TN, Italy
- Ima Trempler
- Department of Psychology, Westfälische Wilhelms-Universität, 48149 Münster, Germany
- Gereon R Fink
- Institute of Neuroscience and Medicine (INM3), Cognitive Neuroscience, Research Centre Jülich, 52425 Jülich, Germany; Department of Neurology, University Hospital Cologne, 50937 Cologne, Germany
- Ricarda I Schubotz
- Department of Psychology, Westfälische Wilhelms-Universität, 48149 Münster, Germany; Institute of Neuroscience and Medicine (INM3), Cognitive Neuroscience, Research Centre Jülich, 52425 Jülich, Germany

8. Dance expertise modulates visual sensitivity to complex biological movements. Neuropsychologia 2017;104:168-181. DOI: 10.1016/j.neuropsychologia.2017.08.019.

9. Okamoto Y, Kosaka H, Kitada R, Seki A, Tanabe HC, Hayashi MJ, Kochiyama T, Saito DN, Yanaka HT, Munesue T, Ishitobi M, Omori M, Wada Y, Okazawa H, Koeda T, Sadato N. Age-dependent atypicalities in body- and face-sensitive activation of the EBA and FFA in individuals with ASD. Neurosci Res 2017;119:38-52. DOI: 10.1016/j.neures.2017.02.001.

10. Baldassano C, Beck DM, Fei-Fei L. Human-Object Interactions Are More than the Sum of Their Parts. Cereb Cortex 2017;27:2276-2288. PMID: 27073216. DOI: 10.1093/cercor/bhw077.
Abstract
Understanding human-object interactions is critical for extracting meaning from everyday visual scenes and requires integrating complex relationships between human pose and object identity into a new percept. To understand how the brain builds these representations, we conducted 2 fMRI experiments in which subjects viewed humans interacting with objects, noninteracting human-object pairs, and isolated humans and objects. A number of visual regions process features of human-object interactions, including object identity information in the lateral occipital complex (LOC) and parahippocampal place area (PPA), and human pose information in the extrastriate body area (EBA) and posterior superior temporal sulcus (pSTS). Representations of human-object interactions in some regions, such as the posterior PPA (retinotopic maps PHC1 and PHC2) are well predicted by a simple linear combination of the response to object and pose information. Other regions, however, especially pSTS, exhibit representations for human-object interaction categories that are not predicted by their individual components, indicating that they encode human-object interactions as more than the sum of their parts. These results reveal the distributed networks underlying the emergent representation of human-object interactions necessary for social perception.
Affiliation(s)
- Diane M Beck
- Department of Psychology and Beckman Institute, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
- Li Fei-Fei
- Department of Computer Science, Stanford University, Stanford, CA 94305, USA

11. Greene MR, Baldassano C, Esteva A, Beck DM, Fei-Fei L. Visual scenes are categorized by function. J Exp Psychol Gen 2016;145:82-94. PMID: 26709590. DOI: 10.1037/xge0000129.
Abstract
How do we know that a kitchen is a kitchen by looking? Traditional models posit that scene categorization is achieved through recognizing necessary and sufficient features and objects, yet there is little consensus about what these may be. However, scene categories should reflect how we use visual information. Therefore, we test the hypothesis that scene categories reflect functions, or the possibilities for actions within a scene. Our approach is to compare human categorization patterns with predictions made by both functions and alternative models. We collected a large-scale scene category distance matrix (5 million trials) by asking observers to simply decide whether 2 images were from the same or different categories. Using the actions from the American Time Use Survey, we mapped actions onto each scene (1.4 million trials). We found a strong relationship between ranked category distance and functional distance (r = .50, or 66% of the maximum possible correlation). The function model outperformed alternative models of object-based distance (r = .33), visual features from a convolutional neural network (r = .39), lexical distance (r = .27), and models of visual features. Using hierarchical linear regression, we found that functions captured 85.5% of overall explained variance, with nearly half of the explained variance captured only by functions, implying that the predictive power of alternative models was because of their shared variance with the function-based model. These results challenge the dominant school of thought that visual features and objects are sufficient for scene categorization, suggesting instead that a scene's category may be determined by the scene's function.
Affiliation(s)
- Andre Esteva
- Department of Electrical Engineering, Stanford University
- Diane M Beck
- Department of Psychology, University of Illinois at Urbana-Champaign
- Li Fei-Fei
- Department of Computer Science, Stanford University

12. Enea V, Iancu S.
Abstract
Processing emotional body expressions has recently become an important topic in affective and social neuroscience, alongside the investigation of facial expressions. The objective of this study is to review the literature on emotional body expressions in order to discuss the current state of knowledge on this topic and identify directions for future research. The following electronic databases were searched: PsycINFO, Ebsco, ERIC, ProQuest, Sagepub, and SCOPUS, using terms such as "body," "bodily expression," "body perception," "emotions," "posture," and "body recognition," and combinations of them. The synthesis revealed several research questions that were addressed in neuroimaging, electrophysiological, and behavioral studies. Among them, one important question targeted the neural mechanisms of the emotional processing of body expressions, including the time course for the integration of emotional signals from face and body as well as the role of context in the perception of emotional signals. Processing bodily expressions of emotion is similar to processing facial expressions, and holistic processing is extended to the whole person. The current state of the art in processing emotional body expressions may lead to a better understanding of the underlying neural mechanisms of social behavior. At the end of the review, suggestions for future research directions are presented.
Affiliation(s)
- Violeta Enea
- Department of Psychology, Faculty of Psychology and Education Sciences, "Alexandru Ioan Cuza" University of Iași, Iași, România
- Sorina Iancu
- Department of Psychology, Faculty of Psychology and Education Sciences, "Alexandru Ioan Cuza" University of Iași, Iași, România

13. Wurm MF, Ariani G, Greenlee MW, Lingnau A. Decoding Concrete and Abstract Action Representations During Explicit and Implicit Conceptual Processing. Cereb Cortex 2015. DOI: 10.1093/cercor/bhv169.

14. Quadflieg S, Gentile F, Rossion B. The neural basis of perceiving person interactions. Cortex 2015;70:5-20. PMID: 25697049. DOI: 10.1016/j.cortex.2014.12.020.
Abstract
This study examined whether the grouping of people into meaningful social scenes (e.g., two people having a chat) impacts the basic perceptual analysis of each partaking individual. To explore this issue, we measured neural activity using functional magnetic resonance imaging (fMRI) while participants sex-categorized congruent as well as incongruent person dyads (i.e., two people interacting in a plausible or implausible manner). Incongruent person dyads elicited enhanced neural processing in several high-level visual areas dedicated to face and body encoding and in the posterior middle temporal gyrus compared to congruent person dyads. Incongruent and congruent person scenes were also successfully differentiated by a linear multivariate pattern classifier in the right fusiform body area and the left extrastriate body area. Finally, increases in the person scenes' meaningfulness, as judged by independent observers, were accompanied by enhanced activity in the bilateral posterior insula. These findings demonstrate that the processing of person scenes goes beyond a mere stimulus-bound encoding of their partaking agents, suggesting that changes in relations between agents affect their representation in category-selective regions of the visual cortex and beyond.
Affiliation(s)
- Susanne Quadflieg
- School of Experimental Psychology, University of Bristol, UK; Division of Psychology, New York University Abu Dhabi, UAE.
- Francesco Gentile
- Psychological Sciences Research Institute and Institute of Neuroscience, University of Louvain, Louvain-la-Neuve, Belgium; Department of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Bruno Rossion
- Psychological Sciences Research Institute and Institute of Neuroscience, University of Louvain, Louvain-la-Neuve, Belgium

15. Alia-Klein N, Wang GJ, Preston-Campbell RN, Moeller SJ, Parvaz MA, Zhu W, Jayne MC, Wong C, Tomasi D, Goldstein RZ, Fowler JS, Volkow ND. Reactions to media violence: it's in the brain of the beholder. PLoS One 2014;9:e107260. PMID: 25208327. PMCID: PMC4160225. DOI: 10.1371/journal.pone.0107260.
Abstract
Media portraying violence is part of daily exposures. The extent to which violent media exposure impacts brain and behavior has been debated. Yet there is not enough experimental data to inform this debate. We hypothesize that reaction to violent media is critically dependent on personality/trait differences between viewers, where those with the propensity for physical assault will respond to the media differently than controls. The source of the variability, we further hypothesize, is reflected in autonomic response and brain functioning that differentiate those with aggression tendencies from others. To test this hypothesis we pre-selected a group of aggressive individuals and non-aggressive controls from the normal healthy population; we documented brain, blood-pressure, and behavioral responses during resting baseline and while the groups were watching media violence and emotional media that did not portray violence. Positron Emission Tomography was used with [18F]fluoro-deoxyglucose (FDG) to image brain metabolic activity, a marker of brain function, during rest and during film viewing while blood-pressure and mood ratings were intermittently collected. Results pointed to robust resting baseline differences between groups. Aggressive individuals had lower relative glucose metabolism in the medial orbitofrontal cortex, correlating with poor self-control, and greater glucose metabolism in other regions of the default-mode network (DMN), where precuneus correlated with negative emotionality. These brain results were similar while watching the violent media, during which aggressive viewers reported being more Inspired and Determined and less Upset and Nervous, and also showed a progressive decline in systolic blood-pressure compared to controls. Furthermore, the blood-pressure and brain activation in orbitofrontal cortex and precuneus were differentially coupled between the groups. These results demonstrate that individual differences in trait aggression strongly couple with brain, behavioral, and autonomic reactivity to media violence, which should factor into debates about the impact of media violence on the public.
Affiliation(s)
- Nelly Alia-Klein
- Department of Psychiatry, Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, New York, United States of America
- Department of Neuroscience, Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, New York, United States of America
- Gene-Jack Wang
- Department of Psychiatry, Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, New York, United States of America
- Laboratory of Neuroimaging, National Institute on Alcohol Abuse and Alcoholism, Bethesda, Maryland, United States of America
- Rebecca N. Preston-Campbell
- Department of Psychiatry, Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, New York, United States of America
- Scott J. Moeller
- Department of Psychiatry, Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, New York, United States of America
- Department of Neuroscience, Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, New York, United States of America
- Muhammad A. Parvaz
- Department of Psychiatry, Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, New York, United States of America
- Wei Zhu
- Applied Mathematics and Statistics, SUNY, Stony Brook, New York, United States of America
- Millard C. Jayne
- Laboratory of Neuroimaging, National Institute on Alcohol Abuse and Alcoholism, Bethesda, Maryland, United States of America
- Chris Wong
- Laboratory of Neuroimaging, National Institute on Alcohol Abuse and Alcoholism, Bethesda, Maryland, United States of America
- Dardo Tomasi
- Laboratory of Neuroimaging, National Institute on Alcohol Abuse and Alcoholism, Bethesda, Maryland, United States of America
- Rita Z. Goldstein
- Department of Psychiatry, Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, New York, United States of America
- Department of Neuroscience, Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, New York, United States of America
- Joanna S. Fowler
- Medical Department, Brookhaven National Laboratory, Upton, New York, United States of America
- Nora D. Volkow
- Laboratory of Neuroimaging, National Institute on Alcohol Abuse and Alcoholism, Bethesda, Maryland, United States of America

16. Wold A, Limanowski J, Walter H, Blankenburg F. Proprioceptive drift in the rubber hand illusion is intensified following 1 Hz TMS of the left EBA. Front Hum Neurosci 2014;8:390. PMID: 24926247. PMCID: PMC4045244. DOI: 10.3389/fnhum.2014.00390.
Abstract
The rubber hand illusion (RHI) is a paradigm used to induce an illusory feeling of owning a dummy hand through congruent multisensory stimulation. Thus, it can grant insights into how our brain represents our body as our own. Recent research has demonstrated an involvement of the extrastriate body area (EBA), an area of the brain that is typically implicated in the perception of non-face body parts, in illusory body ownership. In this experiment, we sought causal evidence for the involvement of the EBA in the RHI. Sixteen participants took part in a sham-controlled, 1 Hz repetitive transcranial magnetic stimulation (rTMS) experiment. Participants received synchronous (RHI condition) or asynchronous (control) stroking and were asked to report the perceived location of their real hand, as well as the intensity and the temporal onset of experienced ownership of the dummy hand. Following rTMS of the left EBA, participants misjudged their real hand's location significantly more toward the dummy hand during the RHI than after sham stimulation. This difference in "proprioceptive drift" provides the first causal evidence that the EBA is involved in the RHI and subsequently in body representation and further supports the view that the EBA is necessary for multimodal integration.
Affiliation(s)
- Andrew Wold
- Berlin School of Mind and Brain, Humboldt University Berlin, Germany; Neurocomputation and Neuroimaging Unit, Freie Universität Berlin, Berlin, Germany; Division of Mind and Brain Research, Charité University of Medicine Berlin, Germany
- Jakub Limanowski
- Berlin School of Mind and Brain, Humboldt University Berlin, Germany; Neurocomputation and Neuroimaging Unit, Freie Universität Berlin, Berlin, Germany
- Henrik Walter
- Berlin School of Mind and Brain, Humboldt University Berlin, Germany; Division of Mind and Brain Research, Charité University of Medicine Berlin, Germany
- Felix Blankenburg
- Berlin School of Mind and Brain, Humboldt University Berlin, Germany; Neurocomputation and Neuroimaging Unit, Freie Universität Berlin, Berlin, Germany

17. Watson CE, Cardillo ER, Bromberger B, Chatterjee A. The specificity of action knowledge in sensory and motor systems. Front Psychol 2014;5:494. PMID: 24904506. PMCID: PMC4033265. DOI: 10.3389/fpsyg.2014.00494.
Abstract
Neuroimaging studies have found that sensorimotor systems are engaged when participants observe actions or comprehend action language. However, most of these studies have asked the binary question of whether action concepts are embodied or not, rather than whether sensory and motor areas of the brain contain graded amounts of information during putative action simulations. To address this question, we used repetition suppression (RS) functional magnetic resonance imaging to determine if functionally localized motor movement and visual motion regions of interest (ROIs) and two anatomical ROIs (inferior frontal gyrus, IFG; left posterior middle temporal gyrus, pMTG) were sensitive to changes in the exemplar (e.g., two different people "kicking") or representational format (e.g., photograph or schematic drawing of someone "kicking") within pairs of action images. We also investigated whether concrete versus more symbolic depictions of actions (i.e., photographs or schematic drawings) yielded different patterns of activation throughout the brain. We found that during a conceptual task, sensory and motor systems represent actions at different levels of specificity. While the visual motion ROI did not exhibit RS to different exemplars of the same action or to the same action depicted in different formats, the motor movement ROI did. These effects are consistent with "person-specific" action simulations: if the motor system is recruited for action understanding, it does so by activating one's own motor program for an action. We also observed significant repetition enhancement within the IFG ROI to different exemplars or formats of the same action, a result that may indicate additional cognitive processing on these trials. Finally, we found that the recruitment of posterior brain regions by action concepts depends on the format of the input: the left lateral occipital cortex and right supramarginal gyrus responded more strongly to symbolic depictions of actions than to concrete ones.
Affiliation(s)
- Christine E. Watson
- Moss Rehabilitation Research Institute, Einstein Healthcare Network, Elkins Park, PA, USA
- Department of Neurology and Center for Cognitive Neuroscience, University of Pennsylvania, Philadelphia, PA, USA
- Eileen R. Cardillo
- Department of Neurology and Center for Cognitive Neuroscience, University of Pennsylvania, Philadelphia, PA, USA
- Bianca Bromberger
- Department of Neurology and Center for Cognitive Neuroscience, University of Pennsylvania, Philadelphia, PA, USA
- Anjan Chatterjee
- Department of Neurology and Center for Cognitive Neuroscience, University of Pennsylvania, Philadelphia, PA, USA

18. Hafri A, Papafragou A, Trueswell JC. Getting the gist of events: recognition of two-participant actions from brief displays. J Exp Psychol Gen 2013;142:880-905. PMID: 22984951. PMCID: PMC3657301. DOI: 10.1037/a0030045.
Abstract
Unlike rapid scene and object recognition from brief displays, little is known about recognition of event categories and event roles from minimal visual information. In 3 experiments, we displayed naturalistic photographs of a wide range of 2-participant event scenes for 37 ms and 73 ms followed by a mask, and found that event categories (the event gist; e.g., "kicking," "pushing") and event roles (i.e., Agent and Patient) can be recognized rapidly, even with various actor pairs and backgrounds. Norming ratings from a subsequent experiment revealed that certain physical features (e.g., outstretched extremities) that correlate with Agent-hood could have contributed to rapid role recognition. In a final experiment, using identical twin actors, we then varied these features in 2 sets of stimuli, in which Patients had Agent-like features or not. Subjects recognized the roles of event participants less accurately when Patients possessed Agent-like features, with this difference being eliminated with 2-s durations. Thus, given minimal visual input, typical Agent-like physical features are used in role recognition, but with sufficient input from multiple fixations, people categorically determine the relationship between event participants.
Affiliation(s)
- Alon Hafri
- Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104-6241, USA.

19. Debruille JB, Brodeur MB, Franco Porras C. N300 and social affordances: a study with a real person and a dummy as stimuli. PLoS One 2012;7:e47922. PMID: 23118908. PMCID: PMC3485319. DOI: 10.1371/journal.pone.0047922.
Abstract
Pictures of objects have been shown to automatically activate affordances, that is, actions that could be performed with the object. Similarly, pictures of faces are likely to activate social affordances, that is, interactions that would be possible with the person whose face is being presented. Most interestingly, if it is the face of a real person that is shown, one particular type of social interaction can even be carried out while event-related potentials (ERPs) are recorded. Indeed, subtle eye movements can be made to achieve eye contact with the person with minimal artefacts on the EEG. The present study thus used the face of a real person to explore the electrophysiological correlates of affordances in a situation where some of them (i.e., eye contacts) are actually performed. The ERPs this person elicited were compared to those evoked by another 3D stimulus: a real dummy, and thus by a stimulus that should also automatically activate eye contact affordances but with which such affordances could then be inhibited, since they cannot be carried out with an object. The photos of the person and of the dummy were used as matching stimuli that should not activate social affordances as strongly as the two 3D stimuli and for which social affordances cannot be carried out. The fronto-central N300s to the real dummy were found to be of greater amplitude than those to the photos and to the real person. We propose that these greater N300s index the greater inhibition needed after the stronger activations of affordances induced by this 3D stimulus than by the photos. Such an inhibition would not have occurred in the case of the real person because eye contacts were carried out.
Affiliation(s)
- J Bruno Debruille
- Douglas Mental Health University Institute, Montréal, Québec, Canada.

20. Herrington J, Nymberg C, Faja S, Price E, Schultz R. The responsiveness of biological motion processing areas to selective attention towards goals. Neuroimage 2012;63:581-590. PMID: 22796987. DOI: 10.1016/j.neuroimage.2012.06.077.
Abstract
A growing literature indicates that visual cortex areas viewed as primarily responsive to exogenous stimuli are susceptible to top-down modulation by selective attention. The present study examines whether brain areas involved in biological motion perception are among these areas, particularly with respect to selective attention towards human movement goals. Fifteen participants completed a point-light biological motion study following a two-by-two factorial design, with one factor representing an exogenous manipulation of human movement goals (goal-directed versus random movement), and the other an endogenous manipulation (a goal identification task versus an ancillary color-change task). Both manipulations yielded increased activation in the human homologue of motion-sensitive area MT+ (hMT+) as well as the extrastriate body area (EBA). The endogenous manipulation was associated with increased right posterior superior temporal sulcus (STS) activation, whereas the exogenous manipulation was associated with increased activation in left posterior STS. Selective attention towards goals activated a portion of left hMT+/EBA only during the perception of purposeful movement, consistent with emerging theories associating this area with the matching of visual motion input to known goal-directed actions. The overall pattern of results indicates that attention towards the goals of human movement activates biological motion areas. Ultimately, selective attention may explain why some studies examining biological motion show activation in hMT+ and EBA, even when using control stimuli with comparable motion properties.
Affiliation(s)
- John Herrington
- Children's Hospital of Philadelphia, Philadelphia, PA 19104, USA.

21. Downing PE, Peelen MV. The role of occipitotemporal body-selective regions in person perception. Cogn Neurosci 2011;2:186-203. PMID: 24168534. DOI: 10.1080/17588928.2011.582945.

22. Attention, biological motion, and action recognition. Neuroimage 2011;59:4-13. PMID: 21640836. DOI: 10.1016/j.neuroimage.2011.05.044.
Abstract
Interacting with others in the environment requires that we perceive and recognize their movements and actions. Neuroimaging and neuropsychological studies have indicated that a number of brain regions, particularly the superior temporal sulcus, are involved in several processes essential for action recognition, including the processing of biological motion and of the intentions behind actions. We review the behavioral and neuroimaging evidence suggesting that while some aspects of action recognition might be rapid and effective, they are not necessarily automatic. Attention is particularly important when visual information about actions is degraded or ambiguous, or if competing information is present. We present evidence indicating that neural responses associated with the processing of biological motion are strongly modulated by attention. In addition, behavioral and neuroimaging evidence shows that drawing inferences from the actions of others is attentionally demanding. The role of attention in action observation has implications for everyday social interactions and workplace applications that depend on observing, understanding and interpreting actions.