1
Chouinard B, Pesquita A, Enns JT, Chapman CS. Processing of visual social-communication cues during a social-perception of action task in autistic and non-autistic observers. Neuropsychologia 2024; 198:108880. PMID: 38555063. DOI: 10.1016/j.neuropsychologia.2024.108880.
Abstract
Social perception and communication differ between those with and without autism, even when verbal fluency and intellectual ability are equated. Previous work found that observers responded more quickly to an actor's points if the actor had chosen where to point rather than being directed where to point. Notably, this 'choice-advantage' effect decreased across non-autistic participants as the number of autistic-like traits and tendencies increased (Pesquita et al., 2016). Here, we build on that work using the same task to study individuals over a broader range of the spectrum, from autistic to non-autistic, measuring both response initiation and mouse movement times, and considering the response to each actor separately. Autistic and non-autistic observers viewed videos of three different actors pointing to one of two locations, without knowing that the actors were sometimes freely choosing where to point and at other times being directed where to point. All observers exhibited a choice-advantage overall, meaning they responded more rapidly when actors were freely choosing than when they were directed, indicating a sensitivity to the actors' postural cues and movements. Our fine-grained analyses found a more robust choice-advantage in response to some actors than to others, with autistic observers showing a choice-advantage only in response to one of the actors, suggesting that both actor and observer characteristics influence the overall effect. We briefly explore actor characteristics that may have contributed to this effect, finding that both the duration of exposure to pre-movement cues and the kinematic cues of the actors likely influence the choice-advantage to different degrees across the groups. Altogether, the evidence suggested that both autistic and non-autistic individuals could detect the choice-advantage signal, but that for autistic observers the choice-advantage was actor-specific. Notably, we found that the influence of the signal, when present, was detected early for all actors by the non-autistic observers, but later and only for one actor by the autistic observers. Altogether, we have more accurately characterized social perception in autistic individuals as intact, while highlighting that detection of the signal is likely delayed or distributed compared with non-autistic observers, and that it is important to investigate the actor characteristics that may influence detection and use of social-perception signals.
Affiliation(s)
- J T Enns
- University of British Columbia, Canada
2
Vikhanova A, Tibber MS, Mareschal I. Post-migration living difficulties and poor mental health associated with increased interpretation bias for threat. Q J Exp Psychol (Hove) 2024; 77:1154-1168. PMID: 37477179. PMCID: PMC11103921. DOI: 10.1177/17470218231191442.
Abstract
Previous research has found associations between mental health difficulties and interpretation biases, including heightened interpretation of threat from neutral or ambiguous stimuli. Building on this research, we explored associations between interpretation biases (positive and negative) and three constructs that have been linked to migrant experience: mental health symptoms (Global Severity Index [GSI]), Post-Migration Living Difficulties (PMLD), and the Perceived Ethnic Discrimination Questionnaire (PEDQ). Two hundred thirty students who identified as first- (n = 94) or second-generation ethnic minority migrants (n = 68), and first-generation White migrants (n = 68), completed measures of GSI, PEDQ, and PMLD. They also performed an interpretation bias task using Point Light Walkers (PLW), dynamic stimuli with reduced visual input that are easily perceived as humans performing an action. Five categories of PLW were used: four that clearly depicted human forms undertaking positive, neutral, negative, or ambiguous actions, and a fifth that involved scrambled animations with no clear action or form. Participants were asked to imagine their interaction with the stimuli and rate their friendliness (positive interpretation bias) and aggressiveness (interpretation bias for threat). We found that the three groups differed on PEDQ and PMLD, with no significant differences in GSI, and the three measures were positively correlated. Poorer mental health and increased PMLD were associated with a heightened interpretation of threat from scrambled animations only. These findings have implications for understanding the role of threat biases in mental health and the migrant experience.
Affiliation(s)
- Anastasia Vikhanova
- Department of Psychology, School of Biological and Chemical Sciences, Queen Mary University of London, London, UK
- Marc S Tibber
- Research Department of Clinical, Educational and Health Psychology, University College London, London, UK
- Isabelle Mareschal
- Department of Psychology, School of Biological and Chemical Sciences, Queen Mary University of London, London, UK
3
Roberti E, Turati C, Actis-Grosso R. Single point motion kinematics convey emotional signals in children and adults. PLoS One 2024; 19:e0301896. PMID: 38598520. PMCID: PMC11006184. DOI: 10.1371/journal.pone.0301896.
Abstract
This study investigates whether humans recognize different emotions conveyed only by the kinematics of a single moving geometrical shape, and how this competence unfolds during development, from childhood to adulthood. To this aim, animations in which a shape moved according to happy, fearful, or neutral cartoons were shown, in a forced-choice paradigm, to 7- and 10-year-old children and adults. Accuracy and response times were recorded, and the movement of the mouse while participants selected a response was tracked. Results showed that 10-year-old children and adults recognize happiness and fear when conveyed solely by different kinematics, with an advantage for fearful stimuli. Fearful stimuli were also accurately identified by 7-year-olds, together with neutral stimuli, while at this age accuracy for happiness was not significantly different from chance. Overall, the results demonstrate that emotions can be identified from single-point motion alone during both childhood and adulthood. Moreover, motion contributes in varying measure to the comprehension of emotions, with fear recognized earlier in development and more readily even later on, when all emotions are accurately labeled.
Affiliation(s)
- Elisa Roberti
- Psychology Department, University of Milano–Bicocca, Milan, Italy
- Neuromi, Milan Center for Neuroscience, Milan, Italy
- Chiara Turati
- Psychology Department, University of Milano–Bicocca, Milan, Italy
- Neuromi, Milan Center for Neuroscience, Milan, Italy
- Rossana Actis-Grosso
- Psychology Department, University of Milano–Bicocca, Milan, Italy
- Neuromi, Milan Center for Neuroscience, Milan, Italy
4
Walsh E, Whitby J, Chen YY, Longo MR. No influence of emotional expression on size underestimation of upright faces. PLoS One 2024; 19:e0293920. PMID: 38300951. PMCID: PMC10833517. DOI: 10.1371/journal.pone.0293920.
Abstract
Faces are a primary means of conveying social information between humans. One important factor modulating the perception of human faces is emotional expression. Face inversion also affects perception, including judgments of emotional expression, possibly through the disruption of configural processing. One intriguing inversion effect is an illusion whereby faces appear to be physically smaller when upright than when inverted. This illusion appears to be highly selective for faces. In this study, we investigated whether the emotional expression of a face (neutral, happy, afraid, and angry) modulates the magnitude of this size illusion. Results showed that for all four expressions, there was a clear bias for inverted stimuli to be judged as larger than upright ones. This demonstrates that there is no influence of emotional expression on the size underestimation of upright faces, a surprising result given that recognition of different emotional expressions is known to be affected unevenly by inversion. Results are discussed in the light of recent neuroimaging research that used population receptive field (pRF) mapping to investigate the neural mechanisms underlying face perception, which may explain how an upright face comes to appear smaller than an inverted one. Elucidation of this effect would lead to a greater understanding of how humans communicate.
Affiliation(s)
- Eamonn Walsh
- Department of Basic & Clinical Neuroscience, Institute of Psychiatry, Psychology & Neuroscience, King’s College London, London, United Kingdom
- Cultural and Social Neuroscience Research Group, Institute of Psychiatry, Psychology & Neuroscience, King’s College London, London, United Kingdom
- Jack Whitby
- Department of Basic & Clinical Neuroscience, Institute of Psychiatry, Psychology & Neuroscience, King’s College London, London, United Kingdom
- Yen-Ya Chen
- Department of Basic & Clinical Neuroscience, Institute of Psychiatry, Psychology & Neuroscience, King’s College London, London, United Kingdom
- Matthew R. Longo
- Department of Psychological Sciences, Birkbeck, University of London, London, United Kingdom
5
Christensen A, Taubert N, Huis in ’t Veld EM, de Gelder B, Giese MA. Perceptual encoding of emotions in interactive bodily expressions. iScience 2024; 27:108548. PMID: 38161419. PMCID: PMC10755352. DOI: 10.1016/j.isci.2023.108548.
Abstract
For social species such as primates, the perceptual analysis of social interactions is an essential skill for survival, emerging early in development. While real-life emotional behavior consists predominantly of interactions between conspecifics, research on the perception of emotional body expressions has focused primarily on the perception of single individuals. Although previous studies using point-light or video stimuli of interacting people suggest an influence of social context on the perception and neural encoding of interacting bodies, it remains entirely unknown how the emotions of multiple interacting agents are perceptually integrated. We studied this question using computer animation, creating scenes with two interacting avatars whose emotional styles were independently controlled. While participants had to report the emotional style of a single agent, we found a systematic influence of the emotion expressed by the other agent, consistent with the social interaction context. The emotional styles of interacting individuals are thus jointly encoded.
Affiliation(s)
- Andrea Christensen
- Section Computational Sensomotorics, Department of Cognitive Neurology, Hertie-Institute for Clinical Brain Research, Centre for Integrative Neuroscience, University Clinic Tübingen, Germany
- Nick Taubert
- Section Computational Sensomotorics, Department of Cognitive Neurology, Hertie-Institute for Clinical Brain Research, Centre for Integrative Neuroscience, University Clinic Tübingen, Germany
- Elisabeth M.J. Huis in ’t Veld
- Department of Medical and Clinical Psychology, School of Social and Behavioral Sciences, Tilburg University, Tilburg, the Netherlands
- Beatrice de Gelder
- Brain and Emotion Laboratory, Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Oxfordlaan 55, EV Maastricht 6229, the Netherlands
- Martin A. Giese
- Section Computational Sensomotorics, Department of Cognitive Neurology, Hertie-Institute for Clinical Brain Research, Centre for Integrative Neuroscience, University Clinic Tübingen, Germany
6
Peylo C, Sterner EF, Zeng Y, Friedrich EV. TMS-induced inhibition of the left premotor cortex modulates illusory social perception. iScience 2023; 26:107297. PMID: 37559906. PMCID: PMC10407139. DOI: 10.1016/j.isci.2023.107297.
Abstract
Communicative actions from one person are used to predict another person's response. In some cases, however, these predictions can outweigh the processing of sensory information and lead to illusory social perception, such as seeing two people interact although only one is present (i.e., seeing a Bayesian ghost). We applied either inhibitory brain stimulation over the left premotor cortex (i.e., real TMS) or sham TMS. Then, participants indicated the presence or absence of a masked agent that followed a communicative or individual gesture of another agent. As expected, in the sham TMS session participants had more false alarms (i.e., Bayesian ghosts) in the communicative than in the individual condition, and this difference between conditions vanished after real TMS. In contrast to our hypothesis, the number of false alarms increased (rather than decreased) after real TMS. These pre-registered findings confirm the significance of the premotor cortex for social action predictions and illusory social perception.
Affiliation(s)
- Charline Peylo
- Department of Psychology / Research Unit Biological Psychology, Ludwig-Maximilians-Universität München, Munich, 80802 Bavaria, Germany
- Elisabeth F. Sterner
- Department of Psychology / Research Unit Biological Psychology, Ludwig-Maximilians-Universität München, Munich, 80802 Bavaria, Germany
- Department of Diagnostic and Interventional Neuroradiology / School of Medicine, Technical University of Munich, Munich, 81675 Bavaria, Germany
- Yifan Zeng
- Department of Psychology / Research Unit Biological Psychology, Ludwig-Maximilians-Universität München, Munich, 80802 Bavaria, Germany
- Elisabeth V.C. Friedrich
- Department of Psychology / Research Unit Biological Psychology, Ludwig-Maximilians-Universität München, Munich, 80802 Bavaria, Germany
7
Wang R, Lu X, Jiang Y. Distributed and hierarchical neural encoding of multidimensional biological motion attributes in the human brain. Cereb Cortex 2023; 33:8510-8522. PMID: 37118887. PMCID: PMC10786095. DOI: 10.1093/cercor/bhad136.
Abstract
The human visual system can efficiently extract distinct physical, biological, and social attributes (e.g. facing direction, gender, and emotional state) from biological motion (BM), but how these attributes are encoded in the brain remains largely unknown. In the current study, we used functional magnetic resonance imaging to investigate this issue while participants viewed multidimensional BM stimuli. Using multiple regression representational similarity analysis, we identified distributed brain areas related, respectively, to the processing of facing direction, gender, and emotional state conveyed by BM. These brain areas are governed by a hierarchical structure in which the respective neural encoding of facing direction, gender, and emotional state is modulated by each other in descending order. We further revealed that a portion of the brain areas identified in the representational similarity analysis was specific to the neural encoding of each attribute and correlated with the corresponding behavioral results. These findings unravel the brain networks for encoding BM attributes in consideration of their interactions, and highlight that the processing of multidimensional BM attributes is recurrently interactive.
Affiliation(s)
- Ruidi Wang
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, 16 Lincui Road, Beijing 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing 100049, China
- Chinese Institute for Brain Research, 26 Science Park Road, Beijing 102206, China
- Xiqian Lu
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, 16 Lincui Road, Beijing 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing 100049, China
- Chinese Institute for Brain Research, 26 Science Park Road, Beijing 102206, China
- Yi Jiang
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, 16 Lincui Road, Beijing 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing 100049, China
- Chinese Institute for Brain Research, 26 Science Park Road, Beijing 102206, China
8
Preißler L, Keck J, Krüger B, Munzert J, Schwarzer G. Recognition of emotional body language from dyadic and monadic point-light displays in 5-year-old children and adults. J Exp Child Psychol 2023; 235:105713. PMID: 37331307. DOI: 10.1016/j.jecp.2023.105713.
Abstract
Most child studies on emotion perception used faces and speech as emotion stimuli, but little is known about children's perception of emotions conveyed by body movements, that is, emotional body language (EBL). This study aimed to investigate whether processing advantages for positive emotions in children and negative emotions in adults found in studies on emotional face and term perception also occur in EBL perception. We also aimed to uncover which specific movement features of EBL contribute to emotion perception from interactive dyads compared with noninteractive monads in children and adults. We asked 5-year-old children and adults to categorize happy and angry point-light displays (PLDs), presented as pairs (dyads) and single actors (monads), in a button-press task. By applying representational similarity analyses, we determined intra- and interpersonal movement features of the PLDs and their relation to the participants' emotional categorizations. Results showed significantly higher recognition of happy PLDs in 5-year-olds and of angry PLDs in adults in monads but not in dyads. In both age groups, emotion recognition depended significantly on kinematic and postural movement features such as limb contraction and vertical movement in monads and dyads, whereas in dyads recognition also relied on interpersonal proximity measures such as interpersonal distance. Thus, EBL processing in monads seems to undergo a similar developmental shift from a positivity bias to a negativity bias, as was previously found for emotional faces and terms. Despite these age-specific processing biases, children and adults seem to use similar movement features in EBL processing.
Affiliation(s)
- Lucie Preißler
- Department of Developmental Psychology, Justus Liebig University Giessen, 35394 Gießen, Germany.
- Johannes Keck
- Neuromotor Behavior Lab, Department of Sport Science, Justus Liebig University Giessen, 35394 Gießen, Germany
- Britta Krüger
- Neuromotor Behavior Lab, Department of Sport Science, Justus Liebig University Giessen, 35394 Gießen, Germany
- Jörn Munzert
- Neuromotor Behavior Lab, Department of Sport Science, Justus Liebig University Giessen, 35394 Gießen, Germany
- Gudrun Schwarzer
- Department of Developmental Psychology, Justus Liebig University Giessen, 35394 Gießen, Germany
9
Francisco V, Decatoire A, Bidet-Ildei C. Action observation and motor learning: The role of action observation in learning judo techniques. Eur J Sport Sci 2023; 23:319-329. PMID: 35098899. DOI: 10.1080/17461391.2022.2036816.
Abstract
Within the theoretical framework of embodied cognition, several experiments have shown the existence of links between action observation and motor learning. Our aim was to assess the effectiveness of an observational learning protocol (action observation training: AOT) using point-light displays (PLDs) in judokas. Twenty participants were given 7 days to learn Go-No-Sen. During this period, all participants received conventional kata training consisting of Uchi-komi and Nage-komi (repetition of techniques) on tatami. In addition to this conventional learning, the experimental group watched 5 min of PLD video representing the different kata techniques, whereas the control group watched neutral videos during the same period. After the learning period, both the qualitative and biomechanical performance on the kata and the transfer abilities were assessed. The results showed better biomechanical performance and transfer ability in the experimental group than in the control group. This first experiment therefore suggests that observational learning from PLDs may be beneficial for the acquisition of judo techniques. Future experiments will be needed to specify the mechanisms involved in this effect.
Affiliation(s)
- Victor Francisco
- Centre de Recherches sur la Cognition et l'Apprentissage (UMR CNRS 7295), Centre National de la Recherche Scientifique (CNRS), Université de Poitiers, Université de Tours, Poitiers, France
- Arnaud Decatoire
- Centre National de la Recherche Scientifique, Institut PPRIME (UPR CNRS 3346), Université de Poitiers, Poitiers, France
- Christel Bidet-Ildei
- Centre de Recherches sur la Cognition et l'Apprentissage (UMR CNRS 7295), Centre National de la Recherche Scientifique (CNRS), Université de Poitiers, Université de Tours, Poitiers, France
10
Bidet-Ildei C, Francisco V, Decatoire A, Pylouster J, Blandin Y. PLAViMoP database: A new continuously assessed and collaborative 3D point-light display dataset. Behav Res Methods 2023; 55:694-715. PMID: 35441360. DOI: 10.3758/s13428-022-01850-3.
Abstract
It was more than 45 years ago that Gunnar Johansson invented the point-light display technique. This showed for the first time that kinematics is crucial for action recognition, and that humans are very sensitive to their conspecifics' movements. As a result, many of today's researchers use point-light displays to better understand the mechanisms behind this recognition ability. In this paper, we propose PLAViMoP, a new database of 3D point-light displays representing everyday human actions (global and fine-motor control movements), sports movements, facial expressions, interactions, and robotic movements. Access to the database is free, at https://plavimop.prd.fr/en/motions. Moreover, it incorporates a search engine to facilitate action retrieval. In this paper, we describe the construction, functioning, and assessment of the PLAViMoP database. Each sequence was analyzed according to four parameters: type of movement, movement label, sex of the actor, and age of the actor. We provide both the mean scores for each assessment of each point-light display, and the comparisons between the different categories of sequences. Our results are discussed in the light of the literature and the suitability of our stimuli for research and applications.
Affiliation(s)
- Christel Bidet-Ildei
- Centre de Recherches sur la Cognition et l'Apprentissage (UMR CNRS 7295), Université de Poitiers, Université de Tours, Centre National de la Recherche Scientifique, Poitiers, France.
- MSHS, Bâtiment A5, 5 rue Théodore Lefebvre TSA 21103, 86073, Poitiers, Cedex 9, France.
- Victor Francisco
- Centre de Recherches sur la Cognition et l'Apprentissage (UMR CNRS 7295), Université de Poitiers, Université de Tours, Centre National de la Recherche Scientifique, Poitiers, France
- Arnaud Decatoire
- Institut PPRIME (UPR CNRS 3346), Université de Poitiers, Centre National de la Recherche Scientifique, Poitiers, France
- Jean Pylouster
- Centre de Recherches sur la Cognition et l'Apprentissage (UMR CNRS 7295), Université de Poitiers, Université de Tours, Centre National de la Recherche Scientifique, Poitiers, France
- Yannick Blandin
- Centre de Recherches sur la Cognition et l'Apprentissage (UMR CNRS 7295), Université de Poitiers, Université de Tours, Centre National de la Recherche Scientifique, Poitiers, France
11
Emotion is perceived accurately from isolated body parts, especially hands. Cognition 2023; 230:105260. PMID: 36058103. DOI: 10.1016/j.cognition.2022.105260.
Abstract
Body posture and configuration provide important visual cues about the emotion states of other people. Bodily form is known to be processed holistically; however, emotion recognition may depend on different mechanisms, and certain body parts, such as the hands, may be especially important for perceiving emotion. This study therefore compared participants' emotion recognition performance when shown images of full bodies, or of isolated hands, arms, heads and torsos. Across three experiments, emotion recognition accuracy was above chance for all body parts. While emotions were recognized most accurately from full bodies, recognition performance from the hands was more accurate than for other body parts. Representational similarity analysis further showed that the pattern of errors for the hands was related to that for full bodies. Performance was reduced when stimuli were inverted, showing a clear body inversion effect. The high performance for hands was not due only to the fact that there are two hands, as performance remained well above chance even when just one hand was shown. These results demonstrate that emotions can be decoded from body parts. Furthermore, certain features, such as the hands, are more important to emotion perception than others. STATEMENT OF RELEVANCE: Successful social interaction relies on accurately perceiving emotional information from others. Bodies provide an abundance of emotion cues; however, the way in which emotional bodies and body parts are perceived is unclear. We investigated this perceptual process by comparing emotion recognition for body parts with that for full bodies. Crucially, we found that while emotions were most accurately recognized from full bodies, emotions were also classified accurately when images of isolated hands, arms, heads and torsos were seen. Of the body parts shown, emotion recognition from the hands was most accurate. Furthermore, shared patterns of emotion classification for hands and full bodies suggested that emotion recognition mechanisms are shared for full bodies and body parts. That the hands are key to emotion perception is important evidence in its own right. It could also be applied to interventions for individuals who find it difficult to read emotions from faces and bodies.
12
Bachmann J, Krüger B, Keck J, Munzert J, Zabicki A. When the timing is right: The link between temporal coupling in dyadic interactions and emotion recognition. Cognition 2022; 229:105267. PMID: 36058018. DOI: 10.1016/j.cognition.2022.105267.
Abstract
Affective states can be understood as dynamic interpersonal processes developing over time and space. When we observe emotional interactions performed by other individuals, our visual system anticipates how the action will unfold. Thus, it has been proposed that the process of emotion perception is not only a simulative but also a predictive process - a phenomenon described as interpersonal predictive coding. The present study investigated whether the recognition of emotions from dyadic interactions depends on a fixed spatiotemporal coupling of the agents. We used an emotion recognition task to manipulate the actions of two interacting point-light figures by implementing different temporal offsets that delayed the onset of one of the agent's actions (+0 ms, +500 ms, +1000 ms or + 2000 ms). Participants had to determine both the subjective valence and the emotion category (happiness, anger, sadness, affection) of the interaction. Results showed that temporal decoupling had a critical effect on both emotion recognition and the subjective impression of valence intensity: Both measures decreased with increasing temporal offset. However, these effects were dependent on which emotion was displayed. Whereas affection and anger sequences were impacted by the temporal manipulation, happiness and sadness were not. To further investigate these effects, we conducted post-hoc exploratory analyses of interpersonal movement parameters. Our findings complement and extend previous evidence by showing that the complex, noncoincidental coordination of actions within dyadic interactions results in a meaningful movement pattern and might serve as a fundamental factor in both detecting and understanding complex actions during human interaction.
Affiliation(s)
- Julia Bachmann: Neuromotor Behavior Laboratory, Department of Psychology and Sport Science, Justus Liebig University Giessen, Giessen, Germany
- Britta Krüger: Neuromotor Behavior Laboratory, Department of Psychology and Sport Science, Justus Liebig University Giessen, Giessen, Germany
- Johannes Keck: Neuromotor Behavior Laboratory, Department of Psychology and Sport Science, Justus Liebig University Giessen, Giessen, Germany
- Jörn Munzert: Neuromotor Behavior Laboratory, Department of Psychology and Sport Science, Justus Liebig University Giessen, Giessen, Germany; Center for Mind, Brain and Behavior (CMBB), Philipps University of Marburg and Justus Liebig University Giessen, Germany
- Adam Zabicki: Neuromotor Behavior Laboratory, Department of Psychology and Sport Science, Justus Liebig University Giessen, Giessen, Germany
13
Pavlova MA, Romagnano V, Kubon J, Isernia S, Fallgatter AJ, Sokolov AN. Ties between reading faces, bodies, eyes, and autistic traits. Front Neurosci 2022; 16:997263. PMID: 36248653. PMCID: PMC9554539. DOI: 10.3389/fnins.2022.997263.
Abstract
When reading faces covered by masks during the COVID-19 pandemic, efficient social interaction requires combining information from other sources, such as the eyes (when faces are hidden by masks) and bodies. This may be challenging for individuals with neuropsychiatric conditions, in particular autism spectrum disorders. Here we examined whether reading of dynamic faces, bodies, and eyes is tied together in a gender-specific way, and how these capabilities relate to the expression of autistic traits. Females and males accomplished a task with point-light faces along with a task with point-light body locomotion portraying different emotional expressions, and had to infer the emotional content of the displays. In addition, participants were administered a modified Reading the Mind in the Eyes Test and the Autism Spectrum Quotient questionnaire. The findings show that only in females is inferring emotions from dynamic bodies and faces firmly linked, whereas in males, reading the eyes is knotted with face reading. Strikingly, in neurotypical males only, the accuracy of face, body, and eyes reading was negatively tied to autistic traits. The outcome points to gender-specific modes in social cognition: females rely upon dynamic cues while reading faces and bodies, whereas males most likely trust configural information. The findings are of value for the examination of face and body language reading in neuropsychiatric conditions, in particular autism, most of which are gender/sex-specific. This work suggests that if male individuals with autistic traits experience difficulties in reading masked faces, these deficits are unlikely to be compensated by reading (even dynamic) bodies and faces. By contrast, in females, reading covered faces as well as reading the language of dynamic bodies and faces is not necessarily connected to autistic traits, preventing them from paying high costs for maladaptive social interaction.
Affiliation(s)
- Marina A. Pavlova: Department of Psychiatry and Psychotherapy, Tübingen Center for Mental Health (TüCMH), Medical School and University Hospital, Eberhard Karls University of Tübingen, Tübingen, Germany
- Valentina Romagnano: Department of Psychiatry and Psychotherapy, Tübingen Center for Mental Health (TüCMH), Medical School and University Hospital, Eberhard Karls University of Tübingen, Tübingen, Germany
- Julian Kubon: Department of Psychiatry and Psychotherapy, Tübingen Center for Mental Health (TüCMH), Medical School and University Hospital, Eberhard Karls University of Tübingen, Tübingen, Germany
- Sara Isernia: IRCCS Fondazione Don Carlo Gnocchi ONLUS, Milan, Italy
- Andreas J. Fallgatter: Department of Psychiatry and Psychotherapy, Tübingen Center for Mental Health (TüCMH), Medical School and University Hospital, Eberhard Karls University of Tübingen, Tübingen, Germany
- Alexander N. Sokolov: Department of Psychiatry and Psychotherapy, Tübingen Center for Mental Health (TüCMH), Medical School and University Hospital, Eberhard Karls University of Tübingen, Tübingen, Germany
14
Keck J, Zabicki A, Bachmann J, Munzert J, Krüger B. Decoding spatiotemporal features of emotional body language in social interactions. Sci Rep 2022; 12:15088. PMID: 36064559. PMCID: PMC9445068. DOI: 10.1038/s41598-022-19267-5.
Abstract
How are emotions perceived through human body language in social interactions? This study used point-light displays of human interactions portraying emotional scenes (1) to examine quantitative intrapersonal kinematic and postural body configurations, (2) to calculate interaction-specific parameters of these interactions, and (3) to analyze how far both contribute to the perception of an emotion category (i.e. anger, sadness, happiness or affection) as well as to the perception of emotional valence. By using ANOVA and classification trees, we investigated emotion-specific differences in the calculated parameters. We further applied representational similarity analyses to determine how perceptual ratings relate to intra- and interpersonal features of the observed scene. Results showed that within an interaction, intrapersonal kinematic cues corresponded to emotion category ratings, whereas postural cues reflected valence ratings. Perception of emotion category was also driven by interpersonal orientation, proxemics, the time spent in the personal space of the counterpart, and the motion–energy balance between interacting people. Furthermore, motion–energy balance and orientation relate to valence ratings. Thus, features of emotional body language are connected with the emotional content of an observed scene and people make use of the observed emotionally expressive body language and interpersonal coordination to infer emotional content of interactions.
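The representational-similarity logic mentioned above boils down to correlating two patterns of pairwise dissimilarities: one from perceptual ratings, one from movement features. A minimal sketch with invented toy matrices (not the study's data or pipeline):

```python
# Hedged sketch of representational similarity analysis (RSA): correlate
# the upper triangles of two square dissimilarity matrices (RDMs).
# All data here are invented for illustration.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def upper_triangle(matrix):
    """Flatten the above-diagonal cells of a square dissimilarity matrix."""
    n = len(matrix)
    return [matrix[i][j] for i in range(n) for j in range(i + 1, n)]

# Toy 4x4 dissimilarity matrices (e.g., four emotion categories).
rating_rdm  = [[0, 1, 2, 3], [1, 0, 1, 2], [2, 1, 0, 1], [3, 2, 1, 0]]
feature_rdm = [[0, 2, 4, 6], [2, 0, 2, 4], [4, 2, 0, 2], [6, 4, 2, 0]]

similarity = pearson(upper_triangle(rating_rdm), upper_triangle(feature_rdm))
print(round(similarity, 3))  # 1.0 (the feature RDM is a scaled copy)
```

A high RDM correlation says the rating structure and the movement-feature structure order the stimulus pairs the same way, which is the sense in which "perceptual ratings relate to intra- and interpersonal features" above.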
Affiliation(s)
- Johannes Keck: Neuromotor Behavior Lab, Department of Psychology and Sport Science, Justus-Liebig-University, Kugelberg 62, 35394, Giessen, Germany; Center for Mind, Brain and Behavior (CMBB), Universities Marburg and Giessen, Marburg, Germany
- Adam Zabicki: Neuromotor Behavior Lab, Department of Psychology and Sport Science, Justus-Liebig-University, Kugelberg 62, 35394, Giessen, Germany
- Julia Bachmann: Neuromotor Behavior Lab, Department of Psychology and Sport Science, Justus-Liebig-University, Kugelberg 62, 35394, Giessen, Germany
- Jörn Munzert: Neuromotor Behavior Lab, Department of Psychology and Sport Science, Justus-Liebig-University, Kugelberg 62, 35394, Giessen, Germany; Center for Mind, Brain and Behavior (CMBB), Universities Marburg and Giessen, Marburg, Germany
- Britta Krüger: Neuromotor Behavior Lab, Department of Psychology and Sport Science, Justus-Liebig-University, Kugelberg 62, 35394, Giessen, Germany
15
Ciardo F, De Tommaso D, Wykowska A. Human-like behavioral variability blurs the distinction between a human and a machine in a nonverbal Turing test. Sci Robot 2022; 7:eabo1241. PMID: 35895925. DOI: 10.1126/scirobotics.abo1241.
Abstract
Variability is a property of biological systems, and in animals (including humans), behavioral variability is characterized by certain features, such as the range of variability and the shape of its distribution. Nevertheless, only a few studies have investigated whether and how variability features contribute to the ascription of humanness to robots in a human-robot interaction setting. Here, we tested whether two aspects of behavioral variability, namely, the standard deviation and the shape of distribution of reaction times, affect the ascription of humanness to robots during a joint action scenario. We designed an interactive task in which pairs of participants performed a joint Simon task with an iCub robot placed by their side. Either iCub could perform the task in a preprogrammed manner, or its button presses could be teleoperated by the other member of the pair, seated in the other room. Under the preprogrammed condition, the iCub pressed buttons with reaction times falling within the range of human variability. However, the distribution of the reaction times did not resemble a human-like shape. Participants were sensitive to humanness, because they correctly detected the human agent above chance level. When the iCub was controlled by the computer program, it passed our variation of a nonverbal Turing test. Together, our results suggest that hints of humanness, such as the range of behavioral variability, might be used by observers to ascribe humanness to a humanoid robot.
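The distinction above between the *range* of variability (e.g., the standard deviation of reaction times) and the *shape* of its distribution can be illustrated with toy data. The ex-Gaussian form and all parameter values below are illustrative assumptions commonly used to model human RTs, not the study's actual model:

```python
# Toy contrast: two RT samples with comparable mean and spread, but only
# one with the right-skewed shape typical of human reaction times.
# Parameter values are illustrative assumptions, not the study's settings.
import random
import statistics

random.seed(1)  # reproducible toy data

def human_rt(mu=350.0, sigma=40.0, tau=120.0):
    """Human-like RT: Gaussian plus exponential tail (ex-Gaussian)."""
    return random.gauss(mu, sigma) + random.expovariate(1.0 / tau)

def robot_rt(mean=470.0, sd=126.0):
    """Machine-like RT: similar mean and SD, but a symmetric shape."""
    return random.gauss(mean, sd)

def skewness(xs):
    """Sample skewness: third standardized moment."""
    m, s = statistics.fmean(xs), statistics.pstdev(xs)
    return statistics.fmean([((x - m) / s) ** 3 for x in xs])

human = [human_rt() for _ in range(10_000)]
robot = [robot_rt() for _ in range(10_000)]

# Both samples match in central tendency and spread, yet only the
# human-like one shows a clearly positive skew.
print(round(skewness(human), 2), round(skewness(robot), 2))
```

This is the sense in which the preprogrammed iCub could fall "within the range of human variability" while the distribution of its reaction times still "did not resemble a human-like shape".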
Affiliation(s)
- F Ciardo: Social Cognition in Human-Robot Interaction, Italian Institute of Technology, Genoa, Italy
- D De Tommaso: Social Cognition in Human-Robot Interaction, Italian Institute of Technology, Genoa, Italy
- A Wykowska: Social Cognition in Human-Robot Interaction, Italian Institute of Technology, Genoa, Italy
16
No faces, just body movements—Effects of perceived emotional valence of body kinetics and psychological factors on interpersonal distance behavior within an immersive virtual environment. Curr Psychol 2022. DOI: 10.1007/s12144-022-03082-2.
Abstract
In an immersive virtual environment (IVE), it was investigated how the perception of body kinetics contributes to social distance behavior when the facial expression and other physical properties of a social interaction partner cannot be perceived. Based on point-light displays, both the subject and the social interaction partner were depicted as stick figures, both moving simultaneously in the same space. In addition, the effects of relevant psychological factors of the perceiver on social distance behavior were examined. The results were consistent with those from studies with facial expressions or realistic full-body interactants. A greater distance was maintained from characters with emotionally negative expressions of body kinetics. Stationary object stimuli, which were also included in the study, were mostly passed more closely than neutral agents. However, the results are not entirely clear and require further investigation. Depressive symptom burden and factors mainly related to anxiety and avoidance showed effects on social distance in the IVE. The CID, a test often used to assess the interpersonal distance at which a person is comfortable, correlated with that overt behavior. In summary, the results of the study provide experimental evidence that the perception of body kinetics has a similarly significant influence on the regulation of social distance as, for example, facial affect. Implementing this study in real life would be incredibly complex, if not impossible. It is notable that the comparatively simple method used here to create and operate an immersive virtual environment turned out to be suitable for studying at least simple types of social behavior based on body movements.
17

18
Kim G, Seong SH, Hong SS, Choi E. Impact of face masks and sunglasses on emotion recognition in South Koreans. PLoS One 2022; 17:e0263466. PMID: 35113970. PMCID: PMC8812856. DOI: 10.1371/journal.pone.0263466.
Abstract
Due to the prolonged COVID-19 pandemic, wearing masks has become essential for social interaction, disturbing emotion recognition in daily life. In the present study, a total of 39 Korean participants (female = 20, mean age = 24.2 years) inferred seven emotion categories (happiness, surprise, fear, sadness, disgust, anger, and neutral) from uncovered, mask-covered, and sunglasses-covered faces. Recognition rates were lowest under the mask condition, followed by the sunglasses and uncovered conditions. In identifying emotions, different emotion types were associated with different areas of the face. Specifically, the mouth was the most critical area for happiness, surprise, sadness, disgust, and anger recognition, whereas fear was most recognized from the eyes. By simultaneously comparing faces with different parts covered, we were able to more accurately examine the impact of different facial areas on emotion recognition. We discuss potential cultural differences and the ways in which individuals can cope with communication in which facial expressions are paramount.
Affiliation(s)
- Garam Kim: School of Psychology, Korea University, Sungbuk-gu, Seoul, South Korea
- So Hyun Seong: School of Psychology, Korea University, Sungbuk-gu, Seoul, South Korea
- Seok-Sung Hong: Department of IT Psychology, Ajou University, Yeongtong-gu, Suwon, South Korea
- Eunsoo Choi: School of Psychology, Korea University, Sungbuk-gu, Seoul, South Korea
19
Bieńkiewicz MMN, Smykovskyi AP, Olugbade T, Janaqi S, Camurri A, Bianchi-Berthouze N, Björkman M, Bardy BG. Bridging the gap between emotion and joint action. Neurosci Biobehav Rev 2021; 131:806-833. PMID: 34418437. DOI: 10.1016/j.neubiorev.2021.08.014.
Abstract
Our daily human life is filled with a myriad of joint action moments, be it children playing, adults working together (e.g., in team sports), or strangers navigating through a crowd. Joint action brings individuals (and the embodiment of their emotions) together, in space and in time. Yet little is known about how individual emotions propagate through embodied presence in a group, and how joint action changes individual emotion. In fact, the multi-agent component is largely missing from neuroscience-based approaches to emotion, and conversely, joint action research has not yet found a way to include emotion as one of the key parameters for modeling socio-motor interaction. In this review, we first identify this gap and then compile evidence from various branches of science showing strong entanglement between emotion and acting together. We propose an integrative approach to bridge the gap, highlight five research avenues for doing so in behavioral neuroscience and the digital sciences, and address some of the key challenges in the area faced by modern societies.
Affiliation(s)
- Marta M N Bieńkiewicz: EuroMov Digital Health in Motion, Univ. Montpellier IMT Mines Ales, Montpellier, France
- Andrii P Smykovskyi: EuroMov Digital Health in Motion, Univ. Montpellier IMT Mines Ales, Montpellier, France
- Stefan Janaqi: EuroMov Digital Health in Motion, Univ. Montpellier IMT Mines Ales, Montpellier, France
- Benoît G Bardy: EuroMov Digital Health in Motion, Univ. Montpellier IMT Mines Ales, Montpellier, France
20
Peng W, Cracco E, Troje NF, Brass M. Does anxiety induced by social interaction influence the perception of bistable biological motion? Acta Psychol (Amst) 2021; 215:103277. PMID: 33640594. DOI: 10.1016/j.actpsy.2021.103277.
Abstract
When observing point light walkers orthographically projected onto a frontoparallel plane, the direction in which they are walking is ambiguous. Nevertheless, observers more often perceive them as facing towards than as facing away from them. This phenomenon is known as the "facing-the-viewer" (FTV) bias. Two interpretations of the facing-the-viewer bias exist in the literature: a top-down and a bottom-up interpretation. Support for the top-down interpretation comes from evidence that social anxiety correlates with the FTV bias. However, the direction of the relationship between the FTV bias and social anxiety is inconsistent across studies and evidence for a correlation has mostly been obtained with relatively small samples. Therefore, the first aim of the current study was to provide a strong test of the hypothesized relationship between social anxiety and the facing-the-viewer bias in a large sample of 200 participants recruited online. In addition, a second aim was to further extend top-down accounts by investigating if the FTV bias is also related to autistic traits. Our results replicate the FTV bias, showing that people indeed tend to perceive orthographically projected point light walkers as facing towards them. However, no correlation between the FTV bias and social interaction anxiety (tau = -0.01, p = .86, BF = 0.18) or autistic traits (tau = -0.0039, p = .45, BF = 0.18) was found. As such, our data cannot confirm the top-down interpretation of the facing-the-viewer bias.
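The correlations reported above are Kendall's tau, a rank-based statistic. A minimal pure-Python sketch of the tau-a variant (no tie correction), with invented data rather than the study's:

```python
# Hedged sketch of Kendall's tau (tau-a): the normalized difference between
# concordant and discordant pairs. Illustrative only; published analyses
# typically use tie-corrected variants such as tau-b.
from itertools import combinations

def kendall_tau(xs, ys):
    """Kendall rank correlation (tau-a) of two equal-length sequences."""
    n = len(xs)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (xs[i] - xs[j]) * (ys[i] - ys[j])
        if s > 0:
            concordant += 1   # pair ordered the same way in both variables
        elif s < 0:
            discordant += 1   # pair ordered oppositely
    return (concordant - discordant) / (n * (n - 1) / 2)

print(kendall_tau([3, 1, 4, 2], [3, 1, 4, 2]))  # 1.0, identical rankings
print(kendall_tau([1, 2, 3, 4], [4, 3, 2, 1]))  # -1.0, reversed rankings
```

A tau near zero, as in the results above, means the ranking of participants by FTV bias carries essentially no information about their ranking by anxiety or autistic traits.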
Affiliation(s)
- Wei Peng: Department of Experimental Psychology, Ghent University, Ghent, Belgium
- Emiel Cracco: Department of Experimental Psychology, Ghent University, Ghent, Belgium
- Nikolaus F Troje: Department of Biology, Centre for Vision Research, York University, Toronto, ON, Canada
- Marcel Brass: Department of Experimental Psychology, Ghent University, Ghent, Belgium; Berlin School of Mind and Brain/Department of Psychology, Humboldt University of Berlin, Germany
21
Della-Torre ME, Zavagno D, Actis-Grosso R. The Interpretation of E-Motions in Faces and Bodies Derived from Static Artworks by Individuals with High Functioning Autistic Spectrum. Vision (Basel) 2021; 5:vision5020017. PMID: 33805957. PMCID: PMC8103258. DOI: 10.3390/vision5020017.
Abstract
E-motions are defined as those affective states whose expressions, conveyed either by static faces or by body posture, embody a dynamic component and consequently convey a higher sense of dynamicity than other emotional expressions. An experiment is presented that tested whether e-motions are perceived as such also by individuals with autism spectrum disorders (ASDs), a condition associated with impairments in emotion recognition and in motion perception. To this aim, we replicated with ASD individuals a study originally conducted with typically developed individuals (TDs), showing both ASD and TD participants 14 bodiless heads and 14 headless bodies taken from eleven static artworks and four drawings. The experiment was divided into two sessions. In Session 1, participants were asked to freely associate each stimulus with an emotion or affective state (Task 1, option A); if they were unable to find a specific emotion, the experimenter showed them a list of eight possible emotions (words) and asked them to choose the one that best described the affective state portrayed in the image (Task 1, option B). After their choice, they were asked to rate the intensity of the perceived emotion on a seven-point Likert scale (Task 2). In Session 2, participants were requested to evaluate the degree of dynamicity conveyed by each stimulus on a seven-point Likert scale. Results showed that ASDs and TDs shared a similar range of verbal expressions defining emotions; however, ASDs (i) showed an impairment in the ability to spontaneously assign an emotion to a headless body, and (ii) more frequently used terms denoting negative emotions (for both faces and bodies) as compared to neutral emotions, which in turn were more frequently used by TDs. No difference emerged between the two groups for positive emotions, with happiness being the emotion best recognized in both faces and bodies. Although overall there were no significant differences between the two groups with respect to the emotions assigned to the images and the degree of perceived dynamicity, the Artwork x Group interaction showed that for some images ASDs assigned a different value of perceived dynamicity than TDs. Moreover, two images were interpreted by ASDs as conveying completely different emotions than those perceived by TDs. Results are discussed in light of the ability of ASDs to resolve ambiguity and of possible different cognitive styles characterizing the aesthetic/emotional experience.
22
Bachmann J, Zabicki A, Gradl S, Kurz J, Munzert J, Troje NF, Krueger B. Does co-presence affect the way we perceive and respond to emotional interactions? Exp Brain Res 2021; 239:923-936. PMID: 33427949. PMCID: PMC7943523. DOI: 10.1007/s00221-020-06020-5.
Abstract
This study compared how two virtual display conditions of human body expressions influenced explicit and implicit dimensions of emotion perception and response behavior in women and men. Two avatars displayed emotional interactions (angry, sad, affectionate, happy) in a "pictorial" condition depicting the emotional interactive partners on a screen within a virtual environment and a "visual" condition allowing participants to share space with the avatars, thereby enhancing co-presence and agency. Following stimulus presentation, explicit valence perception and response tendency (i.e., the explicit tendency to avoid or approach the situation) were assessed on rating scales. Implicit responses, i.e., postural and autonomic responses towards the observed interactions, were measured by means of postural displacement and changes in skin conductance. Results showed that self-reported presence differed between pictorial and visual conditions; however, it was not correlated with skin conductance responses. Valence perception was only marginally influenced by the virtual condition and not at all by explicit response behavior. There were gender-mediated effects on postural response tendencies as well as gender differences in explicit response behavior but not in valence perception. Exploratory analyses revealed a link between valence perception and preferred behavioral response in women but not in men. We conclude that the display condition seems to influence automatic motivational tendencies but not higher-level cognitive evaluations. Moreover, intragroup differences in explicit and implicit response behavior highlight the importance of individual factors beyond gender.
Affiliation(s)
- Julia Bachmann: NeuroMotor Behavior Lab, Department of Psychology and Sport Science, Justus-Liebig-University, Giessen, Germany
- Adam Zabicki: NeuroMotor Behavior Lab, Department of Psychology and Sport Science, Justus-Liebig-University, Giessen, Germany
- Stefan Gradl: Machine Learning and Data Analysis Lab, Faculty of Engineering, Friedrich-Alexander-University Erlangen-Nuremberg, Erlangen, Germany
- Johannes Kurz: NeuroMotor Behavior Lab, Department of Psychology and Sport Science, Justus-Liebig-University, Giessen, Germany
- Jörn Munzert: NeuroMotor Behavior Lab, Department of Psychology and Sport Science, Justus-Liebig-University, Giessen, Germany; Center for Mind, Brain and Behavior (CMBB), Philipps University of Marburg and Justus Liebig University, Giessen, Germany
- Nikolaus F Troje: BioMotionLab, Department of Biology and Centre for Vision Research, York University Toronto, Toronto, Canada
- Britta Krueger: NeuroMotor Behavior Lab, Department of Psychology and Sport Science, Justus-Liebig-University, Giessen, Germany
23
Perceptions of coordinated movement. Hum Mov Sci 2020; 74:102711. PMID: 33171386. DOI: 10.1016/j.humov.2020.102711.
Abstract
BACKGROUND: Humans are highly social creatures who use others' movements to evaluate their social competencies. Smooth movement specifically signals an attractive, trustworthy, or competent person. Those with Developmental Coordination Disorder (DCD) have peer relationship difficulties and lower sociometric preference scores. However, the relationship of perceived poor movement coordination to stereotyping has not been directly demonstrated.
AIM: We aimed to describe typically developing individuals' social stereotyping of individuals with and without DCD from minimal visual cues.
METHOD: 3D motion capture tracked the movement of four 'targets' (two adult males with DCD and two male controls) in a variety of everyday scenarios. Kinematic footage of the targets' movements was presented as a point-light display to 319 typically developing adults, who used the Rating Scale of Social Competence to report perceptions of each target's social competencies.
RESULTS: Targets with DCD were rated as having significantly lower social competence (M = 3.37, SD = 0.93) than controls (M = 3.46, SD = 0.89), t(269) = -5.656, p < 0.001, Cohen's d = 0.34.
DISCUSSION: Humans incorporate minimal information on movement fluency to evaluate others' social competencies, including individuals with DCD. Such stereotyping may be automatic and may be an ill-understood mechanism sustaining persistent rejection by peers for individuals with DCD and higher rates of loneliness, isolation, and mental disorders. In addition, our study expands research on competence-based stereotyping to a new applied domain, confirming the minimal cues needed to initiate stereotyping of the competencies of others.
24

25
Zhang M, Yu L, Zhang K, Du B, Zhan B, Chen S, Jiang X, Guo S, Zhao J, Wang Y, Wang B, Liu S, Luo W. Kinematic dataset of actors expressing emotions. Sci Data 2020; 7:292. PMID: 32901035. PMCID: PMC7478954. DOI: 10.1038/s41597-020-00635-7.
Abstract
Human body movements can convey a variety of emotions and even create advantages in some special life situations. However, how emotion is encoded in body movements has remained unclear. One reason is the lack of public human body kinematic datasets on the expression of various emotions. Therefore, we aimed to produce a comprehensive dataset to assist in recognizing cues from all parts of the body that indicate six basic emotions (happiness, sadness, anger, fear, disgust, surprise) and neutral expression. The present dataset was created using a portable wireless motion capture system. Twenty-two semi-professional actors (half male) completed performances according to standardized guidance and preferred daily events. A total of 1402 recordings at 125 Hz were collected, consisting of the position and rotation data of 72 anatomical nodes. To our knowledge, this is now the largest emotional kinematic dataset of the human body. We hope this dataset will contribute to multiple fields of research and practice, including social neuroscience, psychiatry, computer vision, and biometric and information forensics.
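A recording of the kind described above pairs per-frame position and rotation data for 72 anatomical nodes sampled at 125 Hz. The sketch below shows one possible in-memory layout for such a recording; the class and field names, and the quaternion rotation format, are assumptions for illustration, not the dataset's actual schema:

```python
# Hedged sketch of a kinematic recording: 72 nodes per frame, position plus
# rotation, sampled at 125 Hz. Names and formats are assumptions only.
from dataclasses import dataclass, field

SAMPLE_RATE_HZ = 125   # capture rate stated for the dataset
NUM_NODES = 72         # anatomical nodes per frame

@dataclass
class Frame:
    positions: list  # NUM_NODES entries of (x, y, z)
    rotations: list  # NUM_NODES entries of (w, x, y, z) quaternion (assumed)

@dataclass
class Recording:
    actor_id: int
    emotion: str                       # a basic emotion or "neutral"
    frames: list = field(default_factory=list)

    def duration_s(self):
        """Clip duration implied by frame count and sample rate."""
        return len(self.frames) / SAMPLE_RATE_HZ

# Build a two-second placeholder clip (250 frames at 125 Hz).
rec = Recording(actor_id=1, emotion="happiness")
for _ in range(250):
    rec.frames.append(Frame(positions=[(0.0, 0.0, 0.0)] * NUM_NODES,
                            rotations=[(1.0, 0.0, 0.0, 0.0)] * NUM_NODES))
print(rec.duration_s())  # 2.0
```

The point of the sketch is only the arithmetic of the stated specification: frame count divided by the 125 Hz rate gives clip duration, and each frame carries 72 position and 72 rotation entries.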
Affiliation(s)
- Mingming Zhang: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, Liaoning, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, Liaoning, China
- Lu Yu: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, Liaoning, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, Liaoning, China
- Keye Zhang: School of Social and Behavioral Sciences, Nanjing University, Nanjing, 210023, Jiangsu, China
- Bixuan Du: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, Liaoning, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, Liaoning, China
- Bin Zhan: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, Liaoning, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, Liaoning, China
- Shaohua Chen: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, Liaoning, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, Liaoning, China
- Xiuhao Jiang: School of Innovation and Entrepreneurship, Dalian University of Technology, Dalian, 116024, Liaoning, China; Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024, Liaoning, China
- Shuai Guo: School of Innovation and Entrepreneurship, Dalian University of Technology, Dalian, 116024, Liaoning, China; Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024, Liaoning, China
- Jiafeng Zhao: School of Innovation and Entrepreneurship, Dalian University of Technology, Dalian, 116024, Liaoning, China; Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024, Liaoning, China
- Yang Wang: School of Innovation and Entrepreneurship, Dalian University of Technology, Dalian, 116024, Liaoning, China; Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024, Liaoning, China
- Bin Wang: School of Innovation and Entrepreneurship, Dalian University of Technology, Dalian, 116024, Liaoning, China; Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024, Liaoning, China
- Shenglan Liu: School of Innovation and Entrepreneurship, Dalian University of Technology, Dalian, 116024, Liaoning, China; Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024, Liaoning, China
- Wenbo Luo: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, Liaoning, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, Liaoning, China
26
Marmolejo-Ramos F, Murata A, Sasaki K, Yamada Y, Ikeda A, Hinojosa JA, Watanabe K, Parzuchowski M, Tirado C, Ospina R. Your Face and Moves Seem Happier When I Smile. Exp Psychol 2020; 67:14-22. [PMID: 32394814] [DOI: 10.1027/1618-3169/a000470]
Abstract
In this experiment, we replicated the effect of muscle engagement on perception such that the recognition of another's facial expressions was biased by the observer's facial muscular activity (Blaesi & Wilson, 2010). We extended this replication to show that such a modulatory effect is also observed for the recognition of dynamic bodily expressions. Via a multilab and within-subjects approach, we investigated the emotion recognition of point-light biological walkers, along with that of morphed face stimuli, while subjects were or were not holding a pen in their teeth. Under the "pen-in-the-teeth" condition, participants tended to lower their threshold of perception of happy expressions in facial stimuli compared to the "no-pen" condition, thus replicating the experiment by Blaesi and Wilson (2010). A similar effect was found for the biological motion stimuli such that participants lowered their threshold to perceive happy walkers in the pen-in-the-teeth condition compared to the no-pen condition. This pattern of results was also found in a second experiment in which the no-pen condition was replaced by a situation in which participants held a pen in their lips ("pen-in-lips" condition). These results suggested that facial muscular activity alters the recognition of not only facial expressions but also bodily expressions.
Affiliation(s)
Aiko Murata
- NTT Communication Science Laboratories, Kyoto, Japan
Kyoshiro Sasaki
- Faculty of Science and Engineering, Waseda University, Tokyo, Japan
- Faculty of Arts and Science, Kyushu University, Fukuoka, Japan
- Japan Society for the Promotion of Science, Tokyo, Japan
Yuki Yamada
- Faculty of Arts and Science, Kyushu University, Fukuoka, Japan
Ayumi Ikeda
- Graduate School of Human-Environment Studies, Kyushu University, Japan
José A Hinojosa
- Instituto Pluridisciplinar, Universidad Complutense de Madrid, Spain
- Dpto. Psicología Experimental, Procesos Cognitivos y Logopedia, Universidad Complutense de Madrid, Spain
- Facultad de Lenguas y Educación, Universidad de Nebrija, Madrid, Spain
Katsumi Watanabe
- Faculty of Science and Engineering, Waseda University, Tokyo, Japan
- Art & Design, University of New South Wales, Australia
Michal Parzuchowski
- Centre of Research on Cognition and Behaviour, SWPS University of Social Sciences and Humanities, Sopot, Poland
Carlos Tirado
- Gösta Ekman Laboratory, Department of Psychology, Stockholm University, Sweden
Raydonal Ospina
- Departamento de Estatística, CAST Laboratory, Universidade Federal de Pernambuco, Brazil
27
Zhai S, Ma Y, Gao Z, He J. Development of interactive biological motion perception in preschoolers and its relation to social competence. Social Development 2020. [DOI: 10.1111/sode.12414]
Affiliation(s)
Shuyi Zhai
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, P.R. China
Yuxi Ma
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, P.R. China
Zaifeng Gao
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, P.R. China
Jie He
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, P.R. China
28
Mazzoni N, Landi I, Ricciardelli P, Actis-Grosso R, Venuti P. Motion or Emotion? Recognition of Emotional Bodily Expressions in Children With Autism Spectrum Disorder With and Without Intellectual Disability. Front Psychol 2020; 11:478. [PMID: 32269539] [PMCID: PMC7109394] [DOI: 10.3389/fpsyg.2020.00478]
Abstract
The recognition of emotional body movement (BM) is impaired in individuals with Autism Spectrum Disorder (ASD), yet it is not clear whether the difficulty is related to the encoding of body motion, emotions, or both. Moreover, BM recognition has traditionally been studied using point-light display (PLD) stimuli and is still underexplored in individuals with ASD and intellectual disability (ID). In the present study, we investigated the recognition of happy, fearful, and neutral BM in children with ASD with and without ID. In a non-verbal recognition task, participants were asked to recognize pure-body-motion and visible-body-form stimuli (by means of point-light displays, PLDs, and full-light displays, FLDs, respectively). We found that children with ASD were less accurate than typically developing (TD) children in recognizing both emotional and neutral BM, whether presented as FLDs or PLDs. These results suggest that the difficulty in understanding observed BM may stem from atypical processing of BM information rather than of emotion. Moreover, we found that accuracy improved with age and IQ only in children with ASD without ID, suggesting that a high level of cognitive resources can mediate the acquisition of compensatory mechanisms which develop with age.
Affiliation(s)
Noemi Mazzoni
- ODFLab - Department of Psychology and Cognitive Sciences, University of Trento, Rovereto, Italy
Isotta Landi
- ODFLab - Department of Psychology and Cognitive Sciences, University of Trento, Rovereto, Italy
- MPBA, Fondazione Bruno Kessler, Trento, Italy
Paola Ricciardelli
- Department of Psychology, University of Milano - Bicocca, Milan, Italy
- Milan Centre for Neuroscience, Milan, Italy
Rossana Actis-Grosso
- Department of Psychology, University of Milano - Bicocca, Milan, Italy
- Milan Centre for Neuroscience, Milan, Italy
Paola Venuti
- ODFLab - Department of Psychology and Cognitive Sciences, University of Trento, Rovereto, Italy
29
Isernia S, Sokolov AN, Fallgatter AJ, Pavlova MA. Untangling the Ties Between Social Cognition and Body Motion: Gender Impact. Front Psychol 2020; 11:128. [PMID: 32116932] [PMCID: PMC7016199] [DOI: 10.3389/fpsyg.2020.00128]
Abstract
We examined the viability of the general hypothesis that biological motion (BM) processing serves as a hallmark of social cognition. We assumed that BM processing and inferring emotions through BM (body language reading) are firmly linked, and examined whether this tie is gender-specific. Healthy females and males completed two tasks with the same set of point-light BM displays portraying angry and neutral locomotion of female and male actors. For one task, perceivers had to indicate actor gender, while for the other, they had to infer the emotional content of locomotion. Thus, with identical visual input, we directed task demands either to BM processing or to inferring of emotion. This design allows a direct comparison between sensitivity to BM and recognition of emotions conveyed by the same BM. In addition, perceivers were administered a set of photographs from the Reading the Mind in the Eyes Test (RMET), with which they identified either emotional state or actor gender. Although there were no gender differences in performance on the BM tasks, a tight link between recognition accuracy for emotions and for gender through BM occurred in males. In females only, body language reading (both accuracy and response time) was associated with performance on the RMET. The outcome underscores gender-specific modes in visual social cognition and motivates investigation of body language reading in a wide range of neuropsychiatric disorders.
Affiliation(s)
Sara Isernia
- Department of Psychiatry and Psychotherapy, Medical School and University Hospital, Eberhard Karls University of Tübingen, Tübingen, Germany
- Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- CADITeR, IRCCS Fondazione Don Carlo Gnocchi ONLUS, Milan, Italy
Alexander N. Sokolov
- Department of Psychiatry and Psychotherapy, Medical School and University Hospital, Eberhard Karls University of Tübingen, Tübingen, Germany
Andreas J. Fallgatter
- Department of Psychiatry and Psychotherapy, Medical School and University Hospital, Eberhard Karls University of Tübingen, Tübingen, Germany
Marina A. Pavlova
- Department of Psychiatry and Psychotherapy, Medical School and University Hospital, Eberhard Karls University of Tübingen, Tübingen, Germany
30
Geiger A, Bente G, Lammers S, Tepest R, Roth D, Bzdok D, Vogeley K. Distinct functional roles of the mirror neuron system and the mentalizing system. Neuroimage 2019; 202:116102. [DOI: 10.1016/j.neuroimage.2019.116102]
31
Biological motion and animacy belief induce similar effects on involuntary shifts of attention. Atten Percept Psychophys 2019; 82:1099-1111. [PMID: 31414364] [DOI: 10.3758/s13414-019-01843-z]
Abstract
Biological motion is salient to the human visual and motor systems and may be intrinsic to the perception of animacy. Evidence for the salience of visual stimuli moving with trajectories consistent with biological motion comes from studies showing that such stimuli can trigger shifts of attention in the direction of that motion. The present study was conducted to determine whether or not top-down beliefs about animacy can modify the salience of a nonbiologically moving stimulus to the visuomotor system. A nonpredictive cuing task was used in which a white dot moved from a central location toward a left- or right-sided target placeholder. The target randomly appeared at either location 200, 600, or 1,300 ms after the motion onset. Five groups of participants experienced different stimulus conditions: (1) biological motion, (2) inverted biological motion, (3) nonbiological motion, (4) animacy belief (paired with nonbiological motion), and (5) computer-generated belief (paired with nonbiological motion). Analysis of response times revealed that the motion in the biological motion and animacy belief groups, but not in the inverted and nonbiological motion groups, affected processing of the target information. These findings indicate that biological motion is salient to the visual system and that top-down beliefs regarding the animacy of the stimulus can tune the visual and motor systems to increase the salience of nonbiological motion.
32
The Frozen Effect: Objects in motion are more aesthetically appealing than objects frozen in time. PLoS One 2019; 14:e0215813. [PMID: 31095600] [PMCID: PMC6522023] [DOI: 10.1371/journal.pone.0215813]
Abstract
Videos of moving faces are more flattering than static images of the same face, a phenomenon dubbed the Frozen Face Effect (FFE). This may reflect an aesthetic preference for faces viewed in a more ecological context than still photographs. In the current set of experiments, we sought to determine whether this effect is unique to facial processing, or whether motion confers an aesthetic benefit to other stimulus categories as well, such as bodies and objects; that is, a more generalized 'Frozen Effect' (FE). If motion were the critical factor in the FE, we would expect a video of a body or object in motion to be significantly more appealing than the same body or object seen in individual static frames. To examine this, we asked participants to rate sets of videos of bodies and objects in motion, along with the still frames constituting each video. Extending the original FFE, we found that participants rated videos as significantly more flattering than each video's corresponding still images, regardless of stimulus domain, suggesting that the FFE generalizes well beyond face perception. Interestingly, the magnitude of the FE increased with the predictability of stimulus movement. Our results suggest that observers prefer bodies and objects in motion over the same information presented in static form, and the more predictable the motion, the stronger the preference. Motion imbues objects and bodies with greater aesthetic appeal, which has implications for how one might choose to portray oneself on various social media platforms.
33
Ross P, de Gelder B, Crabbe F, Grosbras MH. Emotion modulation of the body-selective areas in the developing brain. Dev Cogn Neurosci 2019; 38:100660. [PMID: 31128318] [PMCID: PMC6969350] [DOI: 10.1016/j.dcn.2019.100660]
Abstract
- Passive viewing fMRI task using dynamic emotional bodies and non-human objects.
- Adults showed increased activation in the body-selective areas compared with children.
- Adults also showed more activation than adolescents, but only in the right hemisphere.
- Crucially, we found no age differences in the emotion modulation of these areas.
Emotions are strongly conveyed by the human body, and the ability to recognize emotions from body posture or movement is still developing through childhood and adolescence. To date, very few studies have explored how these behavioural observations are paralleled by functional brain development. Furthermore, no studies have yet explored the development of emotion modulation in these areas. In this study, we used functional magnetic resonance imaging (fMRI) to compare the brain activity of 25 children (age 6–11), 18 adolescents (age 12–17) and 26 adults while they passively viewed short videos of angry, happy or neutral body movements. We observed that when viewing dynamic bodies generally, adults showed higher activity than children bilaterally in the body-selective areas, namely the extra-striate body area (EBA), fusiform body area (FBA) and posterior superior temporal sulcus (pSTS), as well as the amygdala (AMY). Adults also showed higher activity than adolescents, but only in the right hemisphere. Crucially, however, we found that there were no age differences in the emotion modulation of activity in these areas. These results indicate, for the first time, that although activity selective to body perception increases across childhood and adolescence, emotion modulation of these areas is adult-like from 7 years of age.
Affiliation(s)
Paddy Ross
- Department of Psychology, Durham University, Durham, UK
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
Beatrice de Gelder
- Department of Cognitive Neuroscience, Maastricht University, Maastricht, Netherlands
Frances Crabbe
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
Marie-Hélène Grosbras
- Laboratoire de Neurosciences Cognitives, Aix Marseille Université, Marseille, France
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
34
Krüger B, Kaletsch M, Pilgramm S, Schwippert SS, Hennig J, Stark R, Lis S, Gallhofer B, Sammer G, Zentgraf K, Munzert J. Perceived Intensity of Emotional Point-Light Displays is Reduced in Subjects with ASD. J Autism Dev Disord 2019; 48:1-11. [PMID: 28864932] [DOI: 10.1007/s10803-017-3286-y]
Abstract
One major characteristic of autism spectrum disorder (ASD) is problems with social interaction and communication. The present study explored ASD-related alterations in perceiving emotions expressed via body movements. Sixteen participants with ASD and 16 healthy controls observed video scenes of human interactions conveyed by point-light displays. They rated the valence of the depicted emotions in terms of their intensity and judged their confidence in their ratings. Results showed that healthy participants rated emotional interactions displaying positive emotionality as more intense, and were more confident about their ratings, than did ASD subjects. Results support the idea that patients with ASD have an altered perception of emotions. This extends research on subjective features (intensity, confidence) of emotion perception to the domain of emotional body movements and kinematics.
Affiliation(s)
Britta Krüger
- Institute for Sports Science, Justus Liebig University Giessen, Kugelberg 62, 35394, Giessen, Germany
- Bender Institute of Neuroimaging, Justus Liebig University Giessen, Giessen, Germany
Morten Kaletsch
- Cognitive Neuroscience Group, Center for Psychiatry and Psychotherapy, Justus Liebig University, Giessen, Germany
Sebastian Pilgramm
- Institute for Sports Science, Justus Liebig University Giessen, Kugelberg 62, 35394, Giessen, Germany
- Bender Institute of Neuroimaging, Justus Liebig University Giessen, Giessen, Germany
- Institute of Psychology, University of Hildesheim, Hildesheim, Germany
Sven-Sören Schwippert
- Institute for Sports Science, Justus Liebig University Giessen, Kugelberg 62, 35394, Giessen, Germany
- Bender Institute of Neuroimaging, Justus Liebig University Giessen, Giessen, Germany
Jürgen Hennig
- Department of Personality Psychology and Individual Differences, Justus Liebig University Giessen, Giessen, Germany
Rudolf Stark
- Bender Institute of Neuroimaging, Justus Liebig University Giessen, Giessen, Germany
Stefanie Lis
- Central Institute of Mental Health, Medical Faculty Mannheim/Heidelberg University, Mannheim, Germany
Bernd Gallhofer
- Cognitive Neuroscience Group, Center for Psychiatry and Psychotherapy, Justus Liebig University, Giessen, Germany
Gebhard Sammer
- Cognitive Neuroscience Group, Center for Psychiatry and Psychotherapy, Justus Liebig University, Giessen, Germany
Karen Zentgraf
- Bender Institute of Neuroimaging, Justus Liebig University Giessen, Giessen, Germany
- Institute for Sports Science, University of Münster, Münster, Germany
Jörn Munzert
- Institute for Sports Science, Justus Liebig University Giessen, Kugelberg 62, 35394, Giessen, Germany
35
Corti C, Poggi G, Massimino M, Bardoni A, Borgatti R, Urgesi C. Visual perception and spatial transformation of the body in children and adolescents with brain tumor. Neuropsychologia 2018; 120:124-136. [PMID: 30359652] [DOI: 10.1016/j.neuropsychologia.2018.10.012]
Abstract
Representations of one's own and others' bodies play a crucial role in social interaction. While extensive knowledge has been gathered on the neuropsychological deficits affecting body representation in adult brain-lesion patients, little is known about how acquired damage to a developing brain may affect this process. We tested this in pediatric brain tumor survivors, comparing the abilities of 30 children and adolescents (aged 8-16 years) surviving a supratentorial tumor (STT) or an infratentorial tumor (ITT) in two different tasks of body representation. Thirty children with typical development (TD) served as the control group. In the first task, we tested configural (body inversion effect) and holistic (composite illusion effect) processing of others' bodies. In the second task, we tested the ability to perform first-person and object-based mental spatial transformations of one's own body and external objects, respectively. Configural processing was spared in all patients. Conversely, ITT patients, but not STT patients, were impaired in the holistic processing of body stimuli. STT patients performed overall worse than both controls and ITT patients at mental spatial transformations of both their own body and external objects. ITT children presented selective alteration in using first-person transformation strategies with body stimuli. Results suggest that body-representation abilities may be heavily affected in pediatric brain tumor survivors. STTs may be associated with greater difficulties in mental visuo-spatial transformation abilities, likely reflecting damage to fronto-parietal circuits. Conversely, ITTs may be associated with specific disturbances of visual body perception abilities that require motor simulation processes, reflecting direct or indirect damage to cerebellar areas.
Affiliation(s)
Claudia Corti
- Scientific Institute, IRCCS E. Medea, Neuro-oncological and Neuropsychological Rehabilitation Unit, Bosisio Parini, Lecco, Italy
Geraldina Poggi
- Scientific Institute, IRCCS E. Medea, Neuro-oncological and Neuropsychological Rehabilitation Unit, Bosisio Parini, Lecco, Italy
Maura Massimino
- Fondazione IRCCS Istituto Nazionale Tumori, Pediatric Oncology Unit, Milano, Italy
Alessandra Bardoni
- Scientific Institute, IRCCS E. Medea, Neuro-oncological and Neuropsychological Rehabilitation Unit, Bosisio Parini, Lecco, Italy
Renato Borgatti
- Scientific Institute, IRCCS E. Medea, Neuropsychiatry and Neurorehabilitation Unit, Bosisio Parini, Lecco, Italy
Cosimo Urgesi
- Scientific Institute, IRCCS E. Medea, Neuropsychiatry and Neurorehabilitation Unit, Bosisio Parini, Lecco, Italy
- Scientific Institute, IRCCS E. Medea, San Vito al Tagliamento, Pordenone, Italy
- University of Udine, Laboratory of Cognitive Neuroscience, Department of Languages and Literatures, Communication, Education and Society, Udine, Italy
36
Lee JM, Baek J, Ju DY. Anthropomorphic Design: Emotional Perception for Deformable Object. Front Psychol 2018; 9:1829. [PMID: 30333773] [PMCID: PMC6175972] [DOI: 10.3389/fpsyg.2018.01829]
Abstract
Despite the increasing number of studies on user experience (UX) and user interfaces (UI), few studies have examined emotional interaction between humans and deformable objects. In the current study, we investigated how the anthropomorphic design of a flexible display interacts with emotion. For 101 unique 3D images in which an object was bent at different axes, 281 participants were asked to report in an online survey how strongly the object evoked five elemental emotions (happiness, disgust, anger, fear, and sadness). People rated the object's shape using three emotional categories: happiness, disgust–anger, and sadness–fear. It was also found that a combination of axis of bending (horizontal or diagonal) and convexity (bending convexly or concavely) predicted emotional valence, underpinning the anthropomorphic design of flexible displays. Our findings provide empirical evidence that axis of bending and convexity can be important antecedents of emotional interaction with flexible objects, triggering at least three types of emotion in users.
Affiliation(s)
Jung Min Lee
- Technology and Design Research Center, Yonsei Institute of Convergence Technology, Yonsei University, Incheon, South Korea
Jongsoo Baek
- Yonsei Institute of Convergence Technology, Yonsei University, Incheon, South Korea
Da Young Ju
- Technology and Design Research Center, Yonsei Institute of Convergence Technology, Yonsei University, Incheon, South Korea
37
Serrano I, Deniz O, Espinosa-Aranda JL, Bueno G. Fight Recognition in Video Using Hough Forests and 2D Convolutional Neural Network. IEEE Trans Image Process 2018; 27:4787-4797. [PMID: 29994215] [DOI: 10.1109/tip.2018.2845742]
Abstract
While action recognition has become an important line of research in computer vision, the recognition of particular events such as aggressive behaviors, or fights, has been studied comparatively less. Such detectors may be extremely useful in several video surveillance scenarios, such as psychiatric wards, prisons, or even personal camera smartphones. Their potential usability has led to a surge of interest in developing fight or violence detectors. One key aspect in this case is efficiency; that is, these methods should be computationally fast. "Handcrafted" spatiotemporal features that account for both motion and appearance information can achieve high accuracy rates, although the computational cost of extracting some of those features is still prohibitive for practical applications. The deep learning paradigm has recently been applied to this task for the first time, in the form of a 3D Convolutional Neural Network that processes the whole video sequence as input. However, results on human perception of others' actions suggest that, in this specific task, motion features are crucial. This means that using the whole video as input may add both redundancy and noise to the learning process. In this work, we propose a hybrid "handcrafted/learned" feature framework which provides better accuracy than the previous feature-learning method, with similar computational efficiency. The proposed method is evaluated on three related benchmark datasets and outperforms the state-of-the-art methods on two of the three.
38
Abstract
The study of biological point-light displays (PLDs) has fascinated researchers for more than 40 years. However, the mechanisms underlying PLD perception remain unclear, partly due to difficulties with precisely controlling and transforming PLD sequences. Furthermore, little agreement exists regarding how transformations are performed. This article introduces a new free-access program called PLAViMoP (Point-Light Display Visualization and Modification Platform) and presents the algorithms for PLD transformations included in the software. PLAViMoP fulfills two objectives. First, it standardizes and makes explicit many classical spatial and kinematic transformations described in the PLD literature. Second, given its optimized interface, PLAViMoP makes these transformations easy and fast to achieve. Overall, PLAViMoP could directly help scientists avoid technical difficulties and make possible the use of PLDs for nonacademic applications.
39
Pavlova MA, Erb M, Hagberg GE, Loureiro J, Sokolov AN, Scheffler K. "Wrong Way Up": Temporal and Spatial Dynamics of the Networks for Body Motion Processing at 9.4 T. Cereb Cortex 2018; 27:5318-5330. [PMID: 28981613] [DOI: 10.1093/cercor/bhx151]
Abstract
Body motion delivers a wealth of socially relevant information, yet display inversion severely impedes biological motion (BM) processing. It is largely unknown how the brain circuits for BM are affected by display inversion. As upright and upside-down point-light BM displays are similar, we addressed this issue by using ultrahigh-field functional MRI at 9.4 T, which provides high sensitivity and spatial resolution. Whole-brain analysis, along with exploration of the temporal dynamics of the blood-oxygen-level-dependent response, reveals that in the left hemisphere, inverted BM activates anterior networks likely engaged in decision making and cognitive control, whereas readily recognizable upright BM activates only posterior areas. In the right hemisphere, multiple networks are activated in response to upright BM, as compared with scarce activation to inversion. Even with identical visual input under display inversion, a large-scale network in the right hemisphere is detected in perceivers who do not consistently interpret displays as shown the "wrong way up." For the first time, we uncover (1) the (multi)functional involvement of each region in the networks underpinning BM processing and (2) large-scale ensembles of regions playing in unison with distinct temporal dynamics. The outcome sheds light on the neural circuits underlying BM processing as an essential part of the social brain.
Affiliation(s)
Marina A Pavlova
- Department of Biomedical Magnetic Resonance, Medical School, Eberhard Karls University of Tübingen
- Department of Psychiatry and Psychotherapy, Medical School, Eberhard Karls University of Tübingen
Michael Erb
- Department of Biomedical Magnetic Resonance, Medical School, Eberhard Karls University of Tübingen
- High-Field Magnetic Resonance Center, Max Planck Institute for Biological Cybernetics
Gisela E Hagberg
- Department of Biomedical Magnetic Resonance, Medical School, Eberhard Karls University of Tübingen
- High-Field Magnetic Resonance Center, Max Planck Institute for Biological Cybernetics
Joana Loureiro
- Department of Biomedical Magnetic Resonance, Medical School, Eberhard Karls University of Tübingen
- High-Field Magnetic Resonance Center, Max Planck Institute for Biological Cybernetics
Alexander N Sokolov
- Women's Health Research Institute, Department of Women's Health, Medical School, Eberhard Karls University of Tübingen, Tübingen 72076, Germany
Klaus Scheffler
- Department of Biomedical Magnetic Resonance, Medical School, Eberhard Karls University of Tübingen
- High-Field Magnetic Resonance Center, Max Planck Institute for Biological Cybernetics
40
Cattaneo L, Veroni V, Boria S, Tassinari G, Turella L. Sex Differences in Affective Facial Reactions Are Present in Childhood. Front Integr Neurosci 2018; 12:19. [PMID: 29875642] [PMCID: PMC5974214] [DOI: 10.3389/fnint.2018.00019]
Abstract
Adults exposed to affective facial displays produce specific rapid facial reactions (RFRs), which are of lower intensity in males than in females. We investigated this sex difference in a population of 60 primary school children (30 F; 30 M), aged 7–10 years. We recorded the surface electromyographic (EMG) signal from the corrugator supercilii and the zygomatici muscles while children watched affective facial displays. Results showed the expected smiling RFR to smiling faces and the expected frowning RFR to sad faces. A systematic difference between male and female participants was observed, with boys showing less ample EMG responses than age-matched girls. We demonstrate that sex differences in the somatic component of affective motor patterns are also present in childhood.
Affiliation(s)
- Luigi Cattaneo
- Dipartimento di Neuroscienze, Biomedicina e Movimento, University of Verona, Verona, Italy
- Vania Veroni
- Dipartimento di Neuroscienze, University of Parma, Parma, Italy
- Sonia Boria
- Dipartimento di Neuroscienze, University of Parma, Parma, Italy
- Giancarlo Tassinari
- Dipartimento di Neuroscienze, Biomedicina e Movimento, University of Verona, Verona, Italy
- Luca Turella
- Center for Mind/Brain Sciences, University of Trento, Trento, Italy
|
41
|
Witkower Z, Tracy JL. Bodily Communication of Emotion: Evidence for Extrafacial Behavioral Expressions and Available Coding Systems. EMOTION REVIEW 2018. [DOI: 10.1177/1754073917749880] [Citation(s) in RCA: 40] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Although scientists dating back to Darwin have noted the importance of the body in communicating emotion, current research on emotion communication tends to emphasize the face. In this article we review the evidence for bodily expressions of emotions—that is, the handful of emotions that are displayed and recognized from certain bodily behaviors (i.e., pride, joy, sadness, shame, embarrassment, anger, fear, and disgust). We also review the previously developed coding systems available for identifying emotions from bodily behaviors. Although no extant coding system provides an exhaustive list of bodily behaviors known to communicate a panoply of emotions, our review provides the foundation for developing such a system.
Affiliation(s)
- Zachary Witkower
- Department of Psychology, University of British Columbia, Canada
- Jessica L. Tracy
- Department of Psychology, University of British Columbia, Canada
|
42
|
Abstract
Perceptions of ambiguous biological motion are modulated by individual cognitive abilities (such as inhibition and empathy) and emotional states (such as anxiety). This study explored the facing-the-viewer (FTV) bias in perceiving ambiguous directions of biological motion, and investigated whether task-irrelevant emotional face cues presented simultaneously in the background, as well as individual social anxiety traits, could affect the FTV bias. We found that facial motion cues in the background affected sociobiologically relevant stimuli, including biological motion, but not non-biological stimuli (conveyed through random dot motion). Individuals with high anxiety traits demonstrated a more dominant FTV bias than individuals with low anxiety traits. Ensemble coding-like processing of multiple task-irrelevant emotional cues magnified the facing-the-viewer bias more than a single emotional cue did. Overall, these findings suggest that an interplay between high-level emotional processing and high-level motion perception (subject to attentional control) contributes to the facing-the-viewer bias.
|
43
|
Abstract
When people speak, they gesture. However, is the audience watching a speaker who is sensitive to this link? We translated the body movements of politicians into stick-figure animations and separated the visual from the audio channel. We then asked participants to match a selection of five audio tracks (including the correct one) with the stick-figure animations. The participants made correct decisions in 65% of all cases (chance level of 20%). Matching voices with animations was less difficult when politicians showed expansive movements and spoke with a loud voice. Thus, people are sensitive to the link between motion cues and vocal cues, and this link appears to become even more apparent when a speaker shows expressive behaviors. Future work will have to refine and validate the methods applied and investigate how mismatches between communication channels affect the impressions that people form of politicians.
|
44
|
Can biological motion research provide insight on how to reduce friendly fire incidents? Psychon Bull Rev 2017; 23:1429-1439. [PMID: 26850024 DOI: 10.3758/s13423-016-1006-9] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The ability to accurately detect, perceive, and recognize biological motion can be associated with a fundamental drive for survival, and it is of significant interest to perception researchers. This field examines various perceptual features of motion and has been assessed and applied in several real-world contexts (e.g., biometrics, sport). Unexplored applications still exist, however, including the military issue of friendly fire. There are many causes of and processes leading to friendly fire, and specific challenges are associated with visual information extraction during engagement, such as brief glimpses, low acuity, camouflage, and uniform deception. Furthermore, visual information must often be processed under highly stressful (potentially threatening), time-constrained conditions that present a significant problem for soldiers. Biological motion research and anecdotal evidence from experienced combatants suggest that the intentions, emotions, and identities of human agents can be identified and discriminated from their motion, even when the visual display is degraded or limited. Furthermore, research suggests that the perceptual capability to discriminate movement under visually constrained conditions is trainable. Therefore, given the limited military research linked to biological motion and friendly fire, an opportunity for cross-disciplinary investigation exists. The focus of this paper is twofold: first, to provide evidence for a possible link between biological motion factors and friendly fire, and second, to propose conceptual and methodological considerations and recommendations for perceptual-cognitive training within current military programs.
|
45
|
Williams EH, Cross ES. Decreased reward value of biological motion among individuals with autistic traits. Cognition 2017; 171:1-9. [PMID: 29101779 PMCID: PMC5825385 DOI: 10.1016/j.cognition.2017.10.017] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2017] [Revised: 10/19/2017] [Accepted: 10/21/2017] [Indexed: 01/01/2023]
Abstract
The Social Motivation Theory of ASD links social impairments to reduced value of social stimuli. We evaluated the reward value of human motion among people with a range of AQ scores. Subjects value human motion more than robotic or control motion, but this preference diminishes with higher AQ scores.
The Social Motivation Theory posits that a reduced sensitivity to the value of social stimuli, specifically faces, can account for social impairments in Autism Spectrum Disorders (ASD). Research has demonstrated that typically developing (TD) individuals preferentially orient towards another type of salient social stimulus, namely biological motion. Individuals with ASD, however, do not show this preference. While the reward value of faces to both TD and ASD individuals has been well-established, the extent to which individuals from these populations also find human motion to be rewarding remains poorly understood. The present study investigated the value assigned to biological motion by TD participants in an effort task, and further examined whether these values differed among individuals with more autistic traits. The results suggest that TD participants value natural human motion more than rigid, machine-like motion or non-human control motion, but this preference is attenuated among individuals reporting more autistic traits. This study provides the first evidence to suggest that individuals with more autistic traits find a broader conceptualisation of social stimuli less rewarding compared to individuals with fewer autistic traits. By quantifying the social reward value of human motion, the present findings contribute an important piece to our understanding of social motivation in individuals with and without social impairments.
Affiliation(s)
- Elin H Williams
- Social Brain in Action Laboratory, Wales Institute for Cognitive Neuroscience, School of Psychology, Bangor University, Wales, United Kingdom
- Emily S Cross
- Social Brain in Action Laboratory, Wales Institute for Cognitive Neuroscience, School of Psychology, Bangor University, Wales, United Kingdom; Institute of Neuroscience and Psychology, University of Glasgow, Scotland, United Kingdom; School of Psychology, University of Glasgow, Scotland, United Kingdom
|
46
|
Dopaminergic Modulation of Biological Motion Perception in patients with Parkinson's disease. Sci Rep 2017; 7:10159. [PMID: 28860519 PMCID: PMC5579208 DOI: 10.1038/s41598-017-10463-2] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2016] [Accepted: 08/09/2017] [Indexed: 11/12/2022] Open
Abstract
Parkinson’s disease (PD) is a progressive neurodegenerative disorder pathologically characterized by a selective loss of dopaminergic neurons in the substantia nigra. Previous studies have paid greater attention to motor disturbances in PD, while impairments of cognitive function were often ignored. In the present study, a duration discrimination paradigm was used to assess global and local biological motion (BM) perception in healthy controls (HCs) and PD patients with and without dopamine substitution treatment (DST). Biological motion sequences and inanimate motion sequences (inverted BM sequences) were sequentially presented on a screen. Observers made a verbal two-alternative forced choice to indicate whether the first or second interval appeared longer. The stimuli involved global and local BM sequences. Statistical analyses were conducted on points of subjective equality (PSE). We found significant differences between untreated PD patients and HCs, as well as differences between the global and local BM conditions. PD patients showed a deficit in both global and local BM perception; nevertheless, performance in both BM conditions improved under DST. Our data indicate that BM perception may be impaired in PD patients and that dopaminergic medication helps maintain BM perception in PD.
|
47
|
Visual adaptation alters the apparent speed of real-world actions. Sci Rep 2017; 7:6738. [PMID: 28751645 PMCID: PMC5532221 DOI: 10.1038/s41598-017-06841-5] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2017] [Accepted: 06/19/2017] [Indexed: 11/09/2022] Open
Abstract
The apparent physical speed of an object in the field of view remains constant despite variations in retinal velocity due to viewing conditions (velocity constancy). For example, people and cars appear to move across the field of view at the same objective speed regardless of distance. In this study a series of experiments investigated the visual processes underpinning judgements of objective speed using an adaptation paradigm and video recordings of natural human locomotion. Viewing a video played in slow-motion for 30 seconds caused participants to perceive subsequently viewed clips played at standard speed as too fast, so playback had to be slowed down in order for it to appear natural; conversely after viewing fast-forward videos for 30 seconds, playback had to be speeded up in order to appear natural. The perceived speed of locomotion shifted towards the speed depicted in the adapting video (‘re-normalisation’). Results were qualitatively different from those obtained in previously reported studies of retinal velocity adaptation. Adapting videos that were scrambled to remove recognizable human figures or coherent motion caused significant, though smaller shifts in apparent locomotion speed, indicating that both low-level and high-level visual properties of the adapting stimulus contributed to the changes in apparent speed.
|
48
|
Abstract
In humans, recognition of others' actions involves a cortical network that comprises, among other cortical regions, the posterior superior temporal sulcus (pSTS), where biological motion is coded, and the anterior intraparietal sulcus (aIPS), where movement information is elaborated in terms of meaningful goal-directed actions. This action observation system (AOS) is thought to encode neutral voluntary actions, and possibly some aspects of the affective motor repertoire, but the role of the AOS' areas in processing affective kinematic information has never been examined. Here we investigated whether the AOS plays a role in representing dynamic emotional bodily expressions. In the first experiment, we assessed behavioral adaptation effects of observed affective movements. Participants watched series of happy or fearful whole-body point-light displays (PLDs) as adapters and were then asked to perform an explicit categorization of the emotion expressed in test PLDs. Participants were slower when categorizing either of the two emotions when it was congruent with the emotion in the adapter sequence. We interpreted this effect as adaptation to the emotional content of PLDs. In the second experiment, we combined this paradigm with TMS applied over either the right aIPS, the right pSTS, or the right half of the occipital pole (corresponding to Brodmann's area 17 and serving as control) to examine the neural locus of the adaptation effect. TMS over the aIPS (but not over the other sites) reversed the behavioral cost of adaptation, specifically for fearful contents. This demonstrates that the aIPS contains an explicit representation of affective body movements.
SIGNIFICANCE STATEMENT: In humans, a network of areas, the action observation system, encodes voluntary actions. However, the role of these brain regions in processing affective kinematic information has not been investigated. Here we demonstrate that the aIPS contains a representation of affective body movements. First, in a behavioral experiment, we found an adaptation after-effect for emotional PLDs, indicating the existence of a neural representation selective for affective information in biological motion. To examine the neural locus of this effect, we then combined the adaptation paradigm with TMS. Stimulation of the aIPS (but not of the pSTS or the control site) reversed the behavioral cost of adaptation, specifically for fearful contents, demonstrating that the aIPS contains a representation of affective body movements.
|
49
|
Aviv V. Abstracting Dance: Detaching Ourselves from the Habitual Perception of the Moving Body. Front Psychol 2017; 8:776. [PMID: 28559871 PMCID: PMC5432560 DOI: 10.3389/fpsyg.2017.00776] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2017] [Accepted: 04/26/2017] [Indexed: 12/19/2022] Open
Abstract
This work explores to what extent the notion of abstraction in dance is valid and what it entails. Unlike abstraction in the fine arts that aims for a certain independence from representation of the external world through the use of non-figurative elements, dance is realized by a highly familiar object – the human body. In fact, we are all experts in recognizing the human body. For instance, we can mentally reconstruct its motion from minimal information (e.g., via a “dot display”), predict body trajectory during movement and identify emotional expressions of the body. Nonetheless, despite the presence of a human dancer on stage and our extreme familiarity with the human body, the process of abstraction is applicable also to dance. Abstract dance removes itself from familiar daily movements, violates the observer’s predictions about future movements and detaches itself from narratives. In so doing, abstract dance exposes the observer to perceptions of unfamiliar situations, thus paving the way to new interpretations of human motion and hence to perceiving ourselves differently in both the physical and emotional domains.
Affiliation(s)
- Vered Aviv
- The Jerusalem Academy of Music and Dance, Jerusalem, Israel
|
50
|
Abstract
Recognising emotions from faces that are partly covered is more difficult than from fully visible faces. The focus of the present study is on the role of an Islamic versus non-Islamic context, i.e. an Islamic versus non-Islamic headdress, in perceiving emotions. We report an experiment that investigates whether briefly presented (40 ms) facial expressions of anger, fear, happiness and sadness are perceived differently when covered by a niqāb or turban, compared to a cap and shawl. In addition, we examined whether oxytocin, a neuropeptide regulating affection, bonding and cooperation between ingroup members and fostering outgroup vigilance and derogation, would differentially impact emotion recognition from wearers of Islamic versus non-Islamic headdresses. The results show, first, that the recognition of happiness was more accurate when the face was covered by a Western compared to an Islamic headdress. Second, participants more often incorrectly assigned sadness to a face covered by an Islamic headdress compared to a cap and shawl. Third, when correctly recognising sadness, they did so faster when the face was covered by an Islamic compared to a Western headdress. Fourth, oxytocin did not modulate any of these effects. Implications for theorising about the role of group membership in emotion perception are discussed.
Affiliation(s)
- Mariska E Kret
- Cognitive Psychology Unit, Institute of Psychology, Leiden University, Leiden, Netherlands; Leiden Institute for Brain and Cognition, Leiden, Netherlands
- Agneta H Fischer
- Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
|