1. Bunce C, Gehdu BK, Press C, Gray KLH, Cook R. Autistic adults exhibit typical sensitivity to changes in interpersonal distance. Autism Res 2024. PMID: 38828663; DOI: 10.1002/aur.3164
Abstract
The visual processing differences seen in autism often impede individuals' visual perception of the social world. In particular, many autistic people exhibit poor face recognition. Here, we sought to determine whether autistic adults also show impaired perception of dyadic social interactions, a class of stimulus thought to engage face-like visual processing. Our focus was the perception of interpersonal distance. Participants completed distance change detection tasks, in which they had to make perceptual decisions about the distance between two actors. On half of the trials, participants judged whether the actors moved closer together; on the other half, whether they moved further apart. In a nonsocial control task, participants made similar judgments about two grandfather clocks. We also assessed participants' face recognition ability using standardized measures. The autistic and nonautistic observers showed similar levels of perceptual sensitivity to changes in interpersonal distance when viewing social interactions. As expected, however, the autistic observers showed clear signs of impaired face recognition. Despite putative similarities between the visual processing of faces and dyadic social interactions, our results suggest that these two facets of social vision may dissociate.
Affiliations
- Carl Bunce
- Department of Psychological Sciences, Birkbeck, University of London, London, UK
- School of Psychology, University of Leeds, Leeds, UK
- Bayparvah Kaur Gehdu
- Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Clare Press
- Department of Experimental Psychology, University College London, London, UK
- Wellcome Centre for Human Neuroimaging, University College London, London, UK
- Katie L H Gray
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Richard Cook
- School of Psychology, University of Leeds, Leeds, UK

2. Charbonneau M, Curioni A, McEllin L, Strachan JWA. Flexible Cultural Learning Through Action Coordination. Perspect Psychol Sci 2024; 19:201-222. PMID: 37458767; DOI: 10.1177/17456916231182923
Abstract
The cultural transmission of technical know-how has proven vital to the success of our species. The broad diversity of learning contexts and social configurations, and the various kinds of coordinated interactions they involve, speak to our capacity to flexibly adapt to varied learning contexts and succeed in transmitting vital knowledge within them. Although often recognized by ethnographers, the flexibility of cultural learning has so far received little attention in terms of cognitive mechanisms. We argue that a key feature of this flexibility is that both models and learners recruit cognitive mechanisms of action coordination to modulate their behavior contingent on the behavior of their partner, generating a process of mutual adaptation that supports the successful transmission of technical skills in diverse and fluctuating learning environments. We propose that the study of cultural learning would benefit from the experimental methods, results, and insights of joint-action research and, complementarily, that the field of joint-action research could expand its scope by integrating learning and cultural dimensions. Bringing these two fields together promises to enrich our understanding of cultural learning, its contextual flexibility, and joint-action coordination.
Affiliations
- Mathieu Charbonneau
- Africa Institute for Research in Economics and Social Sciences, Université Mohammed VI Polytechnique
- Luke McEllin
- Department of Cognitive Science, Central European University

3. Barzy M, Morgan R, Cook R, Gray KLH. Are social interactions preferentially attended in real-world scenes? Evidence from change blindness. Q J Exp Psychol (Hove) 2023; 76:2293-2302. PMID: 36847458; PMCID: PMC10503233; DOI: 10.1177/17470218231161044
Abstract
In change detection paradigms, changes to social or animate aspects of a scene are detected better and faster than changes to non-social or inanimate aspects. While previous studies have focused on how changes to individual faces/bodies are detected, individuals presented within a social interaction may be further prioritised, as the accurate interpretation of social interactions may convey a competitive advantage. Over three experiments, we explored change detection in complex real-world scenes, in which the change was the removal of (a) an individual on their own, (b) an individual who was interacting with others, or (c) an object. In Experiment 1 (N = 50), we measured change detection for non-interacting individuals versus objects. In Experiment 2 (N = 49), we measured change detection for interacting individuals versus objects. Finally, in Experiment 3 (N = 85), we measured change detection for non-interacting versus interacting individuals. We also ran an inverted version of each task to determine whether differences were driven by low-level visual features. In Experiments 1 and 2, changes to non-interacting and interacting individuals were detected better and more quickly than changes to objects. We also found inversion effects for both non-interaction and interaction changes: both were detected more quickly when upright than when inverted. No such inversion effect was seen for objects, suggesting that the high-level, social content of the images drove the faster change detection for social versus object targets. Finally, changes to individuals in non-interactions were detected faster than changes to individuals presented within an interaction. Our results replicate the social advantage often found in change detection paradigms. However, changes to individuals presented within social interaction configurations do not appear to be detected more quickly or easily than those in non-interacting configurations.
Affiliations
- Mahsa Barzy
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Rachel Morgan
- School of Mathematics and Statistics, University of Reading, Reading, UK
- Richard Cook
- Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Department of Psychology, University of York, York, UK
- Katie L H Gray
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK

4. Xu Z, Chen H, Wang Y. Invisible social grouping facilitates the recognition of individual faces. Conscious Cogn 2023; 113:103556. PMID: 37541010; DOI: 10.1016/j.concog.2023.103556
Abstract
Emerging evidence suggests a specialized mechanism supporting perceptual grouping of social entities. However, the stage at which social grouping is processed is unclear. Across four experiments, we showed that participants' recognition of a visible face was facilitated by the presence of a second, facing face (thus forming a social grouping) relative to a nonfacing face, even when the second face was invisible. Using a monocular/dichoptic paradigm, we further found that the social grouping facilitation effect occurred when the two faces were presented dichoptically to different eyes rather than monocularly to the same eye, suggesting that social grouping relies on binocular rather than monocular neural channels. The above effects were not found for inverted face dyads, thereby ruling out the contribution of nonsocial factors. Taken together, these findings support the unconscious influence of social grouping on visual perception and suggest an early origin of social grouping processing in the visual pathway.
Affiliations
- Zhenjie Xu
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou 310028, Zhejiang, China
- Hui Chen
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou 310028, Zhejiang, China.
- Yingying Wang
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou 310028, Zhejiang, China.

5. Irwantoro K, Nimsha Nilakshi Lennon N, Mareschal I, Miflah Hussain Ismail A. Contextualising facial expressions: The effect of temporal context and individual differences on classification. Q J Exp Psychol (Hove) 2023; 76:450-459. PMID: 35360991; PMCID: PMC9896254; DOI: 10.1177/17470218221094296
Abstract
The influence of context on facial expression classification is most often investigated using simple cues in static faces portraying basic expressions with a fixed emotional intensity. We examined (1) whether a perceptually rich, dynamic audiovisual context, presented in the form of movie clips (to achieve closer resemblance to real life), affected the subsequent classification of dynamic basic (happy) and non-basic (sarcastic) facial expressions and (2) whether people's susceptibility to contextual cues was related to their ability to classify facial expressions viewed in isolation. Participants classified facial expressions (gradually progressing from neutral to happy/sarcastic in increasing intensity) that followed movie clips. Classification was relatively more accurate and faster when the preceding context predicted the upcoming expression than when it did not. Speeded classifications suggested that predictive contexts reduced the emotional intensity required for accurate classification. More importantly, we show for the first time that participants' accuracy in classifying expressions without an informative context correlated with the magnitude of the contextual effects they experienced: poor classifiers of isolated expressions were more susceptible to a predictive context. Our findings support the emerging view that contextual cues and individual differences must be considered when explaining mechanisms underlying facial expression classification.
Affiliations
- Kinenoita Irwantoro
- School of Psychology, University of Nottingham Malaysia, Semenyih, Malaysia
- Isabelle Mareschal
- School of Biological and Behavioural Sciences, Queen Mary University of London, London, UK

6. Chen Y, Xu Q, Fan C, Wang Y, Jiang Y. Eye gaze direction modulates nonconscious affective contextual effect. Conscious Cogn 2022; 102:103336. DOI: 10.1016/j.concog.2022.103336

7.
Abstract
The accurate decoding of facial emotion expressions lies at the center of many research traditions in psychology. Much of this research, while paying lip service to the importance of context in emotion perception, has used stimuli that were carefully created to be deprived of contextual information. The participants' task is to associate the expression shown in the face with a correct label, essentially changing a social perception task into a cognitive task. In fact, in many cases, the task can be carried out correctly without engaging emotion recognition at all. The present article argues that infusing context in emotion perception does not only add an additional source of information but changes the way that participants approach the task by rendering it a social perception task rather than a cognitive task. Importantly, distinguishing between accuracy (perceiving the intended emotions) and bias (perceiving additional emotions to those intended) leads to a more nuanced understanding of social emotion perception. Results from several studies that use the Assessment of Contextual Emotions demonstrate the significance and social functionality of simultaneously considering emotion decoding accuracy and bias for social interaction in different cultures, their key personality and societal correlates, and their function for close relationships processes.
Affiliations
- Ursula Hess
- Department of Psychology, Humboldt-Universität zu Berlin, Germany
- Konstantinos Kafetsios
- School of Film, Aristotle University of Thessaloniki, Greece; Department of Psychology, Palacký University in Olomouc, Czech Republic

8. Flavell JC, Over H, Vestner T, Cook R, Tipper SP. Rapid detection of social interactions is the result of domain general attentional processes. PLoS One 2022; 17:e0258832. PMID: 35030168; PMCID: PMC8759659; DOI: 10.1371/journal.pone.0258832
Abstract
Visual search displays of interacting and non-interacting pairs have demonstrated that detection of social interactions is facilitated. For example, two people facing each other are found faster than two people with their backs turned: an effect that may reflect social binding. However, recent work has shown the same effects with non-social arrow stimuli, where towards-facing arrows are detected faster than away-facing arrows. This latter work suggests that a primary mechanism is an attention-orienting process driven by basic low-level direction cues. However, evidence for lower-level attentional processes does not preclude an additional role for higher-level social processes. In this series of experiments, we therefore test this idea further by directly comparing basic visual features that orient attention with representations of socially interacting individuals. Results confirm the potency of attentional orienting via low-level visual features in the detection of interacting objects. In contrast, there is little evidence that representations of social interactions influence initial search performance.
Affiliations
- Jonathan C. Flavell
- Department of Psychology, University of York, York, North Yorkshire, United Kingdom
- Harriet Over
- Department of Psychology, University of York, York, North Yorkshire, United Kingdom
- Tim Vestner
- Department of Psychology, Birkbeck, University of London, London, Greater London, United Kingdom
- Richard Cook
- Department of Psychology, Birkbeck, University of London, London, Greater London, United Kingdom
- Steven P. Tipper
- Department of Psychology, University of York, York, North Yorkshire, United Kingdom

9. Vestner T, Flavell JC, Cook R, Tipper SP. Remembered together: Social interaction facilitates retrieval while reducing individuation of features within bound representations. Q J Exp Psychol (Hove) 2021; 75:1593-1602. PMID: 34663133; DOI: 10.1177/17470218211056499
Abstract
When encountering social scenes, there appears to be rapid and automatic detection of social interactions. Representations of interacting people appear to be bound together via a mechanism of joint attention, which results in enhanced memory, even when participants are unaware that memory is required. However, even though access is facilitated for socially bound representations, we predicted that the individual features of these representations are less efficiently encoded, and features can therefore migrate between the constituent interacting individuals. This was confirmed in Experiment 1, where overall memory for interacting compared with non-interacting dyads was facilitated but binding of features within an individual was weak, resulting in feature migration errors. Experiment 2 demonstrated the role of conscious strategic processing, where participants were aware that memory would be tested. With such awareness, attention can be focused on individual objects allowing the binding of features. The results support an account of two forms of processing: an initial automatic social binding process where interacting individuals are represented as one episode in memory facilitating access and a further stage where attention can be focused on each individual enabling the binding of features within individual objects.
Affiliations
- Tim Vestner
- Department of Psychology, University of York, York, UK
- Richard Cook
- Department of Psychology, University of York, York, UK; Department of Psychological Sciences, Birkbeck, University of London, London, UK

10. Vestner T, Over H, Gray KLH, Tipper SP, Cook R. Searching for people: Non-facing distractor pairs hinder the visual search of social scenes more than facing distractor pairs. Cognition 2021; 214:104737. PMID: 33901835; PMCID: PMC8346951; DOI: 10.1016/j.cognition.2021.104737
Abstract
There is growing interest in the visual and attentional processes recruited when human observers view social scenes containing multiple people. Findings from visual search paradigms have helped shape this emerging literature. Previous research has established that, when hidden amongst pairs of individuals facing in the same direction (leftwards or rightwards), pairs of individuals arranged front-to-front are found faster than pairs of individuals arranged back-to-back. Here, we describe a second, closely related effect with important theoretical implications. When searching for a pair of individuals facing in the same direction (leftwards or rightwards), target dyads are found faster when hidden amongst distractor pairs arranged front-to-front than when hidden amongst distractor pairs arranged back-to-back. This distractor arrangement effect was also obtained with target and distractor pairs constructed from arrows and other common objects that cue visuospatial attention. These findings argue against the view that pairs of people arranged front-to-front capture exogenous attention via a domain-specific orienting mechanism. Rather, it appears that salient direction cues (e.g., gaze direction, body orientation, arrows) hamper systematic search and impede efficient interpretation when distractor pairs are arranged back-to-back.
Affiliations
- Tim Vestner
- Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Harriet Over
- Department of Psychology, University of York, York, UK
- Katie L H Gray
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Richard Cook
- Department of Psychological Sciences, Birkbeck, University of London, London, UK; Department of Psychology, University of York, York, UK.

11. Bonassi A, Ghilardi T, Gabrieli G, Truzzi A, Doi H, Borelli JL, Lepri B, Shinohara K, Esposito G. The Recognition of Cross-Cultural Emotional Faces Is Affected by Intensity and Ethnicity in a Japanese Sample. Behav Sci (Basel) 2021; 11:bs11050059. PMID: 33922502; PMCID: PMC8146535; DOI: 10.3390/bs11050059
Abstract
Human faces convey a range of emotions and psychobiological signals that support social interactions. Multiple factors potentially mediate the facial expression of emotions across cultures. To further determine the mechanisms underlying human emotion recognition in a complex and ecological environment, we hypothesized that both behavioral and neurophysiological measures would be influenced by stimulus ethnicity (Japanese, Caucasian) in the context of ambiguous emotional expressions (mid-happy, angry). We assessed the neurophysiological and behavioral responses of neurotypical Japanese adults (N = 27, 13 males) in a facial expression recognition task. Results uncover an interaction between universal and culturally driven mechanisms. No differences in behavioral responses were found between male and female participants, male and female faces, or neutral Japanese versus Caucasian faces. However, ambiguous Caucasian emotional expressions, which required more energy-consuming processing (as indicated by neurophysiological results on the Arousal Index), were judged more accurately than Japanese ones. Additionally, a differential Frontal Asymmetry Index in neuronal activation, the signature of an approach versus avoidance response, was found in male participants according to the gender and emotional valence of the stimuli.
Affiliations
- Andrea Bonassi
- Department of Psychology and Cognitive Science, University of Trento, 38068 Rovereto, Italy
- Mobile and Social Computing Lab, Fondazione Bruno Kessler, 38122 Trento, Italy
- Tommaso Ghilardi
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, 6525 AJ Nijmegen, The Netherlands
- Giulio Gabrieli
- Psychology Program, School of Social Sciences, Nanyang Technological University, Singapore 639818, Singapore
- Anna Truzzi
- Trinity College Institute of Neuroscience, Trinity College, Dublin 2, Ireland
- Hirokazu Doi
- Medical Engineering Department, Kokushikan University, Tokyo 154-8515, Japan
- Jessica L. Borelli
- Department of Psychological Science, University of California, Irvine, CA 92697-7085, USA
- Bruno Lepri
- Mobile and Social Computing Lab, Fondazione Bruno Kessler, 38122 Trento, Italy
- Kazuyuki Shinohara
- Department of Neurology and Behavior, Graduate School of Biomedical Sciences, Nagasaki University, Nagasaki 852-8523, Japan
- Gianluca Esposito
- Department of Psychology and Cognitive Science, University of Trento, 38068 Rovereto, Italy
- Psychology Program, School of Social Sciences, Nanyang Technological University, Singapore 639818, Singapore
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore 308222, Singapore

12. Burns EJ, Yang W, Ying H. Friend effects framework: Contrastive and hierarchical processing in cheerleader effects. Cognition 2021; 212:104715. PMID: 33823426; DOI: 10.1016/j.cognition.2021.104715
Abstract
Cheerleader effects, group attractiveness effects, and divisive normalization are all characterized by faces appearing more attractive when seen within a group. However, it is possible that your friends could have a detrimental effect upon your attractiveness too: if these group effects arise partly from a contrastive process between your face and your friends', then highly attractive friends may diminish your attractiveness. We confirm this hypothesis across two experiments by showing that the presence of highly attractive friends can indeed make you appear less attractive (i.e., a reverse cheerleader effect), suggesting friend effects are driven in part by a contrastive process against the group. However, these effects are also influenced by your own attractiveness in a fashion consistent with hierarchical encoding, whereby less attractive targets benefit more than attractive targets from being viewed in an increasingly unattractive group. Our final experiment demonstrates that the company of others not only alters our attractiveness but also induces shifts in how average or distinctive a target face appears, with these averageness effects associated with the friend effects observed in our first experiment. We present a Friend Effects Framework in which 'friend effects' is an umbrella term for the positive (e.g., cheerleader effects, group attractiveness effects) and negative (i.e., the reverse cheerleader effect) effects that hierarchical encoding, group contrastive processes, and other influences of friends can have on your attractiveness.
Affiliations
- Weiying Yang
- Department of Psychology, Soochow University, Suzhou, China
- Haojiang Ying
- Department of Psychology, Soochow University, Suzhou, China.

13. Bunce C, Gray KLH, Cook R. The perception of interpersonal distance is distorted by the Müller-Lyer illusion. Sci Rep 2021; 11:494. PMID: 33436801; PMCID: PMC7803751; DOI: 10.1038/s41598-020-80073-y
Abstract
There is growing interest in how human observers perceive social scenes containing multiple people. Interpersonal distance is a critical feature when appraising these scenes; proxemic cues are used by observers to infer whether two people are interacting, the nature of their relationship, and the valence of their current interaction. Presently, however, remarkably little is known about how interpersonal distance is encoded within the human visual system. Here we show that the perception of interpersonal distance is distorted by the Müller-Lyer illusion. Participants perceived the distance between two target points to be compressed or expanded depending on whether face pairs were positioned inside or outside the to-be-judged interval. This illusory bias was found to be unaffected by manipulations of face direction. These findings aid our understanding of how human observers perceive interpersonal distance and may inform theoretical accounts of the Müller-Lyer illusion.
Affiliations
- Carl Bunce
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, UK
- Katie L H Gray
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Richard Cook
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, UK

14. Vestner T, Gray KLH, Cook R. Visual search for facing and non-facing people: The effect of actor inversion. Cognition 2020; 208:104550. PMID: 33360076; DOI: 10.1016/j.cognition.2020.104550
Abstract
In recent years, there has been growing interest in how human observers perceive, attend to, and recall social interactions viewed from third-person perspectives. One of the interesting findings to emerge from this new literature is the search advantage for facing dyads. When hidden amongst pairs of individuals facing in the same direction, pairs of individuals arranged front-to-front are found faster in visual search tasks than pairs of individuals arranged back-to-back. Interestingly, the search advantage for facing dyads appears to be sensitive to the orientation of the people depicted. While front-to-front target pairs are found faster than back-to-back targets when target and distractor pairings are shown upright, front-to-front and back-to-back targets are found equally quickly when pairings are shown upside-down. In the present study, we sought to better understand why the search advantage for facing dyads is sensitive to the orientation of the people depicted. To begin, we show that the orientation sensitivity of the search advantage is seen with dyads constructed from faces only, and from bodies with the head and face occluded. We replicate these effects using two different visual search paradigms. We go on to show that individual faces and bodies, viewed in profile, produce strong attentional cueing effects when shown upright, but not when presented upside-down. Together with recent evidence that arrows arranged front-to-front also produce the search advantage for facing dyads, these findings support the view that the search advantage is a by-product of the ability of constituent elements to direct observers' visuo-spatial attention.
Affiliations
- Tim Vestner
- Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Katie L H Gray
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Richard Cook
- Department of Psychological Sciences, Birkbeck, University of London, London, UK; Department of Psychology, University of York, York, UK.

15. Why are social interactions found quickly in visual search tasks? Cognition 2020; 200:104270. PMID: 32220782; PMCID: PMC7315127; DOI: 10.1016/j.cognition.2020.104270
Abstract
When asked to find a target dyad amongst non-interacting individuals, participants respond faster when the individuals in the target dyad are shown face-to-face (suggestive of a social interaction), than when they are presented back-to-back. Face-to-face dyads may be found faster because social interactions recruit specialized processing. However, human faces and bodies are salient directional cues that exert a strong influence on how observers distribute their attention. Here we report that a similar search advantage exists for ‘point-to-point’ and ‘point-to-face’ target arrangements constructed using arrows – a non-social directional cue. These findings indicate that the search advantage seen for face-to-face dyads is a product of the directional cues present within arrangements, not the fact that they are processed as social interactions, per se. One possibility is that, when arranged in the face-to-face or point-to-point configuration, pairs of directional cues (faces, bodies, arrows) create an attentional ‘hot-spot’ – a region of space in between the elements to which attention is directed by multiple cues. Due to the presence of this hot-spot, observers' attention may be drawn to the target location earlier in a serial visual search.
|
16
|
Walbrin J, Mihai I, Landsiedel J, Koldewyn K. Developmental changes in visual responses to social interactions. Dev Cogn Neurosci 2020; 42:100774. [PMID: 32452460] [PMCID: PMC7075793] [DOI: 10.1016/j.dcn.2020.100774]
Abstract
Highlights: Children show less interaction selectivity in the pSTS than adults; adults show bilateral pSTS selectivity, while children are more right-lateralized; exploratory findings suggest interaction selectivity in pSTS is more focally tuned in adults.
Recent evidence demonstrates that a region of the posterior superior temporal sulcus (pSTS) is selective to visually observed social interactions in adults. In contrast, little is known about neural responses to social interactions in children. Here, we used fMRI to ask whether the pSTS is ‘tuned’ to social interactions in children at all, and if so, how selectivity might differ from adults. This was investigated in the pSTS, along with several other socially-tuned regions in neighbouring temporal cortex: extrastriate body area, face selective STS, fusiform face area, and mentalizing selective temporo-parietal junction. Both children and adults showed selectivity to social interaction within right pSTS, while only adults showed selectivity on the left. Adults also showed both more focal and greater selectivity than children (6–12 years) bilaterally. Exploratory sub-group analyses showed that younger children (6–8), but not older children (9–12), are less selective than adults on the right, while there was a continuous developmental trend (adults > older > younger) in left pSTS. These results suggest that, over development, the neural response to social interactions is characterized by increasingly more selective, focal, and bilateral pSTS responses, a process that likely continues into adolescence.
Affiliation(s)
- Jon Walbrin
- School of Psychology, Bangor University, Wales, United Kingdom
- Ioana Mihai
- School of Psychology, Bangor University, Wales, United Kingdom
- Kami Koldewyn
- School of Psychology, Bangor University, Wales, United Kingdom
|
17
|
Walbrin J, Koldewyn K. Dyadic interaction processing in the posterior temporal cortex. Neuroimage 2019; 198:296-302. [PMID: 31100434] [PMCID: PMC6610332] [DOI: 10.1016/j.neuroimage.2019.05.027]
Abstract
Recent behavioural evidence shows that visual displays of two individuals interacting are not simply encoded as separate individuals, but as an interactive unit that is 'more than the sum of its parts'. Recent functional magnetic resonance imaging (fMRI) evidence shows the importance of the posterior superior temporal sulcus (pSTS) in processing human social interactions, and suggests that it may represent human-object interactions as qualitatively 'greater' than the average of their constituent parts. The current study aimed to investigate whether the pSTS or other posterior temporal lobe region(s): 1) demonstrated evidence of a dyadic information effect - that is, qualitatively different responses to an interacting dyad than to averaged responses of the same two interactors, presented in isolation; and 2) significantly differentiated between different types of social interactions. Multivoxel pattern analysis was performed in which a classifier was trained to differentiate between qualitatively different types of dyadic interactions. Above-chance classification of interactions was observed in 'interaction selective' pSTS-I and extrastriate body area (EBA), but not in other regions of interest (i.e. face-selective STS and mentalizing-selective temporo-parietal junction). A dyadic information effect was not observed in the pSTS-I, but instead was shown in the EBA; that is, classification of dyadic interactions did not fully generalise to averaged responses to the isolated interactors, indicating that dyadic representations in the EBA contain unique information that cannot be recovered from the interactors presented in isolation. These findings complement previous observations for congruent grouping of human bodies and objects in the broader lateral occipital temporal cortex area.
Highlights: pSTS and EBA classify between different dynamic interactions; the EBA is sensitive to (uniquely) dyadic interaction information; these findings support previous evidence for grouping of interacting people/objects in LOTC.
Affiliation(s)
- Jon Walbrin
- School of Psychology, Bangor University, Wales, UK
|
18
|
The inherently contextualized nature of facial emotion perception. Curr Opin Psychol 2017; 17:47-54. [DOI: 10.1016/j.copsyc.2017.06.006]
|
19
|
Brewer R, Biotti F, Bird G, Cook R. Typical integration of emotion cues from bodies and faces in Autism Spectrum Disorder. Cognition 2017; 165:82-87. [PMID: 28525805] [DOI: 10.1016/j.cognition.2017.05.011]
Abstract
Contextual cues derived from body postures bias how typical observers categorize facial emotion; the same facial expression may be perceived as anger or disgust when aligned with angry and disgusted body postures. Individuals with Autism Spectrum Disorder (ASD) are thought to have difficulties integrating information from disparate visual regions to form unitary percepts, and may be less susceptible to visual illusions induced by context. The current study investigated whether individuals with ASD exhibit diminished integration of emotion cues extracted from faces and bodies. Individuals with and without ASD completed a binary expression classification task, categorizing facial emotion as 'Disgust' or 'Anger'. Facial stimuli were drawn from a morph continuum blending facial disgust and anger, and presented in isolation, or accompanied by an angry or disgusted body posture. Participants were explicitly instructed to disregard the body context. Contextual modulation was inferred from a shift in the resulting psychometric functions. Contrary to prediction, observers with ASD showed typical integration of emotion cues from the face and body. Correlation analyses suggested a relationship between the ability to categorize emotion from isolated faces, and susceptibility to contextual influence within the ASD sample; individuals with imprecise facial emotion classification were influenced more by body posture cues.
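The abstract's approach of inferring contextual modulation from a shift in psychometric functions can be illustrated with a short sketch. This is not the authors' analysis code; the function form (logistic), the morph levels, and all response proportions below are invented for illustration. The idea is to fit a psychometric curve to the proportion of 'Disgust' responses at each morph level, separately for each body-posture context, and read contextual influence off the shift in the point of subjective equality (PSE).

```python
# Hypothetical sketch of a psychometric-shift analysis (illustrative data only).
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, slope):
    """Proportion of 'Disgust' responses as a function of morph level."""
    return 1.0 / (1.0 + np.exp(-slope * (x - pse)))

morph = np.linspace(0, 100, 9)  # 0 = pure anger, 100 = pure disgust

# Fabricated response proportions for two body-posture contexts.
p_angry_body = np.array([.02, .05, .10, .20, .40, .65, .85, .95, .98])
p_disgust_body = np.array([.05, .12, .25, .45, .70, .88, .95, .98, .99])

(pse_angry, _), _ = curve_fit(logistic, morph, p_angry_body, p0=[50, 0.1])
(pse_disgust, _), _ = curve_fit(logistic, morph, p_disgust_body, p0=[50, 0.1])

# A disgusted body context should lower the PSE: less facial disgust is
# needed before the face is categorized as 'Disgust'. The size of this
# shift indexes how strongly body context biases face categorization.
context_shift = pse_angry - pse_disgust
print(f"PSE shift: {context_shift:.1f} morph units")
```

Under this framing, a larger PSE shift means greater susceptibility to body-context cues, which is the quantity the study compared between ASD and non-ASD groups.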
Affiliation(s)
- Rebecca Brewer
- Department of Psychology, Royal Holloway, University of London, UK
- Geoffrey Bird
- Experimental Psychology Department, University of Oxford, UK; MRC Social, Genetic & Developmental Psychiatry Centre, King's College London, UK; Institute of Cognitive Neuroscience, University College London, UK
- Richard Cook
- Department of Psychology, City, University of London, UK
|