1
Samuel S, Erle TM, Kirsch LP, Surtees A, Apperly I, Bukowski H, Auvray M, Catmur C, Kessler K, Quesque F. Three key questions to move towards a theoretical framework of visuospatial perspective taking. Cognition 2024; 247:105787. PMID: 38583320. DOI: 10.1016/j.cognition.2024.105787.
Abstract
What would a theory of visuospatial perspective taking (VSPT) look like? Here, ten researchers in the field, many with different theoretical viewpoints and empirical approaches, present their consensus on the three big questions we need to answer in order to bring this theory (or these theories) closer.
Affiliation(s)
- Steven Samuel
- Department of Psychology, School of Health and Psychological Sciences, City, University of London, UK
- Thorsten M Erle
- Department of Social Psychology, Tilburg School of Social and Behavioral Sciences, Tilburg University, Tilburg, the Netherlands
- Louise P Kirsch
- Université Paris Cité, INCC UMR 8002, CNRS, F-75006 Paris, France
- Andrew Surtees
- Centre for Developmental Science, School of Psychology, University of Birmingham, Edgbaston, Birmingham, UK; Birmingham Women's and Children's NHS Foundation Trust, Steelhouse Lane, Birmingham, UK
- Ian Apperly
- Centre for Developmental Science, School of Psychology, University of Birmingham, Edgbaston, Birmingham, UK
- Henryk Bukowski
- Institute of Psychological Sciences, Université catholique de Louvain, Louvain-La-Neuve, Belgium
- Malika Auvray
- Sorbonne Université, CNRS, Institut des Systèmes Intelligents et de Robotique, Paris, France
- Caroline Catmur
- Department of Psychology, Institute of Psychiatry, Psychology and Neuroscience, King's College London, UK
- Klaus Kessler
- School of Psychology, University College Dublin, Dublin, Ireland
- Francois Quesque
- Centre de Recherche en Neurosciences de Lyon CRNL, U1028, UMR5292, Trajectoires, F-69500 Bron, France; Centre Ressource de Réhabilitation Psychosociale, CH Le Vinatier, Lyon, France
2
Fogd D, Sebanz N, Kovács ÁM. Flexible social monitoring as revealed by eye movements: Spontaneous mental state updating triggered by others' unexpected actions. Cognition 2024; 249:105812. PMID: 38763072. DOI: 10.1016/j.cognition.2024.105812.
Abstract
Successful interactions require not only representing others' mental states but also flexibly updating them whenever one's original inferences may no longer hold. Such situations arise, for instance, when a partner's behavior is incongruent with one's expectations. Although these situations are rather common, the question of whether people update others' mental states spontaneously upon encountering unexpected behaviors, and whether they use the updated mental states in novel contexts, has been largely unexplored. We addressed these issues in two experiments. In each experiment participants first performed an anticipatory looking task, reacting to a virtual 'partner' who categorized pictures based on their ambiguous or non-ambiguous color. Importantly, participants did not have to track their partner's perspective to perform the task. Following a correct categorization phase, the 'partner' started to systematically miscategorize one of the ambiguous colors (e.g., as if she now believed that the greenish blue was green). We measured how participants' anticipatory looking preceding the partner's categorization changed across trials. Afterward, we asked whether participants implicitly transferred their knowledge about the partner's updated perspective to a new task. Finally, they performed an explicit perspective-taking task to test whether they selectively updated the partner's perspective, but not their own. Results revealed that correct anticipations started to emerge only after a few miscategorizations, indicating spontaneous updating of the other's perspective regarding the miscategorized color. Signatures of updating emerged somewhat earlier when the partner made similarity judgments (Experiment 2), highlighting the subjective nature of her decisions, than when she followed an explicit color-categorization rule (Experiment 1).
In the explicit perspective-taking task of both experiments, roughly half of the participants could categorize items according to the partner's (spontaneously updated) perspective; these participants also used the partner's updated perspective in the implicit transfer task to some degree, and they were the ones who displayed more pronounced anticipatory patterns. These data provide strong evidence that the observed changes in anticipatory looking reflect spontaneous and flexible mental state updating. In addition, the findings point to high individual variability both in the updating of attributed mental states and in the use of the updated mental state content.
Affiliation(s)
- Dóra Fogd
- Department of Cognitive Science, Central European University, Vienna, Austria
- Natalie Sebanz
- Department of Cognitive Science, Central European University, Vienna, Austria
3
Deroy O, Longin L, Bahrami B. Co-perceiving: Bringing the social into perception. Wiley Interdiscip Rev Cogn Sci 2024:e1681. PMID: 38706396. DOI: 10.1002/wcs.1681.
Abstract
Humans and other animals possess the remarkable ability to effectively navigate a shared perceptual environment by discerning which objects and spaces are perceived by others and which remain private to themselves. Traditionally, this capacity has been encapsulated under the umbrella of joint attention or joint action. In this comprehensive review, we advocate for a broader and more mechanistic understanding of this phenomenon, termed co-perception. Co-perception encompasses the sensitivity to the perceptual engagement of others and the capability to differentiate between objects perceived privately and those perceived commonly with others. It is distinct from mere simultaneous individual perception. Moreover, discerning between private and common objects does not necessitate intricate mind-reading abilities or mutual coordination. The act of perceiving objects as either private or common accounts for social scenarios in which individuals simply share the same context or may even engage in competition. This conceptual framework encourages a re-examination of classical paradigms that demonstrate social influences on perception. Furthermore, it suggests that the impacts of shared experiences extend beyond affective responses, also influencing perceptual processes. This article is categorized under: Psychology > Attention; Philosophy > Foundations of Cognitive Science; Philosophy > Psychological Capacities.
Affiliation(s)
- Ophelia Deroy
- Faculty of Philosophy, Philosophy of Science and the Study of Religion, Ludwig Maximilian University, Munich, Germany
- Munich Centre for Neurosciences-Brain & Mind, Munich, Germany
- Institute of Philosophy, School of Advanced Study, University of London, London, UK
- Louis Longin
- Faculty of Philosophy, Philosophy of Science and the Study of Religion, Ludwig Maximilian University, Munich, Germany
- Bahador Bahrami
- Crowd Cognition Group, Faculty of General Psychology and Education, Ludwig Maximilian University, Munich, Germany
4
Woo BM, Chisholm GH, Spelke ES. Do toddlers reason about other people's experiences of objects? A limit to early mental state reasoning. Cognition 2024; 246:105760. PMID: 38447359. DOI: 10.1016/j.cognition.2024.105760.
Abstract
Human social life requires an understanding of the mental states of one's social partners. Two people who look at the same objects often experience them differently, as a twinkling light or a planet, a 6 or a 9, and a random cat or Cleo, their pet. Indeed, a primary purpose of communication is to share distinctive experiences of objects or events. Here, we test whether toddlers (14-15 months) are sensitive to another agent's distinctive experiences of pictures when determining the goal underlying the agent's actions in a minimally social context. We conducted nine experiments. Across seven of these experiments (n = 206), toddlers viewed either videotaped or live events in which an actor, whose perspective differed from their own, reached (i) for pictures of human faces that were upright or inverted or (ii) for pictures that depicted a rabbit or a duck at different orientations. Then either the actor or the toddler moved to a new location that aligned their perspectives, and the actor alternately reached to each of the two pictures. By comparing toddlers' looking to the latter reaches, we tested whether their goal attributions accorded with the actor's experience of the pictured objects, with their own experience of the pictured objects, or with no consistency. In no experiment did toddlers encode the actor's goal in accord with his experiences of the pictures. In contrast, in a similar experiment that manipulated the visibility of a picture rather than the experience that it elicited, toddlers (n = 32) correctly expected the actor's action to depend on what was visible and occluded to him, rather than to themselves. In a verbal version of the tasks, older children (n = 35) correctly inferred the actor's goal in both cases. 
These findings provide further evidence for a dissociation between two kinds of mental state reasoning: When toddlers view an actor's object-directed action under minimally social conditions, they take account of the actor's visual access to the object but not the actor's distinctive experience of the object.
Affiliation(s)
- Brandon M Woo
- Department of Psychology, Harvard University, Cambridge, MA 02138, United States; The Center for Brains, Minds, and Machines, Cambridge, MA 02139, United States
- Gabriel H Chisholm
- Department of Psychology, Harvard University, Cambridge, MA 02138, United States; The Center for Brains, Minds, and Machines, Cambridge, MA 02139, United States
- Elizabeth S Spelke
- Department of Psychology, Harvard University, Cambridge, MA 02138, United States; The Center for Brains, Minds, and Machines, Cambridge, MA 02139, United States
5
Zhou S, Sun Y, Zhao Y, Jiang T, Yang H, Li S. I prefer what you can see: The role of visual perspective-taking on the gaze-liking effect. Heliyon 2024; 10:e29615. PMID: 38681601. PMCID: PMC11046107. DOI: 10.1016/j.heliyon.2024.e29615.
Abstract
Individuals' gaze on an object usually leads others to prefer that object, a phenomenon called the gaze-liking effect. However, it is still unclear whether this effect is driven by social factors (i.e., visual perspective-taking) or by domain-general processing (i.e., attention cueing). This research explored the mechanism of the gaze-liking effect by manipulating the objects' visibility to an avatar in six online one-shot experiments. The results showed that participants' affective evaluation of the object was modulated by the avatar's visual perspective: objects visible to the avatar received higher liking ratings. However, when the avatar was replaced with a non-social stimulus, the effect was absent. Furthermore, the gaze-liking effect remained robust when controlling for confounding factors such as the distance between the object and the avatar or the type of stimuli. These findings provide convincing evidence that the gaze-liking effect involves processing of the other's visual experience and is not merely a by-product of the gaze-cueing effect.
Affiliation(s)
- Song Zhou
- School of Psychology, Fujian Normal University, Fuzhou, China
- Yan Zhao
- School of Psychology, Fujian Normal University, Fuzhou, China
- Tao Jiang
- Research Center for Regional and National Comparative Diplomacy, China Foreign Affairs University, Beijing, China
- Huaqi Yang
- School of Psychology, Fujian Normal University, Fuzhou, China
- Sha Li
- School of Psychology, Fujian Normal University, Fuzhou, China
6
Guo G, Wang N, Sun C, Geng H. Embodied Cross-Modal Interactions Based on an Altercentric Reference Frame. Brain Sci 2024; 14:314. PMID: 38671966. PMCID: PMC11048532. DOI: 10.3390/brainsci14040314.
Abstract
Accurate comprehension of others' thoughts and intentions is crucial for smooth social interactions, and understanding their perceptual experiences serves as a fundamental basis for this high-level social cognition. However, previous research investigating perceptual processing from others' perspectives has focused predominantly on the visual modality, leaving multisensory inputs during this process largely unexplored. By incorporating auditory stimuli into visual perspective-taking (VPT) tasks, we designed a novel experimental paradigm in which the spatial correspondence between visual and auditory stimuli was limited to the altercentric rather than the egocentric reference frame. Overall, we found that when individuals engaged in explicit or implicit VPT to process visual stimuli from an avatar's viewpoint, the concomitantly presented auditory stimuli were also processed within this avatar-centered reference frame, revealing altercentric cross-modal interactions.
Affiliation(s)
- Guanchen Guo
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing 100871, China
- Nanbo Wang
- Department of Psychology, School of Health, Fujian Medical University, Fuzhou 350122, China
- Chu Sun
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing 100871, China
- Haiyan Geng
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing 100871, China
7
Kaup B, Ulrich R, Bausenhart KM, Bryce D, Butz MV, Dignath D, Dudschig C, Franz VH, Friedrich C, Gawrilow C, Heller J, Huff M, Hütter M, Janczyk M, Leuthold H, Mallot H, Nürk HC, Ramscar M, Said N, Svaldi J, Wong HY. Modal and amodal cognition: an overarching principle in various domains of psychology. Psychol Res 2024; 88:307-337. PMID: 37847268. PMCID: PMC10857976. DOI: 10.1007/s00426-023-01878-w.
Abstract
Accounting for how the human mind represents the internal and external world is a crucial feature of many theories of human cognition. Central to this question is the distinction between modal as opposed to amodal representational formats. It has often been assumed that one but not both of these two types of representations underlie processing in specific domains of cognition (e.g., perception, mental imagery, and language). However, in this paper, we suggest that both formats play a major role in most cognitive domains. We believe that a comprehensive theory of cognition requires a solid understanding of these representational formats and their functional roles within and across different domains of cognition, the developmental trajectory of these representational formats, and their role in dysfunctional behavior. Here we sketch such an overarching perspective that brings together research from diverse subdisciplines of psychology on modal and amodal representational formats so as to unravel their functional principles and their interactions.
Affiliation(s)
- Barbara Kaup
- Department of Psychology, Fachbereich Psychologie, University of Tübingen, Schleichstr. 4, 72076, Tübingen, Germany
- Rolf Ulrich
- Department of Psychology, Fachbereich Psychologie, University of Tübingen, Schleichstr. 4, 72076, Tübingen, Germany
- Karin M Bausenhart
- Department of Psychology, Fachbereich Psychologie, University of Tübingen, Schleichstr. 4, 72076, Tübingen, Germany
- Donna Bryce
- Department of Psychology, Fachbereich Psychologie, University of Tübingen, Schleichstr. 4, 72076, Tübingen, Germany
- Department of Psychology, University of Augsburg, Augsburg, Germany
- Martin V Butz
- Department of Psychology, Fachbereich Psychologie, University of Tübingen, Schleichstr. 4, 72076, Tübingen, Germany
- Department of Computer Science, University of Tübingen, Sand 14, 72076, Tübingen, Germany
- David Dignath
- Department of Psychology, Fachbereich Psychologie, University of Tübingen, Schleichstr. 4, 72076, Tübingen, Germany
- Carolin Dudschig
- Department of Psychology, Fachbereich Psychologie, University of Tübingen, Schleichstr. 4, 72076, Tübingen, Germany
- Volker H Franz
- Department of Computer Science, University of Tübingen, Sand 14, 72076, Tübingen, Germany
- Claudia Friedrich
- Department of Psychology, Fachbereich Psychologie, University of Tübingen, Schleichstr. 4, 72076, Tübingen, Germany
- Caterina Gawrilow
- Department of Psychology, Fachbereich Psychologie, University of Tübingen, Schleichstr. 4, 72076, Tübingen, Germany
- Jürgen Heller
- Department of Psychology, Fachbereich Psychologie, University of Tübingen, Schleichstr. 4, 72076, Tübingen, Germany
- Markus Huff
- Department of Psychology, Fachbereich Psychologie, University of Tübingen, Schleichstr. 4, 72076, Tübingen, Germany
- Leibniz-Institut für Wissensmedien, Tübingen, Germany
- Mandy Hütter
- Department of Psychology, Fachbereich Psychologie, University of Tübingen, Schleichstr. 4, 72076, Tübingen, Germany
- Markus Janczyk
- Department of Psychology, University of Bremen, Bremen, Germany
- Hartmut Leuthold
- Department of Psychology, Fachbereich Psychologie, University of Tübingen, Schleichstr. 4, 72076, Tübingen, Germany
- Hanspeter Mallot
- Department of Biology, University of Tübingen, Auf der Morgenstelle 28, 72076, Tübingen, Germany
- Hans-Christoph Nürk
- Department of Psychology, Fachbereich Psychologie, University of Tübingen, Schleichstr. 4, 72076, Tübingen, Germany
- Michael Ramscar
- Department of Psychology, Fachbereich Psychologie, University of Tübingen, Schleichstr. 4, 72076, Tübingen, Germany
- Nadia Said
- Department of Psychology, Fachbereich Psychologie, University of Tübingen, Schleichstr. 4, 72076, Tübingen, Germany
- Jennifer Svaldi
- Department of Psychology, Fachbereich Psychologie, University of Tübingen, Schleichstr. 4, 72076, Tübingen, Germany
- German Center for Mental Health (DZPG), partner site, Tübingen, Germany
- Hong Yu Wong
- Department of Philosophy, University of Tübingen, Tübingen, Germany
8
Lukošiūnaitė I, Kovács ÁM, Sebanz N. The influence of another's actions and presence on perspective taking. Sci Rep 2024; 14:4971. PMID: 38424102. PMCID: PMC10904779. DOI: 10.1038/s41598-024-55200-8.
Abstract
The ability to take each other's visuospatial perspective has been linked to people's capacity to perceive another's action possibilities and to predict their actions. Research has also shown that visuospatial perspective taking is supported by one's own mental own-body transformation. However, how these two processes of action perception and visuospatial perspective taking might interact remains largely unknown. By introducing seven angular disparities between participants and the model in the stimulus pictures across "Action" and "No Action" conditions, we investigated whether the observation of a goal-directed action facilitates perspective taking and whether this facilitation depends on the level of mental own-body transformation required to take a perspective. The results showed that action observation facilitated performance independently of the level of mental own-body transformation. The processes behind this facilitation could involve anatomical mapping that is independent of the congruency between the participants' and the model's perspectives. Further, we replicated previous findings showing that participants were more accurate and faster when taking the perspective of a person than of an inanimate object (a chair). The strongest facilitation effects were seen at the highest angular disparities between participants and the model in the stimulus pictures. Together, these findings enhance our knowledge of the mechanisms behind visuospatial perspective taking.
Affiliation(s)
- Ieva Lukošiūnaitė
- Department of Cognitive Science, Central European University, Vienna, Austria
- School of Psychology, University of East Anglia, Norwich, United Kingdom
- Ágnes M Kovács
- Department of Cognitive Science, Central European University, Vienna, Austria
- Natalie Sebanz
- Department of Cognitive Science, Central European University, Vienna, Austria
9
Rothmaler K, Grosse Wiesmann C. Evidence against implicit belief processing in a blindfold task. PLoS One 2023; 18:e0294136. PMID: 37956182. PMCID: PMC10642834. DOI: 10.1371/journal.pone.0294136.
Abstract
Understanding what other people think is crucial to our everyday interactions. We seem to be affected by the perspective of others even in situations where it is irrelevant to us. This intrusion from others' perspectives has been referred to as an altercentric bias and has been suggested to reflect implicit belief processing. There is an ongoing debate about how robust such altercentric effects are and whether they indeed reflect true mentalizing or result from simpler, domain-general processes. As a critical test for true mentalizing, the blindfold manipulation has been proposed: participants are familiarized with a blindfold that is either transparent or opaque. When they then observe a person wearing this blindfold, they can only infer what this person can or cannot see based on their knowledge of the blindfold's transparency. Here, we used this blindfold manipulation to test whether participants' reaction times in detecting an object depended on the agent's belief about the object's location, itself manipulated with a blindfold. As a second task, we asked participants to detect where the agent was going to look for the object. Across two experiments with a large participant pool (N = 234) and different settings (online/lab), we found evidence against altercentric biases in participants' response times in detecting the object. We did, however, replicate a well-documented reality congruency effect. When asked to detect the agent's action, in turn, participants were biased by their own knowledge of where the object should be, in line with egocentric biases previously found in false belief reasoning. These findings suggest that altercentric biases do not reflect belief processing but lower-level processes, or alternatively, that implicit belief processing does not occur when the belief needs to be inferred from one's own experience.
Affiliation(s)
- Katrin Rothmaler
- Minerva Fast Track Research Group Milestones of Early Cognitive Development, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Saxony, Germany
- Humboldt Research Group, Faculty of Education, Leipzig University, Leipzig, Saxony, Germany
- Charlotte Grosse Wiesmann
- Minerva Fast Track Research Group Milestones of Early Cognitive Development, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Saxony, Germany
10
Ford B, Monk R, Litchfield D, Qureshi A. Manipulating avatar age and gender in level-2 visual perspective taking. Psychon Bull Rev 2023; 30:1431-1441. PMID: 36781684. PMCID: PMC10482764. DOI: 10.3758/s13423-023-02249-7.
Abstract
Visual perspective taking (VPT) represents how the world appears from another person's position. The age, group status and emotional displays of the other person have been shown to affect task performance, but tasks often confound social and spatial outcome measures by embedding perspective taking in explicitly social contexts or theory-of-mind reasoning. Furthermore, while previous research has suggested that visual perspective taking may be impacted by avatar characteristics, it is unknown whether this is driven by general group processing or a specific deficit in mentalizing about outgroups, for example, children. Therefore, using a minimally social task (i.e., the task was not communicative, and acknowledging the "mind" of the avatar was not necessitated), we examined whether avatar age and avatar gender affect performance on simpler (low angular disparity) and more effortful, embodied (high angular disparity) perspective judgments. Ninety-two participants represented the visuospatial perspectives of a boy, girl, man, or woman who were presented at various angular disparities. A target object was placed in front of the avatar and participants responded to the orientation of the object from the avatar's position. The findings suggest that social features of visuospatial perspective taking (VSPT) are processed separately from the fundamental spatial computations. Further, Level-2 VSPT appears to be affected by general group categorization (e.g., age and gender) rather than a deficit in mentalizing about a specific outgroup (e.g., children).
Affiliation(s)
- B Ford
- Department of Psychology, Edge Hill University, Ormskirk, Lancashire, UK
- R Monk
- Department of Psychology, Edge Hill University, Ormskirk, Lancashire, UK
- D Litchfield
- Department of Psychology, Edge Hill University, Ormskirk, Lancashire, UK
- A Qureshi
- Department of Psychology, Edge Hill University, Ormskirk, Lancashire, UK
11
Samuel S, Cole GG, Eacott MJ. It's Not You, It's Me: A Review of Individual Differences in Visuospatial Perspective Taking. Perspect Psychol Sci 2023; 18:293-308. PMID: 35994772. PMCID: PMC10018059. DOI: 10.1177/17456916221094545.
Abstract
Visuospatial perspective taking (VSPT) concerns the ability to understand something about the visual relationship between an agent or observation point on the one hand and a target or scene on the other. Despite its importance to a wide variety of other abilities, from communication to navigation, and despite decades of research, there is as yet no theory of VSPT. Indeed, the heterogeneity of results from different (and sometimes the same) VSPT tasks points to a complex picture suggestive of multiple VSPT strategies, individual differences in performance, and context-specific factors that together have a bearing on both the efficiency and accuracy of outcomes. In this article, we review the evidence in search of patterns in the data. We found a number of predictors of VSPT performance but also a number of gaps in understanding that suggest useful pathways for future research and, possibly, a theory (or theories) of VSPT. Overall, this review makes the case for understanding VSPT by better understanding the perspective taker rather than the target agents or their perception.
Affiliation(s)
- Steven Samuel
- Department of Psychology, University of Plymouth
- Department of Psychology, University of Essex
12
Yuan M, Jiang R, Li X, Wu W. Seeing it both ways: examining the role of inhibitory control in level-2 visual perspective-taking. Curr Psychol 2022. DOI: 10.1007/s12144-022-03519-8.
13
Cole GG, Samuel S, Eacott MJ. A return of mental imagery: The pictorial theory of visual perspective-taking. Conscious Cogn 2022; 102:103352. DOI: 10.1016/j.concog.2022.103352.
14
Tidoni E, Holle H, Scandola M, Schindler I, Hill L, Cross ES. Human but not robotic gaze facilitates action prediction. iScience 2022; 25:104462. PMID: 35707718. PMCID: PMC9189121. DOI: 10.1016/j.isci.2022.104462.
Abstract
Do people ascribe intentions to humanoid robots as they would to humans or non-human-like animated objects? In six experiments, we compared people's ability to extract non-mentalistic (i.e., where an agent is looking) and mentalistic (i.e., what an agent is looking at; what an agent is going to do) information from gaze and directional cues performed by humans, human-like robots, and a non-human-like object. People were faster to infer the mental content of human agents compared to robotic agents. Furthermore, although the absence of differences in control conditions rules out the use of non-mentalizing strategies, the human-like appearance of non-human agents may engage mentalizing processes to solve the task. Overall, results suggest that human-like robotic actions may be processed differently from humans' and objects' behavior. These findings inform our understanding of the relevance of an object's physical features in triggering mentalizing abilities and its relevance for human–robot interaction.
- People differently ascribe mental content to human-like and non-human-like agents
- A human-like shape may automatically engage mentalizing processes
- Human actions are interpreted faster than non-human actions
Collapse
|
15
|
Spontaneous perspective taking toward robots: The unique impact of humanlike appearance. Cognition 2022; 224:105076. [PMID: 35364401 DOI: 10.1016/j.cognition.2022.105076] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2021] [Revised: 02/25/2022] [Accepted: 02/28/2022] [Indexed: 11/20/2022]
Abstract
As robots rapidly enter society, how does human social cognition respond to their novel presence? Focusing on one foundational social-cognitive capacity, visual perspective taking, seven studies reveal that people spontaneously adopt a robot's unique perspective and do so with patterns of variation that mirror perspective taking toward humans. As they do with humans, people take a robot's visual perspective when it displays goal-directed actions. Moreover, perspective taking is absent when the agent lacks human appearance, increases when the agent looks highly humanlike, and persists even when the humanlike agent is perceived as eerie or as obviously lacking a mind. These results suggest that visual perspective taking toward robots is consistent with a "mere appearance hypothesis" (a form of stimulus generalization based on humanlike appearance) rather than following an "uncanny valley" pattern or arising from mind perception. Robots' superficial human resemblance may trigger and modulate social-cognitive responses in human observers originally developed for human interaction.
Collapse
|
16
|
Müsseler J, von Salm-Hoogstraeten S, Böffel C. Perspective Taking and Avatar-Self Merging. Front Psychol 2022; 13:714464. [PMID: 35369185 PMCID: PMC8971368 DOI: 10.3389/fpsyg.2022.714464] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2021] [Accepted: 02/18/2022] [Indexed: 11/24/2022] Open
Abstract
Today, avatars often represent users in digital worlds such as in video games or workplace applications. Avatars embody the user and perform their actions in these artificial environments. As a result, users sometimes develop the feeling that their self merges with their avatar. The user realizes that they are the avatar, but the avatar is also the user, meaning that the avatar's appearance, character, and actions also affect their self. In the present paper, we first introduce the event-coding approach of the self and then argue based on the reviewed literature on human-avatar interaction that a self-controlled avatar can lead to avatar-self merging: the user sets their own goals in the virtual environment, plans and executes the avatar's actions, and compares the predicted with the actual motion outcomes of the avatar. This makes the user feel body ownership and agency over the avatar's actions. Following the event-coding account, avatar-self merging should not be seen as an all-or-nothing process, but rather as a continuous process to which various factors contribute, including successfully taking the perspective of the avatar. Against this background, we discuss affective, cognitive, and visuo-spatial perspective taking of the avatar. As evidence for avatar-self merging, we present findings showing that when users take the avatar's perspective, they can show spontaneous behavioral tendencies that run counter to their own.
Collapse
|
17
|
Vestner T, Balsys E, Over H, Cook R. The self-consistency effect seen on the Dot Perspective Task is a product of domain-general attention cueing, not automatic perspective taking. Cognition 2022; 224:105056. [PMID: 35149309 DOI: 10.1016/j.cognition.2022.105056] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2021] [Revised: 01/29/2022] [Accepted: 02/01/2022] [Indexed: 11/19/2022]
Abstract
It has been proposed that humans automatically compute the visual perspective of others. Evidence for this view comes from the Dot Perspective Task. In this task, participants view a room in which a human actor is depicted, looking either leftwards or rightwards. Dots can appear on either the left wall of the room, the right wall, or both. At the start of each trial, participants are shown a number. Their speeded task is to decide whether the number of dots visible matches the number shown. On consistent trials the participant and the actor can see the same number of dots. On inconsistent trials, the participant and the actor can see a different number of dots. Participants respond faster on consistent trials than on inconsistent trials. This self-consistency effect is cited as evidence that participants compute the visual perspective of others automatically, even when it impedes their task performance. According to a rival interpretation, however, this effect is a product of attention cueing: slower responding on inconsistent trials simply reflects the fact that participants' attention is directed away from some or all of the to-be-counted dots. The present study sought to test these rival accounts. We find that desk fans, a class of inanimate object known to cue attention, also produce the self-consistency effect. Moreover, people who are more susceptible to the effect induced by fans tend to be more susceptible to the effect induced by human actors. These findings suggest that the self-consistency effect is a product of attention cueing.
Collapse
Affiliation(s)
- Tim Vestner
- Department of Psychological Sciences, Birkbeck, University of London, London, UK
| | - Elizabeth Balsys
- Department of Psychological Sciences, Birkbeck, University of London, London, UK
| | - Harriet Over
- Department of Psychology, University of York, York, UK
| | - Richard Cook
- Department of Psychological Sciences, Birkbeck, University of London, London, UK; Department of Psychology, University of York, York, UK.
| |
Collapse
|
18
|
Ward E, Ganis G, McDonough KL, Bach P. Is implicit Level-2 visual perspective taking embodied? Spontaneous perceptual simulation of others' perspectives is not impaired by motor restriction. Q J Exp Psychol (Hove) 2022; 75:1244-1258. [PMID: 35040382 PMCID: PMC9131407 DOI: 10.1177/17470218221077102] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Visual perspective taking may rely on the ability to mentally rotate one's own body into that of another. Here we test whether participants' ability to make active body movements plays a causal role in visual perspective taking. We utilized our recent task that measures whether participants spontaneously represent another's visual perspective in a (quasi-)perceptual format that can drive own perceptual decision making. Participants reported whether alphanumeric characters, presented in different orientations, are shown in their normal or mirror-inverted form (e.g., "R" vs. "Я"). Between trials, we manipulated whether another person was sitting either left or right of the character and whether participants' movement was restricted with a chin rest or they could move freely. As in our previous research, participants spontaneously took the visual perspective of the other person, recognizing rotated letters more rapidly when they appeared upright to the other person in the scene, compared to when they faced away from that person, and these effects increased with age but were (weakly) negatively related to schizotypy and not to autistic traits or social skills. Restricting participants' ability to make active body movements did not influence these effects. The results therefore rule out that active physical movement plays a causal role in computing another's visual perspective, either to create alignment between own and other's perspective or to trigger perspective-taking processes. The postural adjustments people sometimes make when making judgements from another's perspective may instead be a bodily consequence of mentally transforming one's actual position in space into an imagined one.
Collapse
Affiliation(s)
- Eleanor Ward
- School of Psychology, University of Plymouth, Drake Circus, Devon, UK
| | - Giorgio Ganis
- School of Psychology, University of Plymouth, Drake Circus, Devon, UK
| | - Katrina L McDonough
- School of Psychology, University of Plymouth, Drake Circus, Devon, UK; University of Aberdeen, William Guild Building, Kings College, Old Aberdeen, Aberdeen, United Kingdom
| | - Patric Bach
- School of Psychology, University of Plymouth, Drake Circus, Devon, UK; University of Aberdeen, William Guild Building, Kings College, Old Aberdeen, Aberdeen, United Kingdom
| |
Collapse
|
19
|
Flavell JC, Over H, Vestner T, Cook R, Tipper SP. Rapid detection of social interactions is the result of domain general attentional processes. PLoS One 2022; 17:e0258832. [PMID: 35030168 PMCID: PMC8759659 DOI: 10.1371/journal.pone.0258832] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2021] [Accepted: 10/06/2021] [Indexed: 11/19/2022] Open
Abstract
Using visual search displays of interacting and non-interacting pairs, it has been demonstrated that detection of social interactions is facilitated. For example, two people facing each other are found faster than two people with their backs turned: an effect that may reflect social binding. However, recent work has shown the same effects with non-social arrow stimuli, where towards-facing arrows are detected faster than away-facing arrows. This latter work suggests a primary mechanism is an attention orienting process driven by basic low-level direction cues. However, evidence for lower level attentional processes does not preclude a potential additional role of higher-level social processes. Therefore, in this series of experiments we test this idea further by directly comparing basic visual features that orient attention with representations of socially interacting individuals. Results confirm the potency of orienting of attention via low-level visual features in the detection of interacting objects. In contrast, there is little evidence for the representation of social interactions influencing initial search performance.
Collapse
Affiliation(s)
- Jonathan C. Flavell
- Department of Psychology, University of York, York, North Yorkshire, United Kingdom
| | - Harriet Over
- Department of Psychology, University of York, York, North Yorkshire, United Kingdom
| | - Tim Vestner
- Department of Psychology, Birkbeck, University of London, London, Greater London, United Kingdom
| | - Richard Cook
- Department of Psychology, Birkbeck, University of London, London, Greater London, United Kingdom
| | - Steven P. Tipper
- Department of Psychology, University of York, York, North Yorkshire, United Kingdom
| |
Collapse
|
20
|
Ueda S, Nagamachi K, Nakamura J, Sugimoto M, Inami M, Kitazaki M. The effects of body direction and posture on taking the perspective of a humanoid avatar in a virtual environment. PLoS One 2021; 16:e0261063. [PMID: 34932598 PMCID: PMC8691602 DOI: 10.1371/journal.pone.0261063] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2021] [Accepted: 11/23/2021] [Indexed: 12/03/2022] Open
Abstract
Visual perspective taking is inferring how the world looks to another person. To clarify this process, we investigated whether employing a humanoid avatar as the viewpoint would facilitate an imagined perspective shift in a virtual environment, and which factor of the avatar is effective for the facilitation effect. We used a task that involved reporting how an object looks by a simple direction judgment, either from the avatar’s position or from the position of an empty chair. We found that the humanoid avatar’s presence improved task performance. Furthermore, the avatar’s facilitation effect was observed only when the avatar was facing the visual stimulus to be judged; performance was worse when it faced backwards than when there was only an empty chair facing forwards. This suggests that the avatar does not simply attract spatial attention, but the posture of the avatar is crucial for the facilitation effect. In addition, when the directions of the head and the torso were opposite (i.e., an impossible posture), the avatar’s facilitation effect disappeared. Thus, visual perspective taking might not be facilitated by the avatar when its posture is biomechanically impossible because we cannot embody it. Finally, even when the avatar’s head of the possible posture was covered with a bucket, the facilitation effect was found with the forward-facing avatar rather than the backward-facing avatar. That is, the head/gaze direction cue, or presumably the belief that the visual stimulus to be judged can be seen by the avatar, was not required. These results suggest that explicit perspective taking is facilitated by embodiment towards humanoid avatars.
Collapse
Affiliation(s)
- Sachiyo Ueda
- Department of Computer Science and Engineering, Toyohashi University of Technology, Toyohashi, Aichi, Japan
| | - Kazuya Nagamachi
- Department of Computer Science and Engineering, Toyohashi University of Technology, Toyohashi, Aichi, Japan
| | - Junya Nakamura
- Department of Computer Science and Engineering, Toyohashi University of Technology, Toyohashi, Aichi, Japan
| | - Maki Sugimoto
- Department of Information and Computer Science, Keio University, Yokohama, Kanagawa, Japan
| | - Masahiko Inami
- Research Center for Advanced Science and Technology, The University of Tokyo, Meguro-ku, Tokyo, Japan
| | - Michiteru Kitazaki
- Department of Computer Science and Engineering, Toyohashi University of Technology, Toyohashi, Aichi, Japan
| |
Collapse
|
21
|
Sacheli LM, Arcangeli E, Carioti D, Butterfill S, Berlingeri M. Taking apart what brings us together: The role of action prediction, perspective-taking, and theory of mind in joint action. Q J Exp Psychol (Hove) 2021; 75:1228-1243. [PMID: 34609238 DOI: 10.1177/17470218211050198] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
The ability to act together with others to achieve common goals is crucial in life, yet there is no full consensus on the underlying cognitive skills. While influential theoretical accounts suggest that interaction requires sophisticated insights into others' minds, alternative views propose that high-level social skills might not be necessary because interactions are grounded on sensorimotor predictive mechanisms. At present, empirical evidence is insufficient to decide between the two. This study addressed this issue and explored the association between performance at joint action tasks and cognitive abilities in three domains (action prediction, perspective-taking, and theory of mind) in healthy adults (N = 58). We found that, while perspective-taking played a role in reading the behaviour of others independently of the social context, action prediction abilities specifically influenced the agents' performance in an interactive task but not in a control (social but non-interactive) task. In our study, performance at a theory of mind test did not play any role, as confirmed by Bayesian analyses. The results suggest that, in adults, sensorimotor predictive mechanisms might play a significant and specific role in supporting interpersonal coordination during motor interactions. We discuss the implications of our findings for the contrasting theoretical views described earlier and propose a way they might be partly reconciled.
Collapse
Affiliation(s)
- Lucia Maria Sacheli
- Department of Psychology and Milan Center for Neuroscience (NeuroMi), University of Milano-Bicocca, Milan, Italy
| | - Elisa Arcangeli
- Department of Humanistic Studies (DISTUM), University of Urbino Carlo Bo, Urbino, Italy
| | - Desiré Carioti
- Department of Humanistic Studies (DISTUM), University of Urbino Carlo Bo, Urbino, Italy
| | | | - Manuela Berlingeri
- Department of Humanistic Studies (DISTUM), University of Urbino Carlo Bo, Urbino, Italy; Center of Developmental Neuropsychology, ASUR Marche, Pesaro, Italy
| |
Collapse
|
22
|
Samuel S, Hagspiel K, Cole GG, Eacott MJ. 'Seeing' proximal representations: Testing attitudes to the relationship between vision and images. PLoS One 2021; 16:e0256658. [PMID: 34415982 PMCID: PMC8378678 DOI: 10.1371/journal.pone.0256658] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2021] [Accepted: 08/11/2021] [Indexed: 11/30/2022] Open
Abstract
Corrections applied by the visual system, like size constancy, provide us with a coherent and stable perspective from ever-changing retinal images. In the present experiment we investigated how willing adults are to examine their own vision as if it were an uncorrected 2D image, much like a photograph. We showed adult participants two lines on a wall, both of which were the same length but one was closer to the participant and hence appeared visually longer. Despite the instruction to base their judgements on appearance specifically, approximately half of the participants judged the lines to appear the same. When they took a photo of the lines and were asked how long they appeared in the image their responses shifted; now the closer line appeared longer. However, when they were asked again about their own view they reverted to their original response. These results suggest that many adults are resistant to imagining their own vision as if it were a flat image. We also place these results within the context of recent views on visual perspective-taking.
Collapse
Affiliation(s)
- Steven Samuel
- Department of Psychology, University of Essex, Colchester, United Kingdom
- Department of Psychology, University of Plymouth, Plymouth, United Kingdom
| | - Klara Hagspiel
- Department of Psychology, University of Essex, Colchester, United Kingdom
| | - Geoff G. Cole
- Department of Psychology, University of Essex, Colchester, United Kingdom
| | - Madeline J. Eacott
- Department of Psychology, University of Essex, Colchester, United Kingdom
| |
Collapse
|
23
|
Zhai J, Xie J, Chen J, Huang Y, Ma Y, Huang Y. The presence of other-race people disrupts spontaneous level-2 visual perspective taking. Scand J Psychol 2021; 62:655-664. [PMID: 34191306 DOI: 10.1111/sjop.12751] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2020] [Accepted: 04/26/2021] [Indexed: 11/28/2022]
Abstract
Visual perspective taking is an essential skill for effective social interaction. Previous studies have tested various perceiver-based factors that affect intentional perspective taking; however, the factors affecting spontaneous perspective taking remain unknown. To fill this gap, the present study used a novel spontaneous visual perspective taking paradigm to explore how an agent's race and emotion affect spontaneous level-2 visual perspective taking. In Experiment 1, the participants completed a mental rotation task while a human agent simultaneously gazed at the target with positive, negative, or neutral facial expressions. The agent was African, Caucasian, or Chinese. The results revealed that the other-race agents disrupted the participants' spontaneous level-2 visual perspective taking, while emotion weakly affected it. Experiment 2 retested whether emotion could affect spontaneous level-2 visual perspective taking while only own-race agents were used. The participants completed the same task as that in Experiment 1. The results revealed that emotions weakly affected spontaneous level-2 visual perspective taking. In summary, the present study is the first to examine which target-based factors affect spontaneous level-2 visual perspective taking. The results extend the representation and incorporation of the close others' responses (RICOR) model. Specifically, people routinely construct representations of other people's points of view when they share the same racial group.
Collapse
Affiliation(s)
- Jing Zhai
- School of Psychology, Nanjing Normal University, Nanjing, China
| | - Jiushu Xie
- School of Psychology, Nanjing Normal University, Nanjing, China
| | - Jiahan Chen
- School of Psychology, Nanjing Normal University, Nanjing, China
| | - Yujie Huang
- School of Psychology, Nanjing Normal University, Nanjing, China
| | - Yuchao Ma
- School of Psychology, Nanjing Normal University, Nanjing, China
| | - Yanli Huang
- School of Psychology, Nanjing Normal University, Nanjing, China
| |
Collapse
|
24
|
Samuel S, Hagspiel K, Eacott MJ, Cole GG. Visual perspective-taking and image-like representations: We don't see it. Cognition 2021; 210:104607. [PMID: 33508578 DOI: 10.1016/j.cognition.2021.104607] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2020] [Revised: 01/15/2021] [Accepted: 01/15/2021] [Indexed: 10/22/2022]
Abstract
The ability to represent another agent's visual perspective has recently been attributed to a process called "perceptual simulation", whereby we generate an image-like or "quasi-perceptual" representation of another agent's vision. In an extensive series of experiments we tested this notion. Adult observers were presented with pictures of an agent looking at two horizontal lines, one of which was closer to the agent and hence appeared longer from his/her visual perspective. In each case approximately as many participants judged the closer line to appear shorter as longer (to the agent), i.e., failures to take the agent's perspective. This occurred when clear depth cues were added to emphasise the agent's location relative to the stimuli, when the agent was moved closer to the lines, when the lines were oriented vertically, when judgments could be made while viewing the image, and when participants imagined themselves in the agent's place. It also persisted when we asked participants to imagine what a photo taken from the same location as the agent would show, ruling out a misinterpretation of the instructions. Overall, our data suggest that adults attempt to solve visual perspective-taking problems by drawing upon naïve and often erroneous ideas about how vision works.
Collapse
Affiliation(s)
| | | | | | - Geoff G Cole
- Department of Psychology, University of Essex, UK
| |
Collapse
|
25
|
Tracking multiple perspectives: Spontaneous computation of what individuals in high entitative groups see. Psychon Bull Rev 2021; 28:879-887. [PMID: 33469850 DOI: 10.3758/s13423-020-01857-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/04/2020] [Indexed: 11/08/2022]
Abstract
Perspective-taking ability is crucial for supporting social interactions. It has been widely suggested that the calculation of an individual's perspective is spontaneous. Nevertheless, people typically engage with more than one individual, and computing what individuals in a crowd see is important. The current study explored whether people spontaneously compute the perspectives of individuals displayed in a crowd. The classic visual perspective-taking task was adopted, but the picture of the room was presented with four human avatars facing two walls. The results showed that, when the crowd was treated as a high entitative group, participants judged their own perspective more slowly if none of the individuals' perspectives contained the same number of discs as their own than if a proportion of the individuals' perspectives matched their own, even though the individuals' perspectives were never explicitly noticed. This altercentric intrusion effect was not present when the crowd had low entitativity. These findings were replicated by using different methods to operationalize group entitativity. Hence, this study demonstrates that spontaneously tracking the perspectives of individuals displayed in a crowd has a boundary condition and that people can spontaneously compute what individuals in high entitative groups see.
Collapse
|
26
|
Abstract
In a busy space, people encounter many other people with different viewpoints, but classic studies of perspective-taking examine only one agent at a time. This paper explores the issue of selectivity in visual perspective-taking (VPT) when different people are available to interact with. We consider the hypothesis that humanization affects VPT in four studies using virtual reality methods. Experiments 1 and 2 use the director task to show that for more humanized agents (an in-group member or a virtual human agent), participants were more likely to use VPT, achieving lower error rates. Experiments 3 and 4 used a two-agent social mental rotation task to show that participants are faster and more accurate to recognize items which are oriented towards a more humanized agent (an in-group member or a naturally moving agent). All results support the claim that humanization alters the propensity to engage in VPT in rich social contexts.
Collapse
|
27
|
Kampis D, Southgate V. Altercentric Cognition: How Others Influence Our Cognitive Processing. Trends Cogn Sci 2020; 24:945-959. [PMID: 32981846 DOI: 10.1016/j.tics.2020.09.003] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2020] [Revised: 09/01/2020] [Accepted: 09/02/2020] [Indexed: 12/18/2022]
Abstract
Humans are ultrasocial; yet theories of cognition have often been occupied with the solitary mind. Over the past decade, an increasing volume of work has revealed how individual cognition is influenced by the presence of others. Not only do we rapidly identify others in our environment, but we also align our attention with their attention, which influences what we perceive, represent, and remember, even when our immediate goals do not involve coordination. Here, we refer to the human sensitivity to others and to the targets and content of their attention as 'altercentrism'; and aim to bring seemingly disparate findings together, suggesting that they are all reflections of the altercentric nature of human cognition.
Collapse
Affiliation(s)
- Dora Kampis
- Department of Psychology, University of Copenhagen, Øster Farimagsgade 2A, 1353 Copenhagen, Denmark.
| | - Victoria Southgate
- Department of Psychology, University of Copenhagen, Øster Farimagsgade 2A, 1353 Copenhagen, Denmark.
| |
Collapse
|
28
|
Nanay B. Vicarious representation: A new theory of social cognition. Cognition 2020; 205:104451. [PMID: 32950911 PMCID: PMC7684465 DOI: 10.1016/j.cognition.2020.104451] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2020] [Revised: 08/19/2020] [Accepted: 08/26/2020] [Indexed: 11/28/2022]
Abstract
Theory of mind, the attribution of mental states to others, is one form of social cognition. The aim of this paper is to highlight the importance of another, much simpler, form of social cognition, which I call vicarious representation. Vicarious representation is the attribution of other-centered properties to objects. This mental capacity is different from, and much simpler than, theory of mind as it does not imply the understanding (or representation) of the mental (or even perceptual) states of other agents. I argue that the most convincing experiments that are supposed to show that non-human primates have theory of mind in fact demonstrate that they are capable of vicarious representation. The same is true for the experiments about the theory of mind of infants under 12 months.
Collapse
Affiliation(s)
- Bence Nanay
- Centre for Philosophical Psychology, University of Antwerp, D 413, Grote Kauwenberg 18, 2000 Antwerp, Belgium; Peterhouse, University of Cambridge, Cambridge CB2 1RD, UK.
| |
Collapse
|
29
|
Cole GG, Millett AC, Samuel S, Eacott MJ. Perspective-Taking: In Search of a Theory. Vision (Basel) 2020; 4:vision4020030. [PMID: 32492784 PMCID: PMC7355554 DOI: 10.3390/vision4020030] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2019] [Revised: 03/18/2020] [Accepted: 05/05/2020] [Indexed: 12/02/2022] Open
Abstract
Perspective-taking has been one of the central concerns of work on social attention and developmental psychology for the past 60 years. Despite its prominence, there is no formal description of what it means to represent another’s viewpoint. The present article argues that such a description is now required in the form of theory—a theory that should address a number of issues that are central to the notion of assuming another’s viewpoint. After suggesting that the mental imagery debate provides a good framework for understanding some of the issues and problems surrounding perspective-taking, we set out nine points that we believe any theory of perspective-taking should consider.
Collapse
Affiliation(s)
- Geoff G. Cole
- Centre for Brain Science, University of Essex, Colchester CO4 3SQ, UK
| | - Abbie C. Millett
- School of Social Sciences and Humanities, University of Suffolk, Ipswich IP4 1QJ, UK
| | - Steven Samuel
- Centre for Brain Science, University of Essex, Colchester CO4 3SQ, UK
| | - Madeline J. Eacott
- Centre for Brain Science, University of Essex, Colchester CO4 3SQ, UK
| |
Collapse
|
30
|
Ward E, Ganis G, McDonough KL, Bach P. Perspective taking as virtual navigation? Perceptual simulation of what others see reflects their location in space but not their gaze. Cognition 2020; 199:104241. [PMID: 32105910 DOI: 10.1016/j.cognition.2020.104241] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2019] [Revised: 02/14/2020] [Accepted: 02/17/2020] [Indexed: 11/27/2022]
Abstract
Other peoples' (imagined) visual perspectives are represented perceptually in a similar way to our own, and can drive bottom-up processes in the same way as own perceptual input (Ward, Ganis, & Bach, 2019). Here we test directly whether visual perspective taking is driven by where another person is looking, or whether these perceptual simulations represent their position in space more generally. Across two experiments, we asked participants to identify whether alphanumeric characters, presented at one of eight possible orientations away from upright, were presented normally, or in their mirror-inverted form (e.g. "R" vs. "Я"). In some scenes, a person would appear sitting to the left or the right of the participant. We manipulated either between-trials (Experiment 1) or between-subjects (Experiment 2), the gaze-direction of the inserted person, such that they either (1) looked towards the to-be-judged item, (2) averted their gaze away from the participant, or (3) gazed out towards the participant (Exp. 2 only). In the absence of another person, we replicated the well-established mental rotation effect, where recognition of items becomes slower the more items are oriented away from upright (e.g. Shepard and Metzler, 1971). Crucially, in both experiments and in all conditions, this response pattern changed when another person was inserted into the scene. People spontaneously took the perspective of the other person and made faster judgements about the presented items in their presence if the characters were oriented upright relative to that person. The gaze direction of this other person did not influence these effects. We propose that visual perspective taking is therefore a general spatial-navigational ability, allowing us to calculate more easily how a scene would (in principle) look from another position in space, and that such calculations reflect the spatial location of another person, but not their gaze.
Affiliation(s)
- Eleanor Ward
- School of Psychology, University of Plymouth, Drake Circus, Devon PL4 8AA, UK.
| | - Giorgio Ganis
- School of Psychology, University of Plymouth, Drake Circus, Devon PL4 8AA, UK
| | - Katrina L McDonough
- School of Psychology, University of Plymouth, Drake Circus, Devon PL4 8AA, UK
| | - Patric Bach
- School of Psychology, University of Plymouth, Drake Circus, Devon PL4 8AA, UK
|
31
|
Abstract
Theory of mind (ToM) is defined as the ability to attribute mental states to oneself and others and is often said to be one of the cornerstones of efficient social interaction. In recent years, a number of authors have suggested that one particular ToM process occurs spontaneously, in that it is rapid and outside of conscious control. This work has argued that humans efficiently compute the visual perspective of other individuals. In this article, we present a critique of this notion on both empirical and theoretical grounds. We argue that the experiments and paradigms that purportedly demonstrate spontaneous perspective-taking have not as yet convincingly demonstrated the existence of such a phenomenon. We also suggest that it is not possible to represent the percept of another person, spontaneous or otherwise. Indeed, the perspective-taking field has suggested that humans can represent the visual experience of others, that is, that we can represent another's viewpoint in something other than symbolic form. In this sense, the field suffers from the same problem that afflicted the "pictorial" theory in the mental imagery debate. In the last section, we present a number of experiments designed to provide a more thorough assessment of whether humans can indeed represent the visual experience of others.
|
32
|
Attribution of intentional agency towards robots reduces one’s own sense of agency. Cognition 2020; 194:104109. [DOI: 10.1016/j.cognition.2019.104109] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2019] [Revised: 10/10/2019] [Accepted: 10/11/2019] [Indexed: 01/06/2023]
|
33
|
The Benefit of Seeing in Company. Trends Cogn Sci 2019; 23:451-453. [DOI: 10.1016/j.tics.2019.03.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2019] [Accepted: 03/19/2019] [Indexed: 11/22/2022]
|