1. Jastrzab LE, Chaudhury B, Ashley SA, Koldewyn K, Cross ES. Beyond human-likeness: Socialness is more influential when attributing mental states to robots. iScience 2024; 27:110070. PMID: 38947497; PMCID: PMC11214418; DOI: 10.1016/j.isci.2024.110070.
Abstract
We sought to replicate and expand previous work showing that the more human-like a robot appears, the more willing people are to attribute mind-like capabilities to it and to engage with it socially. Forty-two participants played games against a human, a humanoid robot, a mechanoid robot, and a computer algorithm while undergoing functional neuroimaging. We confirmed that the more human-like the agent, the more participants attributed a mind to it. However, exploratory analyses revealed that the perceived socialness of an agent appeared to be as, if not more, important for mind attribution. Our findings suggest that top-down knowledge cues may be equally or even more influential than bottom-up stimulus cues when exploring mind attribution in non-human agents. While further work is required to test this hypothesis directly, these preliminary findings hold important implications for robot design and for understanding and testing the flexibility of human social cognition when people engage with artificial agents.
Affiliation(s)
- Laura E. Jastrzab
- Institute for Cognitive Neuroscience, School of Human and Behavioural Science, Bangor University, Wales, UK
- Institute for Neuroscience and Psychology, School of Psychology, University of Glasgow, Glasgow, UK
- Bishakha Chaudhury
- Institute for Neuroscience and Psychology, School of Psychology, University of Glasgow, Glasgow, UK
- Sarah A. Ashley
- Institute for Cognitive Neuroscience, School of Human and Behavioural Science, Bangor University, Wales, UK
- Division of Psychiatry, Institute of Mental Health, University College London, London, UK
- Kami Koldewyn
- Institute for Cognitive Neuroscience, School of Human and Behavioural Science, Bangor University, Wales, UK
- Emily S. Cross
- Institute for Neuroscience and Psychology, School of Psychology, University of Glasgow, Glasgow, UK
- Chair for Social Brain Sciences, Department of Humanities, Social and Political Sciences, ETHZ, Zürich, Switzerland
2. Abubshait A, Weis PP, Momen A, Wiese E. Perceptual discrimination in the face perception of robots is attenuated compared to humans. Sci Rep 2023; 13:16708. PMID: 37794045; PMCID: PMC10550918; DOI: 10.1038/s41598-023-42510-6.
Abstract
When interacting with groups of robots, we tend to perceive them as a homogeneous group in which all members have similar capabilities. This overgeneralization of capabilities is potentially due to a lack of perceptual experience with robots or a lack of motivation to see them as individuals (i.e., individuation), and it can undermine trust and performance in human-robot teams. One way to overcome this issue is to design robots that can be individuated, so that each team member can be assigned tasks based on its actual skills. In two experiments, we examine whether humans can effectively individuate robots. Experiment 1 (n = 225) investigates how individuation performance with robot stimuli compares to that with human stimuli belonging to either a social ingroup or outgroup. Experiment 2 (n = 177) examines to what extent robots' physical human-likeness (high versus low) affects individuation performance. Results show that although humans are able to individuate robots, they seem to individuate them to a lesser extent than both ingroup and outgroup human stimuli (Experiment 1). Furthermore, robots that are physically more humanlike are initially individuated better than robots that are physically less humanlike; this effect, however, diminishes over the course of the experiment, suggesting that the individuation of robots can be learned quite quickly (Experiment 2). Whether differences in individuation performance with robot versus human stimuli are primarily due to reduced perceptual experience with robot stimuli or to motivational factors (i.e., robots as a potential social outgroup) should be examined in future studies.
Affiliation(s)
- Abdulaziz Abubshait
- Italian Institute of Technology, Genoa, Italy
- George Mason University, Fairfax, VA, USA
- Patrick P Weis
- George Mason University, Fairfax, VA, USA
- Julius Maximilians University, Würzburg, Germany
- Ali Momen
- George Mason University, Fairfax, VA, USA
- Air Force Academy, Colorado Springs, CO, USA
- Eva Wiese
- George Mason University, Fairfax, VA, USA
- Berlin Institute of Technology, Berlin, Germany
3. Vaitonytė J, Alimardani M, Louwerse MM. Scoping review of the neural evidence on the uncanny valley. Computers in Human Behavior Reports 2022. DOI: 10.1016/j.chbr.2022.100263.
4. Abubshait A, Siri G, Wykowska A. Does attributing mental states to a robot influence accessibility of information represented during reading? Acta Psychol (Amst) 2022; 228:103660. PMID: 35779453; DOI: 10.1016/j.actpsy.2022.103660.
Abstract
When we read fiction, we encounter characters that interact in the story; we encode that information and comprehend the stories. Prior studies suggest that this comprehension process is facilitated by taking the perspective of characters during reading. Two questions of interest are therefore whether people take the perspective of characters that are not perceived as capable of experiencing perspectives (e.g., robots), and whether current models of language comprehension can explain such differences between human and nonhuman protagonists (or their absence) during reading. The study aims to (1) compare the situation model (i.e., a model that factors in a protagonist's perspective) with the RI-VAL model (which relies more on comparing newly acquired information with information stored in long-term memory) and (2) investigate whether the accessibility of information differs depending on whether readers adopt the intentional stance toward a robot. To address these aims, we designed a preregistered experiment in which participants read stories about one of three protagonists (an intentional robot, a mechanistic robot, and a human) and answered questions about objects that were either occluded or not occluded from the protagonist's view. Based on the situation model, we expected faster responses to items that were not occluded compared with those that were occluded (i.e., the occlusion effect). Based on the RI-VAL model, however, we expected overall differences between the protagonists to arise from inconsistency with general world knowledge. The preregistered analysis showed no differences between the protagonists and no occlusion effect. A post hoc analysis, however, showed the occlusion effect only for the intentional robot, not for the human or the mechanistic robot. Results also showed that, depending on readers' age, either the situation model or the RI-VAL model explains the data: older participants "simulated" the situation about which they read (situation model), whereas younger adults compared new information with information stored in long-term memory (RI-VAL model). This suggests that comparison with long-term memory is cognitively more costly, and that older adults therefore used the less demanding strategy of simulation.
Affiliation(s)
- Abdulaziz Abubshait
- S4HRI: Social Cognition in Human Robot Interaction Unit, Istituto Italiano di Tecnologia, Genova, Italy
- Giulia Siri
- S4HRI: Social Cognition in Human Robot Interaction Unit, Istituto Italiano di Tecnologia, Genova, Italy
- Agnieszka Wykowska
- S4HRI: Social Cognition in Human Robot Interaction Unit, Istituto Italiano di Tecnologia, Genova, Italy
5. Perez-Osorio J, Abubshait A, Wykowska A. Irrelevant Robot Signals in a Categorization Task Induce Cognitive Conflict in Performance, Eye Trajectories, the N2 ERP-EEG Component, and Frontal Theta Oscillations. J Cogn Neurosci 2021; 34:108-126. PMID: 34705044; DOI: 10.1162/jocn_a_01786.
Abstract
Understanding others' nonverbal behavior is essential for social interaction, as it allows us, among other things, to infer mental states. Although gaze communication, a well-established nonverbal social behavior, has shown its importance in inferring others' mental states, little is known about the effects of irrelevant gaze signals on cognitive conflict markers in collaborative settings. Here, participants completed a task in which they categorized objects by color while observing images of a robot. On each trial, participants observed the robot iCub grasping an object from a table and offering it to them to simulate a handover. Once the robot "moved" the object forward, participants were asked to categorize the object according to its color. Before participants were allowed to respond, the robot made a lateral head/gaze shift that was either congruent or incongruent with the object's color. We expected that incongruent head cues would induce more errors (Study 1), be associated with more curvature in eye-tracking trajectories (Study 2), and induce larger amplitudes in electrophysiological markers of cognitive conflict (Study 3). The three studies show more interference on incongruent than congruent trials, as measured by error rates (Study 1), larger curvature in eye-tracking trajectories (Study 2), and higher amplitudes of the N2 ERP component of the EEG signal as well as higher event-related spectral perturbation amplitudes (Study 3). Our findings reveal that behavioral, ocular, and electrophysiological markers can index the influence of irrelevant signals during goal-oriented tasks.
6. Papagni G, Koeszegi S. A Pragmatic Approach to the Intentional Stance: Semantic, Empirical and Ethical Considerations for the Design of Artificial Agents. Minds Mach (Dordr) 2021. DOI: 10.1007/s11023-021-09567-6.
Abstract
Artificial agents are progressively becoming more present in everyday-life situations and more sophisticated in their interaction affordances. In some specific cases, such as Google Duplex, GPT-3 bots, or DeepMind's AlphaGo Zero, their capabilities reach or exceed human levels. The use contexts of everyday life necessitate making such agents understandable to laypeople. At the same time, displaying human levels of social behavior has kindled the debate over the adoption of Dennett's 'intentional stance'. By means of a comparative analysis of the literature on robots and virtual agents, we defend the thesis that approaching these artificial agents 'as if' they had intentions and forms of social, goal-oriented rationality is the only way to deal with their complexity on a daily basis. Specifically, we claim that this is the only viable strategy for non-expert users to understand, predict, and perhaps learn from artificial agents' behavior in everyday social contexts. Furthermore, we argue that as long as agents are transparent about their design principles and functionality, attributing intentions to their actions is not only essential but also ethical. Additionally, we propose design guidelines inspired by the debate over the adoption of the intentional stance.
7. Abubshait A, Wykowska A. Repetitive Robot Behavior Impacts Perception of Intentionality and Gaze-Related Attentional Orienting. Front Robot AI 2020; 7:565825. PMID: 33501328; PMCID: PMC7805881; DOI: 10.3389/frobt.2020.565825.
Abstract
Gaze behavior is an important social signal between humans, as it communicates locations of interest. People typically orient their attention to where others look, as this informs them about others' intentions and future actions. Studies have shown that humans can engage in similar gaze behavior with robots, but presumably more so when they adopt the intentional stance toward them (i.e., believe the robots' behaviors are intentional). In laboratory settings, the phenomenon of attending in the direction of others' gaze has been examined with the gaze-cueing paradigm. While this paradigm has been successful in investigating the relationship between adopting the intentional stance toward robots and attention orienting to gaze cues, it is unclear whether the repetitiveness of the paradigm itself influences adoption of the intentional stance. Here, we examined whether the duration of exposure to repetitive robot gaze behavior in a gaze-cueing task negatively affects subjective attribution of intentionality. Participants performed a short, medium, or long face-to-face gaze-cueing paradigm with an embodied robot, with subjective ratings collected before and after the interaction. Participants in the long exposure condition had the smallest change in their intention attribution scores, if any, while those in the short exposure condition had a positive change, indicating that participants attributed more intention to the robot after short interactions. Attention orienting to robot gaze cues was positively related to how much intention was attributed to the robot, but this relationship became more negative as the length of exposure increased. In contrast to the subjective ratings, gaze-cueing effects (GCEs) increased with the duration of exposure to the repetitive behavior. The data suggest a tradeoff between the number of trials needed to observe various mechanisms of social cognition, such as GCEs, and the likelihood of adopting the intentional stance toward a robot.
Affiliation(s)
- Abdulaziz Abubshait
- Social Cognition in Human-Robot Interaction (S4HRI) Unit, Istituto Italiano di Tecnologia, Genova, Italy
8. Abubshait A, Momen A, Wiese E. Pre-exposure to Ambiguous Faces Modulates Top-Down Control of Attentional Orienting to Counterpredictive Gaze Cues. Front Psychol 2020; 11:2234. PMID: 33013584; PMCID: PMC7509110; DOI: 10.3389/fpsyg.2020.02234.
Abstract
Understanding and reacting to others' nonverbal social signals, such as changes in gaze direction (i.e., gaze cues), is essential for social interaction, supporting processes such as joint attention and mentalizing. Although attentional orienting in response to gaze cues has a strong reflexive component, accumulating evidence shows that it can be top-down controlled by context information regarding the signals' social relevance. For example, when a gazer is believed to be an entity "with a mind" (i.e., mind perception), people exert more top-down control on attention orienting. Although increasing an agent's physical human-likeness can enhance mind perception, it could have negative consequences for top-down control of social attention when a gazer's physical appearance is categorically ambiguous (i.e., difficult to categorize as human or nonhuman), as resolving this ambiguity requires cognitive resources that could otherwise be used to top-down control attention orienting. To examine this question, we used mouse-tracking to explore whether categorically ambiguous agents are associated with increased processing costs (Experiment 1), whether categorically ambiguous stimuli negatively impact top-down control of social attention (Experiment 2), and whether resolving the conflict related to an agent's categorical ambiguity (through exposure) restores top-down control of attention orienting (Experiment 3). The findings suggest that categorically ambiguous stimuli are associated with cognitive conflict, which negatively impacts the ability to exert top-down control over attentional orienting in a counterpredictive gaze-cueing paradigm; this negative impact, however, is attenuated when participants are pre-exposed to the stimuli before the gaze-cueing task. Taken together, these findings suggest that manipulating physical human-likeness is a powerful way to affect mind perception in human-robot interaction (HRI), but that it has diminishing returns for social attention when appearance is categorically ambiguous, owing to the drain on cognitive resources and the resulting impairment of top-down control.
Affiliation(s)
- Ali Momen
- Department of Psychology, George Mason University, Fairfax, VA, United States
- Eva Wiese
- Department of Psychology, George Mason University, Fairfax, VA, United States
9. Does Context Matter? Effects of Robot Appearance and Reliability on Social Attention Differs Based on Lifelikeness of Gaze Task. Int J Soc Robot 2020. DOI: 10.1007/s12369-020-00675-4.
10. Social Cognition in the Age of Human–Robot Interaction. Trends Neurosci 2020; 43:373-384. DOI: 10.1016/j.tins.2020.03.013.
11. Desideri L, Bonifacci P, Croati G, Dalena A, Gesualdo M, Molinario G, Gherardini A, Cesario L, Ottaviani C. The Mind in the Machine: Mind Perception Modulates Gaze Aversion During Child–Robot Interaction. Int J Soc Robot 2020. DOI: 10.1007/s12369-020-00656-7.
12. Azhari A, Rigo P, Tan PY, Neoh MJY, Esposito G. Viewing Romantic and Friendship Interactions Activate Prefrontal Regions in Persons With High Openness Personality Trait. Front Psychol 2020; 11:490. PMID: 32265795; PMCID: PMC7108494; DOI: 10.3389/fpsyg.2020.00490.
Abstract
The personality traits we have and the closeness we experience in our relationships inevitably color the lenses through which we perceive social interactions. As such, varying perceptions of our social relationships could indicate underlying differences in neural processes in the prefrontal cortex (PFC), a brain region involved in social cognition. However, little is known about how personality traits and relationship closeness with others influence brain responses when viewing social interactions between kin (i.e., siblings) and non-kin (i.e., romantic partners, friends). In the present study, functional near-infrared spectroscopy (fNIRS) was employed to investigate prefrontal cortical activation patterns in response to three 1-min silent video clips depicting a male–female couple interacting with comparably mild levels of affection while baking, exercising, and eating. The context of the interaction was manipulated by informing participants about the type of relationship each couple in the three video clips was in: (a) romantic partners, (b) friends, or (c) siblings. By changing only the contextual labels of the videos, we revealed distinct PFC responses to relationship type as a function of the openness trait, closeness with a romantic partner, and closeness with siblings. As openness scores increased, we observed enhanced activation of the left inferior frontal gyrus (IFG), the left anterior PFC (aPFC), and the right frontal eye fields (FEFs) in response to the videos labeled romantic and friendship, but reduced activation in these areas in the siblings condition. Similarly, individuals with higher romantic and sibling closeness showed increased activation in the IFG and dorsolateral PFC (dlPFC) in response to the romantic and friendship conditions, but decreased activation in the siblings condition. Differences in PFC activation toward romantic, friendship, and sibling relationships reflect underlying variations in the cognitive processing of social interactions, depending on the personality (i.e., openness) and experiences (i.e., relationship closeness) of the individual, as well as the relationship type with which the interaction is labeled.
Affiliation(s)
- Atiqah Azhari
- School of Social Sciences, Nanyang Technological University, Singapore, Singapore
- Paola Rigo
- School of Social Sciences, Nanyang Technological University, Singapore, Singapore
- Department of Developmental Psychology and Socialisation, University of Padua, Padua, Italy
- Pei Yu Tan
- School of Social Sciences, Nanyang Technological University, Singapore, Singapore
- Gianluca Esposito
- School of Social Sciences, Nanyang Technological University, Singapore, Singapore
- Department of Psychology and Cognitive Science, University of Trento, Trento, Italy
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
13. Wiese E, Abubshait A, Azarian B, Blumberg EJ. Brain stimulation to left prefrontal cortex modulates attentional orienting to gaze cues. Philos Trans R Soc Lond B Biol Sci 2020; 374:20180430. PMID: 30852996; DOI: 10.1098/rstb.2018.0430.
Abstract
In social interactions, we rely on non-verbal cues like gaze direction to understand the behaviour of others. How we react to these cues is determined by the degree to which we believe that they originate from an entity with a mind capable of having internal states and showing intentional behaviour, a process called mind perception. While prior work has established a set of neural regions linked to mind perception, research has only begun to examine how mind perception affects social-cognitive mechanisms like gaze processing at the neuronal level. In the current experiment, participants performed a social attention task (i.e., attentional orienting to gaze cues) with either a human or a robot agent (i.e., a manipulation of mind perception) while transcranial direct current stimulation (tDCS) was applied to prefrontal and temporo-parietal brain areas. Temporo-parietal stimulation did not modulate mechanisms of social attention in response to either the human or the robot agent, whereas prefrontal stimulation enhanced attentional orienting in response to human gaze cues and attenuated attentional orienting in response to robot gaze cues. The findings suggest that mind perception modulates low-level mechanisms of social cognition via prefrontal structures, and that a certain degree of mind perception is essential for prefrontal stimulation to affect mechanisms of social attention. This article is part of the theme issue 'From social brains to social robots: applying neurocognitive insights to human-robot interaction'.
Affiliation(s)
- Eva Wiese
- Department of Psychology, Social and Cognitive Interactions Lab, George Mason University, Fairfax, VA, USA
- Abdulaziz Abubshait
- Department of Psychology, Social and Cognitive Interactions Lab, George Mason University, Fairfax, VA, USA
- Bobby Azarian
- Department of Psychology, Social and Cognitive Interactions Lab, George Mason University, Fairfax, VA, USA
- Eric J Blumberg
- Department of Psychology, Social and Cognitive Interactions Lab, George Mason University, Fairfax, VA, USA
14. Reinhold AS, Sanguinetti-Scheck JI, Hartmann K, Brecht M. Behavioral and neural correlates of hide-and-seek in rats. Science 2019; 365:1180-1183. DOI: 10.1126/science.aax4705.
Abstract
Evolutionary, cognitive, and neural underpinnings of mammalian play are not yet fully elucidated. We played hide-and-seek, an elaborate role-play game, with rats. We did not offer food rewards but engaged in playful interactions after finding or being found. Rats quickly learned the game and learned to alternate between hiding versus seeking roles. They guided seeking by vision and memories of past hiding locations and emitted game event–specific vocalizations. When hiding, rats vocalized infrequently and they preferred opaque over transparent hiding enclosures, a preference not observed during seeking. Neuronal recordings revealed intense prefrontal cortex activity that varied with game events and trial types (“hide” versus “seek”) and might instruct role play. The elaborate cognitive capacities for hide-and-seek in rats suggest that this game might be evolutionarily old.
15. Schellen E, Wykowska A. Intentional Mindset Toward Robots-Open Questions and Methodological Challenges. Front Robot AI 2019; 5:139. PMID: 33501017; PMCID: PMC7805849; DOI: 10.3389/frobt.2018.00139.
Abstract
Natural and effective interaction with humanoid robots should involve social cognitive mechanisms of the human brain that normally facilitate social interaction between humans. Recent research has indicated that the presence and efficiency of these mechanisms in human-robot interaction (HRI) might be contingent on the adoption of a set of attitudes, mindsets, and beliefs concerning the robot's inner machinery. Current research is investigating the factors that influence these mindsets, and how they affect HRI. This review focuses on a specific mindset, namely the “intentional mindset” in which intentionality is attributed to another agent. More specifically, we focus on the concept of adopting the intentional stance toward robots, i.e., the tendency to predict and explain the robots' behavior with reference to mental states. We discuss the relationship between adoption of intentional stance and lower-level mechanisms of social cognition, and we provide a critical evaluation of research methods currently employed in this field, highlighting common pitfalls in the measurement of attitudes and mindsets.
Affiliation(s)
- Elef Schellen
- Social Cognition in Human-Robot Interaction, Istituto Italiano di Tecnologia, Genoa, Italy
- Agnieszka Wykowska
- Social Cognition in Human-Robot Interaction, Istituto Italiano di Tecnologia, Genoa, Italy