1. Wieringa MS, Müller BCN, Bijlstra G, Bosse T. Robots are both anthropomorphized and dehumanized when harmed intentionally. Communications Psychology 2024; 2:72. PMID: 39242902; PMCID: PMC11332229; DOI: 10.1038/s44271-024-00116-2.
Abstract
The harm-made mind phenomenon implies that witnessing intentional harm towards agents with ambiguous minds, such as robots, leads to augmented mind perception in these agents. We conducted two replications of previous work on this effect and extended it by testing whether robots that detect and simulate emotions elicit a stronger harm-made mind effect than robots that do not. Additionally, we explored whether someone is perceived as less prosocial when harming a robot compared to treating it kindly. The harm-made mind effect was replicated: participants attributed a higher capacity to experience pain to the robot when it was harmed, compared to when it was not harmed. We did not find evidence that this effect was influenced by the robot's ability to detect and simulate emotions. There were significant but conflicting direct and indirect effects of harm on the perception of mind in the robot: while harm had a positive indirect effect on mind perception in the robot through the perceived capacity for pain, the direct effect of harm on mind perception was negative. This suggests that robots are both anthropomorphized and dehumanized when harmed intentionally. Additionally, the results showed that someone is perceived as less prosocial when harming a robot compared to treating it kindly.
Affiliation(s)
- Barbara C N Müller
- Behavioural Science Institute, Radboud University, Nijmegen, The Netherlands
- Gijsbert Bijlstra
- Behavioural Science Institute, Radboud University, Nijmegen, The Netherlands
- Tibor Bosse
- Behavioural Science Institute, Radboud University, Nijmegen, The Netherlands
2. Guingrich RE, Graziano MSA. Ascribing consciousness to artificial intelligence: human-AI interaction and its carry-over effects on human-human interaction. Front Psychol 2024; 15:1322781. PMID: 38605842; PMCID: PMC11008604; DOI: 10.3389/fpsyg.2024.1322781.
Abstract
The question of whether artificial intelligence (AI) can be considered conscious and therefore should be evaluated through a moral lens has surfaced in recent years. In this paper, we argue that whether AI is conscious is less of a concern than the fact that AI can be considered conscious by users during human-AI interaction, because this ascription of consciousness can lead to carry-over effects on human-human interaction. When AI is viewed as conscious like a human, how people treat AI appears to carry over into how they treat other people, through the activation of schemas congruent with those activated during interactions with humans. In light of this potential, we might consider regulating how we treat AI, or how we build AI to evoke certain kinds of treatment from users, but not because AI is inherently sentient. This argument focuses on humanlike, social actor AI such as chatbots, digital voice assistants, and social robots. In the first part of the paper, we provide evidence for carry-over effects between perceptions of AI consciousness and behavior toward humans through literature on human-computer interaction, human-AI interaction, and the psychology of artificial agents. In the second part of the paper, we detail how the mechanism of schema activation can allow us to test consciousness perception as a driver of carry-over effects between human-AI interaction and human-human interaction. In essence, perceiving AI as conscious like a human, thereby activating congruent mind schemas during interaction, drives behaviors and perceptions of AI that can carry over into how we treat humans. Therefore, the fact that people can ascribe humanlike consciousness to AI is worth considering, and moral protection for AI is also worth considering, regardless of AI's inherent conscious or moral status.
Affiliation(s)
- Rose E. Guingrich
- Department of Psychology, Princeton University, Princeton, NJ, United States
- Princeton School of Public and International Affairs, Princeton University, Princeton, NJ, United States
- Michael S. A. Graziano
- Department of Psychology, Princeton University, Princeton, NJ, United States
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, United States
3. Linnunsalo S, Küster D, Yrttiaho S, Peltola MJ, Hietanen JK. Psychophysiological responses to eye contact with a humanoid robot: Impact of perceived intentionality. Neuropsychologia 2023; 189:108668. PMID: 37619935; DOI: 10.1016/j.neuropsychologia.2023.108668.
Abstract
Eye contact with a social robot has been shown to elicit psychophysiological responses similar to those elicited by eye contact with another human. However, it is becoming increasingly clear that attention- and affect-related psychophysiological responses differentiate between direct (toward the observer) and averted gaze mainly when the observer views embodied faces that are capable of social interaction, a capability that pictorial or pre-recorded stimuli lack. It has been suggested that genuine eye contact, as indicated by the differential psychophysiological responses to direct and averted gaze, requires a feeling of being watched by another mind. Therefore, we measured event-related potentials (N170 and frontal P300) with EEG, facial electromyography, skin conductance, and heart rate deceleration responses to seeing a humanoid robot's direct versus averted gaze, while manipulating the impression of the robot's intentionality. The results showed that the N170 and facial zygomatic responses were greater to direct than to averted gaze, independent of the robot's intentionality, whereas the frontal P300 responses were more positive to direct than to averted gaze only when the robot appeared intentional. The study provides further evidence that the gaze behavior of a social robot elicits attentional and affective responses, and adds that the robot's seemingly autonomous social behavior plays an important role in eliciting higher-level socio-cognitive processing.
Affiliation(s)
- Samuli Linnunsalo
- Human Information Processing Laboratory, Faculty of Social Sciences/Psychology, Tampere University, Tampere, Finland
- Dennis Küster
- Cognitive Systems Lab, Department of Computer Science, University of Bremen, Bremen, Germany
- Santeri Yrttiaho
- Human Information Processing Laboratory, Faculty of Social Sciences/Psychology, Tampere University, Tampere, Finland
- Mikko J Peltola
- Human Information Processing Laboratory, Faculty of Social Sciences/Psychology, Tampere University, Tampere, Finland; Tampere Institute for Advanced Study, Tampere University, Tampere, Finland
- Jari K Hietanen
- Human Information Processing Laboratory, Faculty of Social Sciences/Psychology, Tampere University, Tampere, Finland
4. Tzelios K, Williams LA, Omerod J, Bliss-Moreau E. Evidence of the unidimensional structure of mind perception. Sci Rep 2022; 12:18978. PMID: 36348009; PMCID: PMC9643359; DOI: 10.1038/s41598-022-23047-6.
Abstract
The last decade has witnessed intense interest in how people perceive the minds of other entities (humans, non-human animals, and non-living objects and forces) and how this perception impacts behavior. Despite the attention paid to the topic, the psychological structure of mind perception (that is, the underlying properties that account for variance across judgements of entities) is not clear, and extant reports conflict in terms of how to understand the structure. In the present research, we evaluated the psychological structure of mind perception by having participants evaluate a wide array of human, non-human animal, and non-animal entities. Using an entirely within-participants design, varied measurement approaches, and data-driven analyses, four studies demonstrated that mind perception is best conceptualized along a single dimension.
Affiliation(s)
- John Omerod
- School of Mathematics and Statistics, University of Sydney, Sydney, Australia
- Eliza Bliss-Moreau
- Department of Psychology, California National Primate Research Center, University of California Davis, Davis, USA
5. Thellman S, de Graaf M, Ziemke T. Mental State Attribution to Robots: A Systematic Review of Conceptions, Methods, and Findings. ACM Transactions on Human-Robot Interaction 2022. DOI: 10.1145/3526112.
Abstract
The topic of mental state attribution to robots has been approached by researchers from a variety of disciplines, including psychology, neuroscience, computer science, and philosophy. As a consequence, the empirical studies that have been conducted so far exhibit considerable diversity in terms of how the phenomenon is described and how it is approached from a theoretical and methodological standpoint. This literature review addresses the need for a shared scientific understanding of mental state attribution to robots by systematically and comprehensively collating conceptions, methods, and findings from 155 empirical studies across multiple disciplines. The findings of the review include that: (1) the terminology used to describe mental state attribution to robots is diverse but largely homogeneous in usage; (2) the tendency to attribute mental states to robots is determined by factors such as the age and motivation of the human as well as the behavior, appearance, and identity of the robot; (3) there is a computer < robot < human pattern in the tendency to attribute mental states that appears to be moderated by the presence of socially interactive behavior; (4) there are conflicting findings in the empirical literature that stem from different sources of evidence, including self-report and non-verbal behavioral or neurological data. The review contributes toward more cumulative research on the topic and opens up a transdisciplinary discussion about the nature of the phenomenon and what types of research methods are appropriate for investigation.
6. Swiderska A, Küster D. Robots as Malevolent Moral Agents: Harmful Behavior Results in Dehumanization, Not Anthropomorphism. Cogn Sci 2020; 44:e12872. PMID: 33020966; DOI: 10.1111/cogs.12872.
Abstract
A robot's decision to harm a person is sometimes considered the ultimate proof that it has gained a human-like mind. Here, we contrasted predictions about the attribution of mental capacities from moral typecasting theory with the denial of agency from the dehumanization literature. Experiments 1 and 2 investigated mind perception for intentionally and accidentally harmful robotic agents based on text and image vignettes. Experiment 3 disambiguated agent intention (malevolent and benevolent), and additionally varied the type of agent (robotic and human) using short computer-generated animations. Harmful robotic agents were consistently imbued with mental states to a lower degree than benevolent agents, supporting the dehumanization account. Further results revealed that a human moral patient appeared to suffer less when depicted with a robotic agent than with another human. The findings suggest that future robots may become subject to human-like dehumanization mechanisms, which challenges established beliefs about anthropomorphism in the domain of moral interactions.
Affiliation(s)
- Dennis Küster
- Department of Computer Science, University of Bremen
7. Harris J, Anthis JR. The Moral Consideration of Artificial Entities: A Literature Review. Science and Engineering Ethics 2021; 27:53. PMID: 34370075; PMCID: PMC8352798; DOI: 10.1007/s11948-021-00331-8.
Abstract
Ethicists, policy-makers, and the general public have questioned whether artificial entities such as robots warrant rights or other forms of moral consideration. There is little synthesis of the research on this topic so far. We identify 294 relevant research or discussion items in our literature review of this topic. There is widespread agreement among scholars that some artificial entities could warrant moral consideration in the future, if not also the present. The reasoning varies, ranging from concern for the effects on artificial entities themselves to concern for the effects on human society. Beyond the conventional consequentialist, deontological, and virtue ethicist ethical frameworks, some scholars encourage "information ethics" and "social-relational" approaches, though there are opportunities for more in-depth ethical research on the nuances of moral consideration of artificial entities. Relevant empirical data collection is limited, consisting primarily of a few psychological studies on current moral and social attitudes of humans towards robots and other artificial entities. This suggests an important gap for psychological, sociological, economic, and organizational research on how artificial entities will be integrated into society and the factors that will determine how the interests of artificial entities are considered.
Affiliation(s)
- Jacy Reese Anthis
- Department of Sociology, University of Chicago, 1126 East 59th Street, Chicago, IL, 60637, USA
8. Saving Private Robot: Risks and Advantages of Anthropomorphism in Agent-Soldier Teams. Int J Soc Robot 2021. DOI: 10.1007/s12369-021-00755-z.
9. Küster D, Swiderska A. Seeing the mind of robots: Harm augments mind perception but benevolent intentions reduce dehumanisation of artificial entities in visual vignettes. International Journal of Psychology 2021; 56:454-465. PMID: 32935359; DOI: 10.1002/ijop.12715.
Abstract
According to moral typecasting theory, good- and evil-doers (agents) interact with the recipients of their actions (patients) in a moral dyad. When this dyad is completed, mind attribution towards intentionally harmed liminal minds is enhanced. However, from a dehumanisation view, malevolent actions may instead result in a denial of humanness. To contrast both accounts, a visual vignette experiment (N = 253) depicted either malevolent or benevolent intentions towards robotic or human avatars. Additionally, we examined the role of harm-salience by showing patients as either harmed, or still unharmed. The results revealed significantly increased mind attribution towards visibly harmed patients, mediated by perceived pain and expressed empathy. Benevolent and malevolent intentions were evaluated respectively as morally right or wrong, but their impact on the patient was diminished for the robotic avatar. Contrary to dehumanisation predictions, our manipulation of intentions failed to affect mind perception. Nonetheless, benevolent intentions reduced dehumanisation of the patients. Moreover, when pain and empathy were statistically controlled, the effect of intentions on mind perception was mediated by dehumanisation. These findings suggest that perceived intentions might only be indirectly tied to mind perception, and that their role may be better understood when additionally accounting for empathy and dehumanisation.
Affiliation(s)
- Dennis Küster
- Department of Computer Science, University of Bremen, Germany
10. Harris LT, van Etten N, Gimenez-Fernandez T. Exploring how harming and helping behaviors drive prediction and explanation during anthropomorphism. Soc Neurosci 2020; 16:39-56. PMID: 32698660; DOI: 10.1080/17470919.2020.1799859.
Abstract
Cacioppo and colleagues advanced the study of anthropomorphism by positing three motives that moderate the occurrence of this phenomenon: belonging, effectance, and explanation. Here, we further this literature by exploring the extent to which the valence of a target's behavior influences its anthropomorphism when perceivers attempt to explain and predict that target's behavior, and the involvement of brain regions associated with explanation and prediction in such anthropomorphism. Participants viewed videos of agents of varying visual complexity (geometric shapes, computer-generated (CG) faces, and greebles) in nonrandom motion performing harming and helping behaviors. Across two studies, participants reported a narrative that explained the observed behavior (both studies) while we recorded brain activity (study one), and participants predicted future behavior of the protagonist shapes (study two). Brain regions implicated in prediction error (striatum), but not language generation (inferior frontal gyrus; IFG), engaged more strongly during harming than helping behaviors during the anthropomorphism of such stimuli. Behaviorally, we found greater anthropomorphism in explanations of harming rather than helping behaviors, but the opposite pattern when participants predicted the agents' behavior. Together, these studies build upon the anthropomorphism literature by exploring how the valence of behavior drives explanation and prediction.
Affiliation(s)
- Lasana T Harris
- Department of Experimental Psychology, University College London, London, UK
- Noor van Etten
- Department of Social and Organizational Psychology, Leiden University, Leiden, Netherlands
11. Balas B, Auen A. Perceiving Animacy in Own- and Other-Species Faces. Front Psychol 2019; 10:29. PMID: 30728795; PMCID: PMC6351462; DOI: 10.3389/fpsyg.2019.00029.
Abstract
Though artificial faces of various kinds are rapidly becoming more lifelike due to advances in graphics technology (Suwajanakorn et al., 2015; Booth et al., 2017), observers can typically distinguish real faces from artificial ones. In general, face recognition is tuned to experience, such that expert-level processing is most evident for faces that we encounter frequently in our visual world, but the extent to which face animacy perception is also tuned to in-group vs. out-group categories remains an open question. In the current study, we examined how the perception of animacy in human faces and dog faces was affected by face inversion and by the duration of face images presented to adult observers. We hypothesized that the impact of these manipulations may differ as a function of species category, indicating that face animacy perception is tuned for in-group faces. Briefly, we found evidence of such a differential impact, suggesting either that distinct mechanisms are used to evaluate the "life" in a face for in-group and out-group faces, or that the efficiency of a common mechanism varies substantially as a function of visual expertise.
Affiliation(s)
- Benjamin Balas
- Department of Psychology, North Dakota State University, Fargo, ND, United States
- Center for Visual and Cognitive Neuroscience, North Dakota State University, Fargo, ND, United States
- Amanda Auen
- Department of Psychology, North Dakota State University, Fargo, ND, United States