1
McCrae RR. Seeking a Philosophical Basis for Trait Psychology. Psychol Rep 2024; 127:2784-2811. [PMID: 36269570 DOI: 10.1177/00332941221132992]
Abstract
I summarize an early effort to provide a conceptual basis for psychology. Natural science studies material objects, and its methods and assumptions may not be appropriate for the study of persons. Persons exist within the natural attitude and are characterized by such properties as temporality, responsibility, normality, and identity. Contemporary theories of mind focus on people's understanding of how minds make decisions and shape behavior, but say little about the nature of the entity that possesses a mind; ethnopsychologies are concerned with cultural variations in beliefs about accidental rather than essential aspects of human psychology. The lay philosophical view of the person sketched here is intended to be broader and deeper. It is particularly relevant to trait psychology, appears to have been implicit in much trait research, and is generally consistent with empirical findings on personality traits.
2
Espinosa R, Borau S, Treich N. Impact of NGOs' undercover videos on citizens' emotions and pro-social behaviors. Sci Rep 2024; 14:20584. [PMID: 39232015 PMCID: PMC11374992 DOI: 10.1038/s41598-024-68335-5]
Abstract
Undercover videos have become a popular tool among NGOs to influence public opinion and generate engagement for the NGO's cause. These videos are seen as a powerful and cost-effective way of bringing about social change, as they provide first-hand evidence and generate a strong emotional response among those who see them. In this paper, we empirically assess the impact of undercover videos on support for the cause. We additionally analyze whether the increased engagement among viewers is driven by the negative emotional reactions produced by the video. To do so, we design an online experiment that enables us to estimate both the total and emotion-mediated treatment effects on engagement by randomly exposing participants to an undercover video (of animal abuse) and randomly introducing a cooling-off period. Using a representative sample of the French population (N=3,310), we find that the video successfully increases actions in favor of animals (i.e., donations to NGOs and petitions), but we fail to prove that this effect is due to the presence of primary emotions induced by the video. Lastly, we investigate whether activists correctly anticipate their undercover videos' (emotional) impact via a prediction study involving activists (exploratory analysis). PROTOCOL REGISTRATION: This manuscript is a Stage-2 working paper of a Registered Report that received In-Principle-Acceptance from Scientific Reports on November 20th, 2023. The Stage-1 that received In-Principle-Acceptance can be found here: https://osf.io/8cg2d.
Affiliation(s)
- Nicolas Treich
- University of Toulouse Capitole, Toulouse School of Economics, INRAE, IAST, Toulouse, France
3
Wieringa MS, Müller BCN, Bijlstra G, Bosse T. Robots are both anthropomorphized and dehumanized when harmed intentionally. Communications Psychology 2024; 2:72. [PMID: 39242902 PMCID: PMC11332229 DOI: 10.1038/s44271-024-00116-2]
Abstract
The harm-made mind phenomenon implies that witnessing intentional harm towards agents with ambiguous minds, such as robots, leads to augmented mind perception in these agents. We conducted two replications of previous work on this effect and extended it by testing if robots that detect and simulate emotions elicit a stronger harm-made mind effect than robots that do not. Additionally, we explored if someone is perceived as less prosocial when harming a robot compared to treating it kindly. The harm-made mind effect was replicated: participants attributed a higher capacity to experience pain to the robot when it was harmed, compared to when it was not harmed. We did not find evidence that this effect was influenced by the robot's ability to detect and simulate emotions. There were significant but conflicting direct and indirect effects of harm on the perception of mind in the robot: while harm had a positive indirect effect on mind perception in the robot through the perceived capacity for pain, the direct effect of harm on mind perception was negative. This suggests that robots are both anthropomorphized and dehumanized when harmed intentionally. Additionally, the results showed that someone is perceived as less prosocial when harming a robot compared to treating it kindly.
Affiliation(s)
- Barbara C N Müller
- Behavioural Science Institute, Radboud University, Nijmegen, The Netherlands
- Gijsbert Bijlstra
- Behavioural Science Institute, Radboud University, Nijmegen, The Netherlands
- Tibor Bosse
- Behavioural Science Institute, Radboud University, Nijmegen, The Netherlands
4
Guingrich RE, Graziano MSA. Ascribing consciousness to artificial intelligence: human-AI interaction and its carry-over effects on human-human interaction. Front Psychol 2024; 15:1322781. [PMID: 38605842 PMCID: PMC11008604 DOI: 10.3389/fpsyg.2024.1322781]
Abstract
The question of whether artificial intelligence (AI) can be considered conscious and therefore should be evaluated through a moral lens has surfaced in recent years. In this paper, we argue that whether AI is conscious is less of a concern than the fact that AI can be considered conscious by users during human-AI interaction, because this ascription of consciousness can lead to carry-over effects on human-human interaction. When AI is viewed as conscious like a human, how people treat AI appears to carry over into how they treat other people by activating schemas that are congruent with those activated during interactions with humans. In light of this potential, we might consider regulating how we treat AI, or how we build AI to evoke certain kinds of treatment from users, but not because AI is inherently sentient. This argument focuses on humanlike, social actor AI such as chatbots, digital voice assistants, and social robots. In the first part of the paper, we provide evidence for carry-over effects between perceptions of AI consciousness and behavior toward humans through literature on human-computer interaction, human-AI interaction, and the psychology of artificial agents. In the second part of the paper, we detail how the mechanism of schema activation can allow us to test consciousness perception as a driver of carry-over effects between human-AI interaction and human-human interaction. In essence, perceiving AI as conscious like a human, thereby activating congruent mind schemas during interaction, is a driver for behaviors and perceptions of AI that can carry over into how we treat humans. Therefore, the fact that people can ascribe humanlike consciousness to AI is worth considering, and moral protection for AI is also worth considering, regardless of AI's inherent conscious or moral status.
Affiliation(s)
- Rose E. Guingrich
- Department of Psychology, Princeton University, Princeton, NJ, United States
- Princeton School of Public and International Affairs, Princeton University, Princeton, NJ, United States
- Michael S. A. Graziano
- Department of Psychology, Princeton University, Princeton, NJ, United States
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, United States
5
Grigoreva AD, Rottman J, Tasimi A. When does "no" mean no? Insights from sex robots. Cognition 2024; 244:105687. [PMID: 38154450 DOI: 10.1016/j.cognition.2023.105687]
Abstract
Although sexual assault is widely accepted as morally wrong, not all instances of sexual assault are evaluated in the same way. Here, we ask whether different characteristics of victims affect people's moral evaluations of sexual assault perpetrators, and if so, how. We focus on sex robots (i.e., artificially intelligent humanoid social robots designed for sexual gratification) as victims in the present studies because they serve as a clean canvas onto which we can paint different human-like attributes to probe people's moral intuitions regarding sensitive topics. Across four pre-registered experiments conducted with American adults on Prolific (N = 2104), we asked people to evaluate the wrongness of sexual assault against AI-powered robots. People's moral judgments were influenced by the victim's mental capacities (Studies 1 & 2), the victim's interpersonal function (Study 3), the victim's ontological type (Study 4), and the transactional context of the human-robot relationship (Study 4). Overall, by investigating moral reasoning about transgressions against AI robots, we were able to gain unique insights into how people's moral judgments about sexual transgressions can be influenced by victim attributes.
Affiliation(s)
- Joshua Rottman
- Department of Psychology, Franklin & Marshall College, P.O. Box 3003, Lancaster, PA 17604, USA
- Arber Tasimi
- Department of Psychology, Emory University, 36 Eagle Row, Atlanta, GA 30322, USA
6
Linnunsalo S, Küster D, Yrttiaho S, Peltola MJ, Hietanen JK. Psychophysiological responses to eye contact with a humanoid robot: Impact of perceived intentionality. Neuropsychologia 2023; 189:108668. [PMID: 37619935 DOI: 10.1016/j.neuropsychologia.2023.108668]
Abstract
Eye contact with a social robot has been shown to elicit similar psychophysiological responses to eye contact with another human. However, it is becoming increasingly clear that the attention- and affect-related psychophysiological responses differentiate between direct (toward the observer) and averted gaze mainly when viewing embodied faces that are capable of social interaction, whereas pictorial or pre-recorded stimuli have no such capability. It has been suggested that genuine eye contact, as indicated by the differential psychophysiological responses to direct and averted gaze, requires a feeling of being watched by another mind. Therefore, we measured event-related potentials (N170 and frontal P300) with EEG, facial electromyography, skin conductance, and heart rate deceleration responses to seeing a humanoid robot's direct versus averted gaze, while manipulating the impression of the robot's intentionality. The results showed that the N170 and the facial zygomatic responses were greater to direct than to averted gaze of the robot, and independent of the robot's intentionality, whereas the frontal P300 responses were more positive to direct than to averted gaze only when the robot appeared intentional. The study provides further evidence that the gaze behavior of a social robot elicits attentional and affective responses and adds that the robot's seemingly autonomous social behavior plays an important role in eliciting higher-level socio-cognitive processing.
Affiliation(s)
- Samuli Linnunsalo
- Human Information Processing Laboratory, Faculty of Social Sciences/Psychology, Tampere University, Tampere, Finland
- Dennis Küster
- Cognitive Systems Lab, Department of Computer Science, University of Bremen, Bremen, Germany
- Santeri Yrttiaho
- Human Information Processing Laboratory, Faculty of Social Sciences/Psychology, Tampere University, Tampere, Finland
- Mikko J Peltola
- Human Information Processing Laboratory, Faculty of Social Sciences/Psychology, Tampere University, Tampere, Finland; Tampere Institute for Advanced Study, Tampere University, Tampere, Finland
- Jari K Hietanen
- Human Information Processing Laboratory, Faculty of Social Sciences/Psychology, Tampere University, Tampere, Finland
7
Banks J, Bowman ND. Perceived Moral Patiency of Social Robots: Explication and Scale Development. Int J Soc Robot 2022. [DOI: 10.1007/s12369-022-00950-6]
8
Banks J, Koban K, Haggadone B. Avoiding the Abject and Seeking the Script: Perceived Mind, Morality, and Trust in a Persuasive Social Robot. ACM Transactions on Human-Robot Interaction 2022. [DOI: 10.1145/3572036]
Abstract
Social robots are being groomed for human influence, including the implicit and explicit persuasion of humans. Humanlike characteristics are understood to enhance robots’ persuasive impact; however, little is known of how perceptions of two key human capacities—mind and morality—function in robots’ persuasive potential. This experiment tests the possibility that perceived robot mind and morality will correspond with greater persuasive impact, moderated by relational trust for a moral appeal and by capacity trust for a logical appeal. Via online survey, a humanoid robot asks participants to help it learn to overcome CAPTCHA puzzles to access important online spaces—either on grounds that it is logical or moral to do so. Based on three performance indicators and one self-report indicator of compliance, analysis indicates that (a) seeing the robot as able to perceive and act on the world selectively improves compliance and (b) perceiving agentic capacity diminishes compliance, though capacity trust can moderate that reduction. For logical appeals, social-moral mental capacities promote compliance, moderated by capacity trust. Findings suggest that, in this compliance scenario, the accessibility of schemas and scripts for engaging robots as social-moral actors may be central to whether/how perceived mind, morality, and trust function in machine persuasion.
Affiliation(s)
- Jaime Banks
- School of Information Studies, Syracuse University
- Kevin Koban
- Department of Communication, University of Vienna
9
Tzelios K, Williams LA, Omerod J, Bliss-Moreau E. Evidence of the unidimensional structure of mind perception. Sci Rep 2022; 12:18978. [PMID: 36348009 PMCID: PMC9643359 DOI: 10.1038/s41598-022-23047-6]
Abstract
The last decade has witnessed intense interest in how people perceive the minds of other entities (humans, non-human animals, and non-living objects and forces) and how this perception impacts behavior. Despite the attention paid to the topic, the psychological structure of mind perception (that is, the underlying properties that account for variance across judgements of entities) is not clear, and extant reports conflict in how to understand the structure. In the present research, we evaluated the psychological structure of mind perception by having participants evaluate a wide array of human, non-human animal, and non-animal entities. Using an entirely within-participants design, varied measurement approaches, and data-driven analyses, four studies demonstrated that mind perception is best conceptualized along a single dimension.
Affiliation(s)
- John Omerod
- School of Mathematics and Statistics, University of Sydney, Sydney, Australia
- Eliza Bliss-Moreau
- Department of Psychology, California National Primate Research Center, University of California Davis, Davis, USA
10
Improving evaluations of advanced robots by depicting them in harmful situations. Computers in Human Behavior 2022. [DOI: 10.1016/j.chb.2022.107565]
11
Moral psychology of nursing robots: Exploring the role of robots in dilemmas of patient autonomy. European Journal of Social Psychology 2022. [DOI: 10.1002/ejsp.2890]
12
When your boss is a robot: Workers are more spiteful to robot supervisors that seem more human. Journal of Experimental Social Psychology 2022. [DOI: 10.1016/j.jesp.2022.104360]
13
Thellman S, de Graaf M, Ziemke T. Mental State Attribution to Robots: A Systematic Review of Conceptions, Methods, and Findings. ACM Transactions on Human-Robot Interaction 2022. [DOI: 10.1145/3526112]
Abstract
The topic of mental state attribution to robots has been approached by researchers from a variety of disciplines, including psychology, neuroscience, computer science, and philosophy. As a consequence, the empirical studies that have been conducted so far exhibit considerable diversity in terms of how the phenomenon is described and how it is approached from a theoretical and methodological standpoint. This literature review addresses the need for a shared scientific understanding of mental state attribution to robots by systematically and comprehensively collating conceptions, methods, and findings from 155 empirical studies across multiple disciplines. The findings of the review include that: (1) the terminology used to describe mental state attribution to robots is diverse but largely homogeneous in usage; (2) the tendency to attribute mental states to robots is determined by factors such as the age and motivation of the human as well as the behavior, appearance, and identity of the robot; (3) there is a computer < robot < human pattern in the tendency to attribute mental states that appears to be moderated by the presence of socially interactive behavior; (4) there are conflicting findings in the empirical literature that stem from different sources of evidence, including self-report and non-verbal behavioral or neurological data. The review contributes toward more cumulative research on the topic and opens up a transdisciplinary discussion about the nature of the phenomenon and what types of research methods are appropriate for investigation.
14
Pitardi V, Wirtz J, Paluch S, Kunz WH. Service robots, agency and embarrassing service encounters. Journal of Service Management 2021. [DOI: 10.1108/josm-12-2020-0435]
Abstract
Purpose: Extant research mainly focused on potentially negative customer responses to service robots. In contrast, this study is one of the first to explore a service context where service robots are likely to be the preferred service delivery mechanism over human frontline employees. Specifically, the authors examine how customers respond to service robots in the context of embarrassing service encounters. Design/methodology/approach: This study employs a mixed-method approach, whereby an in-depth qualitative study (study 1) is followed by two lab experiments (studies 2 and 3). Findings: Results show that interactions with service robots attenuated customers' anticipated embarrassment. Study 1 identifies a number of factors that can reduce embarrassment. These include the perception that service robots have reduced agency (e.g. are not able to make moral or social judgements) and emotions (e.g. are not able to have feelings). Study 2 tests the base model and shows that people feel less embarrassed during a potentially embarrassing encounter when interacting with service robots compared to frontline employees. Finally, Study 3 confirms that perceived agency, but not emotion, fully mediates frontline counterparty (employee vs robot) effects on anticipated embarrassment. Practical implications: Service robots can add value by reducing potential customer embarrassment because they are perceived to have less agency than service employees. This makes service robots the preferred service delivery mechanism for at least some customers in potentially embarrassing service encounters (e.g. in certain medical contexts). Originality/value: This study is one of the first to examine a context where service robots are the preferred service delivery mechanism over human employees.
15
Swiderska A, Küster D. Robots as Malevolent Moral Agents: Harmful Behavior Results in Dehumanization, Not Anthropomorphism. Cogn Sci 2021; 44:e12872. [PMID: 33020966 DOI: 10.1111/cogs.12872]
Abstract
A robot's decision to harm a person is sometimes considered to be the ultimate proof of it gaining a human-like mind. Here, we contrasted predictions about attribution of mental capacities from moral typecasting theory with the denial of agency predicted by the dehumanization literature. Experiments 1 and 2 investigated mind perception for intentionally and accidentally harmful robotic agents based on text and image vignettes. Experiment 3 disambiguated agent intention (malevolent and benevolent), and additionally varied the type of agent (robotic and human) using short computer-generated animations. Harmful robotic agents were consistently imbued with mental states to a lower degree than benevolent agents, supporting the dehumanization account. Further results revealed that a human moral patient appeared to suffer less when depicted with a robotic agent than with another human. The findings suggest that future robots may become subject to human-like dehumanization mechanisms, which challenges established beliefs about anthropomorphism in the domain of moral interactions.
Affiliation(s)
- Dennis Küster
- Department of Computer Science, University of Bremen
16
Harris J, Anthis JR. The Moral Consideration of Artificial Entities: A Literature Review. Science and Engineering Ethics 2021; 27:53. [PMID: 34370075 PMCID: PMC8352798 DOI: 10.1007/s11948-021-00331-8]
Abstract
Ethicists, policy-makers, and the general public have questioned whether artificial entities such as robots warrant rights or other forms of moral consideration. There is little synthesis of the research on this topic so far. We identify 294 relevant research or discussion items in our literature review of this topic. There is widespread agreement among scholars that some artificial entities could warrant moral consideration in the future, if not also the present. The reasoning varies, ranging from concern for the effects on artificial entities to concern for the effects on human society. Beyond the conventional consequentialist, deontological, and virtue ethicist ethical frameworks, some scholars encourage "information ethics" and "social-relational" approaches, though there are opportunities for more in-depth ethical research on the nuances of moral consideration of artificial entities. There is limited relevant empirical data collection, primarily in a few psychological studies on current moral and social attitudes of humans towards robots and other artificial entities. This suggests an important gap for psychological, sociological, economic, and organizational research on how artificial entities will be integrated into society and the factors that will determine how the interests of artificial entities are considered.
Affiliation(s)
- Jacy Reese Anthis
- Department of Sociology, University of Chicago, 1126 East 59th Street, Chicago, IL, 60637, USA
17
Competing with or Against Cozmo, the Robot: Influence of Interaction Context and Outcome on Mind Perception. Int J Soc Robot 2021. [DOI: 10.1007/s12369-020-00668-3]
18
Banks J, Koban K. Framing Effects on Judgments of Social Robots' (Im)Moral Behaviors. Front Robot AI 2021; 8:627233. [PMID: 34041272 PMCID: PMC8141842 DOI: 10.3389/frobt.2021.627233]
Abstract
Frames—discursive structures that make dimensions of a situation more or less salient—are understood to influence how people understand novel technologies. As technological agents are increasingly integrated into society, it becomes important to discover how native understandings (i.e., individual frames) of social robots are associated with how they are characterized by media, technology developers, and even the agents themselves (i.e., produced frames). Moreover, these individual and produced frames may influence the ways in which people see social robots as legitimate and trustworthy agents—especially in the face of (im)moral behavior. This three-study investigation begins to address this knowledge gap by 1) identifying individually held frames for explaining an android’s (im)moral behavior, and experimentally testing how produced frames prime judgments about an android’s morally ambiguous behavior in 2) mediated representations and 3) face-to-face exposures. Results indicate that people rely on discernible ground rules to explain social robot behaviors; these frames induced only limited effects on responsibility judgments of that robot’s morally ambiguous behavior. Evidence also suggests that technophobia-induced reactance may move people to reject a produced frame in favor of a divergent individual frame.
Affiliation(s)
- Jaime Banks
- College of Media and Communication, Texas Tech University, Lubbock, TX, United States
- Kevin Koban
- Department of Communication, University of Vienna, Vienna, Austria
19
Banks J. From Warranty Voids to Uprising Advocacy: Human Action and the Perceived Moral Patiency of Social Robots. Front Robot AI 2021; 8:670503. [PMID: 34124176 PMCID: PMC8194253 DOI: 10.3389/frobt.2021.670503]
Abstract
Moral status can be understood along two dimensions: moral agency [capacities to be and do good (or bad)] and moral patiency (extents to which entities are objects of moral concern), where the latter especially has implications for how humans accept or reject machine agents into human social spheres. As there is currently limited understanding of how people innately understand and imagine the moral patiency of social robots, this study inductively explores key themes in how robots may be subject to humans' (im)moral action across 12 valenced foundations in the moral matrix: care/harm, fairness/unfairness, loyalty/betrayal, authority/subversion, purity/degradation, liberty/oppression. Findings indicate that people can imagine clear dynamics by which anthropomorphic, zoomorphic, and mechanomorphic robots may benefit and suffer at the hands of humans (e.g., affirmations of personhood, compromising bodily integrity, veneration as gods, corruption by physical or information interventions). Patterns across the matrix are interpreted to suggest that moral patiency may be a function of whether people diminish or uphold the ontological boundary between humans and machines, though even moral upholdings bear notes of utilitarianism.
Affiliation(s)
- Jaime Banks
- College of Media & Communication, Texas Tech University, Lubbock, TX, United States
20
Laakasuo M, Palomäki J, Köbis N. Moral Uncanny Valley: A Robot's Appearance Moderates How its Decisions are Judged. Int J Soc Robot 2021. [DOI: 10.1007/s12369-020-00738-6]
Abstract
Artificial intelligence and robotics are rapidly advancing. Humans are increasingly often affected by autonomous machines making choices with moral repercussions. At the same time, classical research in robotics shows that people are averse to robots that appear eerily human, a phenomenon commonly referred to as the uncanny valley effect. Yet, little is known about how machines' appearances influence how humans evaluate their moral choices. Here we integrate the uncanny valley effect into moral psychology. In two experiments we test whether humans evaluate identical moral choices made by robots differently depending on the robots' appearance. Participants evaluated either deontological ("rule based") or utilitarian ("consequence based") moral decisions made by different robots. The results provide a first indication that people evaluate moral choices by robots that resemble humans as less moral compared to the same moral choices made by humans or non-human robots: a moral uncanny valley effect. We discuss the implications of our findings for moral psychology, social robotics and AI-safety policy.
21
Küster D, Swiderska A. Seeing the mind of robots: Harm augments mind perception but benevolent intentions reduce dehumanisation of artificial entities in visual vignettes. International Journal of Psychology 2020; 56:454-465. [PMID: 32935359 DOI: 10.1002/ijop.12715]
Abstract
According to moral typecasting theory, good- and evil-doers (agents) interact with the recipients of their actions (patients) in a moral dyad. When this dyad is completed, mind attribution towards intentionally harmed liminal minds is enhanced. However, from a dehumanisation view, malevolent actions may instead result in a denial of humanness. To contrast both accounts, a visual vignette experiment (N = 253) depicted either malevolent or benevolent intentions towards robotic or human avatars. Additionally, we examined the role of harm-salience by showing patients as either harmed, or still unharmed. The results revealed significantly increased mind attribution towards visibly harmed patients, mediated by perceived pain and expressed empathy. Benevolent and malevolent intentions were evaluated respectively as morally right or wrong, but their impact on the patient was diminished for the robotic avatar. Contrary to dehumanisation predictions, our manipulation of intentions failed to affect mind perception. Nonetheless, benevolent intentions reduced dehumanisation of the patients. Moreover, when pain and empathy were statistically controlled, the effect of intentions on mind perception was mediated by dehumanisation. These findings suggest that perceived intentions might only be indirectly tied to mind perception, and that their role may be better understood when additionally accounting for empathy and dehumanisation.
Affiliation(s)
- Dennis Küster
- Department of Computer Science, University of Bremen, Germany
22
Harris LT, van Etten N, Gimenez-Fernandez T. Exploring how harming and helping behaviors drive prediction and explanation during anthropomorphism. Soc Neurosci 2020; 16:39-56. [PMID: 32698660 DOI: 10.1080/17470919.2020.1799859]
Abstract
Cacioppo and colleagues advanced the study of anthropomorphism by positing three motives that moderated the occurrence of this phenomenon: belonging, effectance, and explanation. Here, we further this literature by exploring the extent to which the valence of a target's behavior influences its anthropomorphism when perceivers attempt to explain and predict that target's behavior, and the involvement of brain regions associated with explanation and prediction in such anthropomorphism. Participants viewed videos of varying visually complex agents - geometric shapes, computer-generated (CG) faces, and greebles - in nonrandom motion performing harming and helping behaviors. Across two studies, participants reported a narrative that explained the observed behavior (both studies) while we recorded brain activity (study one), and participants predicted future behavior of the protagonist shapes (study two). Brain regions implicated in prediction error (striatum), not language generation (inferior frontal gyrus; IFG), engaged more to harming than helping behaviors during the anthropomorphism of such stimuli. Behaviorally, we found greater anthropomorphism in explanations of harming rather than helping behaviors, but the opposite pattern when participants predicted the agents' behavior. Together, these studies build upon the anthropomorphism literature by exploring how the valence of behavior drives explanation and prediction.
Affiliation(s)
- Lasana T Harris
- Department of Experimental Psychology, University College London, London, UK
- Noor van Etten
- Department of Social and Organizational Psychology, Leiden University, Leiden, Netherlands
23
Abstract
Although more individuals are relying on information provided by nonhuman agents, such as artificial intelligence and robots, little research has examined how persuasion attempts made by nonhuman agents might differ from persuasion attempts made by human agents. Drawing on construal-level theory, we posited that individuals would perceive artificial agents at a low level of construal because of the agents' lack of autonomous goals and intentions, which directs individuals' focus toward how these agents implement actions to serve humans rather than why they do so. Across multiple studies (total N = 1,668), we showed that these construal-based differences affect compliance with persuasive messages made by artificial agents. These messages are more appropriate and effective when they represent low-level as opposed to high-level construal features. These effects were moderated by the extent to which an artificial agent could independently learn from its environment, given that learning defies people's lay theories about artificial agents.
Affiliation(s)
- Tae Woo Kim
- Marketing Discipline Group, University of Technology Sydney
- Adam Duhachek
- Department of Managerial Studies, University of Illinois at Chicago; Marketing, University of Sydney
24
Peterson A, Kostick KM, O'Brien KA, Blumenthal-Barby J. Seeing minds in patients with disorders of consciousness. Brain Inj 2019; 34:390-398. [PMID: 31880960 DOI: 10.1080/02699052.2019.1706000]
Abstract
Objective: To explore the ways in which health care professionals and families understand terms and concepts associated with disorders of consciousness. Methods: Open-ended, semi-structured interviews were conducted with 20 health care professionals and 18 family caregivers affiliated with a disorders of consciousness program within a nationally ranked rehabilitation facility in the United States. Results: Analysis revealed that: (1) disagreement between some health care professionals and family caregivers regarding the presence of consciousness can arise due to differing beliefs about a patient experiencing pain, and differences in the length of time family caregivers spend with patients relative to clinical staff; (2) some health care professionals and family caregivers use nonclinical terms and concepts to describe consciousness; and (3) some family caregivers might attribute complex mental capacities to patients, which extend beyond the clinical evidence. Conclusion: The beliefs of health care professionals and families regarding disorders of consciousness are complex and could be influenced by broader psychological proclivities to "see minds" in patients who have a liminal neurological status. Awareness of these dynamics may assist health care professionals when interacting with family caregivers.
Affiliation(s)
- Andrew Peterson
- Department of Philosophy and Institute for Philosophy and Public Policy, George Mason University, Fairfax, VA, USA
- Kristin M Kostick
- Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, USA
- Katherine A O'Brien
- Disorders of Consciousness Rehabilitation Program, TIRR Memorial Hermann, Houston, TX, USA; Department of Physical Medicine and Rehabilitation, Baylor College of Medicine, Houston, TX, USA
25
Cornwell JFM, Higgins ET. Beyond Value in Moral Phenomenology: The Role of Epistemic and Control Experiences. Front Psychol 2019; 10:2430. [PMID: 31736829 PMCID: PMC6831825 DOI: 10.3389/fpsyg.2019.02430]
Abstract
Many researchers in moral psychology approach the topic of moral judgment in terms of value: assessing outcomes of behaviors as either harmful or helpful, which makes the behaviors wrong or right, respectively. However, recent advances in motivation science suggest that other motives may be at work as well, namely truth (wanting to establish what is real) and control (wanting to manage what happens). In this review, we argue that the epistemic experiences of observers of (im)moral behaviors, and the perceived epistemic experiences of those observed, serve as a groundwork for understanding how truth and control motives are implicated in the moral judgment process. We also discuss relations between this framework and recent work from across the field of moral psychology, as well as implications for future research.
Affiliation(s)
- James F. M. Cornwell
- Department of Behavioral Sciences and Leadership, United States Military Academy, West Point, NY, United States
- E. Tory Higgins
- Department of Psychology, Columbia University, New York, NY, United States
26
Shank DB, Graves C, Gott A, Gamez P, Rodriguez S. Feeling our way to machine minds: People's emotions when perceiving mind in artificial intelligence. Computers in Human Behavior 2019. [DOI: 10.1016/j.chb.2019.04.001]
27
Swiderska A, Küster D. Avatars in Pain: Visible Harm Enhances Mind Perception in Humans and Robots. Perception 2018; 47:1139-1152. [PMID: 30411653 DOI: 10.1177/0301006618809919]
Abstract
Previous research has shown that when people read vignettes about the infliction of harm upon an entity appearing to have no more than a liminal mind, their attributions of mind to that entity increased. Here, we investigated whether the presence of a facial wound enhanced the perception of mental capacities (experience and agency) in response to images of robotic and human-like avatars, compared with unharmed avatars. The results revealed that harmed versions of both robotic and human-like avatars were imbued with mind to a higher degree, irrespective of the baseline level of mind attributed to their unharmed counterparts. Perceptions of capacity for pain mediated attributions of experience, while both pain and empathy mediated attributions of abilities linked to agency. The findings suggest that harm, even when it appears to have been inflicted unintentionally, may augment mind perception for robotic as well as for nearly human entities, at least as long as it is perceived to elicit pain.
28
Watkins HM, Laham SM. The principle of discrimination: Investigating perceptions of soldiers. Group Processes & Intergroup Relations 2018. [DOI: 10.1177/1368430218796277]
Abstract
The principle of discrimination states that soldiers are legitimate targets of violence in war, whereas civilians are not. Is this prescriptive rule reflected in the descriptive judgments of laypeople? In two studies (Ns = 300, 229), U.S. Mechanical Turk workers were asked to evaluate the character traits of either a soldier or a civilian. Participants also made moral judgments about scenarios in which the target individual (soldier or civilian) killed or was killed by the enemy in war. Soldiers were consistently viewed as more dangerous and more courageous than civilians (Study 1). Participants also viewed killing by (and of) soldiers as more permissible than killing by (and of) civilians, in line with the principle of discrimination (Study 1). Altering the war context to involve a clearly just and unjust side (in Study 2) did not appear to moderate the principle of discrimination in moral judgment, although soldiers and civilians on the just side were evaluated more positively overall. However, the soldiers on the unjust side of the war were not attributed greater courage than were civilians on the unjust side. Theoretical and practical implications of these descriptive findings are discussed.
29
Dehumanization increases instrumental violence, but not moral violence. Proc Natl Acad Sci U S A 2017; 114:8511-8516. [PMID: 28739935 DOI: 10.1073/pnas.1705238114]
Abstract
Across five experiments, we show that dehumanization (the act of perceiving victims as not completely human) increases instrumental, but not moral, violence. In attitude surveys, ascribing reduced capacities for cognitive, experiential, and emotional states to victims predicted support for practices where victims are harmed to achieve instrumental goals, including sweatshop labor, animal experimentation, and drone strikes that result in civilian casualties, but not practices where harm is perceived as morally righteous, including capital punishment, killing in war, and drone strikes that kill terrorists. In vignette experiments, using dehumanizing compared with humanizing language increased participants' willingness to harm strangers for money, but not participants' willingness to harm strangers for their immoral behavior. Participants also spontaneously dehumanized strangers when they imagined harming them for money, but not when they imagined harming them for their immoral behavior. Finally, participants humanized strangers who were low in humanity if they imagined harming them for immoral behavior, but not money, suggesting that morally motivated perpetrators may humanize victims to justify violence against them. Our findings indicate that dehumanization enables violence that perpetrators see as unethical, but instrumentally beneficial. In contrast, dehumanization does not contribute to moral violence because morally motivated perpetrators wish to harm complete human beings who are capable of deserving blame, experiencing suffering, and understanding its meaning.
30
Abstract
People sometimes perceive a mind in inorganic entities like robots. Psychological research has shown that mind perception correlates with moral judgments and that immoral behaviors (i.e., intentional harm) facilitate mind perception toward otherwise mindless victims. We conducted a vignette experiment (N = 129; mean age = 21.8 ± 6.0 years) concerning human-robot interactions and extended previous research's results in two ways. First, mind perception toward the robot was facilitated when it received a benevolent behavior, although only when participants took the perspective of an actor. Second, imagining a benevolent interaction led to more positive attitudes toward the robot, and this effect was mediated by mind perception. These results help predict what people's reactions in future human-robot interactions would be like, and have implications for how to design future social rules about the treatment of robots.
31
32
Abstract
Existing moral psychology research commonly explains certain phenomena in terms of a motivation to blame. However, this motivation is not measured directly, but rather is inferred from other measures, such as participants' judgments of an agent's blameworthiness. The present paper introduces new methods for assessing this theoretically important motivation, using tools drawn from animal-model research. We test these methods in the context of recent "harm-magnification" research, which shows that people often overestimate the damage caused by intentional (versus unintentional) harms. A preliminary experiment exemplifies this work and also rules out an alternative explanation for earlier harm-magnification results. Exp. 1 asks whether intended harm motivates blame or merely demonstrates the actor's intrinsic blameworthiness. Consistent with a motivational interpretation, participants freely chose blaming, condemning, and punishing over other appealing tasks in an intentional-harm condition, compared with an unintentional-harm condition. Exp. 2 also measures motivation but with converging indicators of persistence (effort, rate, and duration) in blaming. In addition to their methodological contribution, these studies also illuminate people's motivational responses to intentional harms. Perceived intent emerges as catalyzing a motivated social cognitive process related to social prediction and control.
33
Skowron M, Rank S, Świderska A, Küster D, Kappas A. Applying a Text-Based Affective Dialogue System in Psychological Research: Case Studies on the Effects of System Behaviour, Interaction Context and Social Exclusion. Cognit Comput 2014. [DOI: 10.1007/s12559-014-9271-2]
34
Hamlin JK, Baron AS. Agency attribution in infancy: evidence for a negativity bias. PLoS One 2014; 9:e96112. [PMID: 24801144 PMCID: PMC4011708 DOI: 10.1371/journal.pone.0096112]
Abstract
Adults tend to attribute agency and intention to the causes of negative outcomes, even if those causes are obviously mechanical. Is this over-attribution of negative agency the result of years of practice with attributing agency to actual conspecifics, or is it a foundational aspect of our agency-detection system, present in the first year of life? Here we present two experiments with 6-month-old infants, in which they attribute agency to a mechanical claw that causes a bad outcome, but not to a claw that causes a good outcome. Control experiments suggest that the attribution stems directly from the negativity of the outcome, rather than from physical cues present in the stimuli. Together, these results provide evidence for striking developmental continuity in the attribution of agency to the causes of negative outcomes.
Affiliation(s)
- J. Kiley Hamlin
- Department of Psychology, The University of British Columbia, Vancouver, Canada
- Andrew S. Baron
- Department of Psychology, The University of British Columbia, Vancouver, Canada