1. Xu W, Li C, Miao X, Liu L. Our tools redefine what it means to be us: perceived robotic agency decreases the importance of agency in humanity. BMC Psychol 2025;13:380. doi:10.1186/s40359-025-02673-5. PMID: 40229911; PMCID: PMC11998348.
Abstract
Past work has primarily focused on how the perception of robotic agency influences human-robot interaction and the evaluation of robotic progress, while overlooking its impact on reconsidering what it means to be human. Drawing on social identity theory, we proposed that perceived robotic agency diminishes the importance of agency in humanity. We conducted three experiments (N = 920) to test this assumption. Experiments 1 and 2 manipulated perceived robotic agency. Experiments 2 and 3 separately measured and manipulated distinctiveness threat to investigate the underlying mechanism. Results revealed that high (vs. low) perceived robotic agency reduced ratings of the essentiality of agency in defining humanity (Experiments 1 and 2); distinctiveness threat accounted for this effect (Experiments 2 and 3). The findings contribute to a novel understanding of how ascriptions of humanity are evolving in the AI era.
Affiliation(s)
- Weifeng Xu, Chao Li, Xiaoyan Miao, Li Liu: Faculty of Psychology, Beijing Key Laboratory of Applied Experimental Psychology, Beijing Normal University, 19 Xinjiekouwai Street, Beijing 100875, China
2. Grigoreva AD, Rottman J, Tasimi A. When does "no" mean no? Insights from sex robots. Cognition 2024;244:105687. doi:10.1016/j.cognition.2023.105687. PMID: 38154450.
Abstract
Although sexual assault is widely accepted as morally wrong, not all instances of sexual assault are evaluated in the same way. Here, we ask whether different characteristics of victims affect people's moral evaluations of sexual assault perpetrators, and if so, how. We focus on sex robots (i.e., artificially intelligent humanoid social robots designed for sexual gratification) as victims in the present studies because they serve as a clean canvas onto which we can paint different human-like attributes to probe people's moral intuitions regarding sensitive topics. Across four pre-registered experiments conducted with American adults on Prolific (N = 2104), we asked people to evaluate the wrongness of sexual assault against AI-powered robots. People's moral judgments were influenced by the victim's mental capacities (Studies 1 & 2), the victim's interpersonal function (Study 3), the victim's ontological type (Study 4), and the transactional context of the human-robot relationship (Study 4). Overall, by investigating moral reasoning about transgressions against AI robots, we were able to gain unique insights into how people's moral judgments about sexual transgressions can be influenced by victim attributes.
Affiliation(s)
- Joshua Rottman: Department of Psychology, Franklin & Marshall College, P.O. Box 3003, Lancaster, PA 17604, USA
- Arber Tasimi: Department of Psychology, Emory University, 36 Eagle Row, Atlanta, GA 30322, USA
3. Kappas A, Gratch J. These Aren't the Droids You Are Looking For: Promises and Challenges for the Intersection of Affective Science and Robotics/AI. Affect Sci 2023;4:580-585. doi:10.1007/s42761-023-00211-3. PMID: 37744970; PMCID: PMC10514249.
Abstract
AI research focused on interactions with humans, particularly in the form of robots or virtual agents, has expanded over the last two decades to include concepts related to affective processes. Affective computing is an emerging field that deals with issues such as how diagnosing users' affective states can improve such interactions, also with a view to demonstrating affective behavior towards the user. This type of research is often based on two beliefs: (1) artificial emotional intelligence will improve human-computer interaction (or, more specifically, human-robot interaction), and (2) we understand the role of affective behavior in human interaction well enough to tell artificial systems what to do. Within affective science, however, the focus of research is often to test a particular assumption, such as "smiles affect liking." Such a focus does not provide the information necessary to synthesize affective behavior in long, dynamic, real-time interactions. In consequence, theories do not play a large role in engineers' development of artificial affective systems; instead, self-learning systems develop their behavior from large corpora of recorded interactions. The status quo is characterized by measurement issues, theoretical lacunae regarding the prevalence and functions of affective behavior in interaction, and underpowered studies that cannot provide a solid empirical foundation for further theoretical development. This contribution highlights some of these challenges and points toward next steps to create a rapprochement between engineers and affective scientists, with a view to improving both theory and applications.
Affiliation(s)
- Arvid Kappas: Constructor University, Campus Ring 1, 28759 Bremen, Germany
- Jonathan Gratch: Institute for Creative Technologies, University of Southern California, Los Angeles, CA, USA
4. [Artificial intelligence and ethics in healthcare: balancing act or symbiosis?]. Bundesgesundheitsblatt Gesundheitsforschung Gesundheitsschutz 2023;66:176-183. doi:10.1007/s00103-022-03653-5. PMID: 36650296; PMCID: PMC9892090.
Abstract
Artificial intelligence (AI) is becoming increasingly important in healthcare. This development triggers serious concerns that can be summarized in six major "worst-case scenarios". From AI spreading disinformation and propaganda, to a potential new arms race between major powers, to a possible rule of algorithms ("algocracy") based on biased gatekeeper intelligence, the real dangers of uncontrolled AI development are by no means to be underestimated, especially in the health sector. However, fear of AI could cause humanity to miss the opportunity to positively shape the development of our society together with an AI that is friendly to us. Use cases in healthcare play a primary role in this discussion, as both the risks and the opportunities of new AI-based systems become particularly clear here. For example, should older people with dementia be allowed to entrust aspects of their autonomy to AI-based assistance systems so that they may continue to manage other aspects of their daily lives independently? In this paper, we argue that the classic balancing act between the dangers and opportunities of AI in healthcare can be at least partially overcome by taking a long-term ethical approach toward a symbiotic relationship between humans and AI. We exemplify this approach with our I-CARE system, an AI-based recommendation system for the tertiary prevention of dementia, which has been in development as the I-CARE Project at the University of Bremen since 2015 and is still being researched today.
5. Pauketat JV, Anthis JR. Predicting the moral consideration of artificial intelligences. Comput Human Behav 2022. doi:10.1016/j.chb.2022.107372.
6. Improving evaluations of advanced robots by depicting them in harmful situations. Comput Human Behav 2022. doi:10.1016/j.chb.2022.107565.
7. Cheng M, Li X, Xu J. Promoting Healthcare Workers' Adoption Intention of Artificial-Intelligence-Assisted Diagnosis and Treatment: The Chain Mediation of Social Influence and Human-Computer Trust. Int J Environ Res Public Health 2022;19:13311. doi:10.3390/ijerph192013311. PMID: 36293889; PMCID: PMC9602845.
Abstract
Artificial intelligence (AI)-assisted diagnosis and treatment could expand medical scenarios and augment work efficiency and accuracy. However, the factors influencing healthcare workers' adoption intention of AI-assisted diagnosis and treatment are not well understood. We conducted a cross-sectional study of 343 dental healthcare workers from tertiary and secondary hospitals in Anhui Province and analyzed the data using structural equation modeling. The results showed that performance expectancy and effort expectancy were both positively related to healthcare workers' adoption intention. Social influence and human-computer trust each mediated the relationship between expectancy (performance expectancy and effort expectancy) and adoption intention; furthermore, the two together played a chain mediation role between expectancy and adoption intention. Our study provides novel insights into the path mechanism of healthcare workers' adoption intention of AI-assisted diagnosis and treatment.
8. Predicting the change trajectory of employee robot-phobia in the workplace: The role of perceived robot advantageousness and anthropomorphism. Comput Human Behav 2022. doi:10.1016/j.chb.2022.107366.
9. Diana F, Kawahara M, Saccardi I, Hortensius R, Tanaka A, Kret ME. A Cross-Cultural Comparison on Implicit and Explicit Attitudes Towards Artificial Agents. Int J Soc Robot 2022;15:1439-1455. doi:10.1007/s12369-022-00917-7. PMID: 37654700; PMCID: PMC10465401.
Abstract
Historically, there has been a great deal of confusion in the literature regarding cross-cultural differences in attitudes towards artificial agents and preferences for their physical appearance. Previous studies have almost exclusively assessed attitudes using self-report measures (i.e., questionnaires). In the present study, we sought to expand our knowledge of the influence of cultural background on explicit and implicit attitudes towards robots and avatars. Using the Negative Attitudes Towards Robots Scale and the Implicit Association Test in Japanese and Dutch samples, we investigated the effect of culture and robots' body types on explicit and implicit attitudes across two experiments (total n = 669). Partly in line with our hypothesis, we found that Japanese individuals had a more positive explicit attitude towards robots than Dutch individuals, but no evidence of such a difference was found at the implicit level. As predicted, the implicit preference for humans was moderate in both cultural groups, but in contrast to our expectations, neither culture nor robot embodiment influenced this preference. These results suggest that cultural differences in attitudes towards robots appear only at the explicit, not the implicit, level.
Affiliation(s)
- Fabiola Diana, Mariska E. Kret: Comparative Psychology and Affective Neuroscience Lab, Cognitive Psychology Unit, Leiden University, Wassenaarseweg 52, 2333 AK Leiden, The Netherlands; Leiden Institute for Brain and Cognition (LIBC), Leiden University, Albinusdreef 2, 2333 ZA Leiden, The Netherlands
- Misako Kawahara, Akihiro Tanaka: Department of Psychology, Tokyo Woman's Christian University, 2-6-1 Zempukuji, Suginamiku, Tokyo 167-8585, Japan
- Isabella Saccardi: Department of Information and Computing Sciences, Utrecht University, Princeton Square 5, 3584 CC Utrecht, The Netherlands
- Ruud Hortensius: Department of Psychology, Utrecht University, Heidelberglaan 1, 3584 CS Utrecht, The Netherlands
10. Polakow T, Laban G, Teodorescu A, Busemeyer JR, Gordon G. Social robot advisors: effects of robot judgmental fallacies and context. Intel Serv Robot 2022. doi:10.1007/s11370-022-00438-2.
11.
Abstract
The false attribution of autonomy and related concepts to artificial agents that lack the attributed levels of the respective characteristic is problematic in many ways. In this article, we contrast this view with a positive viewpoint that emphasizes the potential role of such false attributions in the context of robotic language acquisition. By adding emotional displays and congruent body behaviors to a child-like humanoid robot's behavioral repertoire, we were able to bring naïve human tutors to engage in so-called intent interpretations. In developmental psychology, intent interpretations can be hypothesized to play a central role in the acquisition of emotion, volition, and similar autonomy-related words. The aforementioned experiments originally targeted the acquisition of linguistic negation. However, participants also produced other affect- and motivation-related words with high frequency and, as a consequence, these entered the robot's active vocabulary. We analyze participants' non-negative emotional and volitional speech and contrast it with their speech in a non-affective baseline scenario. Implications of these findings for robotic language acquisition in particular, and for artificial intelligence and robotics more generally, are also discussed.
12. Savela N, Latikka R, Oksa R, Kortelainen S, Oksanen A. Affective Attitudes Toward Robots at Work: A Population-Wide Four-Wave Survey Study. Int J Soc Robot 2022;14:1379-1395. doi:10.1007/s12369-022-00877-y. PMID: 35464870; PMCID: PMC9012866.
Abstract
Robotization of work is progressing fast globally, and the process has accelerated during the COVID-19 pandemic. Using integrated threat theory as a theoretical framework, this study investigated affective attitudes toward introducing robots at work using data from four time points (n = 830) of a Finnish working-population longitudinal study. We used hybrid multilevel linear regression modelling to study within- and between-participant effects over time. Participants were more positive toward introducing robots at work during the COVID-19 pandemic than before it. Increased cynicism toward one's own work, robot-use self-efficacy, and prior user experience with robots predicted positivity toward introducing robots at work over time. Workers with higher perceived professional efficacy were less positive, and those with higher perceived technology-use productivity, robot-use self-efficacy, and prior user experience with robots were more positive, toward introducing robots at work. In addition, the affective attitudes of men, introverts, critical personalities, workers in science and technology fields, and high-income earners were more positive. Robotization of work life is influenced by workers' psychological well-being and was perceived as a welcome change in the social-distancing reality of the pandemic.
Affiliation(s)
- Nina Savela, Rita Latikka, Reetta Oksa, Atte Oksanen: Tampere University, Kalevantie 4, 33100 Tampere, Finland
13. Savela N, Oksanen A, Pellert M, Garcia D. Emotional reactions to robot colleagues in a role-playing experiment. Int J Inf Manage 2021. doi:10.1016/j.ijinfomgt.2021.102361.
14. Prosocial behavior toward machines. Curr Opin Psychol 2021;43:260-265. doi:10.1016/j.copsyc.2021.08.004. PMID: 34481333.
Abstract
Building on the "computers are social actors" framework, we provide an overview of research demonstrating that humans behave prosocially toward machines. In doing so, we outline that similar motivational and cognitive processes play a role when people act in prosocial ways toward humans and machines: perceiving the machine as somewhat human, applying social categories to the machine, being socially influenced by the machine, and experiencing social emotions toward the machine. We conclude that studying prosocial behavior toward machines is important for facilitating the proper functioning of human-machine interactions. We further argue that machines provide an interesting yet underutilized resource in the study of prosocial behavior because they are both highly controllable and humanlike.
15. Harris J, Anthis JR. The Moral Consideration of Artificial Entities: A Literature Review. Sci Eng Ethics 2021;27:53. doi:10.1007/s11948-021-00331-8. PMID: 34370075; PMCID: PMC8352798.
Abstract
Ethicists, policy-makers, and the general public have questioned whether artificial entities such as robots warrant rights or other forms of moral consideration. There is little synthesis of the research on this topic so far. We identify 294 relevant research or discussion items in our literature review of this topic. There is widespread agreement among scholars that some artificial entities could warrant moral consideration in the future, if not also the present. The reasoning varies, such as concern for the effects on artificial entities and concern for the effects on human society. Beyond the conventional consequentialist, deontological, and virtue ethicist ethical frameworks, some scholars encourage "information ethics" and "social-relational" approaches, though there are opportunities for more in-depth ethical research on the nuances of moral consideration of artificial entities. There is limited relevant empirical data collection, primarily in a few psychological studies on current moral and social attitudes of humans towards robots and other artificial entities. This suggests an important gap for psychological, sociological, economic, and organizational research on how artificial entities will be integrated into society and the factors that will determine how the interests of artificial entities are considered.
Affiliation(s)
- Jacy Reese Anthis: Department of Sociology, University of Chicago, 1126 East 59th Street, Chicago, IL 60637, USA
16. Koike M, Loughnan S. Virtual relationships: Anthropomorphism in the digital age. Soc Personal Psychol Compass 2021. doi:10.1111/spc3.12603.
Affiliation(s)
- Mayu Koike: Department of Psychology, Hiroshima University, Higashihiroshima, Japan
- Steve Loughnan: Department of Psychology, University of Edinburgh, Edinburgh, UK
17. Malinowska JK. Can I Feel Your Pain? The Biological and Socio-Cognitive Factors Shaping People's Empathy with Social Robots. Int J Soc Robot 2021. doi:10.1007/s12369-021-00787-5.
Abstract
This paper discusses the phenomenon of empathy in social robotics and is divided into three main parts. Initially, I analyse whether it is correct to use this concept to study and describe people's reactions to robots, and I present arguments for the position that people actually do empathise with robots. I also consider which circumstances shape human empathy with these entities, proposing that two basic classes of such factors be distinguished: biological and socio-cognitive. In my opinion, one of the most important among them is a sense of group membership with robots, as it modulates empathic responses to members of one's own and other groups. The sense of group membership with robots may be co-shaped by socio-cognitive factors such as one's experience, familiarity with the robot and its history, motivation, accepted ontology, stereotypes, or language. Finally, I argue in favour of formulating a pragmatic and normative framework for manipulating the level of empathy in human-robot interactions.