1
Liu D, Zhang J, Qi J. Exploring customers' reuse intention to robots under different service failures: A mind perception perspective. Acta Psychol (Amst) 2025; 256:105030. [PMID: 40286347] [DOI: 10.1016/j.actpsy.2025.105030]
Abstract
Artificial intelligence-based service robot failures (hereafter referred to as robot service failures) are inevitable in service practice, making the mitigation of their adverse effects a critical concern for service managers. The present paper investigates the unique classification of robot service failures with the help of mind perception theory and a consumer-centered perspective. Moreover, we further examine the impact of robot service failures on consumer behavioral responses (i.e., reuse intention), the mediating role of negative emotions, and the moderating effect of service robot anthropomorphism. Using a mixed-methods approach, Study 1, based on robot service failure reviews from Ctrip and word co-occurrence network analysis, reveals a two-dimensional classification of robot service failures: agential failures and experiential failures. Furthermore, leveraging the same dataset, Study 2 calculates negative emotions in the text and uses consumer evaluations as a proxy for reuse intention. The results indicate that agential failures (compared to experiential failures) exert a more significant negative impact on consumers, and this relationship is mediated by negative emotions. Study 3 employs a behavioral experiment to further validate the findings of Study 2 and additionally reveals that service robot anthropomorphism moderates the relationship between service failures, negative emotions, and reuse intention, leading to more adverse consequences for experiential failures. This paper makes a valuable contribution to the emerging literature on robot service failures by exploring the distinctiveness of robot services. To the best of our knowledge, this is the pioneering empirical study that explores the unique dimensions of robot service failures. Practically, the findings provide actionable insights. Understanding the classification of robot service failures, which differs from human service failures, allows for a deeper comprehension of AI-powered services and offers effective intervention strategies for consumer recovery following service failures.
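Study 1's pipeline (mining failure reviews and clustering co-occurring words into failure dimensions) can be made concrete with a short sketch. The Python snippet below is a hypothetical reconstruction, not the authors' code: the toy reviews, variable names, and the choice of greedy modularity communities are assumptions made purely for illustration.

```python
from itertools import combinations
from collections import Counter
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy stand-ins for the Ctrip robot-failure reviews (the real corpus is not public).
reviews = [
    "robot ignored my request and walked away",
    "robot misunderstood the room number request",
    "voice was cold and the interaction felt impersonal",
    "interaction felt awkward and the voice was robotic",
]

# Count how often two words appear in the same review.
pair_counts = Counter()
for text in reviews:
    tokens = sorted(set(text.split()))
    pair_counts.update(combinations(tokens, 2))

# Build the word co-occurrence network; edge weight = co-occurrence frequency.
G = nx.Graph()
for (w1, w2), n in pair_counts.items():
    G.add_edge(w1, w2, weight=n)

# Communities in the network suggest distinct failure dimensions
# (agential vs. experiential in the paper's terminology).
for i, community in enumerate(greedy_modularity_communities(G, weight="weight"), 1):
    print(f"dimension {i}: {sorted(community)}")
```

In practice one would first remove stop words and restrict edges to frequent pairs; the sketch omits those steps for brevity.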
Affiliation(s)
- Dewen Liu
- School of Management, Nanjing University of Posts and Telecommunications, Nanjing, Jiangsu, China.
- Jieqiong Zhang
- School of Business, Tianjin University of Finance and Economics, Tianjin, China.
- Jiali Qi
- School of Business, Stevens Institute of Technology, Hoboken, NJ, USA.
2
Salatino A, Prével A, Caspar E, Lo Bue S. Influence of AI behavior on human moral decisions, agency, and responsibility. Sci Rep 2025; 15:12329. [PMID: 40210678] [PMCID: PMC11986005] [DOI: 10.1038/s41598-025-95587-6]
Abstract
There is a growing interest in understanding the effects of human-machine interaction on moral decision-making (Moral-DM) and the sense of agency (SoA). Here, we investigated whether the "moral behavior" of an AI may affect both Moral-DM and SoA in a military population, using a task in which cadets played the role of drone operators on a battlefield. Participants had to decide whether or not to initiate an attack based on the presence of enemies and the risk of collateral damage. By combining three different types of trials (Moral vs. two No-Morals) in three blocks with three types of intelligent system support (No-AI support vs. Aggressive-AI vs. Conservative-AI), we showed that participants' decisions in the morally challenging situations were influenced by the inputs provided by the autonomous system. Furthermore, by measuring implicit and explicit agency, we found a significant increase in SoA at the implicit level in the morally challenging situations, and a decrease in explicit responsibility during the interaction with both AIs. These results suggest that AI behavior influences human moral decision-making and alters the sense of agency and responsibility in ethical scenarios. These findings have implications for the design of AI-assisted decision-making processes in moral contexts.
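The 3 (trial type) × 3 (AI support) within-subject structure lends itself to a repeated-measures analysis. The sketch below, which simulates data rather than using the study's, shows how such a design could be analysed with statsmodels' AnovaRM; the column names, effect sizes, and sample size are invented and do not reflect the authors' pipeline.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)

# Simulated attack-decision rates for a 3 x 3 within-subject design
# (trial type x AI support); all values here are invented.
rows = [
    {"subject": s, "trial": t, "support": a,
     "decision_rate": rng.normal(0.5 + 0.1 * (a == "aggressive" and t == "moral"), 0.05)}
    for s in range(30)
    for t in ("moral", "no_moral_1", "no_moral_2")
    for a in ("none", "aggressive", "conservative")
]
df = pd.DataFrame(rows)

# Repeated-measures ANOVA with both factors within-subject.
print(AnovaRM(df, depvar="decision_rate", subject="subject",
              within=["trial", "support"]).fit())
```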
Affiliation(s)
- Adriana Salatino
- Department of Life Sciences, Royal Military Academy, Brussels, Belgium.
- Arthur Prével
- University of Lille, CNRS, UMR 9193 - SCALab - Sciences Cognitives et Sciences Affectives, Lille, France
- Emilie Caspar
- The Moral & Social Brain Lab, Department of Experimental Psychology, Ghent University, Ghent, Belgium
- Salvatore Lo Bue
- Department of Life Sciences, Royal Military Academy, Brussels, Belgium
3
Liu Y, Rau PLP. Do you feel betrayed? Exploring the impact of workplace-induced loneliness on interactions with varied social structures. Work 2025:10519815241298526. [PMID: 39973735] [DOI: 10.1177/10519815241298526]
Abstract
BACKGROUND Workplace loneliness is an escalating concern, affecting employee well-being and productivity. Understanding its impact on social interactions and decision-making within professional settings is crucial for developing effective interventions. OBJECTIVE This study aims to explore how workplace-induced loneliness influences individuals' interactions with social groups, individuals, and computer programs, and to assess the behavioral, cognitive, and emotional outcomes of these interactions. To explain these observed phenomena, the Workplace Loneliness-Driven Social Response (WL-SR) model is proposed. METHODS A dark factory decision-making experiment was designed and conducted, where participants underwent loneliness induction before engaging in tasks that required interactions with different social structures. The study measured changes in trust, emotional responses, neural activities, and decision-making processes to evaluate the impact of loneliness. RESULTS The findings indicate that loneliness significantly increases distrust and dishonesty in interactions with social groups, leading to higher dissatisfaction and negative emotional responses. Conversely, interactions with a social individual were marked by increased reliability and more positive attributions, which mitigated feelings of loneliness. The WL-SR model, integrating stress-related fight-or-flight and tend-and-befriend responses, elucidates these outcomes. CONCLUSIONS This study reveals how workplace loneliness affects trust and social interactions in professional settings. It highlights the negative impact on group interactions and the potential for individual interactions to reduce loneliness. The findings contribute to the understanding of how human psychology interacts with digital communication in the workplace, emphasizing the role of computers in mediating responses to loneliness.
Affiliation(s)
- Yankuan Liu
- Department of Industrial Engineering, Tsinghua University, Beijing, China
4
Jacobs OL, Pazhoohi F, Kingstone A. Large language models have divergent effects on self-perceptions of mind and the attributes considered uniquely human. Conscious Cogn 2024; 124:103733. [PMID: 39116598] [DOI: 10.1016/j.concog.2024.103733]
Abstract
The rise of powerful Large Language Models (LLMs) provides a compelling opportunity to investigate the consequences of anthropomorphism, particularly how exposure to them may influence the way individuals view themselves (self-perception) and other people (other-perception). Using a mind perception framework, we examined attributions of agency (the ability to do) and experience (the ability to feel). Participants evaluated their agentic and experiential capabilities, and the extent to which these features are uniquely human, before and after exposure to LLM responses. Post-exposure, participants increased evaluations of their agentic and experiential qualities while decreasing their perception that agency and experience are uniquely human. These results indicate that anthropomorphizing LLMs impacts attributions of mind for humans in fundamentally divergent ways: enhancing the perception of one's own mind while reducing its uniqueness for others. These results open up a range of future questions regarding how anthropomorphism can affect mind perception toward humans.
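The core pre/post comparison reduces to paired tests on the same participants' ratings. Below is a minimal sketch with simulated ratings; the sample size and effect size are assumptions, not the paper's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 40  # hypothetical sample size

# Simulated 1-7 self-ratings of agency before and after reading LLM responses.
pre = rng.normal(4.5, 1.0, n)
post = pre + rng.normal(0.4, 0.8, n)  # assumed post-exposure increase

# Paired t-test: did self-attributed agency rise after exposure?
t, p = stats.ttest_rel(post, pre)
print(f"t({n - 1}) = {t:.2f}, p = {p:.4f}, mean diff = {(post - pre).mean():.2f}")
```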
Affiliation(s)
- Oliver L Jacobs
- Department of Psychology, University of British Columbia, Canada.
- Farid Pazhoohi
- School of Psychology, University of Plymouth, United Kingdom
- Alan Kingstone
- Department of Psychology, University of British Columbia, Canada
5
Oliveira M, Brands J, Mashudi J, Liefooghe B, Hortensius R. Perceptions of artificial intelligence system's aptitude to judge morality and competence amidst the rise of Chatbots. Cogn Res Princ Implic 2024; 9:47. [PMID: 39019988] [PMCID: PMC11255178] [DOI: 10.1186/s41235-024-00573-7]
Abstract
This paper examines how humans judge the capabilities of artificial intelligence (AI) to evaluate human attributes, specifically focusing on two key dimensions of human social evaluation: morality and competence. Furthermore, it investigates the impact of exposure to advanced Large Language Models on these perceptions. In three studies (combined N = 200), we tested the hypothesis that people will find it less plausible that AI is capable of judging the morality conveyed by a behavior compared to judging its competence. Participants estimated the plausibility of AI origin for a set of written impressions of positive and negative behaviors related to morality and competence. Studies 1 and 3 supported our hypothesis that people would be more inclined to attribute AI origin to competence-related impressions compared to morality-related ones. In Study 2, we found this effect only for impressions of positive behaviors. Additional exploratory analyses clarified that the differentiation between the AI origin of competence and morality judgments persisted throughout the first half year after the public launch of a popular AI chatbot (i.e., ChatGPT) and could not be explained by participants' general attitudes toward AI, or the actual source of the impressions (i.e., AI or human). These findings suggest an enduring belief that AI is less adept at assessing the morality than the competence of human behavior, even as AI capabilities continued to advance.
Affiliation(s)
- Manuel Oliveira
- Department of Industrial Engineering and Innovation Sciences, Eindhoven University of Technology, Eindhoven, The Netherlands
- Justus Brands
- Department of Psychology, Utrecht University, Utrecht, The Netherlands
- Judith Mashudi
- Department of Psychology, Utrecht University, Utrecht, The Netherlands
- Baptist Liefooghe
- Department of Psychology, Utrecht University, Utrecht, The Netherlands
- Ruud Hortensius
- Department of Psychology, Utrecht University, Utrecht, The Netherlands.
6
Shields JD, Howells R, Lamont G, Leilei Y, Madin A, Reimann CE, Rezaei H, Reuillon T, Smith B, Thomson C, Zheng Y, Ziegler RE. AiZynth impact on medicinal chemistry practice at AstraZeneca. RSC Med Chem 2024; 15:1085-1095. [PMID: 38665822] [PMCID: PMC11042116] [DOI: 10.1039/d3md00651d]
Abstract
AstraZeneca chemists have been using the AI retrosynthesis tool AiZynth for three years. In this article, we present seven examples of how medicinal chemists using AiZynth positively impacted their drug discovery programmes. These programmes run the gamut from early-stage hit confirmation to late-stage route optimisation efforts. We also discuss the different use cases for which AI retrosynthesis tools are best suited.
Affiliation(s)
- Jason D Shields
- Early Oncology R&D, AstraZeneca 35 Gatehouse Drive Waltham MA 02451 USA
- Rachel Howells
- Early Oncology R&D, AstraZeneca 1 Francis Crick Avenue Cambridge CB2 0AA UK
- Gillian Lamont
- Early Oncology R&D, AstraZeneca 1 Francis Crick Avenue Cambridge CB2 0AA UK
- Yin Leilei
- Pharmaron Beijing Co., Ltd. 6 Taihe Road BDA Beijing 100176 P.R. China
- Andrew Madin
- Discovery Sciences, AstraZeneca 1 Francis Crick Avenue Cambridge CB2 0AA UK
- Hadi Rezaei
- Early Oncology R&D, AstraZeneca 35 Gatehouse Drive Waltham MA 02451 USA
- Tristan Reuillon
- Respiratory & Immunology, BioPharmaceuticals R&D, AstraZeneca Pepparedsleden 1 43183 Mölndal Sweden
- Bryony Smith
- Early Oncology R&D, AstraZeneca 1 Francis Crick Avenue Cambridge CB2 0AA UK
- Clare Thomson
- Early Oncology R&D, AstraZeneca 1 Francis Crick Avenue Cambridge CB2 0AA UK
- Yuting Zheng
- Pharmaron Beijing Co., Ltd. 6 Taihe Road BDA Beijing 100176 P.R. China
- Robert E Ziegler
- Early Oncology R&D, AstraZeneca 35 Gatehouse Drive Waltham MA 02451 USA
7
Krpan D, Booth JE, Damien A. The positive-negative-competence (PNC) model of psychological responses to representations of robots. Nat Hum Behav 2023; 7:1933-1954. [PMID: 37783891] [PMCID: PMC10663151] [DOI: 10.1038/s41562-023-01705-7]
Abstract
Robots are becoming an increasingly prominent part of society. Despite their growing importance, there exists no overarching model that synthesizes people's psychological reactions to robots and identifies what factors shape them. To address this, we created a taxonomy of affective, cognitive and behavioural processes in response to a comprehensive stimulus sample depicting robots from 28 domains of human activity (for example, education, hospitality and industry) and examined its individual difference predictors. Across seven studies that tested 9,274 UK and US participants recruited via online panels, we used a data-driven approach combining qualitative and quantitative techniques to develop the positive-negative-competence model, which categorizes all psychological processes in response to the stimulus sample into three dimensions: positive, negative and competence-related. We also established the main individual difference predictors of these dimensions and examined the mechanisms for each predictor. Overall, this research provides an in-depth understanding of psychological functioning regarding representations of robots.
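The data-driven reduction of many rated reactions to three dimensions belongs to the family of dimensionality-reduction techniques. The sketch below uses PCA on simulated ratings purely to illustrate that family; it is not the authors' actual mixed qualitative-quantitative procedure, and the item set, loadings, and sample are invented.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

# Simulated ratings: 500 participants x 9 reaction items
# (e.g., joy, fear, distrust, perceived skill, usefulness, ...).
latent = rng.normal(size=(500, 3))              # three underlying dimensions
loadings = rng.normal(size=(3, 9))
ratings = latent @ loadings + rng.normal(scale=0.5, size=(500, 9))

# Standardize, then extract three components, echoing the
# positive / negative / competence structure reported in the paper.
X = StandardScaler().fit_transform(ratings)
pca = PCA(n_components=3).fit(X)
print("explained variance ratios:", pca.explained_variance_ratio_.round(2))
```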
Affiliation(s)
- Dario Krpan
- Department of Psychological and Behavioural Science, London School of Economics and Political Science, London, UK.
- Jonathan E Booth
- Department of Management, London School of Economics and Political Science, London, UK
- Andreea Damien
- Department of Psychological and Behavioural Science, London School of Economics and Political Science, London, UK
8
Arora A, Gupta S, Devi C, Walia N. Customer experiences in the era of artificial intelligence (AI) in context to FinTech: a fuzzy AHP approach. Benchmarking: An International Journal 2023. [DOI: 10.1108/bij-10-2021-0621]
Abstract
Purpose
The financial technology (FinTech) era has brought a revolutionary change in the financial sector's customer experiences at the national and global levels. The importance of artificial intelligence (AI) in the context of FinTech services for enriching customer experiences has become a new norm in this era of technological advancement, so it is crucial to understand the customer's perspective. The current research ranks the factors and sub-factors influencing customers' perceptions of AI-based FinTech services.
Design/methodology/approach
The sample for this study comprised 970 respondents from four Indian cities: Mumbai, Delhi, Kolkata and Chennai. The fuzzy-AHP technique was used to identify the primary factors and sub-factors influencing customers' experiences with AI-enabled finance services. The factors considered in the study were service quality, trust commitment, personalization, perceived convenience, relationship commitment, perceived sacrifice, subjective norms, perceived usefulness, attitude and vulnerability. The current research is both empirical and descriptive.
Findings
The study's three top factors are service quality, perceived usefulness and perceived convenience, all of which have a significant impact on customers' experience with AI-enabled FinTech services. Among the sub-criteria, the three primary ones for customers' experience of FinTech services are: "Using FinTech would increase my effectiveness in managing a portfolio (A2)", "My peer groups and friends have an impact on using FinTech services (SN3)" and "Using FinTech would increase my efficacy in administering portfolio (PU2)".
Research limitations/implications
The current study is limited to four Indian cities, with 10 factors to understand customers' preferences in FinTech. Further research can focus on other dimensions like perceived ease of use, familiarity, etc. Future studies can take a broader view of different geographical locations and consider new technologies to understand customer perceptions better.
Practical implications
The study's findings will significantly assist businesses in determining the primary aspects influencing customers' experiences with AI-enabled financial services. As a result, they will develop strategies and policies to entice clients to use AI-powered FinTech services.
Originality/value
Existing AI research has investigated several vital topics in the context of FinTech services. The current study, in contrast, ranked the criteria for understanding customer experiences. The research will substantially assist marketers, business houses, academicians and practitioners in understanding the essential facets influencing customer experience and contributes significantly to the literature.
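The fuzzy-AHP ranking step can be made concrete with a small sketch. The snippet below implements Buckley's geometric-mean variant for three criteria using triangular fuzzy numbers; the pairwise-comparison values are invented for illustration and are not the paper's survey data.

```python
import numpy as np

# Triangular fuzzy pairwise comparisons (l, m, u) for three criteria,
# e.g. service quality, perceived usefulness, perceived convenience.
# These matrix values are illustrative only.
A = np.array([
    [[1, 1, 1],       [2, 3, 4],       [1, 2, 3]],
    [[1/4, 1/3, 1/2], [1, 1, 1],       [1, 2, 3]],
    [[1/3, 1/2, 1],   [1/3, 1/2, 1],   [1, 1, 1]],
])

n = A.shape[0]
# Buckley's method: fuzzy geometric mean of each row (component-wise).
r = np.prod(A, axis=1) ** (1 / n)          # shape (n, 3): (l, m, u) per criterion
total = r.sum(axis=0)                       # (sum_l, sum_m, sum_u)
# Fuzzy weight = r_i * (1/sum_u, 1/sum_m, 1/sum_l); note the reversed order.
w_fuzzy = r * (1 / total[::-1])
# Defuzzify (centre of gravity) and normalize to get crisp weights.
w = w_fuzzy.mean(axis=1)
w /= w.sum()
print("criterion weights:", w.round(3))
```

Crisp weights of this kind are what allow the criteria and sub-criteria to be ranked against one another.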
9
Kautish P, Khare A. Investigating the moderating role of AI-enabled services on flow and awe experience. International Journal of Information Management 2022. [DOI: 10.1016/j.ijinfomgt.2022.102519]
10
Shape of the Uncanny Valley and Emotional Attitudes Toward Robots Assessed by an Analysis of YouTube Comments. Int J Soc Robot 2022. [DOI: 10.1007/s12369-022-00905-x]
Abstract
The uncanny valley hypothesis (UVH) suggests that almost, but not fully, humanlike artificial characters elicit a feeling of eeriness or discomfort in observers. This study used Natural Language Processing of YouTube comments to provide ecologically-valid, non-laboratory results about people's emotional reactions toward robots. It contains analyses of 224,544 comments from 1515 videos showing robots from a wide humanlikeness spectrum. The humanlikeness scores were acquired from the Anthropomorphic roBOT database. The analysis showed that people use words related to eeriness to describe very humanlike robots. Humanlikeness was linearly related to both general sentiment and perceptions of eeriness: more humanlike robots elicit more negative emotions. One of the subscales of humanlikeness, Facial Features, showed a UVH-like relationship with both sentiment and eeriness. The exploratory analysis demonstrated that the most suitable words for measuring the self-reported uncanny valley effect are 'scary' and 'creepy'. In contrast to theoretical expectations, the results showed that humanlikeness was not related to either pleasantness or attractiveness. Finally, it was also found that the size of robots influences sentiment toward the robots. According to the analysis, the reason behind this is the perception of smaller robots as more playable (as toys), although the prediction that bigger robots would be perceived as more threatening was not supported.
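The UVH question, whether sentiment dips rather than falls linearly as humanlikeness rises, can be framed as a comparison between a linear and a higher-order polynomial fit. Below is a minimal sketch with simulated scores; the shape, noise level, and sample size are assumptions, not the paper's corpus.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated stand-ins: humanlikeness scores (0-100, as in the ABOT database)
# and mean comment sentiment per robot; real values would come from the corpus.
humanlikeness = rng.uniform(0, 100, 300)
x = humanlikeness / 100
# Assumed uncanny-valley-like dip near the humanlike end, plus noise.
sentiment = 0.4 - 0.2 * x - 0.5 * np.sin(2 * np.pi * x) * (x > 0.5) \
    + rng.normal(0, 0.1, 300)

# Compare linear vs. cubic fits: a markedly better cubic fit with a local
# dip at high humanlikeness is the UVH-style signature.
for degree in (1, 3):
    coeffs = np.polyfit(humanlikeness, sentiment, degree)
    pred = np.polyval(coeffs, humanlikeness)
    ss_res = np.sum((sentiment - pred) ** 2)
    ss_tot = np.sum((sentiment - sentiment.mean()) ** 2)
    print(f"degree {degree}: R^2 = {1 - ss_res / ss_tot:.3f}")
```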
11
Trust as a second-order construct: Investigating the relationship between consumers and virtual agents. Telematics and Informatics 2022. [DOI: 10.1016/j.tele.2022.101811]
12
Feier T, Gogoll J, Uhl M. Hiding Behind Machines: Artificial Agents May Help to Evade Punishment. Sci Eng Ethics 2022; 28:19. [PMID: 35377086] [PMCID: PMC8979930] [DOI: 10.1007/s11948-022-00372-7]
Abstract
The transfer of tasks with sometimes far-reaching implications to autonomous systems raises a number of ethical questions. In addition to fundamental questions about the moral agency of these systems, behavioral issues arise. We investigate the empirically accessible question of whether the imposition of harm by an agent is systematically judged differently when the agent is artificial and not human. The results of a laboratory experiment suggest that decision-makers can actually avoid punishment more easily by delegating to machines than by delegating to other people. Our results imply that the availability of artificial agents could provide stronger incentives for decision-makers to delegate sensitive decisions.
Affiliation(s)
- Till Feier
- TUM School of Governance, TU Munich, Richard-Wagner-Straße 1, 80333 Munich, Germany
- Jan Gogoll
- Bavarian Institute for Digital Transformation, TU Munich, Gabelsbergerstr. 4, 80333 Munich, Germany
- Matthias Uhl
- Faculty of Computer Science, Technische Hochschule Ingolstadt, Esplanade 10, 85049 Ingolstadt, Germany
14
Budhwar P, Malik A, De Silva MTT, Thevisuthan P. Artificial intelligence – challenges and opportunities for international HRM: a review and research agenda. International Journal of Human Resource Management 2022. [DOI: 10.1080/09585192.2022.2035161]
Affiliation(s)
- Pawan Budhwar
- Aston Business School, Aston University, Birmingham, UK
- Ashish Malik
- UoN Central Coast Business School, University of Newcastle Australia, Ourimbah, NSW, Australia
15
Park J, Woo SE. Who Likes Artificial Intelligence? Personality Predictors of Attitudes toward Artificial Intelligence. J Psychol 2022; 156:68-94. [PMID: 35015615] [DOI: 10.1080/00223980.2021.2012109]
Abstract
We examined how individuals' personality relates to various attitudes toward artificial intelligence (AI). Attitudes were organized into two dimensions of affective components (positive and negative emotions) and two dimensions of cognitive components (sociality and functionality). For personality, we focused on the Big Five personality traits (extraversion, agreeableness, conscientiousness, neuroticism, openness) and personal innovativeness in information technology. Based on a survey of 1,530 South Korean adults, we found that extraversion was related to negative emotions and low functionality. Agreeableness was associated with both positive and negative emotions, and it was positively associated with sociality and functionality. Conscientiousness was negatively related to negative emotions, and it was associated with high functionality, but also with low sociality. Neuroticism was related to negative emotions, but also to high sociality. Openness was positively linked to functionality, but did not predict other attitudes when other proximal predictors were included (e.g. prior use, personal innovativeness). Personal innovativeness in information technology consistently predicted positive attitudes toward AI across all four dimensions. These findings provide mixed support for our hypotheses, and we discuss specific implications for future research and practice.
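Analyses of this kind typically regress each attitude dimension on all personality predictors simultaneously. The sketch below simulates such a regression with statsmodels; the sample size mirrors the reported N = 1,530, but the data, coefficients, and column names are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 1530  # matches the reported sample size; the data are simulated

# Simulated Big Five + innovativeness predictors and one attitude outcome.
X = pd.DataFrame(
    rng.normal(size=(n, 6)),
    columns=["extraversion", "agreeableness", "conscientiousness",
             "neuroticism", "openness", "innovativeness"],
)
negative_emotion = (0.15 * X["extraversion"] + 0.2 * X["neuroticism"]
                    - 0.1 * X["conscientiousness"] + rng.normal(0, 1, n))

# OLS regression of one attitude dimension on all predictors at once,
# mirroring the 'proximal predictors included' analyses.
model = sm.OLS(negative_emotion, sm.add_constant(X)).fit()
print(model.summary().tables[1])
```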
17
Li W, Zhou X, Yang Q. Designing medical artificial intelligence for in- and out-groups. Computers in Human Behavior 2021. [DOI: 10.1016/j.chb.2021.106929]
18
AI-enabled digital identity – inputs for stakeholders and policymakers. Journal of Science and Technology Policy Management 2021. [DOI: 10.1108/jstpm-09-2020-0134]
Abstract
Purpose
This conceptual article’s primary aim is to identify the significant stakeholders of the digital identity system (DIS) and then highlight the impact of artificial intelligence (AI) on each of the identified stakeholders. It also recommends vital points that could be considered by policymakers while developing technology-related policies for effective DIS.
Design/methodology/approach
This article uses stakeholder methodology and design theory (DT) as a primary theoretical lens along with the innovation diffusion theory (IDT) as a sub-theory. This article is based on the analysis of existing literature that mainly comprises academic literature, official reports, white papers and publicly available domain experts’ interviews.
Findings
The study identified six significant stakeholders, i.e. government, citizens, infrastructure providers, identity providers (IdP), judiciary and relying parties (RPs) of the DIS from the secondary data. Also, the role of the IdP becomes insignificant in the context of AI-enabled digital identity systems (AIeDIS). The findings depict that AIeDIS can positively impact the DIS stakeholders by solving a range of problems such as identity theft, unauthorised access and credential misuse, and will also open up new ways to empower all the stakeholders.
Research limitations/implications
The study is based on secondary data and has considered DIS stakeholders from a generic perspective. Incorporating expert opinion and empirically validating the hypothesis could yield more specific and context-aware insights.
Practical implications
The study could facilitate stakeholders to enrich further their understanding and significance of developing sustainable and future-ready DIS by highlighting the impact of AI on the digital identity ecosystem.
Originality/value
To the best of the authors’ knowledge, this article is the first of its kind that has used stakeholder theory, DT and IDT to explain the design and developmental phenomenon of AIeDIS. A list of six significant stakeholders of DIS, i.e. government, citizens, infrastructure providers, IdP, judiciary and RP, is identified through comprehensive literature analysis.
19
Ameen N, Hosany S, Tarhini A. Consumer interaction with cutting-edge technologies: Implications for future research. Computers in Human Behavior 2021. [DOI: 10.1016/j.chb.2021.106761]
21
Li X, Sung Y. Anthropomorphism brings us closer: The mediating role of psychological distance in User–AI assistant interactions. Computers in Human Behavior 2021. [DOI: 10.1016/j.chb.2021.106680]
22
Koteluk O, Wartecki A, Mazurek S, Kołodziejczak I, Mackiewicz A. How Do Machines Learn? Artificial Intelligence as a New Era in Medicine. J Pers Med 2021; 11:jpm11010032. [PMID: 33430240] [PMCID: PMC7825660] [DOI: 10.3390/jpm11010032]
Abstract
With the increasing amount of medical data generated every day, there is a strong need for reliable, automated evaluation tools. With high hopes and expectations, machine learning has the potential to revolutionize many fields of medicine, helping to make faster and more correct decisions and improving current standards of treatment. Today, machines can analyze, learn, communicate, and understand processed data, and they are increasingly used in health care. This review explains different models and the general process of machine learning and of training the algorithms. Furthermore, it summarizes the most useful machine learning applications and tools in different branches of medicine and health care (radiology, pathology, pharmacology, infectious diseases, personalized decision making, and many others). The review also addresses the future prospects and threats of applying artificial intelligence as an advanced, automated medicine tool.
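The "general process" the review describes (train on one part of the data, evaluate on held-out data) can be shown in a few lines. The sketch below uses a public dataset and a random forest purely to illustrate that train/validate loop; it is not any model from the review.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# A classic public medical dataset stands in for "medical data".
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Train on one split, evaluate on held-out data: the core of the
# training/validation process the review describes.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"held-out ROC AUC: {auc:.3f}")
```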
Affiliation(s)
- Oliwia Koteluk
- Faculty of Medical Sciences, Chair of Medical Biotechnology, Poznan University of Medical Sciences, 61-701 Poznan, Poland; (O.K.); (A.W.)
- Adrian Wartecki
- Faculty of Medical Sciences, Chair of Medical Biotechnology, Poznan University of Medical Sciences, 61-701 Poznan, Poland; (O.K.); (A.W.)
- Sylwia Mazurek
- Department of Cancer Immunology, Chair of Medical Biotechnology, Poznan University of Medical Sciences, 61-701 Poznan, Poland;
- Department of Cancer Diagnostics and Immunology, Greater Poland Cancer Centre, 61-866 Poznan, Poland
- Correspondence: Tel.: +48-61-885-06-67
- Iga Kołodziejczak
- Postgraduate School of Molecular Medicine, Medical University of Warsaw, 02-091 Warsaw, Poland;
- Andrzej Mackiewicz
- Department of Cancer Immunology, Chair of Medical Biotechnology, Poznan University of Medical Sciences, 61-701 Poznan, Poland;
- Department of Cancer Diagnostics and Immunology, Greater Poland Cancer Centre, 61-866 Poznan, Poland
| |
23
Shank DB, Bowen M, Burns A, Dew M. Humans are perceived as better, but weaker, than artificial intelligence: A comparison of affective impressions of humans, AIs, and computer systems in roles on teams. Computers in Human Behavior Reports 2021. [DOI: 10.1016/j.chbr.2021.100092]
24
Ameen N, Tarhini A, Reppel A, Anand A. Customer experiences in the age of artificial intelligence. Computers in Human Behavior 2020; 114:106548. [PMID: 32905175] [PMCID: PMC7463275] [DOI: 10.1016/j.chb.2020.106548]
Abstract
Artificial intelligence (AI) is revolutionising the way customers interact with brands, yet there is a lack of empirical research into AI-enabled customer experiences. Hence, this study aims to analyse how the integration of AI in shopping can lead to an improved AI-enabled customer experience. We propose a theoretical model drawing on the trust-commitment theory and the service quality model. An online survey was distributed to customers who have used an AI-enabled service offered by a beauty brand. A total of 434 responses were analysed using partial least squares-structural equation modelling. The findings indicate the significant role of trust and perceived sacrifice as factors mediating the effects of perceived convenience, personalisation and AI-enabled service quality. The findings also reveal the significant effect of relationship commitment on AI-enabled customer experience. This study contributes to the existing literature by revealing the mediating effects of trust and perceived sacrifice and the direct effect of relationship commitment on AI-enabled customer experience. In addition, the study has practical implications for retailers deploying AI in services offered to their customers.
Highlights
- Artificial intelligence (AI) is changing customer experiences.
- The study develops a model drawing on the trust-commitment theory and the service quality model.
- Trust and perceived sacrifice play a significant mediating role.
- Relationship commitment has a significant effect on AI-enabled customer experience.
- Perceived convenience and personalisation play a significant role in AI-enabled experiences.
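The paper tests mediation with PLS-SEM. The sketch below illustrates only the underlying mediation logic (the product-of-coefficients approach) with ordinary least squares, not PLS-SEM itself; the variable names and simulated effects are assumptions, with N matching the reported 434 responses.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 434  # matches the reported sample size; the data are simulated

convenience = rng.normal(size=n)
trust = 0.5 * convenience + rng.normal(scale=0.8, size=n)   # assumed mediator
experience = 0.4 * trust + 0.1 * convenience + rng.normal(scale=0.8, size=n)
df = pd.DataFrame({"convenience": convenience, "trust": trust,
                   "experience": experience})

# Product-of-coefficients mediation: a (X -> M) times b (M -> Y given X).
a = smf.ols("trust ~ convenience", df).fit().params["convenience"]
fit_y = smf.ols("experience ~ trust + convenience", df).fit()
b, direct = fit_y.params["trust"], fit_y.params["convenience"]
print(f"indirect effect a*b = {a * b:.3f}, direct effect = {direct:.3f}")
```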
Affiliation(s)
- Nisreen Ameen
- School of Business and Management, Royal Holloway, University of London, London, United Kingdom
- Ali Tarhini
- Department of Information Systems, Sultan Qaboos University, Muscat, Oman
- Alexander Reppel
- School of Business and Management, Royal Holloway, University of London, London, United Kingdom
- Amitabh Anand
- SKEMA Business School, Université Côte d'Azur, GREDEG, France
25
A Systematic Content Review of Artificial Intelligence and the Internet of Things Applications in Smart Home. Applied Sciences 2020. [DOI: 10.3390/app10093074]
Abstract
This article reviewed state-of-the-art applications of Internet of Things (IoT) technology in homes for making them smart, automated, and digitalized in many respects. The literature presented various applications, systems, or methods and reported the results of using IoT, artificial intelligence (AI), and geographic information systems (GIS) at homes. Because the technology has been advancing and users are experiencing an IoT boom for smart built environment applications, especially smart homes and smart energy systems, it is necessary to identify the gaps and the relations between current methods, and to provide coherent guidance on the whole process of designing smart homes. This article reviewed relevant papers within databases such as Scopus, including journal papers published between 2010 and 2019. These papers were then analyzed in terms of bibliography and content to identify related systems, practices, and contributors. A designed systematic review method was used to identify and select the relevant papers, which were then reviewed for their content by means of coding. The presented systematic critical review focuses on systems developed and technologies used for smart homes. The main question is: "What has been learned from a decade of smart system developments in different fields?". We found that there is a considerable gap in the integration of AI and IoT and in the use of geospatial data in smart home development. There is also a large gap in the literature in terms of integrated systems for energy efficiency and aged care system development. This article should enable researchers and professionals to fully understand those gaps in IoT-based environments and suggests ways to fill them while designing smart homes where users have a higher level of thermal comfort while saving energy and reducing greenhouse gas emissions. The article also raises new challenging questions on how IoT and existing developed systems could be improved and further developed to address other issues of energy saving, which can steer the research direction toward fully smart systems. This would significantly help in designing fully automated assistive systems to improve quality of life and decrease energy consumption.
26
Shank DB, Gott A. People's self-reported encounters of Perceiving Mind in Artificial Intelligence. Data Brief 2019; 25:104220. [PMID: 31367659] [PMCID: PMC6646919] [DOI: 10.1016/j.dib.2019.104220]
Abstract
This article presents the data from two surveys that asked about everyday encounters with artificial intelligence (AI) systems that are perceived to have attributes of mind. In response to specific attribute prompts about an AI, participants qualitatively described a personally-known encounter with an AI. In survey 1 the prompts asked about an AI planning, having memory, controlling resources, or doing something surprising. In survey 2 the prompts asked about an AI experiencing emotion, expressing desires or beliefs, having human-like physical features, or being mistaken for a human. The original responses were culled based on the ratings of multiple coders to eliminate responses that did not adhere to the prompts. This article includes the qualitative responses, coded categories of those qualitative responses, quantitative measures of mind perception, and demographics. For an interpretation of these data related to people's emotions, see "Feeling our Way to Machine Minds: People's Emotions when Perceiving Mind in Artificial Intelligence" (Shank et al., 2019).
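Culling responses "based on the ratings of multiple coders" implies checking inter-coder agreement. Below is a minimal sketch of how such agreement might be quantified with Cohen's kappa; the category labels and codes are hypothetical, not the dataset's.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical category codes assigned by two coders to ten responses.
coder1 = ["plan", "memory", "plan", "surprise", "control",
          "plan", "memory", "surprise", "control", "plan"]
coder2 = ["plan", "memory", "plan", "surprise", "plan",
          "plan", "memory", "surprise", "control", "plan"]

# Cohen's kappa: chance-corrected agreement between the two coders.
kappa = cohen_kappa_score(coder1, coder2)
print(f"kappa = {kappa:.2f}")
```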
Affiliation(s)
- Daniel B Shank
- Department of Psychological Science, Missouri University of Science and Technology, 500 W. 14th Street, Rolla, MO 65409, USA
- Alexander Gott
- Department of Psychological Science, Missouri University of Science and Technology, 500 W. 14th Street, Rolla, MO 65409, USA