1
Li Y, Wu B, Huang Y, Luan S. Developing trustworthy artificial intelligence: insights from research on interpersonal, human-automation, and human-AI trust. Front Psychol 2024; 15:1382693. [PMID: 38694439 PMCID: PMC11061529 DOI: 10.3389/fpsyg.2024.1382693]
Abstract
The rapid advancement of artificial intelligence (AI) has impacted society in many aspects. Alongside this progress, concerns such as privacy violation, discriminatory bias, and safety risks have also surfaced, highlighting the need for the development of ethical, responsible, and socially beneficial AI. In response, the concept of trustworthy AI has gained prominence, and several guidelines for developing trustworthy AI have been proposed. Against this background, we demonstrate the significance of psychological research in identifying factors that contribute to the formation of trust in AI. Specifically, we review research findings on interpersonal, human-automation, and human-AI trust from the perspective of a three-dimension framework (i.e., the trustor, the trustee, and their interactive context). The framework synthesizes common factors related to trust formation and maintenance across different trust types. These factors point out the foundational requirements for building trustworthy AI and provide pivotal guidance for its development that also involves communication, education, and training for users. We conclude by discussing how the insights in trust research can help enhance AI's trustworthiness and foster its adoption and application.
Affiliation(s)
- Yugang Li
- CAS Key Laboratory for Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of the Chinese Academy of Sciences, Beijing, China
- Baizhou Wu
- CAS Key Laboratory for Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of the Chinese Academy of Sciences, Beijing, China
- Yuqi Huang
- CAS Key Laboratory for Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of the Chinese Academy of Sciences, Beijing, China
- Shenghua Luan
- CAS Key Laboratory for Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of the Chinese Academy of Sciences, Beijing, China
2
Novel Modularization Design and Intelligent Control of a Multifunctional and Flexible Baby Chair. ACTUATORS 2022. [DOI: 10.3390/act11070186]
Abstract
The design and control of baby chairs have attracted great interest due to the growing children's consumer market. As a human-robot interface, features of baby chairs such as flexibility, comfort, and safety are important factors that should be considered. Therefore, in this paper, to provide competent assistance to parents in caring for their children, we propose a novel design and control scheme for improving children's living goods and easing parents' burden. First, a novel modularization design method is introduced to redesign the shape and structure of the baby chair to meet multifunctional demands. Flexible materials are chosen to adapt to different body shapes for the sake of safety and comfort. Moreover, a Cartesian impedance controller enhanced by a radial basis function neural network (RBFNN) is proposed to achieve safe, smooth, and accurate control of the baby chair with children sitting on it under various uncertain conditions using integrated actuators. Both target posture control and periodic control of the chair are implemented to meet different practical requirements. The feasibility of both the chair design and its control is verified in the MATLAB simulation environment through reference tracking tasks. The results demonstrate that the controller achieves satisfactory performance, keeping the position error within a reasonable range and the manipulation stable and smooth. With the increasing demand for baby chairs in the global children's consumer market, we believe the methodology proposed in this paper will attract further research and industry interest.
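The abstract names a Cartesian impedance controller augmented with an RBF neural network, but the entry gives no implementation details. As a rough, hypothetical illustration of that general technique (not the paper's actual controller; all gains, basis-function placements, and the 1-D task here are assumptions), the idea is an impedance law plus an adaptive RBF term that learns to cancel unmodeled loads such as a child's weight:

```python
import numpy as np

def rbf_features(x, centers, width):
    # Gaussian radial basis functions evaluated at state x
    return np.exp(-np.sum((centers - x) ** 2, axis=1) / (2.0 * width ** 2))

class RBFNNImpedanceController:
    """Cartesian impedance control with an RBF-network term that adapts
    online to compensate unmodeled loads (illustrative sketch only)."""

    def __init__(self, stiffness, damping, centers, width=1.0, lr=0.02):
        self.K = np.asarray(stiffness, dtype=float)   # Cartesian stiffness
        self.D = np.asarray(damping, dtype=float)     # Cartesian damping
        self.centers = np.asarray(centers, dtype=float)
        self.width = width
        self.lr = lr                                  # adaptation rate
        self.W = np.zeros((len(self.centers), self.K.size))  # RBF weights

    def control(self, x, xd, x_ref, xd_ref):
        err = x_ref - x                               # position error
        derr = xd_ref - xd                            # velocity error
        phi = rbf_features(x, self.centers, self.width)
        u = self.K * err + self.D * derr + phi @ self.W  # impedance + NN term
        self.W += self.lr * np.outer(phi, err)        # error-driven weight update
        return u
```

In a simple 1-D point-mass simulation with a constant unmodeled load, the adaptive term gradually absorbs the steady-state error that a pure spring-damper law would leave behind, which mirrors the paper's stated goal of accurate tracking under uncertain loading.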
3
Claudy MC, Aquino K, Graso M. Artificial Intelligence Can't Be Charmed: The Effects of Impartiality on Laypeople's Algorithmic Preferences. Front Psychol 2022; 13:898027. [PMID: 35846643 PMCID: PMC9277554 DOI: 10.3389/fpsyg.2022.898027]
Abstract
Over the coming years, AI could increasingly replace humans for making complex decisions because of the promise it holds for standardizing and debiasing decision-making procedures. Despite intense debates regarding algorithmic fairness, little research has examined how laypeople react when resource-allocation decisions are turned over to AI. We address this question by examining the role of perceived impartiality as a factor that can influence the acceptance of AI as a replacement for human decision-makers. We posit that laypeople attribute greater impartiality to AI than human decision-makers. Our investigation shows that people value impartiality in decision procedures that concern the allocation of scarce resources and that people perceive AI as more capable of impartiality than humans. Yet, paradoxically, laypeople prefer human decision-makers in allocation decisions. This preference reverses when potential human biases are made salient. The findings highlight the importance of impartiality in AI and thus hold implications for the design of policy measures.
Affiliation(s)
- Karl Aquino
- Sauder School of Business, University of British Columbia, Vancouver, BC, Canada
- Maja Graso
- Department of Management, University of Otago, Dunedin, New Zealand
4
Gratch J, Fast NJ. The power to harm: AI assistants pave the way to unethical behavior. Curr Opin Psychol 2022; 47:101382. [PMID: 35830764 DOI: 10.1016/j.copsyc.2022.101382]
Abstract
Advances in artificial intelligence (AI) enable new ways of exercising and experiencing power by automating interpersonal tasks such as interviewing and hiring workers, managing and evaluating work, setting compensation, and negotiating deals. As these techniques become more sophisticated, they increasingly support personalization where users can "tell" their AI assistants not only what to do, but how to do it: in effect, dictating the ethical values that govern the assistant's behavior. Importantly, these new forms of power could bypass existing social and regulatory checks on unethical behavior by introducing a new agent into the equation. Organization research suggests that acting through human agents (i.e., the problem of indirect agency) can undermine ethical forecasting such that actors believe they are acting ethically, yet a) show less benevolence for the recipients of their power, b) receive less blame for ethical lapses, and c) anticipate less retribution for unethical behavior. We review a series of studies illustrating how, across a wide range of social tasks, people may behave less ethically and be more willing to deceive when acting through AI agents. We conclude by examining boundary conditions and discussing potential directions for future research.
5
Awad E, Levine S, Anderson M, Anderson SL, Conitzer V, Crockett MJ, Everett JAC, Evgeniou T, Gopnik A, Jamison JC, Kim TW, Liao SM, Meyer MN, Mikhail J, Opoku-Agyemang K, Borg JS, Schroeder J, Sinnott-Armstrong W, Slavkovik M, Tenenbaum JB. Computational ethics. Trends Cogn Sci 2022; 26:388-405. [PMID: 35365430 DOI: 10.1016/j.tics.2022.02.009]
Abstract
Technological advances are enabling roles for machines that present novel ethical challenges. The study of 'AI ethics' has emerged to confront these challenges, and connects perspectives from philosophy, computer science, law, and economics. Less represented in these interdisciplinary efforts is the perspective of cognitive science. We propose a framework - computational ethics - that specifies how the ethical challenges of AI can be partially addressed by incorporating the study of human moral decision-making. The driver of this framework is a computational version of reflective equilibrium (RE), an approach that seeks coherence between considered judgments and governing principles. The framework has two goals: (i) to inform the engineering of ethical AI systems, and (ii) to characterize human moral judgment and decision-making in computational terms. Working jointly towards these two goals will create the opportunity to integrate diverse research questions, bring together multiple academic communities, uncover new interdisciplinary research topics, and shed light on centuries-old philosophical questions.
Affiliation(s)
- Edmond Awad
- Department of Economics, University of Exeter, Exeter, UK; Institute for Data Science and AI, University of Exeter, Exeter, UK; Center for Humans and Machines, Max-Planck Institute for Human Development, Berlin, Germany
- Sydney Levine
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology (MIT), Cambridge, MA, USA; Department of Psychology, Harvard University, Cambridge, MA, USA
- Michael Anderson
- Department of Computer Science, University of Hartford, West Hartford, CT, USA
- Vincent Conitzer
- Department of Computer Science, Duke University, Durham, NC, USA; Department of Economics, Duke University, Durham, NC, USA; Department of Philosophy, Duke University, Durham, NC, USA; Institute for Ethics in AI, University of Oxford, Oxford, UK
- M J Crockett
- Department of Psychology, Yale University, New Haven, CT, USA
- Alison Gopnik
- Department of Psychology, University of California, Berkeley, CA, USA
- Julian C Jamison
- Department of Economics, University of Exeter, Exeter, UK; Global Priorities Institute, Oxford University, Oxford, UK
- Tae Wan Kim
- Ethics Group, Tepper School of Business, Carnegie Mellon University, Pittsburgh, PA, USA
- S Matthew Liao
- Center for Bioethics, New York University, New York, NY, USA
- Michelle N Meyer
- Center for Translational Bioethics and Health Care Policy, Geisinger Health System, Danville, PA, USA; Steele Institute for Health Innovation, Geisinger Health System, Danville, PA, USA; Geisinger Commonwealth School of Medicine, Scranton, PA, USA
- John Mikhail
- Georgetown University Law Center, Washington, DC, USA
- Kweku Opoku-Agyemang
- International Growth Centre, London School of Economics, London, UK; Machine Learning X Doing, Toronto, ON, Canada; Development Economics X, Toronto, ON, Canada
- Jana Schaich Borg
- Social Science Research Institute, Duke University, Durham, NC, USA; Duke Institute for Brain Sciences, Duke University, Durham, NC, USA
- Juliana Schroeder
- Haas School of Business, University of California, Berkeley, CA, USA
- Walter Sinnott-Armstrong
- Department of Philosophy, Duke University, Durham, NC, USA; Duke Institute for Brain Sciences, Duke University, Durham, NC, USA; Kenan Institute for Ethics, Duke University, Durham, NC, USA
- Marija Slavkovik
- Department of Information Science and Media Studies, University of Bergen, Bergen, Norway
- Josh B Tenenbaum
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology (MIT), Cambridge, MA, USA; Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA; Center for Brains, Minds, and Machines, MIT, Cambridge, MA, USA
7
Bag S, Srivastava G, Bashir MMA, Kumari S, Giannakis M, Chowdhury AH. Journey of customers in this digital era: Understanding the role of artificial intelligence technologies in user engagement and conversion. BENCHMARKING-AN INTERNATIONAL JOURNAL 2021. [DOI: 10.1108/bij-07-2021-0415]
Abstract
Purpose
The first research objective is to understand the role of digital [artificial intelligence (AI)] technologies in user engagement and conversion, which has resulted in high online activity and increased online sales in India in recent times. In addition, combined with changes such as social distancing and lockdowns due to the COVID-19 pandemic, digital disruption has largely displaced old ways of communication at both the individual and organizational levels, ultimately resulting in prominent social change. This change is most noticeable in interactions in the virtual world. Therefore, the second research objective is to examine whether a satisfying online shopping experience leads to repurchase intention.
Design/methodology/approach
Using primary data collected from consumers in a developing economy (India), we tested the theoretical model to further extend the theoretical debate in consumer research.
Findings
This study empirically tests and further establishes that deploying AI technologies has a positive relationship with user engagement and conversion. Further, conversion leads to a satisfying user experience. Finally, the relationship between a satisfying user experience and repurchase intention is also found to be significant.
Originality/value
The uniqueness of this study is that it tests a few key relationships related to user engagement during an uncertain period (the COVID-19 pandemic) and examines the underlying mechanism that leads to an increase in online sales.