1. Bendell R, Williams J, Fiore SM, Jentsch F. Artificial social intelligence in teamwork: how team traits influence human-AI dynamics in complex tasks. Front Robot AI 2025;12:1487883. PMID: 40034799; PMCID: PMC11873349; DOI: 10.3389/frobt.2025.1487883. Open access.
Abstract
This study examines the integration of Artificial Social Intelligence (ASI) into human teams, focusing on how ASI can enhance teamwork processes in complex tasks. Teams of three participants collaborated with ASI advisors designed to exhibit Artificial Theory of Mind (AToM) while engaged in an interdependent task. A profiling model was used to categorize teams based on their taskwork and teamwork potential and to study how these influenced perceptions of team processes and ASI advisors. Results indicated that teams with higher taskwork or teamwork potential had more positive perceptions of their team processes, with those high in both dimensions showing the most favorable views. However, team performance significantly mediated these perceptions, suggesting that objective outcomes strongly influence subjective impressions of teammates. Notably, perceptions of the ASI advisors were not significantly affected by team performance but were positively correlated with higher taskwork and teamwork potential. The study highlights the need for ASI systems to be adaptable and responsive to the specific traits of human teams in order to be perceived as effective teammates.
Affiliation(s)
- Rhyse Bendell
- Cognitive Sciences Laboratory, Institute for Simulation and Training, University of Central Florida, Orlando, FL, United States
- Jessica Williams
- Cognitive Sciences Laboratory, Institute for Simulation and Training, University of Central Florida, Orlando, FL, United States
- Stephen M. Fiore
- Cognitive Sciences Laboratory, Institute for Simulation and Training, University of Central Florida, Orlando, FL, United States
- Department of Philosophy, University of Central Florida, Orlando, FL, United States
- Florian Jentsch
- Team Performance Laboratory, Institute for Simulation and Training, University of Central Florida, Orlando, FL, United States
- Department of Psychology, University of Central Florida, Orlando, FL, United States
2. Tamura K, Morita S. Analysing public goods games using reinforcement learning: effect of increasing group size on cooperation. R Soc Open Sci 2024;11:241195. PMID: 39665088; PMCID: PMC11631413; DOI: 10.1098/rsos.241195.
Abstract
Electricity competition, restrictions on carbon dioxide (CO2) emissions and arms races between nations are examples of social dilemmas within human society. In the presence of social dilemmas, rational choice in game theory leads to the avoidance of cooperative behaviour owing to its cost. However, in experiments using public goods games that simulate social dilemmas, humans have often exhibited cooperative behaviour that deviates from individual rationality. Despite extensive research, the alignment between human cooperative behaviour and game theory predictions remains inconsistent. This study proposes an alternative approach to this problem. We used Q-learning, a form of artificial intelligence that mimics the decision-making processes of humans, who do not possess the rationality assumed in game theory. This study explores the potential for cooperation by varying the number of participants in public goods games using deep Q-learning. The simulations demonstrate that agents with Q-learning can acquire cooperative behaviour similar to that of humans. Moreover, we found that cooperation is more likely to occur as the group size increases. These results support and reinforce existing experiments involving humans. In addition, they have potential applications for creating cooperation without sanctions.
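The setup this abstract describes can be sketched with a minimal tabular stand-in for the paper's deep Q-learning agents. Everything below is an illustrative assumption rather than the paper's actual model: the payoff multiplier r, learning rate, discount, and ε-greedy exploration values are placeholders, and the agents are stateless.

```python
import random

def simulate(n_agents=10, r=3.0, endowment=1.0, rounds=2000,
             alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning agents repeatedly playing a linear public goods game.

    Each round, every agent either keeps its endowment (action 0) or
    contributes it to a common pot (action 1); the pot is multiplied by r
    and split equally among all n_agents.
    """
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_agents)]  # per-agent Q-values for the two actions
    for _ in range(rounds):
        # epsilon-greedy action selection
        acts = [rng.randrange(2) if rng.random() < eps else int(q[i][1] > q[i][0])
                for i in range(n_agents)]
        share = r * endowment * sum(acts) / n_agents  # equal split of the pot
        for i, a in enumerate(acts):
            reward = share - endowment * a  # this round's payoff
            # Stateless Q-update: the "next state" is the same repeated game.
            q[i][a] += alpha * (reward + gamma * max(q[i]) - q[i][a])
    # Fraction of agents whose greedy policy is to contribute.
    return sum(q[i][1] > q[i][0] for i in range(n_agents)) / n_agents

cooperation_rate = simulate(n_agents=10)
```

Sweeping n_agents over repeated runs is the kind of experiment the abstract reports; this sketch makes no claim about reproducing the group-size effect the paper finds.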
Affiliation(s)
- Kazuhiro Tamura
- Department of Environment and Energy Systems, Graduate School of Science and Technology, Shizuoka University, Hamamatsu 432-8561, Japan
- Satoru Morita
- Department of Mathematical and Systems Engineering, Shizuoka University, Hamamatsu 432-8561, Japan
3. Badawy W, Zinhom H, Shaban M. Navigating ethical considerations in the use of artificial intelligence for patient care: A systematic review. Int Nurs Rev 2024. PMID: 39545614; DOI: 10.1111/inr.13059.
Abstract
AIM To explore the ethical considerations and challenges faced by nursing professionals in integrating artificial intelligence (AI) into patient care. BACKGROUND AI's integration into nursing practice enhances clinical decision-making and operational efficiency but raises ethical concerns regarding privacy, accountability, informed consent, and the preservation of human-centered care. METHODS A systematic review was conducted, following Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Thirteen studies were selected from databases including PubMed, Embase, IEEE Xplore, PsycINFO, and CINAHL. Thematic analysis identified key ethical themes related to AI use in nursing. RESULTS The review highlighted critical ethical challenges, such as data privacy and security, accountability for AI-driven decisions, transparency in AI decision-making, and maintaining the human touch in care. The findings underscore the importance of stakeholder engagement, continuous education for nurses, and robust governance frameworks to guide ethical AI implementation in nursing. DISCUSSION The results align with existing literature on AI's ethical complexities in healthcare. Addressing these challenges requires strengthening nursing competencies in AI, advocating for patient-centered AI design, and ensuring that AI integration upholds ethical standards. CONCLUSION Although AI offers significant benefits for nursing practice, it also introduces ethical challenges that must be carefully managed. Enhancing nursing education, promoting stakeholder engagement, and developing comprehensive policies are essential for ethically integrating AI into nursing. IMPLICATIONS FOR NURSING AI can improve clinical decision-making and efficiency, but nurses must actively preserve humanistic care aspects through ongoing education and involvement in AI governance. IMPLICATIONS FOR HEALTH POLICY Establish ethical frameworks and data protection policies tailored to AI in nursing. Support continuous professional development and allocate resources for the ethical integration of AI in healthcare.
Affiliation(s)
- Walaa Badawy
- Department of Psychology, College of Education, King Khaled University, Abha, Saudi Arabia
- Haithm Zinhom
- Mohammed Bin Zayed University for Humanities, Abu Dhabi, UAE
- Mostafa Shaban
- Community Health Nursing Department, College of Nursing, Jouf University, Sakaka, Saudi Arabia
4. Chen CH, Lee WI. Exploring Nurses' Behavioural Intention to Adopt AI Technology: The Perspectives of Social Influence, Perceived Job Stress and Human-Machine Trust. J Adv Nurs 2024. PMID: 39340769; DOI: 10.1111/jan.16495.
Abstract
AIM This study examines how social influence, human-machine trust and perceived job stress affect nurses' behavioural intentions towards AI-assisted care technology adoption from a new perspective and framework. It also explores the interrelationships between different types of social influence and job stress dimensions to fill gaps in the academic literature. DESIGN A quantitative cross-sectional study. METHODS Five hospitals in Taiwan that had implemented AI solutions were selected using purposive sampling. The scales, adapted from relevant literature, were translated into Chinese and modified for context. Questionnaires were distributed to nurses via snowball sampling from May 15 to June 10, 2023. A total of 283 valid questionnaires were analysed using the partial least squares structural equation modelling method. RESULTS Conformity, obedience and human-machine trust were positively correlated with behavioural intention, while compliance was negatively correlated. Perceived job stress did not significantly affect behavioural intention. Compliance was positively associated with all three job stress dimensions: job uncertainty, technophobia and time pressure, while obedience was correlated with job uncertainty. CONCLUSION Social influence and human-machine trust are critical factors in nurses' intentions to adopt AI technology. The lack of significant effects from perceived stress suggests that nurses' personal resources mitigate potential stress associated with AI implementation. The study reveals the complex dynamics regarding different types of social influence, human-machine trust and job stress in the context of AI adoption in healthcare. IMPACT This research extends beyond conventional technology acceptance models by incorporating perspectives on organisational internal stressors and AI-related job stress. It offers insights into the coping mechanisms during the pre-adoption AI process in nursing, highlighting the need for nuanced management approaches. The findings emphasise the importance of considering technological and psychosocial factors in successful AI implementation in healthcare settings. PATIENT OR PUBLIC CONTRIBUTION No Patient or Public Contribution.
Affiliation(s)
- Chin-Hung Chen
- College of Management, National Kaohsiung University of Science and Technology, Kaohsiung City, Taiwan
- Wan-I Lee
- Department of Marketing and Distribution Management, National Kaohsiung University of Science and Technology (First Campus), Kaohsiung City, Taiwan
5. Schmutz JB, Outland N, Kerstan S, Georganta E, Ulfert AS. AI-teaming: Redefining collaboration in the digital era. Curr Opin Psychol 2024;58:101837. PMID: 39024969; DOI: 10.1016/j.copsyc.2024.101837.
Abstract
Integrating artificial intelligence (AI) into human teams, forming human-AI teams (HATs), is a rapidly evolving field. This overview examines the complexities of team constellations and dynamics, trust in AI teammates, and shared cognition within HATs. Adding an AI teammate often reduces coordination, communication, and trust. Further, trust in AI tends to decline over time due to initial overestimation of capabilities, impairing teamwork. Despite AI's potential to enhance performance in contexts like chess and medicine, HATs frequently underperform due to poor team cognition and inadequate mutual understanding. Future research must address these issues with interdisciplinary collaboration between computer science and psychology and advance robust theoretical frameworks to realize the full potential of human-AI teaming.
Affiliation(s)
- Jan B Schmutz
- Department of Psychology, University of Zurich, Switzerland
- Neal Outland
- Department of Psychology, University of Georgia, United States
- Sophie Kerstan
- Department of Management, Technology and Economics, ETH Zurich, Switzerland
- Eleni Georganta
- Faculty of Social and Behavioural Sciences, University of Amsterdam, the Netherlands
- Anna-Sophie Ulfert
- Department of Industrial Engineering & Innovation Sciences, Eindhoven University of Technology, Eindhoven, the Netherlands
6. Alodjants AP, Tsarev DV, Avdyushina AE, Khrennikov AY, Boukhanovsky AV. Quantum-inspired modeling of distributed intelligence systems with artificial intelligent agents self-organization. Sci Rep 2024;14:15438. PMID: 38965278; PMCID: PMC11224413; DOI: 10.1038/s41598-024-65684-z. Open access.
Abstract
Distributed intelligence systems (DIS) containing natural and artificial intelligence agents (NIA and AIA) for decision making (DM) belong to promising interdisciplinary studies aimed at the digitalization of routine processes in industry, economy, management, and everyday life. In this work, we suggest a novel quantum-inspired approach to investigate the crucial features of DIS consisting of NIAs (users) and AIAs (digital assistants, or avatars). We suppose that N users and their avatars are located in the N nodes of a complex avatar-avatar network. The avatars can receive information from and transmit it to each other within this network, while the users obtain information from the outside. The users are associated with their digital assistants and cannot communicate with each other directly. Depending on the meaningfulness/uselessness of the information presented by avatars, users show their attitude by making emotional binary "like"/"dislike" responses. To characterize NIA cognitive abilities in a simple DM process, we propose a procedure for mapping Russell's valence-arousal circumplex model onto an effective quantum-like two-level system. The DIS aims to maximize the average satisfaction of users via the AIAs' long-term adaptation to their users. In this regard, we examine opinion formation and social impact as a result of the collective emotional state evolution occurring in the DIS. We show that the generalized cooperativity parameters G_i, i = 1, …, N, introduced in this work play a significant role in DIS features, reflecting the users' activity in possible cooperation and their responses to their avatars' suggestions. These parameters reveal how frequently AIAs and NIAs communicate with each other, accounting for the cognitive abilities of NIAs and information losses within the network. We demonstrate that the conditions for opinion formation and social impact in the DIS are relevant to a second-order non-equilibrium phase transition. The transition establishes a non-vanishing average information field inherent to information diffusion and long-term avatar adaptation to their users. It occurs above the phase transition threshold, i.e., at G_i > 1, which implies a small (residual) social polarization of the NIA community. Below the threshold, at weak AIA-NIA coupling (G_i ≤ 1), many uncertainties in the DIS inhibit opinion formation and social impact for the DM agents owing to the suppression of information diffusion; the AIAs' self-organization within the avatar-avatar network is elucidated in this limit. To increase the users' impact, we suggest an adaptive approach that establishes a network-dependent coupling rate with their digital assistants. In this case, the mechanism of AIA control helps resolve the DM process in the presence of uncertainties resulting from the variety of users' preferences. Our findings open new perspectives in different areas where AIAs become effective teammates for humans in solving common routine problems in network organizations.
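The threshold behaviour at G_i = 1 is reminiscent of the textbook mean-field picture of a second-order transition, where an order parameter m obeys the self-consistency relation m = tanh(G m) and becomes nonzero only for G > 1. The tanh form below is an illustrative assumption for that general picture, not the model used in the paper.

```python
import math

def order_parameter(g, iters=5000, m0=0.5):
    """Iterate the mean-field self-consistency relation m = tanh(g * m).

    For g <= 1 the iteration collapses to the only fixed point, m = 0;
    for g > 1 a nonzero fixed point appears, the hallmark of a
    second-order phase transition with threshold at g = 1.
    """
    m = m0
    for _ in range(iters):
        m = math.tanh(g * m)
    return m

weak = order_parameter(0.8)    # below threshold: no collective opinion forms
strong = order_parameter(1.5)  # above threshold: a nonzero "opinion" emerges
```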
Affiliation(s)
- D V Tsarev
- ITMO University, St. Petersburg, Russia, 197101
- A Yu Khrennikov
- International Center for Mathematical Modeling in Physics, Engineering, Economics, and Cognitive Science, Linnaeus University, 35195, Vaxjo-Kalmar, Sweden
7. Correia F, Melo FS, Paiva A. When a Robot Is Your Teammate. Top Cogn Sci 2024;16:527-553. PMID: 36573665; DOI: 10.1111/tops.12634.
Abstract
Creating effective teamwork between humans and robots involves not only addressing their performance as a team but also sustaining the quality and sense of unity among teammates, also known as cohesion. This paper explores the following research problem: how can we endow robotic teammates with social capabilities that improve their cohesive alliance with humans? By defining the concept of a human-robot cohesive alliance in light of the multidimensional construct of cohesion from the social sciences, we propose to address this problem through the idea of multifaceted human-robot cohesion. We present our preliminary efforts from previous works to examine each of the five dimensions of cohesion: social, collective, emotional, structural, and task. We finish the paper with a discussion of how human-robot cohesion contributes to the key questions and ongoing challenges of creating robotic teammates. Overall, cohesion in human-robot teams may be a key factor in propelling team performance, and it should be considered in the design, development, and evaluation of robotic teammates.
Affiliation(s)
- Filipa Correia
- INESC-ID, Instituto Superior Técnico, Universidade de Lisboa
- ITI, LARSyS, Instituto Superior Técnico, Universidade de Lisboa
- Ana Paiva
- INESC-ID, Instituto Superior Técnico, Universidade de Lisboa
8. Wiltshire TJ, van Eijndhoven K, Halgas E, Gevers JMP. Prospects for Augmenting Team Interactions with Real-Time Coordination-Based Measures in Human-Autonomy Teams. Top Cogn Sci 2024;16:391-429. PMID: 35261211; DOI: 10.1111/tops.12606.
Abstract
Complex work in teams requires coordination across team members and their technology as well as the ability to change and adapt over time to achieve effective performance. To support such complex interactions, recent efforts have worked toward the design of adaptive human-autonomy teaming systems that can provide feedback in or near real time to achieve the desired individual or team results. However, while significant advancements have been made to better model and understand the dynamics of team interaction and its relationship with task performance, appropriate measures of team coordination and computational methods to detect changes in coordination have not yet been widely investigated. Having the capacity to measure coordination in real time is quite promising as it provides the opportunity to provide adaptive feedback that may influence and regulate teams' coordination patterns and, ultimately, drive effective team performance. A critical requirement to reach this potential is having the theoretical and empirical foundation from which to do so. Therefore, the first goal of the paper is to review approaches to coordination dynamics, identify current research gaps, and draw insights from other areas, such as social interaction, relationship science, and psychotherapy. The second goal is to collate extant work on feedback and advance ideas for adaptive feedback systems that have potential to influence coordination in a way that can enhance the effectiveness of team interactions. In addressing these two goals, this work lays the foundation as well as plans for the future of human-autonomy teams that augment team interactions using coordination-based measures.
Affiliation(s)
- Travis J Wiltshire
- Department of Cognitive Science and Artificial Intelligence, Tilburg University
- Elwira Halgas
- Department of Industrial Engineering and Innovation Sciences, Eindhoven University of Technology
- Josette M P Gevers
- Department of Industrial Engineering and Innovation Sciences, Eindhoven University of Technology
9. Cooke NJ, Cohen MC, Fazio WC, Inderberg LH, Johnson CJ, Lematta GJ, Peel M, Teo A. From Teams to Teamness: Future Directions in the Science of Team Cognition. Hum Factors 2024;66:1669-1680. PMID: 36946439; PMCID: PMC11044519; DOI: 10.1177/00187208231162449.
Abstract
OBJECTIVE We review the current state of the art in team cognition research and, more importantly, describe the limitations of existing theories, laboratory paradigms, and measures in light of the increasing complexities of modern teams and the study of team cognition. BACKGROUND Research on, and applications of, team cognition have led to theories, data, and measures over the last several decades. METHOD This article is based on research questions generated in a spring 2022 seminar on team cognition at Arizona State University led by the first author. RESULTS Future research directions are proposed for extending the conceptualization of teams and team cognition by examining dimensions of teamness; extending laboratory paradigms to attain more realistic teaming, including nonhuman teammates; and advancing measures of team cognition in a direction such that data can be collected unobtrusively, in real time, and automatically. CONCLUSION The future of team cognition is one of new discoveries, new research paradigms, and new measures. APPLICATION Extending the concepts of teams and team cognition can also extend the potential applications of these concepts.
Affiliation(s)
- Aaron Teo
- Arizona State University, Mesa, AZ, USA
10. Tsamados A, Floridi L, Taddeo M. Human control of AI systems: from supervision to teaming. AI Ethics 2024;5:1535-1548. PMID: 40352578; PMCID: PMC12058881; DOI: 10.1007/s43681-024-00489-4. Open access.
Abstract
This article reviews two main approaches to human control of AI systems: supervisory human control and human-machine teaming. It explores how each approach defines and guides the operational interplay between human behaviour and system behaviour to ensure that AI systems are effective throughout their deployment. Specifically, the article looks at how the two approaches differ in their conceptual and practical adequacy regarding the control of AI systems based on foundation models, i.e., models trained on vast datasets, exhibiting general capabilities, and producing non-deterministic behaviour. The article focuses on examples from the defence and security domain to highlight practical challenges in terms of human control of automation in general, and AI in particular, and concludes by arguing that approaches to human control are better served by an understanding of control as the product of collaborative agency in a multi-agent system rather than of exclusive human supervision.
Affiliation(s)
- Mariarosaria Taddeo
- Oxford Internet Institute, University of Oxford, Oxford, UK
- Alan Turing Institute, London, UK
11. Nixon N, Lin Y, Snow L. Catalyzing Equity in STEM Teams: Harnessing Generative AI for Inclusion and Diversity. Policy Insights Behav Brain Sci 2024;11:85-92. PMID: 38516055; PMCID: PMC10950550; DOI: 10.1177/23727322231220356.
Abstract
Collaboration is key to STEM, where multidisciplinary team research can solve complex problems. However, inequality in STEM fields hinders their full potential, owing to persistent psychological barriers in underrepresented students' experience. This paper documents teamwork in STEM and explores the transformative potential of computational modeling and generative AI in promoting STEM-team diversity and inclusion. Leveraging generative AI, this paper outlines two primary areas for advancing diversity, equity, and inclusion. First, formalizing collaboration assessment with inclusive analytics can capture fine-grained learner behavior. Second, adaptive, personalized AI systems can support diversity and inclusion in STEM teams. Four policy recommendations highlight AI's capacity: formalized collaborative skill assessment, inclusive analytics, funding for socio-cognitive research, and human-AI teaming for inclusion training. Researchers, educators, and policymakers can build an equitable STEM ecosystem. This roadmap advances AI-enhanced collaboration, offering a vision for the future of STEM where diverse voices are actively encouraged and heard within collaborative scientific endeavors.
Affiliation(s)
- Nia Nixon
- University of California, Irvine, California, USA
- Yiwen Lin
- University of California, Irvine, California, USA
- Lauren Snow
- University of California, Irvine, California, USA
12. Jarvenpaa SL, Keating E. Fluid teams in the metaverse: exploring the (un)familiar. Front Psychol 2024;14:1323586. PMID: 38268798; PMCID: PMC10806196; DOI: 10.3389/fpsyg.2023.1323586. Open access.
Abstract
The metaverse is a new and evolving environment for fluid teams and their coordination in organizations. Fluid teams may have no prior familiarity with each other or with working together. Yet fluid teams are known to benefit from a degree of familiarity (knowledge about teams, members, and working together) in team coordination and performance. The metaverse is unfamiliar territory that promises fluidity in contexts: seamless traversal between physical and virtual worlds. This fluidity in contexts has implications for familiarity in interaction, identity, and potentially time. We explore the opportunities and challenges that the metaverse presents in terms of (un)familiarity. Improved understandings of (un)familiarity may pave the way for new forms of fluid team experiences and uses.
Affiliation(s)
- Sirkka L. Jarvenpaa
- Center for Business, Technology and Law, McCombs School of Business, The University of Texas at Austin, Austin, TX, United States
- Elizabeth Keating
- Department of Anthropology, The University of Texas at Austin, Austin, TX, United States
13. Schreibelmayr S, Moradbakhti L, Mara M. First impressions of a financial AI assistant: differences between high trust and low trust users. Front Artif Intell 2023;6:1241290. PMID: 37854078; PMCID: PMC10579608; DOI: 10.3389/frai.2023.1241290. Open access.
Abstract
Calibrating appropriate trust of non-expert users in artificial intelligence (AI) systems is a challenging yet crucial task. To align subjective levels of trust with the objective trustworthiness of a system, users need information about its strengths and weaknesses. The specific explanations that help individuals avoid over- or under-trust may vary depending on their initial perceptions of the system. In an online study, 127 participants watched a video of a financial AI assistant with varying degrees of decision agency. They generated 358 spontaneous text descriptions of the system and completed standard questionnaires from the Trust in Automation and Technology Acceptance literature (including perceived system competence, understandability, human-likeness, uncanniness, intention of developers, intention to use, and trust). Comparisons between a high trust and a low trust user group revealed significant differences in both open-ended and closed-ended answers. While high trust users characterized the AI assistant as more useful, competent, understandable, and humanlike, low trust users highlighted the system's uncanniness and potential dangers. Manipulating the AI assistant's agency had no influence on trust or intention to use. These findings are relevant for effective communication about AI and trust calibration of users who differ in their initial levels of trust.
Affiliation(s)
- Martina Mara
- Robopsychology Lab, Linz Institute of Technology, Johannes Kepler University Linz, Linz, Austria
14. Hagemann V, Rieth M, Suresh A, Kirchner F. Human-AI teams: Challenges for a team-centered AI at work. Front Artif Intell 2023;6:1252897. PMID: 37829660; PMCID: PMC10565103; DOI: 10.3389/frai.2023.1252897. Open access.
Abstract
As part of the Special Issue topic "Human-Centered AI at Work: Common Ground in Theories and Methods," we present a perspective article that looks at human-AI teamwork from a team-centered AI perspective, i.e., we highlight important design aspects that the technology needs to fulfill in order to be accepted by humans and to be fully utilized in the role of a team member in teamwork. Drawing from the model of an idealized teamwork process, we discuss the teamwork requirements for successful human-AI teaming in interdependent and complex work domains, including, e.g., responsiveness, situation awareness, and flexible decision-making. We emphasize the need for team-centered AI that aligns goals, communication, and decision making with humans, and outline the requirements for such team-centered AI from a technical perspective, such as cognitive competence, reinforcement learning, and semantic communication. In doing so, we highlight the challenges and open questions associated with its implementation that need to be solved in order to enable effective human-AI teaming.
Affiliation(s)
- Vera Hagemann
- Business Psychology and Human Resources, Faculty of Business Studies and Economics, University of Bremen, Bremen, Germany
- Michèle Rieth
- Business Psychology and Human Resources, Faculty of Business Studies and Economics, University of Bremen, Bremen, Germany
- Amrita Suresh
- Robotics Research Group, Faculty of Mathematics and Computer Science, University of Bremen, Bremen, Germany
- Frank Kirchner
- Robotics Research Group, Faculty of Mathematics and Computer Science, University of Bremen, Bremen, Germany
- DFKI GmbH, Robotics Innovation Center, Bremen, Germany
15. Aggarwal I, Cuconato G, Ateş NY, Meslec N. Self-beliefs, Transactive Memory Systems, and Collective Identification in Teams: Articulating the Socio-Cognitive Underpinnings of COHUMAIN. Top Cogn Sci 2023. PMID: 37402241; DOI: 10.1111/tops.12681.
Abstract
Socio-cognitive theory conceptualizes individual contributors as both enactors of cognitive processes and targets of a social context's determinative influences. The present research investigates how contributors' metacognition, or self-beliefs, combine with others' views of them to inform collective team states related to learning about other agents (i.e., transactive memory systems) and forming social attachments with other agents (i.e., collective team identification), both important teamwork states with implications for team collective intelligence. We test our predictions in a longitudinal study with 78 teams. Additionally, we provide interview data from industry experts in human-artificial intelligence teams. Our findings contribute to an emerging socio-cognitive architecture for COllective HUman-MAchine INtelligence (COHUMAIN) by articulating its underpinnings in individual and collective cognition and metacognition. The resulting model has implications for the critical inputs necessary to design and enable a higher level of integration of human and machine teammates.
Affiliation(s)
- Ishani Aggarwal, Brazilian School of Public and Business Administration, Fundação Getulio Vargas
- Gabriela Cuconato, Department of Organizational Behavior, Weatherhead School of Management, Case Western Reserve University
16
Valeri JA, Soenksen LR, Collins KM, Ramesh P, Cai G, Powers R, Angenent-Mari NM, Camacho DM, Wong F, Lu TK, Collins JJ. BioAutoMATED: An end-to-end automated machine learning tool for explanation and design of biological sequences. Cell Syst 2023; 14:525-542.e9. [PMID: 37348466] [PMCID: PMC10700034] [DOI: 10.1016/j.cels.2023.05.007]
Abstract
The design choices underlying machine-learning (ML) models present important barriers to entry for many biologists who aim to incorporate ML in their research. Automated machine-learning (AutoML) algorithms can address many challenges that come with applying ML to the life sciences. However, these algorithms are rarely used in systems and synthetic biology studies because they typically do not explicitly handle biological sequences (e.g., nucleotide, amino acid, or glycan sequences) and cannot be easily compared with other AutoML algorithms. Here, we present BioAutoMATED, an AutoML platform for biological sequence analysis that integrates multiple AutoML methods into a unified framework. Users are automatically provided with relevant techniques for analyzing, interpreting, and designing biological sequences. BioAutoMATED predicts gene regulation, peptide-drug interactions, and glycan annotation, and designs optimized synthetic biology components, revealing salient sequence characteristics. By automating sequence modeling, BioAutoMATED allows life scientists to incorporate ML more readily into their work.
Affiliation(s)
- Jacqueline A Valeri, Department of Biological Engineering, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA; Institute for Medical Engineering and Science, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA; Wyss Institute for Biologically Inspired Engineering, Harvard University, Boston, MA 02115, USA; Broad Institute of MIT and Harvard, Cambridge, MA 02142, USA
- Luis R Soenksen, Institute for Medical Engineering and Science, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA; Wyss Institute for Biologically Inspired Engineering, Harvard University, Boston, MA 02115, USA; Department of Mechanical Engineering, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA
- Katherine M Collins, Wyss Institute for Biologically Inspired Engineering, Harvard University, Boston, MA 02115, USA; Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA; Department of Engineering, University of Cambridge, Trumpington St, Cambridge CB2 1PZ, UK
- Pradeep Ramesh, Wyss Institute for Biologically Inspired Engineering, Harvard University, Boston, MA 02115, USA
- George Cai, Wyss Institute for Biologically Inspired Engineering, Harvard University, Boston, MA 02115, USA
- Rani Powers, Wyss Institute for Biologically Inspired Engineering, Harvard University, Boston, MA 02115, USA; Pluto Biosciences, Golden, CO 80402, USA
- Nicolaas M Angenent-Mari, Department of Biological Engineering, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA; Institute for Medical Engineering and Science, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA; Wyss Institute for Biologically Inspired Engineering, Harvard University, Boston, MA 02115, USA
- Diogo M Camacho, Wyss Institute for Biologically Inspired Engineering, Harvard University, Boston, MA 02115, USA
- Felix Wong, Department of Biological Engineering, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA; Institute for Medical Engineering and Science, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA; Broad Institute of MIT and Harvard, Cambridge, MA 02142, USA
- Timothy K Lu, Department of Biological Engineering, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA; Institute for Medical Engineering and Science, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA; Broad Institute of MIT and Harvard, Cambridge, MA 02142, USA; Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA; Synthetic Biology Group, Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- James J Collins, Department of Biological Engineering, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA; Institute for Medical Engineering and Science, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA; Wyss Institute for Biologically Inspired Engineering, Harvard University, Boston, MA 02115, USA; Broad Institute of MIT and Harvard, Cambridge, MA 02142, USA; Harvard-MIT Program in Health Sciences and Technology, Cambridge, MA 02139, USA; Abdul Latif Jameel Clinic for Machine Learning in Health, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
17
Malhotra G, Ramalingam M. Perceived anthropomorphism and purchase intention using artificial intelligence technology: examining the moderated effect of trust. Journal of Enterprise Information Management 2023. [DOI: 10.1108/jeim-09-2022-0316]
Abstract
Purpose: This study explores the features that shape consumers' intention to purchase through artificial intelligence (AI), which is believed to significantly increase purchase intention, especially in the retail sector, where retailers provide lucrative offers to motivate consumers. The study develops a theoretical framework based on media-richness theory to investigate the role of perceived anthropomorphism in the intention to purchase products using AI.
Design/methodology/approach: The study is based on cross-sectional data from an online survey. The data were analyzed using PLS-SEM and the SPSS PROCESS macro.
Findings: The results show that consumers tend to demand anthropomorphized products to gain a better shopping experience and therefore demand features that attract and motivate them to purchase through AI, via mediating variables such as perceived animacy and perceived intelligence. Moreover, trust in AI moderates the relationship between perceived anthropomorphism and perceived animacy.
Originality/value: The study offers managerial and academic insights into consumer purchase intention through AI in the retail and marketing sector.
18
Bezrukova K, Griffith TL, Spell C, Rice V, Yang HE. Artificial Intelligence and Groups: Effects of Attitudes and Discretion on Collaboration. Group & Organization Management 2023. [DOI: 10.1177/10596011231160574]
Abstract
We theorize about human-team collaboration with AI by drawing upon research in groups and teams, social psychology, information systems, engineering, and beyond. Based on our review, we focus on two main issues in the teams and AI arena. The first is whether the team generally views AI positively or negatively. The second is whether the decision to use AI is left up to the team members (voluntary use) or mandated by top management or other policy-setters in the organization. These two aspects guide our creation of a team-level conceptual framework modeling how AI introduced as a mandated addition to the team can have asymmetric effects on the level of collaboration depending on the team's attitudes toward AI. When the team views AI positively, mandatory use suppresses collaboration in the team; but when a team has negative attitudes toward AI, mandatory use elevates team collaboration. Our model emphasizes the need to manage team attitudes and discretion strategies, and it promotes new research directions regarding AI's implications for teamwork.
Affiliation(s)
- Chester Spell, Rutgers University School of Business, Camden, NJ, USA
19
Begerowski SR, Hedrick KN, Waldherr F, Mears L, Shuffler ML. The forgotten teammate: Considering the labor perspective in human-autonomy teams. Computers in Human Behavior 2023. [DOI: 10.1016/j.chb.2023.107763]
20
Zhang G, Chong L, Kotovsky K, Cagan J. Trust in an AI versus a Human teammate: The effects of teammate identity and performance on Human-AI cooperation. Computers in Human Behavior 2023. [DOI: 10.1016/j.chb.2022.107536]
21
Dolata M, Katsiuba D, Wellnhammer N, Schwabe G. Learning with Digital Agents: An Analysis based on the Activity Theory. J Manage Inform Syst 2023. [DOI: 10.1080/07421222.2023.2172775]
22
Elshan E, Ebel P, Söllner M, Leimeister JM. Leveraging Low Code Development of Smart Personal Assistants: An Integrated Design Approach with the SPADE Method. J Manage Inform Syst 2023. [DOI: 10.1080/07421222.2023.2172776]
23
Joksimovic S, Ifenthaler D, Marrone R, De Laat M, Siemens G. Opportunities of artificial intelligence for supporting complex problem-solving: Findings from a scoping review. Computers and Education: Artificial Intelligence 2023; 4:100138. [DOI: 10.1016/j.caeai.2023.100138]
24
Hagos DH, Rawat DB. Recent Advances in Artificial Intelligence and Tactical Autonomy: Current Status, Challenges, and Perspectives. Sensors (Basel, Switzerland) 2022; 22:9916. [PMID: 36560285] [PMCID: PMC9782095] [DOI: 10.3390/s22249916]
Abstract
This paper presents the findings of a detailed and comprehensive review of the technical literature aimed at identifying the current and future research challenges of tactical autonomy. It discusses in detail the current state-of-the-art artificial intelligence (AI), machine learning (ML), and robot technologies, and their potential for developing safe and robust autonomous systems in the context of future military and defense applications. Additionally, we discuss some of the critical technical and operational challenges that arise when attempting to practically build fully autonomous systems for advanced military and defense applications. Our paper surveys the state-of-the-art AI methods available for tactical autonomy. To the best of our knowledge, this is the first work to address the important current trends, strategies, critical challenges, tactical complexities, and future research directions of tactical autonomy. We believe this work will greatly interest researchers and scientists from academia and industry working in robotics and the autonomous systems community. We hope it encourages researchers across multiple disciplines of AI to explore the broader tactical autonomy domain, and that it serves as an essential step toward designing advanced AI and ML models with practical implications for real-world military and defense settings.
25
Artificial intelligence in science: An emerging general method of invention. Research Policy 2022. [DOI: 10.1016/j.respol.2022.104604]
26
Chen A, Xiang M, Wang M, Lu Y. Harmony in intelligent hybrid teams: the influence of the intellectual ability of artificial intelligence on human members' reactions. Information Technology & People 2022. [DOI: 10.1108/itp-01-2022-0059]
Abstract
Purpose: The purpose of this paper was to investigate the relationships among the intellectual ability of artificial intelligence (AI), cognitive emotional processes, and the positive and negative reactions of human members. The authors also examined the moderating role of AI status in teams.
Design/methodology/approach: The authors designed an experiment and recruited 120 subjects, who were randomly distributed into one of three groups classified by the upper, middle, and lower organizational levels of AI in the team. The findings were derived from subjects' self-reports and their performance in the experiment.
Findings: Regardless of the position held by the AI, human members believed that its intelligence level is positively correlated with dependence behavior. However, when the AI and human members are at the same level, the higher the intelligence of the AI, the more likely its direct interaction with team members is to lead to conflict.
Research limitations/implications: This paper focuses only on human-AI harmony in transactional work in hybrid teams in enterprises. As AI applications permeate, whether the findings can be extended to a broader range of AI usage scenarios should be considered.
Practical implications: These results help in understanding how to improve team performance, given that team members have introduced AI into their enterprises in large quantities.
Originality/value: This study contributes to the literature on how the intelligence level of AI affects the positive and negative behaviors of human members in hybrid teams. The study also innovatively introduces "status" into hybrid organizations.
27
Abbas SM, Liu Z, Khushnood M. When Human Meets Technology: Unlocking Hybrid Intelligence Role in Breakthrough Innovation Engagement via Self-Extension and Social Intelligence. Journal of Computer Information Systems 2022. [DOI: 10.1080/08874417.2022.2139781]
Affiliation(s)
- Zhiqiang Liu, Huazhong University of Science and Technology, Wuhan, China
28
Xiong W, Wang C, Liang M. Partner or subordinate? Sequential risky decision-making behaviors under human-machine collaboration contexts. Computers in Human Behavior 2022. [DOI: 10.1016/j.chb.2022.107556]
29
The effect of required warmth on consumer acceptance of artificial intelligence in service: The moderating role of AI-human collaboration. International Journal of Information Management 2022. [DOI: 10.1016/j.ijinfomgt.2022.102533]
30
Cranefield J, Winikoff M, Chiu YT, Li Y, Doyle C, Richter A. Partnering with AI: the case of digital productivity assistants. J R Soc N Z 2022; 53:95-118. [PMID: 39439996] [PMCID: PMC11459757] [DOI: 10.1080/03036758.2022.2114507]
Abstract
An emerging class of intelligent tools that we term Digital Productivity Assistants (DPAs) is designed to help workers improve their productivity and keep their work-life balance in check. Using personalised work-based analytics, a DPA raises awareness of individual collaboration behaviour and suggests improvements to work practices. The purpose of this study is to contribute to a better understanding of the role of personalised work-based analytics in the context of improving individual productivity and work-life balance. We present an interpretive case study based on our own observations and interviews with 28 workers who face high job demands and job variety. Our study contributes to the still-ongoing sensemaking of AI by illustrating how DPAs can co-regulate human work through technology affordances. In addition to investigating these opportunities of partnering with AI, we study the perceived barriers that impede DPAs' potential benefits as partners. These include concerns about accuracy, transparency, feedback, and configurability, as well as misalignment between the DPA's categorisations of work behaviour and the categorisations workers use in their jobs.
Affiliation(s)
- Jocelyn Cranefield, School of Information Management, Victoria University of Wellington, Wellington, New Zealand
- Michael Winikoff, School of Information Management, Victoria University of Wellington, Wellington, New Zealand
- Yi-Te Chiu, School of Information Management, Victoria University of Wellington, Wellington, New Zealand
- Yevgeniya Li, School of Information Management, Victoria University of Wellington, Wellington, New Zealand
- Cathal Doyle, School of Information Management, Victoria University of Wellington, Wellington, New Zealand
- Alex Richter, School of Information Management, Victoria University of Wellington, Wellington, New Zealand
31
Role of machine and organizational structure in science. PLoS One 2022; 17:e0272280. [PMID: 35951620] [PMCID: PMC9371286] [DOI: 10.1371/journal.pone.0272280]
Abstract
The progress of science increasingly relies on machine learning (ML), and machines work alongside humans in various domains of science. This study investigates the team structure of ML-related projects and analyzes the contribution of ML to scientific knowledge production under different team structures, drawing on bibliometric analyses of 25,000 scientific publications in various disciplines. Our regression analyses suggest that (1) interdisciplinary collaboration between domain scientists and computer scientists, as well as the engagement of interdisciplinary individuals who have expertise in both domain and computer sciences, is common in ML-related projects; (2) the engagement of interdisciplinary individuals seems more important in achieving high impact and novel discoveries, especially when a project employs computational and domain approaches interdependently; and (3) the contribution of ML and its implications for team structure depend on the depth of ML.
32
Hauptman AI, Schelble BG, McNeese NJ, Madathil KC. Adapt and overcome: Perceptions of adaptive autonomous agents for human-AI teaming. Computers in Human Behavior 2022. [DOI: 10.1016/j.chb.2022.107451]
33
34
The appropriation of conversational AI in the workplace: A taxonomy of AI chatbot users. International Journal of Information Management 2022. [DOI: 10.1016/j.ijinfomgt.2022.102568]
35
Mumtaz H, Saqib M, Ansar F, Zargar D, Hameed M, Hasan M, Muskan P. The future of Cardiothoracic surgery in Artificial intelligence. Ann Med Surg (Lond) 2022; 80:104251. [PMID: 36045824] [PMCID: PMC9422274] [DOI: 10.1016/j.amsu.2022.104251]
Abstract
The great and rapid technological breakthroughs of the previous decade have undoubtedly influenced how surgical procedures are executed in the operating room. AI is becoming incredibly influential in surgical decision-making, helping surgeons make better projections about the implications of surgical operations by considering different sources of data such as patient health conditions, disease natural history, patient values, and finances. Although the application of artificial intelligence in healthcare settings is rapidly increasing, its mainstream application in clinical practice remains limited. The use of machine learning algorithms in thoracic surgery is extensive, spanning different clinical stages. By leveraging techniques such as machine learning, computer vision, and robotics, AI may play a key role in diagnostic augmentation, operative management, pre- and post-surgical patient management, and upholding safety standards. AI, particularly in complex surgical procedures such as cardiothoracic surgery, may significantly help surgeons execute more intricate surgeries with greater success and fewer complications while ensuring patient safety, and it also provides resources for robust research and better dissemination of knowledge. In this paper, we present an overview of AI applications in thoracic surgery and its related components, including contemporary projects and technology that use AI in cardiothoracic surgery and general care. We also discuss the future of AI, how high-tech operating rooms will use human-machine collaboration to improve performance and patient safety, and its future directions and limitations. It is vital for surgeons to keep themselves acquainted with the latest technological advancements in AI in order to grasp this technology and easily integrate it into clinical practice when it becomes accessible.
36
O’Neill T, McNeese N, Barron A, Schelble B. Human-Autonomy Teaming: A Review and Analysis of the Empirical Literature. Human Factors 2022; 64:904-938. [PMID: 33092417] [PMCID: PMC9284085] [DOI: 10.1177/0018720820960865]
Abstract
OBJECTIVE We define human-autonomy teaming and offer a synthesis of the existing empirical research on the topic. Specifically, we identify the research environments, dependent variables, themes representing the key findings, and critical future research directions. BACKGROUND Whereas a burgeoning literature on high-performance teamwork identifies the factors critical to success, much less is known about how human-autonomy teams (HATs) achieve success. Human-autonomy teamwork involves humans working interdependently toward a common goal alongside autonomous agents. Autonomous agents involve a degree of self-government and self-directed behavior (agency), take on a unique role or set of tasks, and work interdependently with human team members to achieve a shared objective. METHOD We searched the literature on human-autonomy teaming. To meet our criteria for inclusion, a paper needed to involve empirical research and meet our definition of human-autonomy teaming. We found 76 articles that met these criteria. RESULTS We report on research environments and find that the key independent variables involve autonomous agent characteristics, team composition, task characteristics, human individual differences, training, and communication. We identify themes for each of these and discuss future research needs. CONCLUSION There are areas where research findings are clear and consistent, but there are many opportunities for future research. Particularly important will be research that identifies mechanisms linking team input to team output variables.
Affiliation(s)
- Thomas O’Neill, University of Calgary, Calgary, AB, Canada; Curtin University, WA, Australia
- Correspondence: Thomas O’Neill, Department of Psychology, University of Calgary, AB, T2N 1N4, Canada
37
Patterns of Sociotechnical Design Preferences of Chatbots for Intergenerational Collaborative Innovation: A Q Methodology Study. Human Behavior and Emerging Technologies 2022. [DOI: 10.1155/2022/8206503]
Abstract
Chatbot technology is increasingly emerging in the role of a virtual assistant. Chatbots could allow individuals and organizations to accomplish objectives that are currently not fully optimized for collaboration in an intergenerational context. This paper explores preferences for chatbots as companions in intergenerational innovation. The Q methodology was used to investigate different types of collaborators and determine how different choices occur between collaborators that merge the problem and solution domains of chatbot design within intergenerational settings. The study's findings reveal that chatbot design priorities are more diverse among younger adults than among senior adults. Additionally, our research outlines the principles of chatbot design and how chatbots can support both generations. This research is a first step towards cultivating a deeper understanding of different age groups' subjective design preferences for chatbots functioning as companions in the workplace. Moreover, this study demonstrates how the Q methodology can guide technological development by shifting the approach from an age-focused design to a common goal-oriented design within a multigenerational context.
38
An explanation space to align user studies with the technical development of Explainable AI. AI & Society 2022. [DOI: 10.1007/s00146-022-01536-6]
39
40
I vs. robot: Sociodigital self-comparisons in hybrid teams from a theoretical, empirical, and practical perspective. Gruppe. Interaktion. Organisation. Zeitschrift für Angewandte Organisationspsychologie (GIO) 2022. [DOI: 10.1007/s11612-022-00638-5]
Abstract
This article in the journal Gruppe. Interaktion. Organisation. (GIO) introduces sociodigital self-comparisons (SDSC) as individual evaluations of one's own abilities in comparison to the knowledge and skills of a cooperating digital actor in a group. SDSC provide a complementary perspective on the acceptance and evaluation of human-robot interaction (HRI). As social robots enter the workplace, digital actors become objects of comparison (i.e., I vs. robot) in addition to human-human comparisons. To date, SDSC have not been systematically reflected in HRI. Therefore, we introduce SDSC from a theoretical perspective and reflect on its significance in social robot applications. First, we conceptualize SDSC based on psychological theories and research on social comparison. Second, we illustrate the concept of SDSC for HRI using empirical data from 80 hybrid teams (two human actors and one autonomous agent) that worked together on an interdependent computer-simulated team task. SDSC in favor of the autonomous agent corresponded to functional (e.g., robot trust or team efficacy) and dysfunctional (e.g., job threat) team-relevant variables, highlighting the two-sidedness of SDSC in hybrid teams. Third, we outline the (practical) potential of SDSC for social robots in the field and the lab.
41
Bobko P, Hirshfield L, Eloy L, Spencer C, Doherty E, Driscoll J, Obolsky H. Human-agent teaming and trust calibration: a theoretical framework, configurable testbed, empirical illustration, and implications for the development of adaptive systems. Theoretical Issues in Ergonomics Science 2022. [DOI: 10.1080/1463922x.2022.2086644]
Affiliation(s)
- Lucca Eloy
- University of Colorado at Boulder, Boulder, CO, USA
- Cara Spencer
- University of Colorado at Boulder, Boulder, CO, USA
42
An interdisciplinary review of AI and HRM: Challenges and future directions. Human Resource Management Review 2022. [DOI: 10.1016/j.hrmr.2022.100924] [Citation(s) in RCA: 0]
43
Askarisichani O, Bullo F, Friedkin NE, Singh AK. Predictive models for human-AI nexus in group decision making. Ann N Y Acad Sci 2022; 1514:70-81. [PMID: 35581156 DOI: 10.1111/nyas.14783] [Citation(s) in RCA: 0]
Abstract
Machine learning (ML) and artificial intelligence (AI) have had a profound impact on our lives. Domains like health and learning are naturally helped by human-AI interactions and decision making. In these areas, as ML algorithms prove their value in making important decisions, humans add their distinctive expertise and judgment on social and interpersonal issues that need to be considered in tandem with algorithmic inputs of information. Some questions naturally arise. What rules and regulations should be invoked on the employment of AI, and what protocols should be in place to evaluate available AI resources? What are the forms of effective communication and coordination with AI that best promote effective human-AI teamwork? In this review, we highlight factors that we believe are especially important in assembling and managing human-AI decision making in a group setting.
Affiliation(s)
- Omid Askarisichani
- Department of Computer Science, University of California, Santa Barbara, California, USA
- Francesco Bullo
- Department of Mechanical Engineering, University of California, Santa Barbara, California, USA; Center for Control, Dynamical Systems and Computation, University of California, Santa Barbara, California, USA
- Noah E Friedkin
- Center for Control, Dynamical Systems and Computation, University of California, Santa Barbara, California, USA; Department of Sociology, University of California, Santa Barbara, California, USA
- Ambuj K Singh
- Department of Computer Science, University of California, Santa Barbara, California, USA
44

45
Gonzalez MF, Liu W, Shirase L, Tomczak DL, Lobbe CE, Justenhoven R, Martin NR. Allying with AI? Reactions toward human-based, AI/ML-based, and augmented hiring processes. Computers in Human Behavior 2022. [DOI: 10.1016/j.chb.2022.107179] [Citation(s) in RCA: 5]
46
Caldwell S, Sweetser P, O’Donnell N, Knight MJ, Aitchison M, Gedeon T, Johnson D, Brereton M, Gallagher M, Conroy D. An Agile New Research Framework for Hybrid Human-AI Teaming: Trust, Transparency, and Transferability. ACM Transactions on Interactive Intelligent Systems 2022. [DOI: 10.1145/3514257] [Citation(s) in RCA: 0]
Abstract
We propose a new research framework by which the nascent discipline of human-AI teaming can be explored within experimental environments in preparation for transferal to real-world contexts. We examine the existing literature and unanswered research questions through the lens of an Agile approach to construct our proposed framework. Our framework aims to provide a structure for understanding the macro features of this research landscape, supporting holistic research into the acceptability of human-AI teaming to human team members and the affordances of AI team members. The framework has the potential to enhance decision-making and performance of hybrid human-AI teams. Further, our framework proposes the application of Agile methodology for research management and knowledge discovery. We propose a transferability pathway for hybrid teaming to be initially tested in a safe environment, such as a real-time strategy video game, with elements of lessons learned that can be transferred to real-world situations.
Affiliation(s)
- Tom Gedeon
- Australian National University, Australia
47
Le KBQ, Sajtos L, Fernandez KV. Employee-(ro)bot collaboration in service: an interdependence perspective. Journal of Service Management 2022. [DOI: 10.1108/josm-06-2021-0232] [Citation(s) in RCA: 0]
Abstract
Purpose: Collaboration between frontline employees (FLEs) and frontline robots (FLRs) is expected to play a vital role in service delivery in these increasingly disrupted times. Firms are facing the challenge of designing effective FLE-FLR collaborations to enhance customer experience. This paper develops a framework to explore the potential of FLE-FLR collaboration through the lens of interdependence in customer service experience and advances research that specifically focuses on employee-robot team development.
Design/methodology/approach: This paper uses a conceptual approach rooted in the interdependence theory, team design, management, robotics and automation literature.
Findings: This paper proposes and defines the Frontline employee - Frontline robot interdependence (FLERI) concept based on three structural components of an interdependent relationship - joint goal, joint workflow and joint decision-making authority. It also provides propositions that outline the potential impact of FLERI on customer experience and employee performance, and outline several boundary conditions that could enhance or inhibit those effects.
Practical implications: Managerial insights into designing an employee-robot team in service delivery are provided.
Originality/value: This study is the first to propose a novel conceptual framework (FLERI) that focuses on the notion of human-robot collaboration in service settings.
48
Jiang Y, Li X, Luo H, Yin S, Kaynak O. Quo vadis artificial intelligence? Discover Artificial Intelligence 2022; 2:4. [DOI: 10.1007/s44163-022-00022-8] [Citation(s) in RCA: 20]
Abstract
The study of artificial intelligence (AI) has been a continuous endeavor of scientists and engineers for over 65 years. The simple contention is that human-created machines can do more than just labor-intensive work; they can develop human-like intelligence. Aware of it or not, AI has penetrated our daily lives, playing novel roles in industry, healthcare, transportation, education, and many more areas that are close to the general public. AI is believed to be one of the major drivers of socio-economic change. In another respect, AI contributes to the advancement of state-of-the-art technologies in many fields of study, as a helpful tool for groundbreaking research. However, the prosperity of AI as we witness it today was not established smoothly. During the past decades, AI has struggled through historical stages with several winters. Therefore, at this juncture, to enlighten future development, it is time to discuss the past and present, and to take an outlook on AI. In this article, we discuss from a historical perspective how challenges were faced on the path of revolution of both AI tools and AI systems. In addition to the technical development of AI in the short to mid-term, thoughts and insights are also presented regarding the symbiotic relationship of AI and humans in the long run.
49
Artificial intelligence and Multidisciplinary Team Meetings; A communication challenge for radiologists' sense of agency and position as spider in a web? Eur J Radiol 2022; 155:110231. [DOI: 10.1016/j.ejrad.2022.110231] [Citation(s) in RCA: 0]
50
Unlocking the value of artificial intelligence in human resource management through AI capability framework. Human Resource Management Review 2022. [DOI: 10.1016/j.hrmr.2022.100899] [Citation(s) in RCA: 6]