1
Schmutz JB, Outland N, Kerstan S, Georganta E, Ulfert AS. AI-teaming: Redefining collaboration in the digital era. Curr Opin Psychol 2024; 58:101837. [PMID: 39024969] [DOI: 10.1016/j.copsyc.2024.101837]
Abstract
Integrating artificial intelligence (AI) into human teams, forming human-AI teams (HATs), is a rapidly evolving field. This overview examines the complexities of team constellations and dynamics, trust in AI teammates, and shared cognition within HATs. Adding an AI teammate often reduces coordination, communication, and trust. Further, trust in AI tends to decline over time due to initial overestimation of capabilities, impairing teamwork. Despite AI's potential to enhance performance in contexts like chess and medicine, HATs frequently underperform due to poor team cognition and inadequate mutual understanding. Future research must address these issues with interdisciplinary collaboration between computer science and psychology and advance robust theoretical frameworks to realize the full potential of human-AI teaming.
Affiliation(s)
- Jan B Schmutz
- Department of Psychology, University of Zurich, Switzerland
- Neal Outland
- Department of Psychology, University of Georgia, United States
- Sophie Kerstan
- Department of Management, Technology and Economics, ETH Zurich, Switzerland
- Eleni Georganta
- Faculty of Social and Behavioural Sciences, University of Amsterdam, the Netherlands
- Anna-Sophie Ulfert
- Department of Industrial Engineering & Innovation Sciences, Eindhoven University of Technology, Eindhoven, the Netherlands
2
Hauptman AI, Mallick R, Flathmann C, McNeese NJ. Human factors considerations for the context-aware design of adaptive autonomous teammates. Ergonomics 2024:1-17. [PMID: 39056233] [DOI: 10.1080/00140139.2024.2380341]
Abstract
Despite the performance gains that AI can bring to human-AI teams, AI also presents new challenges, such as the decline in humans' ability to respond to AI failures as the AI becomes more autonomous. This challenge is particularly dangerous in human-AI teams, where the AI holds a unique role in the team's success. Thus, it is imperative that researchers find solutions for designing AI teammates that consider their human teammates' needs in their adaptation logic. This study explores adaptive autonomy as a solution to these challenges. We conducted twelve contextual inquiries with professionals in two teaming contexts to understand how human teammate perceptions can be used to determine optimal autonomy levels for AI teammates. The results of this study will enable the human factors community to develop AI teammates that can enhance their team's performance while avoiding the potentially devastating impacts of their failures.
Affiliation(s)
- Rohit Mallick
- School of Computing, Clemson University, Clemson, South Carolina
- Nathan J McNeese
- School of Computing, Clemson University, Clemson, South Carolina
3
Michailovs S, Pond S, Irons J, Salmon PM, Visser TAW, Schmitt M, Stanton NA, Strickland L, Huf S, Loft S. The effect of information integration on team communication in a simulated submarine control room task. Ergonomics 2024:1-25. [PMID: 39016112] [DOI: 10.1080/00140139.2024.2375365]
Abstract
Submarine control rooms are characterised by dedicated individual roles for information types (e.g. the sonar operator processes sound energy), with individuals verbally reporting the information they receive to other team members to help resolve uncertainty in the operational environment (low information integration). We compared this work design with one that made critical information more readily available to all team members (high information integration). We used the Event Analysis of Systemic Teamwork (EAST) method to analyse task, information, and social networks for novice teams operating within a simulated submarine control room under low versus high information integration. Integration affected team member centrality (importance relative to other operators) and the nature of the information shared. Team members with greater centrality reported higher workload. Higher integration across consoles altered how team members interacted, their relative status, the information they shared, and how workload was distributed. However, overall network structures remained intact.
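The centrality comparison at the heart of an EAST-style analysis can be sketched in a few lines: given who-reports-to-whom counts, centrality scores flag the operators the network routes through. A minimal sketch in Python with networkx, using a hypothetical communication matrix rather than the study's data:

```python
# Minimal illustration of the network-centrality step in an EAST-style
# analysis: build a weighted communication network and compare operator
# centrality. The edge weights below are hypothetical, not study data.
import networkx as nx

comms = {  # (speaker, listener): number of verbal reports
    ("sonar", "opso"): 42, ("opso", "sonar"): 17,
    ("periscope", "opso"): 29, ("opso", "periscope"): 11,
    ("opso", "captain"): 35,
}

G = nx.DiGraph()
for (src, dst), w in comms.items():
    G.add_edge(src, dst, weight=w)

# Degree centrality: fraction of possible links each operator holds.
for node, c in sorted(nx.degree_centrality(G).items(), key=lambda x: -x[1]):
    print(f"{node:10s} centrality = {c:.2f}")
```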
Affiliation(s)
- Stephen Pond
- The University of Western Australia, Perth, Australia
- Jessica Irons
- Defence Science and Technology Group (Australia), Fairbairn, Australia
- Paul M Salmon
- University of the Sunshine Coast, Sippy Downs, Australia
- Megan Schmitt
- Defence Science and Technology Group (Australia), Fairbairn, Australia
- Luke Strickland
- The University of Western Australia, Perth, Australia
- Curtin University, Perth, Australia
- Sam Huf
- Defence Science and Technology Group (Australia), Fairbairn, Australia
- Shayne Loft
- The University of Western Australia, Perth, Australia
4
Alodjants AP, Tsarev DV, Avdyushina AE, Khrennikov AY, Boukhanovsky AV. Quantum-inspired modeling of distributed intelligence systems with artificial intelligent agents self-organization. Sci Rep 2024; 14:15438. [PMID: 38965278] [PMCID: PMC11224413] [DOI: 10.1038/s41598-024-65684-z]
Abstract
Distributed intelligence systems (DIS) containing natural and artificial intelligence agents (NIA and AIA) for decision making (DM) belong to a promising interdisciplinary field aimed at the digitalization of routine processes in industry, economy, management, and everyday life. In this work, we suggest a novel quantum-inspired approach to investigate the crucial features of DIS consisting of NIAs (users) and AIAs (digital assistants, or avatars). We suppose that N users and their avatars are located in N nodes of a complex avatar-avatar network. The avatars can receive information from and transmit it to each other within this network, while the users obtain information from the outside. The users are associated with their digital assistants and cannot communicate with each other directly. Depending on the meaningfulness/uselessness of the information presented by avatars, users show their attitude by making emotional binary "like"/"dislike" responses. To characterize NIA cognitive abilities in a simple DM process, we propose a procedure for mapping Russell's valence-arousal circumplex model onto an effective quantum-like two-level system. The DIS aims to maximize the average satisfaction of users via the AIAs' long-term adaptation to their users. In this regard, we examine opinion formation and social impact as a result of the collective emotional state evolution occurring in the DIS. We show that the generalized cooperativity parameters G_i, i = 1, …, N, introduced in this work play a significant role in DIS features, reflecting the users' activity in possible cooperation and responses to their avatars' suggestions. These parameters reveal how frequently AIAs and NIAs communicate with each other, accounting for the cognitive abilities of NIAs and information losses within the network. We demonstrate that the conditions for opinion formation and social impact in the DIS correspond to a second-order non-equilibrium phase transition. The transition establishes a non-vanishing average information field inherent to information diffusion and long-term avatar adaptation to their users. It occurs above the phase transition threshold, i.e. at G_i > 1, which implies small (residual) social polarization of the NIA community. Below the threshold, at weak AIA-NIA coupling (G_i ≤ 1), many uncertainties in the DIS inhibit opinion formation and social impact for the DM agents due to the suppression of information diffusion; the AIAs' self-organization within the avatar-avatar network is elucidated in this limit. To increase the users' impact, we suggest an adaptive approach that establishes a network-dependent coupling rate with their digital assistants. In this case, the mechanism of AIA control helps resolve the DM process in the presence of uncertainties resulting from the variety of users' preferences. Our findings open new perspectives in different areas where AIAs become effective teammates for humans in solving common routine problems in network organizations.
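The threshold behaviour reported here (no collective opinion below G_i = 1, a non-vanishing average field above it) is the signature of a second-order transition. As an illustration only, and not the paper's quantum-like model, a generic mean-field sketch where an average field m solves m = tanh(G·m) reproduces the same threshold:

```python
# Illustrative mean-field sketch of a second-order transition at G = 1:
# solve m = tanh(G * m) by fixed-point iteration. This is a generic toy
# model, not the paper's quantum-inspired DIS dynamics.
import numpy as np

def order_parameter(G, iters=500):
    m = 0.5  # start from a nonzero guess so the stable branch is found
    for _ in range(iters):
        m = np.tanh(G * m)
    return m

for G in [0.5, 0.9, 1.0, 1.1, 1.5, 2.0]:
    print(f"G = {G:.1f} -> m = {order_parameter(G):.3f}")
# Below G = 1 the field decays to ~0 (no opinion formation);
# above G = 1 a non-vanishing average field emerges.
```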
Affiliation(s)
- D V Tsarev
- ITMO University, St. Petersburg, Russia, 197101
- A Yu Khrennikov
- International Center for Mathematical Modeling in Physics, Engineering, Economics, and Cognitive Science, Linnaeus University, 35195, Vaxjo-Kalmar, Sweden
5
Lum HC, Phillips EK. Understanding Human-Autonomy Teams Through a Human-Animal Teaming Model. Top Cogn Sci 2024; 16:554-567. [PMID: 38015100] [DOI: 10.1111/tops.12713]
Abstract
The relationship between humans and animals is complex and influenced by multiple variables. Humans display a remarkably flexible and rich array of social competencies, demonstrating the ability to interpret, predict, and react appropriately to the behavior of others, as well as to engage others in a variety of complex social interactions. Developing computational systems that have similar social abilities is a critical step in designing robots, animated characters, and other computer agents that appear intelligent and capable in their interactions with humans and each other. Further, it will improve their ability to cooperate with people as capable partners, learn from natural instruction, and provide intuitive and engaging interactions for human partners. Thus, human-animal team analogs can be one means through which to foster veridical mental models of robots that provide a more accurate representation of their near-future capabilities. Some digital twins of human-animal teams currently exist but are often incomplete. Therefore, this article focuses on issues within and surrounding the current models of human-animal teams, previous research surrounding this connection, and the challenges when using such an analogy for human-autonomy teams.
Affiliation(s)
- Heather C Lum
- Human Systems Engineering, Fulton School of Engineering, Arizona State University
- Elizabeth K Phillips
- Human Factors and Applied Cognition Group, Department of Psychology, George Mason University
6
Bowden V, Long D, Loft S. Reducing the Costs of Automation Failure by Providing Voluntary Automation Checking Tools. Hum Factors 2024; 66:1817-1829. [PMID: 37500496] [PMCID: PMC11089824] [DOI: 10.1177/00187208231190980]
Abstract
OBJECTIVE We investigated the extent to which a voluntary-use range and bearing line (RBL) tool improves return-to-manual performance when supervising high-degree conflict detection automation in simulated air traffic control. BACKGROUND High-degree automation typically benefits routine performance and reduces workload, but can degrade return-to-manual performance if the automation fails. We reasoned that providing a voluntary checking tool (RBL) would support automation failure detection, but also that automation-induced complacency could extend to nonoptimal use of such tools. METHOD Participants were assigned to one of three conditions, in which conflict detection was performed either manually with RBLs available to use (Manual + RBL), automatically with RBLs (Auto + RBL), or automatically without RBLs (Auto). Voluntary-use RBLs allowed participants to reliably check aircraft conflict status. The automation failed once. RESULTS RBLs improved automation failure detection: participants intervened faster and made fewer false alarms when provided with RBLs than when not (Auto + RBL vs Auto). However, a cost of high-degree automation remained, with participants slower to intervene after the automation failure than after an identical manual conflict event (Auto + RBL vs Manual + RBL). There was no difference in RBL engagement time between the Auto + RBL and Manual + RBL conditions, suggesting participants noticed the conflict event at the same time. CONCLUSIONS The cost of automation may have arisen from participants' reconciling which information to trust: the automation (which indicated no conflict and had been perfectly reliable prior to failing) or the RBL (which indicated a conflict). APPLICATIONS Providing a mechanism for checking the validity of high-degree automation may facilitate human supervision of automation.
Affiliation(s)
- Vanessa Bowden
- School of Psychological Science, The University of Western Australia, Crawley, Australia
- Dale Long
- School of Psychological Science, The University of Western Australia, Crawley, Australia
- Shayne Loft
- School of Psychological Science, The University of Western Australia, Crawley, Australia
7
Ribeiro JG, Henriques LM, Colcher S, Duarte JC, Melo FS, Milidiú RL, Sardinha A. HOTSPOT: An ad hoc teamwork platform for mixed human-robot teams. PLoS One 2024; 19:e0305705. [PMID: 38941305] [PMCID: PMC11213323] [DOI: 10.1371/journal.pone.0305705]
Abstract
Ad hoc teamwork is a research topic in multi-agent systems whereby an agent (the "ad hoc agent") must successfully collaborate with a set of unknown agents (the "teammates") without any prior coordination or communication protocol. However, research on ad hoc teamwork has focused predominantly on agent-only teams rather than agent-human teams, which we believe are an exciting research avenue with enormous application potential for human-robot teams. This paper taps into this potential by proposing HOTSPOT, the first framework for ad hoc teamwork in human-robot teams. Our framework comprises two main modules, addressing the two key challenges in the interaction between a robot acting as the ad hoc agent and human teammates: first, a decision-theoretic module responsible for all task-related decision-making (task identification, teammate identification, and planning); second, a communication module that uses natural language processing to parse all communication between the robot and the human. To evaluate our framework, we use a task in which a mobile robot and a human cooperatively collect objects in an open space, illustrating the main features of our framework in a real-world task.
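The task-identification step of a decision-theoretic module like the one described can be illustrated as Bayesian belief updating over candidate tasks from observed teammate behaviour; the sketch below is a generic illustration with assumed likelihoods, not HOTSPOT's actual models:

```python
# Sketch of Bayesian task identification for an ad hoc agent: maintain a
# belief over candidate tasks and update it from observed teammate actions.
# Likelihood numbers are assumptions for illustration, not from HOTSPOT.
import numpy as np

tasks = ["collect_red", "collect_blue"]
belief = np.array([0.5, 0.5])  # uniform prior over tasks

# P(observed human action | task): e.g., the human moves toward a red object.
likelihood = {
    "move_to_red":  np.array([0.8, 0.2]),
    "move_to_blue": np.array([0.2, 0.8]),
}

for action in ["move_to_red", "move_to_red", "move_to_blue"]:
    belief *= likelihood[action]
    belief /= belief.sum()          # renormalize the posterior
    print(action, dict(zip(tasks, belief.round(3))))
```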
Affiliation(s)
- João G. Ribeiro
- INESC-ID, Instituto Superior Técnico, Universidade de Lisboa, Lisboa, Portugal
- Luis Müller Henriques
- Departamento de Informática, Pontifícia Universidade Católica do Rio de Janeiro, Rio de Janeiro, Brasil
- Sérgio Colcher
- Departamento de Informática, Pontifícia Universidade Católica do Rio de Janeiro, Rio de Janeiro, Brasil
- Julio Cesar Duarte
- Seção de Ensino de Engenharia de Computação, Instituto Militar de Engenharia, Rio de Janeiro, Brasil
- Francisco S. Melo
- INESC-ID, Instituto Superior Técnico, Universidade de Lisboa, Lisboa, Portugal
- Ruy Luiz Milidiú
- Departamento de Informática, Pontifícia Universidade Católica do Rio de Janeiro, Rio de Janeiro, Brasil
- Alberto Sardinha
- INESC-ID, Instituto Superior Técnico, Universidade de Lisboa, Lisboa, Portugal
- Departamento de Informática, Pontifícia Universidade Católica do Rio de Janeiro, Rio de Janeiro, Brasil
8
Humr S, Canan M. Intermediate Judgments and Trust in Artificial Intelligence-Supported Decision-Making. Entropy (Basel) 2024; 26:500. [PMID: 38920509] [PMCID: PMC11202881] [DOI: 10.3390/e26060500]
Abstract
Human decision-making is increasingly supported by artificial intelligence (AI) systems. From medical imaging analysis to self-driving vehicles, AI systems are becoming organically embedded in a host of different technologies. However, incorporating such advice into decision-making entails a human rationalization of AI outputs for supporting beneficial outcomes. Recent research suggests that intermediate judgments in the first stage of a decision process can interfere with decisions in subsequent stages. For this reason, we extend this research to AI-supported decision-making to investigate how intermediate judgments on AI-provided advice may influence subsequent decisions. In an online experiment (N = 192), we found a consistent bolstering effect in trust for those who made intermediate judgments over those who did not. Furthermore, violations of total probability were observed at all timing intervals throughout the study. We further analyzed the results by demonstrating how quantum probability theory can model these types of behaviors in human-AI decision-making and improve the understanding of the interaction dynamics at the confluence of human factors and information features.
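The reported violations of total probability can be made concrete: classically, the unconditioned probability of a final decision B must satisfy P(B) = P(B|A)P(A) + P(B|¬A)P(¬A) for an intermediate judgment A, and quantum models absorb any measured gap into an interference term. A toy check with made-up numbers:

```python
# Toy check of the law of total probability for a final decision B after an
# intermediate judgment A about AI advice. All numbers are illustrative only.
p_A = 0.6                 # P(judge AI advice reliable)
p_B_given_A = 0.8         # P(accept advice | judged reliable)
p_B_given_notA = 0.3      # P(accept advice | judged unreliable)

p_B_classical = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)

p_B_measured = 0.55       # hypothetical rate when no intermediate judgment is made

# Quantum probability models absorb the gap into an interference term.
interference = p_B_measured - p_B_classical
print(f"total-probability prediction: {p_B_classical:.2f}")
print(f"measured without judgment:    {p_B_measured:.2f}")
print(f"interference term:            {interference:+.2f}")
```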
Affiliation(s)
- Scott Humr
- Department of Information Sciences, Naval Postgraduate School, Monterey, CA 93943, USA
9
Li M, Erickson IM. It's Not Only What You Say, But Also How You Say It: Machine Learning Approach to Estimate Trust from Conversation. Hum Factors 2024; 66:1724-1741. [PMID: 37116009] [PMCID: PMC11044523] [DOI: 10.1177/00187208231166624]
Abstract
OBJECTIVE The objective of this study was to estimate trust from conversations using both lexical and acoustic data. BACKGROUND As NASA moves to long-duration space exploration operations, the increasing need for cooperation between humans and virtual agents requires real-time trust estimation by virtual agents. Measuring trust through conversation is a novel and unintrusive approach. METHOD A 2 (reliability) × 2 (cycles) × 3 (events) within-subject study with habitat system maintenance was designed to elicit various levels of trust in a conversational agent. Participants had trust-related conversations with the conversational agent at the end of each decision-making task. To estimate trust, subjective trust ratings were predicted using machine learning models trained on three types of conversational features (i.e., lexical, acoustic, and combined). After training, model explanation was performed using variable importance and partial dependence plots. RESULTS Results showed that a random forest algorithm, trained using the combined lexical and acoustic features, predicted trust in the conversational agent most accurately (adjusted R² = 0.71). The most important predictors were a combination of lexical and acoustic cues: average sentiment considering valence shifters, the mean of formants, and Mel-frequency cepstral coefficients (MFCC). These conversational features were identified as partial mediators predicting people's trust. CONCLUSION Precise trust estimation from conversation requires both lexical and acoustic cues. APPLICATION These results show the possibility of using conversational data to measure trust, and potentially other dynamic mental states, unobtrusively and dynamically.
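The pipeline described, a random forest trained on combined lexical and acoustic features and interpreted via variable importance, can be sketched as follows; the feature names and data are placeholders, not the study's corpus:

```python
# Sketch of the trust-estimation pipeline: a random forest regressor on
# combined lexical + acoustic features, then variable importance.
# Data here are random placeholders, not the study's corpus.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
features = ["sentiment_valence", "formant_mean", "mfcc_1", "mfcc_2"]
X = rng.normal(size=(200, len(features)))     # one row per conversation
y = 50 + 10 * X[:, 0] + 5 * X[:, 1] + rng.normal(scale=3, size=200)  # trust rating

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)

# Rank features by importance, as a variable-importance analysis would.
for name, imp in sorted(zip(features, model.feature_importances_),
                        key=lambda p: -p[1]):
    print(f"{name:18s} importance = {imp:.2f}")
```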
Affiliation(s)
- Mengyao Li
- Department of Industrial and Systems Engineering, University of Wisconsin-Madison, Madison, Wisconsin, USA
- Isabel M Erickson
- Department of Industrial and Systems Engineering, University of Wisconsin-Madison, Madison, Wisconsin, USA
10
Gall J, Stanton CJ. Low-rank human-like agents are trusted more and blamed less in human-autonomy teaming. Front Artif Intell 2024; 7:1273350. [PMID: 38742120] [PMCID: PMC11089226] [DOI: 10.3389/frai.2024.1273350]
Abstract
If humans are to team with artificial teammates, factors that influence trust and shared accountability must be considered when designing agents. This study investigates the influence of anthropomorphism, rank, decision cost, and task difficulty on trust in human-autonomy teams (HATs) and on how blame is apportioned if shared tasks fail. Participants (N = 31) completed repeated trials with an artificial teammate using a low-fidelity variation of an air-traffic control game. Using a within-subject design, we manipulated anthropomorphism (human-like or machine-like), the military rank of artificial teammates using three-star (superior), two-star (peer), or one-star (subordinate) agents, the perceived payload of vehicles with people or supplies onboard, and task difficulty with easy or hard missions. A behavioural measure of trust was inferred when participants accepted agent recommendations, and a measure of no trust when recommendations were rejected or ignored. We analysed the trust data using binomial logistic regression. After each trial, blame was apportioned using a 2-item scale and analysed using a one-way repeated measures ANOVA. A post-experiment questionnaire obtained participants' power distance orientation using a seven-item scale. Possible power-related effects on trust and blame apportioning are discussed. Our findings suggest that artificial agents with higher levels of anthropomorphism and lower levels of rank increased trust and shared accountability, with human team members accepting more blame for team failures.
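The binomial logistic regression used for the trust measure follows a standard recipe; a minimal sketch on simulated accept/reject data (the predictor set and effect sizes are assumptions, not the study's results):

```python
# Sketch of a binomial logistic regression of trust (accept = 1, reject = 0)
# on anthropomorphism and agent rank. Data are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "humanlike": rng.integers(0, 2, n),   # 1 = human-like agent
    "rank": rng.choice([1, 2, 3], n),     # agent stars
})
logit_p = -0.2 + 0.8 * df["humanlike"] - 0.5 * (df["rank"] - 2)
df["accept"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("accept ~ humanlike + rank", data=df).fit(disp=False)
print(model.summary().tables[1])   # coefficients on the log-odds scale
```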
Affiliation(s)
- Jody Gall
- The MARCS Institute for Brain Behaviour and Development, Western Sydney University, Parramatta, NSW, Australia
- Christopher J. Stanton
- The MARCS Institute for Brain Behaviour and Development, Western Sydney University, Parramatta, NSW, Australia
11
Hoesterey S, Onnasch L. A New Experimental Paradigm to Manipulate Risk in Human-Automation Research. Hum Factors 2024; 66:1170-1185. [PMID: 36257770] [PMCID: PMC10903125] [DOI: 10.1177/00187208221133878]
Abstract
OBJECTIVE Two studies serve as a manipulation check of a new experimental multi-task paradigm that can be applied to human-automation research (Virtual Reality Testbed for Risk and Automation Studies; ViRTRAS), in which a subjectively experienceable risk can be manipulated as part of a virtual reality environment. BACKGROUND Risk has been postulated as an important contextual factor affecting human-automation interaction. However, experimental evidence is scarce due to the difficulty operationalizing risk in an ethical way. In the new paradigm, risk is varied by the altitude at which participants carry out the task, including the possibility of virtually falling in case of a mistake. METHOD Key components of the paradigm were used to investigate participants' risk perception in a low (0.5 m) and high altitude (70 m) using subjective self-reports and objective behavioral measures. RESULTS In the high-altitude condition risk perception was significantly higher with medium to large effect sizes. In addition, results of the behavioral measures reveal that participants habituated with length of exposure. However, this habituation seems to occur similarly in both altitude conditions. CONCLUSION The manipulation checks were successful. The new paradigm is a promising tool for automation research. It incorporates the contextual factor of risk and creates a situation which is more comparable to what real-life operators experience. Additionally, it meets the same requirements of other multi-task environments in human-automation research. APPLICATION The new paradigm provides the basis to vary the contextual factor of risk in human-automation research, which has previously been either neglected or operationalized in an arguably inferior way.
12
Kim JG, Gonzalo JD, Chen I, Vo A, Lupi C, Hyderi A, Haidet P, DeWaters A, Blatt B, Holmboe E, Thompson LR, Jimenez J, Madigosky W, Chung PJ. How a Team Effectiveness Approach to Health Systems Science Can Illuminate Undergraduate Medical Education Outcomes. Acad Med 2024; 99:374-380. [PMID: 38166319] [DOI: 10.1097/acm.0000000000005619]
Abstract
Health care delivery requires physicians to operate in teams to successfully navigate complexity in caring for patients and communities. The importance of training physicians early in core concepts of working in teams (i.e., "teaming") has long been established. Over the past decade, however, little evidence of team effectiveness training for medical students has been available. The recent introduction of health systems science as a third pillar of medical education provides an opportunity to teach and prepare students to work in teams and achieve related core competencies across the medical education continuum and health care delivery settings. Although educators and health care system leaders have emphasized the teaching and learning of team-based care, conceptual models and evidence that inform effective teaming within all aspects of undergraduate medical education (including classroom, clinical, and community settings) are needed to advance the science regarding learning and working in teams. Anchoring teaming through the core foundational theory of team effectiveness and its operational components could catalyze the empirical study of medical student teams, uncover modifiable factors that lead to the evidence for improved student learning, and improve the link among competency-based assessments between undergraduate medical education and graduate medical education. In this article, authors articulate several implications for medical schools through 5 conceptual areas: admissions, the design and teaching of team effectiveness in health systems science curricula, the related competency-based assessments, and course and program evaluations. The authors then discuss the relevance of the measurable components and intended outcomes to team effectiveness in undergraduate medical education as critical to successfully prepare students for teaming in clerkships and eventually residency and clinical practice.
13
Schelble BG, Lopez J, Textor C, Zhang R, McNeese NJ, Pak R, Freeman G. Towards Ethical AI: Empirically Investigating Dimensions of AI Ethics, Trust Repair, and Performance in Human-AI Teaming. Hum Factors 2024; 66:1037-1055. [PMID: 35938319] [DOI: 10.1177/00187208221116952]
Abstract
OBJECTIVE Determining the efficacy of two trust repair strategies (apology and denial) for trust violations of an ethical nature by an autonomous teammate. BACKGROUND While ethics in human-AI interaction is extensively studied, little research has investigated how decisions with ethical implications impact trust and performance within human-AI teams and their subsequent repair. METHOD Forty teams of two participants and one autonomous teammate completed three team missions within a synthetic task environment. The autonomous teammate made an ethical or unethical action during each mission, followed by an apology or denial. Measures of individual team trust, autonomous teammate trust, human teammate trust, perceived autonomous teammate ethicality, and team performance were taken. RESULTS Teams with unethical autonomous teammates had significantly lower trust in the team and trust in the autonomous teammate. Unethical autonomous teammates were also perceived as substantially more unethical. Neither trust repair strategy effectively restored trust after an ethical violation, and autonomous teammate ethicality was not related to the team score, but unethical autonomous teammates did have shorter times. CONCLUSION Ethical violations significantly harm trust in the overall team and autonomous teammate but do not negatively impact team score. However, current trust repair strategies like apologies and denials appear ineffective in restoring trust after this type of violation. APPLICATION This research highlights the need to develop trust repair strategies specific to human-AI teams and trust violations of an ethical nature.
Affiliation(s)
- Beau G Schelble
- Human-Centered Computing, Clemson University, Clemson, SC, USA
- Jeremy Lopez
- Department of Psychology, Clemson University, Clemson, SC, USA
- Claire Textor
- Department of Psychology, Clemson University, Clemson, SC, USA
- Rui Zhang
- Human-Centered Computing, Clemson University, Clemson, SC, USA
- Richard Pak
- Department of Psychology, Clemson University, Clemson, SC, USA
- Guo Freeman
- Human-Centered Computing, Clemson University, Clemson, SC, USA
14
Nguyen TN, Gonzalez C. Minimap: An interactive dynamic decision making game for search and rescue missions. Behav Res Methods 2024; 56:2311-2332. [PMID: 37553537] [DOI: 10.3758/s13428-023-02149-7]
Abstract
Many aspects of humans' dynamic decision-making (DDM) behaviors have been studied with computer-simulated games called microworlds. However, most microworlds emphasize only specific elements of DDM and are inflexible in generating a variety of environments and experimental designs. Moreover, despite the ubiquity of gridworld games in Artificial Intelligence (AI) research, few tools exist to aid in the development of browser-based gridworld environments for studying the dynamics of human decision-making behavior. To address these issues, we introduce Minimap, a dynamic interactive game for examining DDM in search and rescue missions, which incorporates all the essential characteristics of DDM and offers a wide range of flexibility in experimental setups and the creation of experimental scenarios. Minimap specifically allows customization of dynamics, complexity, opaqueness, and dynamic complexity when designing a DDM task. Minimap also enables researchers to visualize and replay recorded human trajectories for the analysis of human behavior. To demonstrate the utility of Minimap, we present a behavioral experiment that examines the impact of different degrees of structural complexity, coupled with the opaqueness of the environment, on human decision-making performance under time constraints. We discuss the potential applications of Minimap in improving productivity and enabling transparent replications in human-behavior and human-AI teaming research. We made Minimap an open-source tool, freely available at https://github.com/DDM-Lab/MinimapInteractiveDDMGame.
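For a flavour of the kind of step loop such gridworld microworlds implement, the sketch below runs a generic search-and-rescue gridworld with a move budget; it is not Minimap's actual API or map format (see the linked repository for that):

```python
# Generic sketch of a gridworld search-and-rescue step loop, in the spirit of
# microworlds like Minimap. This is not Minimap's actual API or map format.
import random

GRID = 10
victims = {(random.randrange(GRID), random.randrange(GRID)) for _ in range(5)}
pos, rescued, budget = (0, 0), 0, 60   # time constraint: 60 moves

while budget > 0 and victims:
    # Greedy policy: move one step toward the nearest known victim.
    target = min(victims, key=lambda v: abs(v[0] - pos[0]) + abs(v[1] - pos[1]))
    dx = (target[0] > pos[0]) - (target[0] < pos[0])
    dy = (target[1] > pos[1]) - (target[1] < pos[1])
    pos = (pos[0] + dx, pos[1] + dy)
    budget -= 1
    if pos in victims:                 # rescue on arrival
        victims.discard(pos)
        rescued += 1

print(f"rescued {rescued} victims with {budget} moves left")
```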
Affiliation(s)
- Thuy Ngoc Nguyen
- Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh, PA, 15213, USA
- Cleotilde Gonzalez
- Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh, PA, 15213, USA
15
Harrison JL, Zhou S, Scalia MJ, Grimm DAP, Demir M, McNeese NJ, Cooke NJ, Gorman JC. Communication Strategies in Human-Autonomy Teams During Technological Failures. Hum Factors 2024:187208231222119. [PMID: 38192266] [DOI: 10.1177/00187208231222119]
Abstract
OBJECTIVE This study examines low-, medium-, and high-performing Human-Autonomy Teams' (HATs') communication strategies during various technological failures that impact routine communication strategies to adapt to the task environment. BACKGROUND Teams must adapt their communication strategies during dynamic tasks, where more successful teams make more substantial adaptations. Adaptations in communication strategies may explain how successful HATs overcome technological failures. Further, technological failures of variable severity may alter communication strategies of HATs at different performance levels in their attempts to overcome each failure. METHOD HATs in a Remotely Piloted Aircraft System-Synthetic Task Environment (RPAS-STE), involving three team members, were tasked with photographing targets. Each triad had two randomly assigned participants in navigator and photographer roles, teaming with an experimenter who simulated an AI pilot in a Wizard of Oz paradigm. Teams encountered two different technological failures, automation and autonomy, where autonomy failures were more challenging to overcome. RESULTS High-performing HATs calibrated their communication strategy to the complexity of the different failures better than medium- and low-performing teams. Further, HATs adjusted their communication strategies over time. Finally, only the most severe failures required teams to increase the efficiency of their communication. CONCLUSION HAT effectiveness under degraded conditions depends on the type of communication strategies enacted by the team. Previous findings from studies of all-human teams apply here; however, novel results suggest information requests are particularly important to HAT success during failures. APPLICATION Understanding the communication strategies of HATs under degraded conditions can inform training protocols to help HATs overcome failures.
16
van de Merwe K. Agent Transparency, Situation Awareness, Mental Workload, and Operator Performance: A Systematic Literature Review. Hum Factors 2024; 66:180-208. [PMID: 35274577] [PMCID: PMC10756021] [DOI: 10.1177/00187208221077804]
Abstract
OBJECTIVE In this review, we investigate the relationship between agent transparency, Situation Awareness, mental workload, and operator performance in safety-critical domains. BACKGROUND The advancement of highly sophisticated automation across safety-critical domains poses a challenge for effective human oversight. Automation transparency is a design principle that could support humans by making the automation's inner workings observable (i.e., "seeing-into"). However, experimental support for this has not been systematically documented to date. METHOD Based on the PRISMA method, a broad and systematic search of the literature was performed, focusing on identifying empirical research investigating the effect of transparency on central Human Factors variables. RESULTS Our final sample consisted of 17 experimental studies that investigated transparency in a controlled setting. The studies typically employed three human-automation interaction types: responding to agent-generated proposals, supervisory control of agents, and monitoring only. There is an overall trend in the data pointing towards a beneficial effect of transparency. However, the data reveal variations in Situation Awareness, mental workload, and operator performance for specific tasks, agent types, and levels of integration of transparency information in primary task displays. CONCLUSION Our data suggest a promising effect of automation transparency on Situation Awareness and operator performance, without the cost of added mental workload, for instances where humans respond to agent-generated proposals and where humans have a supervisory role. APPLICATION Strategies to improve human performance when interacting with intelligent agents should focus on allowing humans to see into the agent's information processing stages, considering the integration of information in existing Human Machine Interface solutions.
Affiliation(s)
- Koen van de Merwe
- Group Research and Development, DNV, Veritasveien 1, Høvik, Oslo 1363, Norway
17
Patton CE, Wickens CD, Smith CAP, Noble KM, Clegg BA. Supporting detection of hostile intentions: automated assistance in a dynamic decision-making context. Cogn Res Princ Implic 2023; 8:69. [PMID: 37980697] [PMCID: PMC10657914] [DOI: 10.1186/s41235-023-00519-5]
Abstract
In a dynamic decision-making task simulating basic ship movements, participants attempted, through a series of actions, to elicit and identify which one of six other ships was exhibiting either of two hostile behaviors. A high-performing, although imperfect, automated attention aid was introduced. It visually highlighted the ship categorized by an algorithm as the most likely to be hostile. Half of participants also received automation transparency in the form of a statement about why the hostile ship was highlighted. Results indicated that while the aid's advice was often complied with and hence led to higher accuracy with a shorter response time, detection was still suboptimal. Additionally, transparency had limited impacts on all aspects of performance. Implications for detection of hostile intentions and the challenges of supporting dynamic decision making are discussed.
Affiliation(s)
- Colleen E Patton
- Department of Psychology, Colorado State University, Fort Collins, USA
- C A P Smith
- Department of Psychology, Colorado State University, Fort Collins, USA
- Kayla M Noble
- Department of Psychology, Colorado State University, Fort Collins, USA
18
Wallinheimo AS, Evans SL, Davitti E. Training in new forms of human-AI interaction improves complex working memory and switching skills of language professionals. Front Artif Intell 2023; 6:1253940. [PMID: 38045765] [PMCID: PMC10690806] [DOI: 10.3389/frai.2023.1253940]
Abstract
AI-related technologies used in the language industry, including automatic speech recognition (ASR) and machine translation (MT), are designed to improve human efficiency. However, humans are still in the loop for accuracy and quality, creating a working environment based on Human-AI Interaction (HAII). Very little is known about these newly created working environments and their effects on cognition. The present study focused on a novel practice, interlingual respeaking (IRSP), where real-time subtitles in another language are created through the interaction between a human and ASR software. To this end, we set up an experiment that included a purpose-made training course on IRSP over 5 weeks, investigating its effects on cognition, focusing on executive functioning (EF) and working memory (WM). We compared the cognitive performance of 51 language professionals before and after the course. Our variables were reading span (a complex WM measure), switching skills, and sustained attention. The IRSP training course improved complex WM and switching skills but not sustained attention. However, the participants were slower after the training, indicating increased vigilance on the sustained attention tasks. Finally, complex WM was confirmed as the primary competence in IRSP. The reasons for and implications of these findings are discussed.
Affiliation(s)
- Anna-Stiina Wallinheimo
- Centre for Translation Studies, Faculty of Arts and Social Sciences (FASS), University of Surrey, Guildford, United Kingdom
- School of Psychology, Faculty of Health and Medical Sciences (FHMS), University of Surrey, Guildford, United Kingdom
- Simon L. Evans
- School of Psychology, Faculty of Health and Medical Sciences (FHMS), University of Surrey, Guildford, United Kingdom
- Elena Davitti
- Centre for Translation Studies, Faculty of Arts and Social Sciences (FASS), University of Surrey, Guildford, United Kingdom
19
Bocklisch F, Huchler N. Humans and cyber-physical systems as teammates? Characteristics and applicability of the human-machine-teaming concept in intelligent manufacturing. Front Artif Intell 2023; 6:1247755. [PMID: 38028669] [PMCID: PMC10655019] [DOI: 10.3389/frai.2023.1247755]
Abstract
The paper explores and comments on the theoretical concept of human-machine-teaming in intelligent manufacturing. Industrial production is an important area of work applications and should be developed toward a more anthropocentric Industry 4.0/5.0. Teaming is used as a design metaphor for the human-centered integration of workers and complex cyber-physical production systems using artificial intelligence. Concrete algorithmic solutions for technical processes should be based on theoretical concepts. A combination of a scoping literature review and commentary was used to identify key teaming characteristics applicable to the work environment addressed. From the body of literature, five criteria were selected and commented on. Two characteristics seemed particularly promising to guide the development of human-centered artificial intelligence and create tangible benefits in the mid-term: complementarity and shared knowledge/goals. These criteria are outlined with two industrial examples: human-robot collaboration in assembly and intelligent decision support in thermal spraying. The main objective of the paper is to contribute to the discourse on human-centered artificial intelligence by exploring the theoretical concept of human-machine-teaming from a human-oriented perspective. Future research should focus on the empirical implementation and evaluation of teaming characteristics from different transdisciplinary viewpoints.
Affiliation(s)
- Franziska Bocklisch
- Department of Mechanical Engineering, Chemnitz University of Technology, Chemnitz, Germany
- Fraunhofer Institute for Machine Tools and Forming Technology, Chemnitz, Germany
20
Naikar N, Brady A, Moy G, Kwok HW. Designing human-AI systems for complex settings: ideas from distributed, joint, and self-organising perspectives of sociotechnical systems and cognitive work analysis. Ergonomics 2023; 66:1669-1694. [PMID: 38018437] [DOI: 10.1080/00140139.2023.2281898]
Abstract
Real-world events like the COVID-19 pandemic and wildfires in Australia, Europe, and America remind us that the demands of complex operational settings are met by multiple, distributed teams interwoven with a large array of artefacts and networked technologies, including automation. Yet, current models of human-automation interaction, including those intended for human-machine teaming or collaboration, tend to be dyadic in nature, assuming individual humans interacting with individual machines. Given the opportunities and challenges of emerging artificial intelligence (AI) technologies, and the growing interest of many organisations in utilising these technologies in complex operations, we suggest turning to contemporary perspectives of sociotechnical systems for a way forward. We show how ideas of distributed cognition, joint cognitive systems, and self-organisation lead to specific concepts for designing human-AI systems, and propose that design frameworks informed by contemporary views of complex work performance are needed. We discuss cognitive work analysis as an example.
Affiliation(s)
- Glenn Moy
- Defence Science and Technology Group, Australia
21
Berretta S, Tausch A, Ontrup G, Gilles B, Peifer C, Kluge A. Defining human-AI teaming the human-centered way: a scoping review and network analysis. Front Artif Intell 2023; 6:1250725. [PMID: 37841234] [PMCID: PMC10570436] [DOI: 10.3389/frai.2023.1250725]
Abstract
Introduction With the advancement of technology and the increasing utilization of AI, the nature of human work is evolving, requiring individuals to collaborate not only with other humans but also with AI technologies to accomplish complex goals. This requires a shift in perspective from technology-driven questions to a human-centered research and design agenda putting people and evolving teams in the center of attention. A socio-technical approach is needed to view AI as more than just a technological tool, but as a team member, leading to the emergence of human-AI teaming (HAIT). In this new form of work, humans and AI synergistically combine their respective capabilities to accomplish shared goals. Methods The aim of our work is to uncover current research streams on HAIT and derive a unified understanding of the construct through a bibliometric network analysis, a scoping review, and the synthesis of a definition from a socio-technical point of view. In addition, antecedents and outcomes examined in the literature are extracted to guide future research in this field. Results Through the network analysis, five clusters with different research focuses on HAIT were identified. These clusters revolve around (1) human and (2) task-dependent variables, (3) AI explainability, (4) AI-driven robotic systems, and (5) the effects of AI performance on human perception. Despite these diverse research focuses, the current body of literature is predominantly driven by a technology-centric, engineering perspective, and no consistent definition or terminology of HAIT has emerged to date. Discussion We propose a unifying definition combining a human-centered and team-oriented perspective and summarize what future research on HAIT still needs to address. This work thereby contributes to the aim of this Frontiers Research Topic: building a theoretical and conceptual basis for human work with AI systems.
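The clustering step of such a bibliometric network analysis is typically keyword co-occurrence followed by community detection; a minimal sketch with networkx, using hypothetical keyword lists rather than the review's corpus:

```python
# Sketch of keyword co-occurrence clustering, the core of a bibliometric
# network analysis. The keyword lists below are hypothetical examples.
import itertools
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

papers = [
    ["trust", "explainability", "human-AI teaming"],
    ["trust", "robotic systems", "human-AI teaming"],
    ["task interdependence", "team performance", "human-AI teaming"],
    ["explainability", "transparency"],
]

G = nx.Graph()
for kws in papers:
    for a, b in itertools.combinations(kws, 2):   # co-occurrence edges
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# Communities in the co-occurrence graph play the role of research clusters.
for i, community in enumerate(greedy_modularity_communities(G, weight="weight"), 1):
    print(f"cluster {i}: {sorted(community)}")
```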
Affiliation(s)
- Sophie Berretta
- Department of Psychology, Organizational, and Business Psychology, Ruhr University Bochum, Bochum, Germany
- Alina Tausch
- Department of Psychology, Organizational, and Business Psychology, Ruhr University Bochum, Bochum, Germany
- Greta Ontrup
- Department of Psychology, Organizational, and Business Psychology, Ruhr University Bochum, Bochum, Germany
- Björn Gilles
- Department of Psychology, Organizational, and Business Psychology, Ruhr University Bochum, Bochum, Germany
- Corinna Peifer
- Department of Psychology I, University of Lübeck, Lübeck, Germany
- Annette Kluge
- Department of Psychology, Organizational, and Business Psychology, Ruhr University Bochum, Bochum, Germany
22
Hagemann V, Rieth M, Suresh A, Kirchner F. Human-AI teams-Challenges for a team-centered AI at work. Front Artif Intell 2023; 6:1252897. [PMID: 37829660] [PMCID: PMC10565103] [DOI: 10.3389/frai.2023.1252897]
Abstract
As part of the Special Issue topic "Human-Centered AI at Work: Common Ground in Theories and Methods," we present a perspective article that looks at human-AI teamwork from a team-centered AI perspective, i.e., we highlight important design aspects that the technology needs to fulfill in order to be accepted by humans and to be fully utilized in the role of a team member. Drawing from the model of an idealized teamwork process, we discuss the teamwork requirements for successful human-AI teaming in interdependent and complex work domains, including, for example, responsiveness, situation awareness, and flexible decision-making. We emphasize the need for team-centered AI that aligns goals, communication, and decision making with humans, and outline the requirements for such team-centered AI from a technical perspective, such as cognitive competence, reinforcement learning, and semantic communication. In doing so, we highlight the challenges and open questions associated with its implementation that need to be solved in order to enable effective human-AI teaming.
Affiliation(s)
- Vera Hagemann
- Business Psychology and Human Resources, Faculty of Business Studies and Economics, University of Bremen, Bremen, Germany
- Michèle Rieth
- Business Psychology and Human Resources, Faculty of Business Studies and Economics, University of Bremen, Bremen, Germany
- Amrita Suresh
- Robotics Research Group, Faculty of Mathematics and Computer Science, University of Bremen, Bremen, Germany
- Frank Kirchner
- Robotics Research Group, Faculty of Mathematics and Computer Science, University of Bremen, Bremen, Germany
- DFKI GmbH, Robotics Innovation Center, Bremen, Germany
23
Bienefeld N, Kolbe M, Camen G, Huser D, Buehler PK. Human-AI teaming: leveraging transactive memory and speaking up for enhanced team effectiveness. Front Psychol 2023; 14:1208019. [PMID: 37599773] [PMCID: PMC10436524] [DOI: 10.3389/fpsyg.2023.1208019]
Abstract
In this prospective observational study, we investigate the role of transactive memory and speaking up in human-AI teams comprising 180 intensive care (ICU) physicians and nurses working with AI in a simulated clinical environment. Our findings indicate that interactions with AI agents differ significantly from human interactions, as accessing information from AI agents is positively linked to a team's ability to generate novel hypotheses and demonstrate speaking-up behavior, but only in higher-performing teams. Conversely, accessing information from human team members is negatively associated with these aspects, regardless of team performance. This study is a valuable contribution to the expanding field of research on human-AI teams and team science in general, as it emphasizes the necessity of incorporating AI agents as knowledge sources in a team's transactive memory system, as well as highlighting their role as catalysts for speaking up. Practical implications include suggestions for the design of future AI systems and human-AI team training in healthcare and beyond.
Affiliation(s)
- Nadine Bienefeld
- Work and Organizational Psychology, Department of Management, Technology, and Economics, ETH Zürich, Zurich, Switzerland
- Michaela Kolbe
- Institute of Intensive Care Medicine, University Hospital Zurich, Zurich, Switzerland
- Giovanni Camen
- Institute of Intensive Care Medicine, University Hospital Zurich, Zurich, Switzerland
- Dominic Huser
- Institute of Intensive Care Medicine, University Hospital Zurich, Zurich, Switzerland
- Philipp Karl Buehler
- Institute of Intensive Care Medicine, University Hospital Zurich, Zurich, Switzerland
- Department of Intensive Care Medicine, Cantonal Hospital Winterthur, Winterthur, Switzerland
24
Gupta P, Nguyen TN, Gonzalez C, Woolley AW. Fostering Collective Intelligence in Human-AI Collaboration: Laying the Groundwork for COHUMAIN. Top Cogn Sci 2023. [PMID: 37384870] [DOI: 10.1111/tops.12679]
Abstract
Artificial Intelligence (AI) powered machines are increasingly mediating our work and many of our managerial, economic, and cultural interactions. While technology enhances individual capability in many ways, how do we know that the sociotechnical system as a whole, consisting of a complex web of hundreds of human-machine interactions, is exhibiting collective intelligence? Research on human-machine interactions has been conducted within different disciplinary silos, resulting in social science models that underestimate technology and vice versa. Bringing together these different perspectives and methods at this juncture is critical. To truly advance our understanding of this important and quickly evolving area, we need vehicles that help research connect across disciplinary boundaries. This paper advocates for establishing an interdisciplinary research domain, Collective Human-Machine Intelligence (COHUMAIN). It outlines a research agenda for a holistic approach to designing and developing the dynamics of sociotechnical systems. To illustrate the kind of approach we envision in this domain, we describe recent work on a sociocognitive architecture, the transactive systems model of collective intelligence, that articulates the critical processes underlying the emergence and maintenance of collective intelligence, and extend it to human-AI systems. We connect this with synergistic work on a compatible cognitive architecture, instance-based learning theory, and apply it to the design of AI agents that collaborate with humans. We present this work as a call to researchers working on related questions to not only engage with our proposal but also develop their own sociocognitive architectures and unlock the real potential of human-machine intelligence.
Affiliation(s)
- Pranav Gupta
- Gies College of Business, University of Illinois, Urbana-Champaign
- Thuy Ngoc Nguyen
- Department of Social & Decision Sciences, Carnegie Mellon University
25
O'Neill TA, Flathmann C, McNeese NJ, Salas E. Human-autonomy teaming: Need for a guiding team-based framework? Comput Human Behav 2023. [DOI: 10.1016/j.chb.2023.107762]
26
|
Bezrukova K, Griffith TL, Spell C, Rice V, Yang HE. Artificial Intelligence and Groups: Effects of Attitudes and Discretion on Collaboration. GROUP & ORGANIZATION MANAGEMENT 2023. [DOI: 10.1177/10596011231160574] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/06/2023]
Abstract
We theorize about human-team collaboration with AI by drawing upon research in groups and teams, social psychology, information systems, engineering, and beyond. Based on our review, we focus on two main issues in the teams and AI arena. The first is whether the team generally views AI positively or negatively. The second is whether the decision to use AI is left up to the team members (voluntary use of AI) or mandated by top management or other policy-setters in the organization. These two aspects guide our creation of a team-level conceptual framework modeling how AI introduced as a mandated addition to the team can have asymmetric effects on collaboration depending on the team’s attitudes toward AI. When the team views AI positively, mandatory use suppresses collaboration in the team; but when the team holds negative attitudes toward AI, mandatory use elevates collaboration. Our model emphasizes the need to manage team attitudes and discretion strategies, and it points to new research directions regarding AI’s implications for teamwork.
Collapse
Affiliation(s)
| | | | - Chester Spell
- Rutgers University School of Business, Camden NJ, USA
| | | | | |
Collapse
|
27
|
Begerowski SR, Hedrick KN, Waldherr F, Mears L, Shuffler ML. The forgotten teammate: Considering the labor perspective in human-autonomy teams. COMPUTERS IN HUMAN BEHAVIOR 2023. [DOI: 10.1016/j.chb.2023.107763] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/29/2023]
|
28
|
Vero: An accessible method for studying human-AI teamwork. COMPUTERS IN HUMAN BEHAVIOR 2022. [DOI: 10.1016/j.chb.2022.107606] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
|
29
|
Simpson J, Nalepka P, Kallen RW, Dras M, Reichle ED, Hosking SG, Best C, Richards D, Richardson MJ. Conversation dynamics in a multiplayer video game with knowledge asymmetry. Front Psychol 2022; 13:1039431. [PMID: 36405156 PMCID: PMC9669907 DOI: 10.3389/fpsyg.2022.1039431] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2022] [Accepted: 10/18/2022] [Indexed: 09/08/2024] Open
Abstract
Despite the challenges associated with virtually mediated communication, remote collaboration is a defining characteristic of online multiplayer gaming communities. Inspired by the teamwork exhibited by players in first-person shooter games, this study investigated the verbal and behavioral coordination of four-player teams playing a cooperative online video game. The game, Desert Herding, involved teams consisting of three ground players and one drone operator tasked to locate, corral, and contain evasive robot agents scattered across a large desert environment. Ground players could move throughout the environment, while the drone operator's role was akin to that of a "spectator" with a bird's-eye view, with access to veridical information of the locations of teammates and the to-be-corralled agents. Categorical recurrence quantification analysis (catRQA) was used to measure the communication dynamics of teams as they completed the task. Demands on coordination were manipulated by varying the ground players' ability to observe the environment with the use of game "fog." Results show that catRQA was sensitive to changes to task visibility, with reductions in task visibility reorganizing how participants conversed during the game to maintain team situation awareness. The results are discussed in the context of future work that can address how team coordination can be augmented with the inclusion of artificial agents, as synthetic teammates.
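To make the analysis concrete: categorical recurrence quantification analysis operates on a recurrence matrix whose entry (i, j) marks whether the categorical codes at times i and j match. The toy Python sketch below (our illustration, not the authors' pipeline) computes two common catRQA measures, recurrence rate and determinism, for a sequence of coded utterances:

    import numpy as np

    def cat_rqa(seq, min_line=2):
        """Toy categorical RQA: recurrence rate (RR) and determinism (DET)
        for a sequence of categorical codes, e.g., utterance types per turn."""
        s = np.asarray(seq)
        n = len(s)
        R = (s[:, None] == s[None, :]).astype(int)
        np.fill_diagonal(R, 0)                 # exclude self-matches on the LOI
        rr = R.sum() / (n * n - n)             # share of recurrent points
        # DET: share of recurrent points lying on diagonal lines >= min_line.
        det_points = 0
        for k in range(1, n):                  # upper-triangle diagonals
            run = 0
            for v in list(np.diagonal(R, offset=k)) + [0]:  # sentinel flushes run
                if v:
                    run += 1
                else:
                    if run >= min_line:
                        det_points += run
                    run = 0
        det = 2 * det_points / R.sum() if R.sum() else 0.0
        return rr, det

    # Example: more repetition of shared codes yields a higher recurrence rate.
    print(cat_rqa(list("ABABABAB")))   # (~0.43, 1.0)
    print(cat_rqa(list("ABCDABCD")))   # (~0.14, 1.0)

In the study's setting, the coded sequence would be the stream of categorized team utterances, and shifts in RR/DET across fog conditions would index the reorganization of conversation the abstract describes.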
Collapse
Affiliation(s)
- James Simpson
- School of Psychological Sciences, Macquarie University, Sydney, NSW, Australia
| | - Patrick Nalepka
- School of Psychological Sciences, Macquarie University, Sydney, NSW, Australia
- Centre for Elite Performance, Expertise and Training, Macquarie University, Sydney, NSW, Australia
| | - Rachel W. Kallen
- School of Psychological Sciences, Macquarie University, Sydney, NSW, Australia
- Centre for Elite Performance, Expertise and Training, Macquarie University, Sydney, NSW, Australia
| | - Mark Dras
- School of Computing, Macquarie University, Sydney, NSW, Australia
| | - Erik D. Reichle
- School of Psychological Sciences, Macquarie University, Sydney, NSW, Australia
- Centre for Elite Performance, Expertise and Training, Macquarie University, Sydney, NSW, Australia
| | - Simon G. Hosking
- Human and Decision Sciences Division, Defence Science and Technology Group, Melbourne, VIC, Australia
| | - Christopher Best
- Human and Decision Sciences Division, Defence Science and Technology Group, Melbourne, VIC, Australia
| | - Deborah Richards
- School of Computing, Macquarie University, Sydney, NSW, Australia
| | - Michael J. Richardson
- School of Psychological Sciences, Macquarie University, Sydney, NSW, Australia
- Centre for Elite Performance, Expertise and Training, Macquarie University, Sydney, NSW, Australia
| |
Collapse
|
30
|
Lyons J, Highland P, Bos N, Lyons D, Skinner A, Schnell T, Hefron R. Measuring Perceived Agent Appropriateness in a Live-Flight Human-Autonomy Teaming Scenario. ERGONOMICS IN DESIGN 2022. [DOI: 10.1177/10648046221129393] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
United States Air Force Test Pilot School students (N = 6) participated in a study involving an agent-directed human pilot (“Blue agent”) in dogfighting scenarios against an adversary (“Red agent”). The adversary operated at three levels of difficulty: low, medium, and high. An agent appropriateness scale was developed to gauge how appropriate the Blue agent’s behaviors were during each dogfight. Results demonstrated that agent appropriateness varied with Red agent difficulty. These results suggest that agent appropriateness is an essential element in human-autonomy teaming research. Practitioners should seek to develop agent appropriateness measures suitable for the particular context and technology in question.
Collapse
|
31
|
Nalepka P, Prants M, Stening H, Simpson J, Kallen RW, Dras M, Reichle ED, Hosking SG, Best C, Richardson MJ. Assessing Team Effectiveness by How Players Structure Their Search in a First-Person Multiplayer Video Game. Cogn Sci 2022; 46:e13204. [PMID: 36251464 PMCID: PMC9787020 DOI: 10.1111/cogs.13204] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2021] [Revised: 07/18/2022] [Accepted: 09/16/2022] [Indexed: 12/30/2022]
Abstract
People working as a team can achieve more than when working alone due to a team's ability to parallelize the completion of tasks. In collaborative search tasks, this necessitates the formation of effective division of labor strategies to minimize redundancies in search. For such strategies to be developed, team members need to perceive the task's relevant components and how they evolve over time, as well as an understanding of what others will do so that they can structure their own behavior to contribute to the team's goal. This study explored whether the capacity for team members to coordinate effectively can be related to how participants structure their search behaviors in an online multiplayer collaborative search task. Our results demonstrated that the structure of search behavior, quantified using detrended fluctuation analysis, was sensitive to contextual factors that limit a participant's ability to gather information. Further, increases in the persistence of movement fluctuations during search behavior were found as teams developed more effective coordinative strategies and were associated with better task performance.
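As background, detrended fluctuation analysis (DFA) estimates a scaling exponent alpha from how the fluctuations of an integrated, locally detrended series grow with window size; alpha near 0.5 indicates uncorrelated (random) variation, while larger values indicate the persistent structure the authors associate with more effective coordination. A minimal illustrative Python sketch (not the authors' code) is:

    import numpy as np

    def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
        """Toy DFA: scaling exponent alpha of a 1-D series.
        alpha ~ 0.5 for white noise, ~1.5 for integrated (Brownian) noise."""
        x = np.asarray(x, dtype=float)
        y = np.cumsum(x - x.mean())               # integrated profile
        fluct = []
        for s in scales:
            f2 = []
            for i in range(len(y) // s):          # non-overlapping windows
                seg = y[i * s:(i + 1) * s]
                t = np.arange(s)
                coef = np.polyfit(t, seg, 1)      # local linear detrend
                f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
            fluct.append(np.sqrt(np.mean(f2)))
        # Slope of log F(s) versus log s is the DFA exponent alpha.
        alpha, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
        return alpha

    rng = np.random.default_rng(0)
    print(dfa_alpha(rng.normal(size=2048)))             # ~0.5, white noise
    print(dfa_alpha(np.cumsum(rng.normal(size=2048))))  # ~1.5, persistent

Applied to movement time series from the search task, rising alpha over sessions would correspond to the increasingly persistent, less redundant search trajectories reported in the abstract.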
Collapse
Affiliation(s)
- Patrick Nalepka
- School of Psychological Sciences, Macquarie University
- Centre for Elite Performance, Expertise and Training, Macquarie University
| | | | | | - James Simpson
- School of Psychological Sciences, Macquarie University
| | - Rachel W. Kallen
- School of Psychological Sciences, Macquarie University
- Centre for Elite Performance, Expertise and Training, Macquarie University
| | - Mark Dras
- School of Computing, Macquarie University
| | - Erik D. Reichle
- School of Psychological Sciences, Macquarie University
- Centre for Elite Performance, Expertise and Training, Macquarie University
| | - Simon G. Hosking
- Human and Decision Sciences Division, Defence Science and Technology Group
| | - Christopher Best
- Human and Decision Sciences Division, Defence Science and Technology Group
| | - Michael J. Richardson
- School of Psychological Sciences, Macquarie University
- Centre for Elite Performance, Expertise and Training, Macquarie University
| |
Collapse
|
32
|
I vs. robot: Sociodigital self-comparisons in hybrid teams from a theoretical, empirical, and practical perspective. GIO-GRUPPE-INTERAKTION-ORGANISATION-ZEITSCHRIFT FUER ANGEWANDTE ORGANISATIONSPSYCHOLOGIE 2022. [DOI: 10.1007/s11612-022-00638-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
Abstract
This article in the journal Gruppe. Interaktion. Organisation. (GIO) introduces sociodigital self-comparisons (SDSC) as individual evaluations of one's own abilities in comparison to the knowledge and skills of a cooperating digital actor in a group. SDSC provide a complementary perspective on the acceptance and evaluation of human-robot interaction (HRI). As social robots enter the workplace, digital actors also become objects of comparison in addition to human-human comparisons (i.e., I vs. robot). To date, SDSC have not been systematically reflected in HRI research. Therefore, we introduce SDSC from a theoretical perspective and reflect on its significance for social robot applications. First, we conceptualize SDSC based on psychological theories and research on social comparison. Second, we illustrate the concept of SDSC for HRI using empirical data from 80 hybrid teams (two human actors and one autonomous agent) who worked together on an interdependent computer-simulated team task. SDSC in favor of the autonomous agent corresponded to functional (e.g., robot trust or team efficacy) and dysfunctional (e.g., job threat) team-relevant variables, highlighting the two-sidedness of SDSC in hybrid teams. Third, we outline the (practical) potential of SDSC for social robots in the field and the lab.
Collapse
|
33
|
Avram O, Baraldo S, Valente A. Generalized Behavior Framework for Mobile Robots Teaming With Humans in Harsh Environments. Front Robot AI 2022; 9:898366. [PMID: 35845254 PMCID: PMC9277353 DOI: 10.3389/frobt.2022.898366] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2022] [Accepted: 06/01/2022] [Indexed: 01/09/2023] Open
Abstract
Industrial contexts typically characterized by highly unstructured environments, where task sequences are difficult to hard-code and unforeseen events occur daily (e.g., oil and gas, energy generation, aeronautics), cannot rely completely upon automation to substitute for human dexterity and judgment. Robots operating in these conditions share the common requirement of being able to deploy appropriate behaviours in highly dynamic and unpredictable environments, while aiming to achieve more natural human-robot interaction and broad acceptability in providing useful and efficient services. The goal of this paper is to introduce a deliberative framework able to acquire, reuse, and instantiate a collection of behaviours that extend the autonomy periods of mobile robotic platforms, with a focus on maintenance, repair, and overhaul applications. Behavior trees are employed to design the robotic system’s high-level deliberative intelligence, which integrates: social behaviors, aiming to capture the human’s emotional state and intention; the ability to either perform or support various process tasks; and seamless planning and execution of human-robot shared work plans. In particular, the modularity, reactiveness, and deliberation capacity that characterize the behaviour tree formalism are leveraged to interpret the human’s health and cognitive load in order to support her/him, and to complete a shared mission through collaboration or complete take-over. By enabling mobile robotic platforms to take over risky jobs that humans cannot, should not, or do not want to perform, the proposed framework has high potential to significantly improve safety, productivity, and efficiency in harsh working environments.
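As context for the formalism the paper builds on: a behavior tree composes leaf actions with control-flow nodes, most commonly Sequence (succeed only if all children succeed, in order) and Fallback (try children in priority order until one succeeds). The Python sketch below is a minimal illustration of that formalism with a hypothetical support-or-take-over policy; the node names and stub condition are our own assumptions, not the paper's design.

    from typing import Callable, List

    SUCCESS, FAILURE, RUNNING = "SUCCESS", "FAILURE", "RUNNING"

    class Node:
        def tick(self) -> str:
            raise NotImplementedError

    class Leaf(Node):
        def __init__(self, name: str, action: Callable[[], str]):
            self.name, self.action = name, action
        def tick(self) -> str:
            return self.action()

    class Sequence(Node):
        """Succeeds only if all children succeed, in order."""
        def __init__(self, children: List[Node]):
            self.children = children
        def tick(self) -> str:
            for c in self.children:
                r = c.tick()
                if r != SUCCESS:
                    return r          # FAILURE or RUNNING short-circuits
            return SUCCESS

    class Fallback(Node):
        """Tries children in priority order until one succeeds."""
        def __init__(self, children: List[Node]):
            self.children = children
        def tick(self) -> str:
            for c in self.children:
                r = c.tick()
                if r != FAILURE:
                    return r
            return FAILURE

    # Hypothetical policy: take over the task if the operator is overloaded,
    # otherwise fall back to assisting. The condition is a stub sensor reading.
    operator_overloaded = lambda: SUCCESS
    tree = Fallback([
        Sequence([Leaf("operator_overloaded?", operator_overloaded),
                  Leaf("take_over_task", lambda: SUCCESS)]),
        Leaf("assist_human", lambda: SUCCESS),
    ])
    print(tree.tick())   # -> SUCCESS via the take-over branch

The modularity and reactiveness the abstract mentions fall out of this structure: each tick re-evaluates conditions from the root, so the tree switches branches as the human's state changes, without hard-coded task sequences.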
Collapse
|
34
|
Terblanche N, Molyn J, de Haan E, Nilsson VO. Comparing artificial intelligence and human coaching goal attainment efficacy. PLoS One 2022; 17:e0270255. [PMID: 35727801 PMCID: PMC9212136 DOI: 10.1371/journal.pone.0270255] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2021] [Accepted: 06/08/2022] [Indexed: 11/22/2022] Open
Abstract
The history of artificial intelligence (AI) is filled with hype and inflated expectations. Notwithstanding, AI is finding its way into numerous aspects of humanity, including the fast-growing helping profession of coaching. Coaching has been shown to be efficacious across a variety of facets of human development. The application of AI in a narrow, specific area of coaching has also been shown to work. What remains uncertain is how the two compare. In this paper we compare two equivalent longitudinal randomised controlled trial studies that measured the increase in clients' goal attainment as a result of having received coaching over a 10-month period. The first study involved human coaches and the replication study used an AI chatbot coach. In both studies, human coaches and the AI coach were significantly more effective in helping clients reach their goals compared to the two control groups. Surprisingly, however, the AI coach was as effective as human coaches at the end of the trials. We interpret this result using AI and goal theory and present three significant implications: AI coaching could be scaled to democratize coaching; AI coaching could grow the demand for human coaching; and AI could replace human coaches who use simplistic, model-based coaching approaches. At present, AI's lack of empathy and emotional intelligence makes human coaches irreplaceable. However, understanding the efficacy of AI coaching relative to human coaching may promote the focused use of AI, to the significant benefit of society.
Collapse
Affiliation(s)
- Nicky Terblanche
- University of Stellenbosch Business School, Cape Town, South Africa
| | - Joanna Molyn
- Oxford Brookes University, Oxford, United Kingdom
| | - Erik de Haan
- Ashridge Centre for Coaching, Hult International Business School, Berkhamsted (Herts.), United Kingdom
- VU University Amsterdam, Amsterdam, The Netherlands
| | - Viktor O. Nilsson
- Ashridge Centre for Coaching, Hult International Business School, Berkhamsted (Herts.), United Kingdom
| |
Collapse
|
35
|
Askarisichani O, Bullo F, Friedkin NE, Singh AK. Predictive models for human-AI nexus in group decision making. Ann N Y Acad Sci 2022; 1514:70-81. [PMID: 35581156 DOI: 10.1111/nyas.14783] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Abstract
Machine learning (ML) and artificial intelligence (AI) have had a profound impact on our lives. Domains like health and learning benefit naturally from human-AI interaction and decision making. In these areas, as ML algorithms prove their value in making important decisions, humans add their distinctive expertise and judgment on social and interpersonal issues that need to be considered in tandem with algorithmic inputs of information. Some questions naturally arise. What rules and regulations should govern the employment of AI, and what protocols should be in place to evaluate available AI resources? What forms of communication and coordination with AI best promote effective human-AI teamwork? In this review, we highlight factors that we believe are especially important in assembling and managing human-AI decision making in a group setting.
Collapse
Affiliation(s)
- Omid Askarisichani
- Department of Computer Science, University of California, Santa Barbara, California, USA
| | - Francesco Bullo
- Department of Mechanical Engineering, University of California, Santa Barbara, California, USA
- Center for Control, Dynamical Systems and Computation, University of California, Santa Barbara, California, USA
| | - Noah E Friedkin
- Center for Control, Dynamical Systems and Computation, University of California, Santa Barbara, California, USA
- Department of Sociology, University of California, Santa Barbara, California, USA
| | - Ambuj K Singh
- Department of Computer Science, University of California, Santa Barbara, California, USA
| |
Collapse
|
36
|
Wesche JS, Langer M, Sonderegger A, Landers R. Editorial to the virtual Special Issue: Human-automation interaction in the workplace: A broadened scope of paradigms. COMPUTERS IN HUMAN BEHAVIOR 2022. [DOI: 10.1016/j.chb.2022.107335] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
|
37
|
Ma LM, IJtsma M, Feigh KM, Pritchett AR. Metrics for Human-Robot Team Design: A Teamwork Perspective on Evaluation of Human-Robot Teams. ACM TRANSACTIONS ON HUMAN-ROBOT INTERACTION 2022. [DOI: 10.1145/3522581] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Abstract
Metrics for human-robot teaming should extend to teams consisting of multiple human and robotic agents, and to teams working in complex, dynamic work domains. This work proposes that to comprehensively analyze and evaluate multi-human, multi-robot teams, traditional HRI metrics of performance and efficiency must be expanded to incorporate metrics of teamwork. We develop five distinct metrics to capture both ecological and cognitive aspects of teamwork found to be important in human-automation interaction, inspired by research in the cognitive systems engineering community. We demonstrate the application of these metrics in a spacecraft maintenance case study comparing multiple human-robot team architectures. The case study demonstrates that the teamwork metrics capture aspects of human-robot interaction not apparent when using only traditional performance and efficiency metrics. The paper concludes that the proposed teamwork metrics are complementary to existing metrics in HRI and should be included in the evaluation of human-robot teams.
Collapse
|
38
|
Solberg E, Kaarstad M, Eitrheim MHR, Bisio R, Reegård K, Bloch M. A Conceptual Model of Trust, Perceived Risk, and Reliance on AI Decision Aids. GROUP & ORGANIZATION MANAGEMENT 2022. [DOI: 10.1177/10596011221081238] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
There is increasing interest in the use of artificial intelligence (AI) to improve organizational decision-making. However, research indicates that people’s trust in and choice to rely on “AI decision aids” can be tenuous. In the present paper, we connect research on trust in AI with Mayer, Davis, and Schoorman’s (1995) model of organizational trust to elaborate a conceptual model of trust, perceived risk, and reliance on AI decision aids at work. Drawing from the trust in technology, trust in automation, and decision support systems literatures, we redefine central concepts in Mayer et al.’s (1995) model, expand the model to include new, relevant constructs (like perceived control over an AI decision aid), and refine propositions about the relationships expected in this context. The conceptual model put forward presents a framework that can help researchers studying trust in and reliance on AI decision aids develop their research models, build systematically on each other’s research, and contribute to a more cohesive understanding of the phenomenon. Our paper concludes with five next steps to take research on the topic forward.
Collapse
Affiliation(s)
- Elizabeth Solberg
- Department of Human-Centred Digitalization, Institute for Energy Technology, Halden, Norway
| | - Magnhild Kaarstad
- Department of Human-Centred Digitalization, Institute for Energy Technology, Halden, Norway
| | - Maren H. Rø Eitrheim
- Department of Human-Centred Digitalization, Institute for Energy Technology, Halden, Norway
| | - Rossella Bisio
- Department of Humans and Automation, Institute for Energy Technology, Halden, Norway
| | - Kine Reegård
- Department of Human-Centred Digitalization, Institute for Energy Technology, Halden, Norway
| | - Marten Bloch
- Department of Humans and Automation, Institute for Energy Technology, Halden, Norway
| |
Collapse
|
39
|
Abstract
With the increase of artificial intelligence (AI) and machine learning in real-world applications, there is strong motivation to build hybrid systems in which humans and AI algorithms work together, leveraging their complementary strengths and weaknesses. Previous work has shown the benefits of separately combining the predictions of diverse machine classifiers or groups of people. We develop a Bayesian framework for combining the predictions, and the different types of confidence scores, expressed by humans and machines. The framework allows us to investigate the factors that influence complementarity, where a hybrid combination of human and machine predictions leads to better performance than combinations of human or machine predictions alone. We apply this framework to a large-scale dataset in which humans and a variety of convolutional neural networks perform the same challenging image classification task. We show empirically and theoretically that complementarity can be achieved even if the human and machine classifiers perform at different accuracy levels, as long as these accuracy differences fall within a bound determined by the latent correlation between human and machine classifier confidence scores. In addition, we demonstrate that hybrid human-machine performance can be improved by differentiating between the errors that humans and machine classifiers make across different class labels. Finally, our results show that eliciting and including human confidence ratings improves hybrid performance in the Bayesian combination model. Our approach is applicable to a wide variety of classification problems involving human and machine algorithms.
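For intuition, the simplest instance of such a combination is naive log-linear pooling of the two predictors' per-class probabilities. The toy Python sketch below illustrates only that baseline; the paper's full Bayesian framework goes further by modeling the latent correlation between human and machine confidence, which this sketch deliberately ignores by assuming conditional independence. All numbers are invented for illustration.

    import numpy as np

    def combine(human_probs, machine_probs, w_h=1.0, w_m=1.0):
        """Naive log-linear pooling of per-class probabilities from a human
        and a machine classifier. Assumes conditional independence (the
        paper relaxes this via a latent confidence correlation). Inputs
        must be strictly positive; smooth any zeros before pooling."""
        log_post = w_h * np.log(human_probs) + w_m * np.log(machine_probs)
        post = np.exp(log_post - log_post.max())   # stabilize before normalizing
        return post / post.sum()

    # A confident-but-wrong machine is corrected by human confidence on the
    # true class (classes: cat, dog, fox).
    human = np.array([0.70, 0.20, 0.10])    # e.g., derived from a confidence rating
    machine = np.array([0.30, 0.65, 0.05])  # softmax output
    print(combine(human, machine).round(3)) # -> [0.609, 0.377, 0.014]

The weights w_h and w_m are where confidence calibration would enter: down-weighting the poorly calibrated predictor is a crude stand-in for the correlation-aware treatment in the paper.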
Collapse
|
40
|
Smooth and Resilient Human–Machine Teamwork as an Industry 5.0 Design Challenge. SUSTAINABILITY 2022. [DOI: 10.3390/su14052773] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
Smart machine companions such as artificial intelligence (AI) assistants and collaborative robots are rapidly populating the factory floor. Future factory floor workers will work in teams that include both human co-workers and smart machine actors. The visions of Industry 5.0 describe sustainable, resilient, and human-centered future factories that will require smart and resilient capabilities both from next-generation manufacturing systems and from human operators. What approaches can help design such resilient human–machine teams and the collaboration within them? In this paper, we analyze this design challenge and propose basing the design on the established joint cognitive systems approach, complemented with approaches that support human centricity in the early phases of design as well as in the development of continuously co-evolving human–machine teams. We propose approaches to observing and analyzing the collaboration in human–machine teams, developing the concept of operations with relevant stakeholders, and including ethical aspects in design and development, drawing on three complementary methods: actor–network theory, the concept of operations, and ethically aware design. We identify their possibilities and challenges in designing and developing smooth human–machine teams for Industry 5.0 manufacturing systems.
Collapse
|
41
|
Lakhmani SG, Neubauer C, Krausman A, Fitzhugh SM, Berg SK, Wright JL, Rovira E, Blackman JJ, Schaefer KE. Cohesion in human–autonomy teams: an approach for future research. THEORETICAL ISSUES IN ERGONOMICS SCIENCE 2022. [DOI: 10.1080/1463922x.2022.2033876] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Affiliation(s)
- Shan G. Lakhmani
- Human Research and Engineering Directorate, US Army DEVCOM Army Research Laboratory, Aberdeen Proving Ground, MD, USA
| | - Catherine Neubauer
- Human Research and Engineering Directorate, US Army DEVCOM Army Research Laboratory, Aberdeen Proving Ground, MD, USA
| | - Andrea Krausman
- Human Research and Engineering Directorate, US Army DEVCOM Army Research Laboratory, Aberdeen Proving Ground, MD, USA
| | - Sean M. Fitzhugh
- Human Research and Engineering Directorate, US Army DEVCOM Army Research Laboratory, Aberdeen Proving Ground, MD, USA
| | | | - Julia L. Wright
- Human Research and Engineering Directorate, US Army DEVCOM Army Research Laboratory, Aberdeen Proving Ground, MD, USA
| | - Ericka Rovira
- Department of Behavioral Sciences and Leadership, US Military Academy at West Point, West Point, NY, USA
| | - Jordan J. Blackman
- Department of Behavioral Sciences and Leadership, US Military Academy at West Point, West Point, NY, USA
| | - Kristin E. Schaefer
- Human Research and Engineering Directorate, US Army DEVCOM Army Research Laboratory, Aberdeen Proving Ground, MD, USA
| |
Collapse
|
42
|
Ulfert AS, Antoni CH, Ellwart T. The role of agent autonomy in using decision support systems at work. COMPUTERS IN HUMAN BEHAVIOR 2022. [DOI: 10.1016/j.chb.2021.106987] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
43
|
Handke L, Klonek F, O’Neill TA, Kerschreiter R. Unpacking the Role of Feedback in Virtual Team Effectiveness. SMALL GROUP RESEARCH 2021. [DOI: 10.1177/10464964211057116] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Feedback is a cornerstone of human development. Not surprisingly, it plays a vital role in team development. However, the literature examining the specific role of feedback in virtual team effectiveness remains scattered. To improve our understanding of feedback in virtual teams, we identified 59 studies that examine how different feedback characteristics (content, source, and level) impact virtual team effectiveness. Our findings suggest that virtual teams benefit particularly from feedback that (a) combines performance-related information with information on team processes and/or psychological states, (b) stems from an objective source, and (c) targets the team as a whole. By integrating the existing knowledge, we point researchers in the direction of the most pressing research needs, as well as the practices that are most likely to pay off when designing feedback interventions in virtual teams.
Collapse
|
44
|
Kohn SC, de Visser EJ, Wiese E, Lee YC, Shaw TH. Measurement of Trust in Automation: A Narrative Review and Reference Guide. Front Psychol 2021; 12:604977. [PMID: 34737716 PMCID: PMC8562383 DOI: 10.3389/fpsyg.2021.604977] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2020] [Accepted: 08/25/2021] [Indexed: 02/05/2023] Open
Abstract
With the rise of automated and autonomous agents, research examining Trust in Automation (TiA) has attracted considerable attention over the last few decades. Trust is a rich and complex construct that has sparked a multitude of measures and approaches to studying and understanding it. This comprehensive narrative review addresses the known methods that have been used to capture TiA. We examined measurements deployed in existing empirical works, categorized those measures into self-report, behavioral, and physiological indices, and examined them within the context of an existing model of trust. The resulting work serves as a reference guide for researchers, listing available TiA measurement methods along with the model-derived constructs they capture, including judgments of trustworthiness, trust attitudes, and trusting behaviors. The article concludes with recommendations on how to improve the current state of TiA measurement.
Collapse
Affiliation(s)
| | - Ewart J de Visser
- Warfighter Effectiveness Research Center, United States Air Force Academy, Colorado Springs, CO, United States
| | - Eva Wiese
- George Mason University, Fairfax, VA, United States
| | - Yi-Ching Lee
- George Mason University, Fairfax, VA, United States
| | - Tyler H Shaw
- George Mason University, Fairfax, VA, United States
| |
Collapse
|