1. Bail CA. Can Generative AI improve social science? Proc Natl Acad Sci U S A 2024;121:e2314021121. PMID: 38722813; PMCID: PMC11127003; DOI: 10.1073/pnas.2314021121
Abstract
Generative AI that can produce realistic text, images, and other human-like outputs is currently transforming many different industries. Yet it is not yet known how such tools might influence social science research. I argue Generative AI has the potential to improve survey research, online experiments, automated content analyses, agent-based models, and other techniques commonly used to study human behavior. In the second section of this article, I discuss the many limitations of Generative AI. I examine how bias in the data used to train these tools can negatively impact social science research, as well as a range of other challenges related to ethics, replication, environmental impact, and the proliferation of low-quality research. I conclude by arguing that social scientists can address many of these limitations by creating open-source infrastructure for research on human behavior. Such infrastructure is necessary not only to ensure broad access to high-quality research tools, I argue, but also because the progress of AI will require deeper understanding of the social forces that guide human behavior.
Affiliation(s)
- Christopher A. Bail
- Department of Sociology, Duke University, Durham, NC 27708
- Department of Political Science, Duke University, Durham, NC 27708
- Department of Public Policy, Duke University, Durham, NC 27708
2. Shirado H, Kasahara S, Christakis NA. Emergence and collapse of reciprocity in semiautomatic driving coordination experiments with humans. Proc Natl Acad Sci U S A 2023;120:e2307804120. PMID: 38079552; PMCID: PMC10743379; DOI: 10.1073/pnas.2307804120
Abstract
Forms of both simple and complex machine intelligence are increasingly acting within human groups in order to affect collective outcomes. Considering the nature of collective action problems, however, such involvement could paradoxically and unintentionally suppress existing beneficial social norms in humans, such as those involving cooperation. Here, we test theoretical predictions about such an effect using a unique cyber-physical lab experiment where online participants (N = 300 in 150 dyads) drive robotic vehicles remotely in a coordination game. We show that autobraking assistance increases human altruism, such as giving way to others, and that communication helps people to make mutual concessions. On the other hand, autosteering assistance completely inhibits the emergence of reciprocity between people in favor of self-interest maximization. The negative social repercussions persist even after the assistance system is deactivated. Furthermore, adding communication capabilities does not relieve this inhibition of reciprocity because people rarely communicate in the presence of autosteering assistance. Our findings suggest that active safety assistance (a form of simple AI support) can alter the dynamics of social coordination between people, including by affecting the trade-off between individual safety and social reciprocity. The difference between autobraking and autosteering assistance appears to relate to whether the assistive technology supports or replaces human agency in social coordination dilemmas. Humans have developed norms of reciprocity to address collective challenges, but such tacit understandings could break down in situations where machine intelligence is involved in human decision-making without having any normative commitments.
Affiliation(s)
- Hirokazu Shirado
- Human-Computer Interaction Institute, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15206
- Shunichi Kasahara
- Sony Computer Science Laboratories, Inc., Tokyo 141-0022, Japan
- Okinawa Institute of Science and Technology Graduate University, Onna-son, Okinawa 904-0412, Japan
- Nicholas A Christakis
- Yale Institute for Network Science, Yale University, New Haven, CT 06520
- Department of Sociology, Yale University, New Haven, CT 06520
- Department of Statistics and Data Science, Yale University, New Haven, CT 06520
3. Dietrich M, Krüger M, Weisswange TH. What should a robot disclose about me? A study about privacy-appropriate behaviors for social robots. Front Robot AI 2023;10:1236733. PMID: 38162995; PMCID: PMC10757370; DOI: 10.3389/frobt.2023.1236733
Abstract
For robots to become integrated into our daily environment, they must be designed to gain sufficient trust from both users and bystanders. This is particularly important for social robots, including those that assume the role of a mediator, working towards positively shaping relationships and interactions between individuals. One crucial factor influencing trust is the appropriate handling of personal information. Previous research on privacy has focused on data collection, secure storage, and abstract third-party disclosure risks. However, robot mediators may face situations where disclosing private information about one person to another specific person appears necessary. It is not clear if, how, and to what extent robots should share private information between people. This study presents an online investigation into appropriate robotic disclosure strategies. Using a vignette design, participants were presented with written descriptions of situations in which a social robot reveals personal information about its owner to support pro-social human-human interaction. Participants were asked to choose the most appropriate robot behaviors, which differed in the level of information disclosure. We aimed to explore the effects of disclosure context, such as the relationship to the other person and the information content. The findings indicate that both the information content and the relationship configuration significantly influence the perception of appropriate behavior but are not the sole determinants of disclosure-adequacy perception. The results also suggest that the expected benefits of disclosure and individuals' general privacy attitudes serve as additional influential factors. These insights can inform the design of future mediating robots, enabling them to make more privacy-appropriate decisions that could foster trust and acceptance.
Affiliation(s)
- Matti Krüger
- Honda Research Institute Europe GmbH, Offenbach, Germany
- Honda Research Institute Japan Co Ltd., Saitama, Japan
4. Shirado H, Hou YTY, Jung MF. Stingy bots can improve human welfare in experimental sharing networks. Sci Rep 2023;13:17957. PMID: 37864003; PMCID: PMC10589225; DOI: 10.1038/s41598-023-44883-0
Abstract
Machines powered by artificial intelligence increasingly permeate social networks with control over resources. However, machine allocation behavior may offer little benefit to human welfare in networks when it ignores the specific network mechanisms of social exchange. Here, we perform an online experiment involving simple networks of humans (496 participants in 120 networks) playing a resource-sharing game to which we sometimes add artificial agents (bots). The experiment examines two opposite policies of machine allocation behavior: reciprocal bots, which share all resources reciprocally, and stingy bots, which share no resources at all. We also manipulate the bots' network position. We show that reciprocal bots make little change to unequal resource distribution among people. On the other hand, stingy bots balance structural power and improve collective welfare in human groups when placed in a specific network position, even though they bestow no wealth on people. Our findings highlight the need to incorporate the human nature of reciprocity and relational interdependence when designing machine behavior in sharing networks. Whether conscientious machines work for human welfare depends on the network structure in which they interact.
Affiliation(s)
- Hirokazu Shirado
- School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, 15213, USA.
- Yoyo Tsung-Yu Hou
- Department of Information Science, Cornell University, Ithaca, NY, 14853, USA
- Malte F Jung
- Department of Information Science, Cornell University, Ithaca, NY, 14853, USA
5. McKee KR, Tacchetti A, Bakker MA, Balaguer J, Campbell-Gillingham L, Everett R, Botvinick M. Scaffolding cooperation in human groups with deep reinforcement learning. Nat Hum Behav 2023;7:1787-1796. PMID: 37679439; PMCID: PMC10593606; DOI: 10.1038/s41562-023-01686-7
Abstract
Effective approaches to encouraging group cooperation are still an open challenge. Here we apply recent advances in deep learning to structure networks of human participants playing a group cooperation game. We leverage deep reinforcement learning and simulation methods to train a 'social planner' capable of making recommendations to create or break connections between group members. The strategy that it develops succeeds at encouraging pro-sociality in networks of human participants (N = 208 participants in 13 groups) playing for real monetary stakes. Under the social planner, groups finished the game with an average cooperation rate of 77.7%, compared with 42.8% in static networks (N = 176 in 11 groups). In contrast to prior strategies that separate defectors from cooperators (tested here with N = 384 in 24 groups), the social planner learns to take a conciliatory approach to defectors, encouraging them to act pro-socially by moving them to small highly cooperative neighbourhoods.
Affiliation(s)
- Matthew Botvinick
- Google DeepMind, London, UK
- Gatsby Computational Neuroscience Unit, University College London, London, UK
6.
Abstract
OBJECTIVE: This paper reviews recent articles related to human trust in automation to guide research and design for increasingly capable automation in complex work environments.
BACKGROUND: Two recent trends, the development of increasingly capable automation and the flattening of organizational hierarchies, suggest that a reframing of trust in automation is needed.
METHOD: Many publications related to human trust and human-automation interaction were integrated in this narrative literature review.
RESULTS: Much research has focused on calibrating human trust to promote appropriate reliance on automation. This approach neglects relational aspects of increasingly capable automation and system-level outcomes, such as cooperation and resilience. To address these limitations, we adopt a relational framing of trust based on the decision situation, semiotics, interaction sequence, and strategy. This relational framework stresses that the goal is not to maximize trust, or even to calibrate trust, but to support a process of trusting through automation responsivity.
CONCLUSION: This framing clarifies why future work on trust in automation should consider not just individual characteristics and how automation influences people, but also how people can influence automation and how interdependent interactions affect trusting automation. In these new technological and organizational contexts that shift human operators to co-operators of automation, automation responsivity and the ability to resolve conflicting goals may be more relevant than reliability and reliance for advancing system design.
APPLICATION: A conceptual model comprising four concepts (situation, semiotics, strategy, and sequence) can guide future trust research and design for automation responsivity and more resilient human-automation systems.
Affiliation(s)
- John D Lee
- University of Wisconsin-Madison, USA
7. Correia F, Melo FS, Paiva A. When a Robot Is Your Teammate. Top Cogn Sci 2022. PMID: 36573665; DOI: 10.1111/tops.12634
Abstract
Creating effective teamwork between humans and robots involves not only addressing their performance as a team but also sustaining the quality and sense of unity among teammates, also known as cohesion. This paper explores the research problem of how to endow robotic teammates with social capabilities that improve their cohesive alliance with humans. Defining the concept of a human-robot cohesive alliance in light of the multidimensional construct of cohesion from the social sciences, we propose to address this problem through the idea of multifaceted human-robot cohesion. We present our preliminary efforts from previous work to examine each of the five dimensions of cohesion: social, collective, emotional, structural, and task. We finish the paper with a discussion of how human-robot cohesion contributes to the key questions and ongoing challenges of creating robotic teammates. Overall, cohesion in human-robot teams may be a key factor in propelling team performance, and it should be considered in the design, development, and evaluation of robotic teammates.
Affiliation(s)
- Filipa Correia
- INESC-ID, Instituto Superior Técnico, Universidade de Lisboa
- ITI, LARSyS, Instituto Superior Técnico, Universidade de Lisboa
- Ana Paiva
- INESC-ID, Instituto Superior Técnico, Universidade de Lisboa
8. Vero: An accessible method for studying human-AI teamwork. Comput Human Behav 2022. DOI: 10.1016/j.chb.2022.107606
9. Fraune MR, Komatsu T, Preusse HR, Langlois DK, Au RHY, Ling K, Suda S, Nakamura K, Tsui KM. Socially facilitative robots for older adults to alleviate social isolation: A participatory design workshop approach in the US and Japan. Front Psychol 2022;13:904019. DOI: 10.3389/fpsyg.2022.904019
Abstract
Social technology can improve the quality of older adults' social lives and mitigate the negative mental and physical health outcomes associated with loneliness, but it should be designed collaboratively with this population. In this paper, we used participatory design (PD) methods to investigate how robots might be used as social facilitators for middle-aged and older adults (age 50+) in both the US and Japan. We conducted PD workshops in the US and Japan because both countries are concerned about the social isolation of these older adults due to their rapidly aging populations. We developed a novel approach to participatory design of future technologies that devotes two-thirds of the PD session to asking participants about their own life experiences as a foundation. This grounds the conversation in reality, creates rapport among the participants, and engages them in creative critical thinking. Then, we build upon this foundation, pose an abstract topic, and ask participants to brainstorm on the topic based on their previous discussion. In both countries, participants were eager to actively discuss design ideas for socially facilitative robots and imagine how they might improve their social lives. US participants suggested design ideas for telepresence robots, social distancing robots, and social skills artificial intelligence programs, while Japanese participants suggested ideas for pet robots, robots for sharing experiences, and easy-to-operate instructor robots. Comparing these two countries, we found that US participants saw robots as tools to help facilitate their social connections, while Japanese participants envisioned robots as surrogate companions for their parents, distracting them from loneliness when the participants themselves were unavailable.
With this paper, we contribute to the literature in two main ways, presenting: (1) A novel approach to participatory design of future technologies that grounds participants in their everyday experience, and (2) Results of the study indicating how middle-aged and older adults from the US and Japan wanted technologies to improve their social lives. Although we conducted the workshops during the COVID-19 pandemic, many findings generalized to other situations related to social isolation, such as older adults living alone.
10. Bogert E, Lauharatanahirun N, Schecter A. Human preferences toward algorithmic advice in a word association task. Sci Rep 2022;12:14501. PMID: 36008508; PMCID: PMC9411628; DOI: 10.1038/s41598-022-18638-2
Abstract
Algorithms provide recommendations to human decision makers across a variety of task domains. For many problems, humans will rely on algorithmic advice to make their choices and at times will even show complacency. In other cases, humans are mistrustful of algorithmic advice, or will hold algorithms to higher standards of performance. Given the increasing use of algorithms to support creative work such as text generation and brainstorming, it is important to understand how humans will respond to algorithms in those scenarios: will they show appreciation or aversion? This study tests the effects of algorithmic advice for a word association task, the remote associates test (RAT). The RAT is an established instrument for testing critical and creative thinking with respect to multiple word association. We conducted a preregistered online experiment (154 participants, 2772 observations) to investigate whether humans had stronger reactions to algorithmic or crowd advice when completing multiple instances of the RAT. We used an experimental format in which subjects see a question, answer the question, then receive advice and answer the question a second time. Advice was provided in multiple formats, with advice varying in quality and questions varying in difficulty. We found that individuals receiving algorithmic advice changed their responses 13% more frequently (χ² = 59.06, p < 0.001) and reported greater confidence in their final solutions. However, individuals receiving algorithmic advice also were 13% less likely to identify the correct solution (χ² = 58.79, p < 0.001). This study highlights both the promises and pitfalls of leveraging algorithms to support creative work.
Affiliation(s)
- Eric Bogert
- Department of Supply Chain and Information Management, Northeastern University, Boston, MA, 02115, USA
- Nina Lauharatanahirun
- Departments of Biomedical Engineering and Biobehavioral Health, Pennsylvania State University, University Park, PA, 16802, USA
- Aaron Schecter
- Department of Management Information Systems, University of Georgia, Athens, GA, 30602, USA.
11. Industry 4.0: A Chance or a Threat for Gen Z? The Perspective of Economics Students. Sustainability 2022. DOI: 10.3390/su14148925
Abstract
Major transformations in the economy brought by Industry 4.0 are also reflected in young people's expectations regarding the development of their professional careers. Existing social relations are being modified, and new concepts for building them are being developed. The aim of this article is to present the expectations, fears, and hopes of young people related to the course of Industrial Revolution 4.0 in the context of their future lives. To make the research objectives easier for students to grasp, the research was narrowed to the topic of building relationships with robots, one of the pillars of Industry 4.0. The research methods are based on literature studies and a qualitative experiment conducted among students graduating from economics faculties and entering a rapidly changing labour market. The students wrote a short essay on whether a friendship between a human and a robot is possible. One group of students was first shown a short emotional clip about the relationship between a boy and a robot. Regardless of this attempt to influence the message with a film, both groups of students hardly noticed any negative effects of digitisation on building relationships and social trust. The relationship between human beings and advanced technology will continue to develop, resulting in the emergence of new relationships between humans and artificial intelligence.
12. Wolf FD, Stock-Homburg RM. How and When Can Robots Be Team Members? Three Decades of Research on Human–Robot Teams. Group Organ Manag 2022. DOI: 10.1177/10596011221076636
Abstract
Artificial intelligence and robotic technologies have grown in sophistication and reach. Accordingly, research into mixed human–robot teams that comprise both robots and humans has expanded as well, attracting the attention of researchers from different disciplines, such as organizational behavior, human–robot interaction, cognitive science, and robotics. With this systematic literature review, the authors seek to establish deeper insights into existing research and sharpen the definitions of relevant terms. With a close consideration of 150 studies published between 1990 and 2020 that investigate mixed human–robot teams, conceptually or empirically, this article provides both a systematic evaluation of extant research and propositions for further research.
Affiliation(s)
- Franziska Doris Wolf
- Chair for Marketing and Human Resource Management, Technical University of Darmstadt, Darmstadt, Germany
- Ruth Maria Stock-Homburg
- Chair for Marketing and Human Resource Management, Technical University of Darmstadt, Darmstadt, Germany
13. Brinkmann L, Gezerli D, Kleist KV, Müller TF, Rahwan I, Pescetelli N. Hybrid social learning in human-algorithm cultural transmission. Philos Trans A Math Phys Eng Sci 2022;380:20200426. PMID: 35599570; PMCID: PMC9126184; DOI: 10.1098/rsta.2020.0426
Abstract
Humans are impressive social learners. Researchers of cultural evolution have studied the many biases that shape cultural transmission by selecting whom we copy from and what we copy. One hypothesis is that, with the advent of superhuman algorithms, a hybrid type of cultural transmission, namely from algorithms to humans, may have long-lasting effects on human culture. We suggest that algorithms might show (either by learning or by design) behaviours, biases, and problem-solving abilities different from those of their human counterparts. In turn, algorithmic-human hybrid problem solving could foster better decisions in environments where diversity in problem-solving strategies is beneficial. This study asks whether algorithms with biases complementary to humans' can boost performance in a carefully controlled planning task, and whether humans further transmit algorithmic behaviours to other humans. We conducted a large behavioural study and an agent-based simulation to test the performance of transmission chains with human and algorithmic players. We show that the algorithm boosts the performance of the participants who immediately follow it, but this gain is quickly lost for participants further down the chain. Our findings suggest that algorithms can improve performance, but human bias may hinder algorithmic solutions from being preserved. This article is part of the theme issue 'Emergent phenomena in complex physical and socio-technical systems: from cells to societies'.
Affiliation(s)
- L. Brinkmann
- Center for Humans and Machines, Max Planck Institute for Human Development, Lentzeallee 94, Berlin 14195, Germany
- D. Gezerli
- Center for Humans and Machines, Max Planck Institute for Human Development, Lentzeallee 94, Berlin 14195, Germany
- K. V. Kleist
- Center for Humans and Machines, Max Planck Institute for Human Development, Lentzeallee 94, Berlin 14195, Germany
- T. F. Müller
- Center for Humans and Machines, Max Planck Institute for Human Development, Lentzeallee 94, Berlin 14195, Germany
- I. Rahwan
- Center for Humans and Machines, Max Planck Institute for Human Development, Lentzeallee 94, Berlin 14195, Germany
- N. Pescetelli
- Center for Humans and Machines, Max Planck Institute for Human Development, Lentzeallee 94, Berlin 14195, Germany
- Department of Humanities and Social Sciences, New Jersey Institute of Technology, Newark, NJ, USA
14. Brinkmann L, Gezerli D, Kleist KV, Müller TF, Rahwan I, Pescetelli N. Hybrid social learning in human-algorithm cultural transmission. Philos Trans A Math Phys Eng Sci 2022. PMID: 35599570; DOI: 10.6084/m9.figshare.c.5885349
15. Fan J, Mion LC, Beuscher L, Ullal A, Newhouse PA, Sarkar N. SAR-Connect: A Socially Assistive Robotic System to Support Activity and Social Engagement of Older Adults. IEEE Trans Robot 2022;38:1250-1269. PMID: 36204285; PMCID: PMC9531900; DOI: 10.1109/tro.2021.3092162
Abstract
Multi-domain activities that incorporate physical, cognitive, and social stimuli can enhance older adults' overall health and quality of life. Several robotic platforms have been developed to provide such therapies in a quantifiable manner, complementing healthcare personnel in resource-strapped long-term care settings. However, these platforms are primarily limited to one-to-one human-robot interaction (HRI) and thus do not enhance social interaction. In this paper, we present a novel HRI framework and a realized platform called SAR-Connect to foster robot-mediated social interaction among older adults through carefully designed tasks that also incorporate physical and cognitive stimuli. SAR-Connect seamlessly integrates a humanoid robot with a virtual reality-based activity platform and a multimodal data acquisition module that captures game interaction, audio, visual, and electroencephalography responses of the participants. Results from a laboratory-based user study with older adults indicate the potential of SAR-Connect: the system could 1) involve one or multiple older adults in multi-domain activities and provide dynamic guidance, 2) engage them in the robot-mediated tasks and foster human-human interaction, and 3) quantify their social and activity engagement from multiple sensory modalities.
Affiliation(s)
- Jing Fan
- Electrical Engineering and Computer Science Department, Vanderbilt University, Nashville, TN 37212 USA
- Lorraine C. Mion
- Center of Excellence in Critical and Complex Care, College of Nursing, The Ohio State University, OH 43210 USA
- Linda Beuscher
- Vanderbilt University School of Nursing, Nashville, TN 37204 USA
- Akshith Ullal
- Electrical Engineering and Computer Science Department, Vanderbilt University, Nashville, TN 37212 USA
- Paul A. Newhouse
- Center for Cognitive Medicine, Department of Psychiatry and Behavioral Sciences, Vanderbilt University, Geriatric Research Education and Clinical Center (GRECC), Tennessee Valley Veterans Affairs Medical Center, Nashville, TN 37212 USA
- Nilanjan Sarkar
- Mechanical Engineering Department, Electrical Engineering and Computer Science Department, Vanderbilt University, Nashville, TN 37212 USA

16
Lakhmani SG, Neubauer C, Krausman A, Fitzhugh SM, Berg SK, Wright JL, Rovira E, Blackman JJ, Schaefer KE. Cohesion in human–autonomy teams: an approach for future research. THEORETICAL ISSUES IN ERGONOMICS SCIENCE 2022. [DOI: 10.1080/1463922x.2022.2033876]
Affiliation(s)
- Shan G. Lakhmani
- Human Research and Engineering Directorate, US Army DEVCOM Army Research Laboratory, Aberdeen Proving Ground, MD, USA
- Catherine Neubauer
- Human Research and Engineering Directorate, US Army DEVCOM Army Research Laboratory, Aberdeen Proving Ground, MD, USA
- Andrea Krausman
- Human Research and Engineering Directorate, US Army DEVCOM Army Research Laboratory, Aberdeen Proving Ground, MD, USA
- Sean M. Fitzhugh
- Human Research and Engineering Directorate, US Army DEVCOM Army Research Laboratory, Aberdeen Proving Ground, MD, USA
- Julia L. Wright
- Human Research and Engineering Directorate, US Army DEVCOM Army Research Laboratory, Aberdeen Proving Ground, MD, USA
- Ericka Rovira
- Department of Behavioral Sciences and Leadership, US Military Academy at West Point, West Point, NY, USA
- Jordan J. Blackman
- Department of Behavioral Sciences and Leadership, US Military Academy at West Point, West Point, NY, USA
- Kristin E. Schaefer
- Human Research and Engineering Directorate, US Army DEVCOM Army Research Laboratory, Aberdeen Proving Ground, MD, USA

17

18
Langer A, Levy-Tzedek S. Emerging Roles for Social Robots in Rehabilitation. ACM TRANSACTIONS ON HUMAN-ROBOT INTERACTION 2021. [DOI: 10.1145/3462256]
Abstract
Insights from social and cognitive neuroscience should inform the design of socially assistive robots for neurorehabilitation as novel roles emerge for them in human-human interactions.
Affiliation(s)
- Allison Langer
- Perelman School of Medicine, Department of Psychiatry, University of Pennsylvania, Philadelphia, PA, USA
- Shelly Levy-Tzedek
- Recanati School for Community Health Professions, Department of Physical Therapy, Faculty of Health Sciences, Ben-Gurion University of the Negev, Beer-Sheva, Israel; Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beer-Sheva, Israel; Freiburg Institute for Advanced Studies (FRIAS), University of Freiburg, Freiburg, Germany

19
Thomas P, Czerwinski M, McDuff D, Craswell N. Theories of Conversation for Conversational IR. ACM T INFORM SYST 2021. [DOI: 10.1145/3439869]
Abstract
Conversational information retrieval is a relatively new and fast-developing research area, but conversation itself has been well studied for decades. Researchers have analysed linguistic phenomena such as structure and semantics but also paralinguistic features such as tone, body language, and even the physiological states of interlocutors. We tend to treat computers as social agents—especially if they have some humanlike features in their design—and so work on human-to-human conversation is highly relevant to how we think about the design of human-to-computer applications. In this article, we summarise some salient past work, focusing on social norms; structures; and affect, prosody, and style. We briefly review social communication theories to see what we have learned about how humans interact with each other and how that might pertain to agents and robots. We also discuss some implications for research and design of conversational IR systems.
20
Almaatouq A, Becker J, Houghton JP, Paton N, Watts DJ, Whiting ME. Empirica: a virtual lab for high-throughput macro-level experiments. Behav Res Methods 2021; 53:2158-2171. [PMID: 33782900] [PMCID: PMC8516782] [DOI: 10.3758/s13428-020-01535-9]
Abstract
Virtual labs allow researchers to design high-throughput and macro-level experiments that are not feasible in traditional in-person physical lab settings. Despite the increasing popularity of online research, researchers still face many technical and logistical barriers when designing and deploying virtual lab experiments. While several platforms exist to facilitate the development of virtual lab experiments, they typically present researchers with a stark trade-off between usability and functionality. We introduce Empirica: a modular virtual lab that offers a solution to the usability-functionality trade-off by employing a "flexible defaults" design strategy. This strategy enables us to maintain complete "build anything" flexibility while offering a development platform that is accessible to novice programmers. Empirica's architecture is designed to allow for parameterizable experimental designs, reusable protocols, and rapid development. These features will increase the accessibility of virtual lab experiments, remove barriers to innovation in experiment design, and enable rapid progress in the understanding of human behavior.
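Empirica itself is a JavaScript framework, and its actual API is not shown here. As a language-agnostic illustration of the "flexible defaults" strategy the abstract describes, the hypothetical Python sketch below gives every experiment stage a working default that a researcher can override independently; all class and hook names are invented for this sketch.

```python
# Hypothetical sketch of the "flexible defaults" design strategy (NOT
# Empirica's real API): each experiment stage ships with a sensible
# default, and a study overrides only the stages it cares about.

class VirtualLabExperiment:
    def __init__(self, **overrides):
        # Defaults cover group assignment, per-round logic, and payoffs.
        self.hooks = {
            "assign_groups": lambda players: [players],           # one big group
            "on_round": lambda group, rnd: None,                  # no-op treatment
            "payoff": lambda player, state: state.get(player, 0), # pass-through
        }
        self.hooks.update(overrides)  # replace only what the study needs

    def run(self, players, rounds=1):
        state = {}
        for rnd in range(rounds):
            for group in self.hooks["assign_groups"](players):
                self.hooks["on_round"](group, rnd)
        return {p: self.hooks["payoff"](p, state) for p in players}

# Novice usage: all defaults work out of the box.
baseline = VirtualLabExperiment().run(["p1", "p2"])  # → {'p1': 0, 'p2': 0}

# Expert usage: override a single hook, keep the rest.
flat_fee = VirtualLabExperiment(payoff=lambda p, s: 2.5)
```

The point of the pattern is that usability (working defaults) and functionality ("build anything" overrides) need not trade off against each other.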
Affiliation(s)
- Nicolas Paton
- Massachusetts Institute of Technology, Cambridge, MA, USA

21
An Evaluation of Human Conversational Preferences in Social Human-Robot Interaction. Appl Bionics Biomech 2021; 2021:3648479. [PMID: 33680073] [PMCID: PMC7925063] [DOI: 10.1155/2021/3648479]
Abstract
To generate context-aware behaviors, robots must carefully evaluate their encounters with humans. Unwrapping emotional hints in observable cues during an encounter will improve a robot's etiquette in a social encounter. This article presents an extended human study conducted to examine how several factors in an encounter influence a person's preferences for interaction at a particular moment. We analyzed the nature of conversation a user preferred, considering the types of conversation a robot could have with its user when the robot itself initiated the interaction. We also explored how such preferences differ as the factors present in the surroundings change. A social robot equipped with the capability to initiate a conversation was deployed to conduct the study by means of a wizard-of-oz (WoZ) experiment. During this study, users' conversational preferences varied from “no interaction at all” to a “long conversation.” We varied three factors that can differ between encounters: the audience or outsiders in the environment, the user's task, and the domestic area in which the interaction takes place. Users' conversational preferences under these conditions were analyzed in a later stage, and critical observations are highlighted. Finally, implications that could help shape future social human-robot encounters were derived from the analysis of the results.
22
Henschel A, Laban G, Cross ES. What Makes a Robot Social? A Review of Social Robots from Science Fiction to a Home or Hospital Near You. CURRENT ROBOTICS REPORTS 2021; 2:9-19. [PMID: 34977592] [PMCID: PMC7860159] [DOI: 10.1007/s43154-020-00035-0]
Abstract
Purpose of Review We provide an outlook on the definitions, laboratory research, and applications of social robots, with an aim to understand what makes a robot social—in the eyes of science and the general public. Recent Findings Social robots demonstrate their potential when deployed within contexts appropriate to their form and functions. Some examples include companions for the elderly and cognitively impaired individuals, robots within educational settings, and as tools to support cognitive and behavioural change interventions. Summary Science fiction has inspired us to conceive of a future with autonomous robots helping with every aspect of our daily lives, although the robots we are familiar with through film and literature remain a vision of the distant future. While there are still miles to go before robots become a regular feature within our social spaces, rapid progress in social robotics research, aided by the social sciences, is helping to move us closer to this reality.
Affiliation(s)
- Anna Henschel
- Institute of Neuroscience and Psychology, Department of Psychology, University of Glasgow, Glasgow, Scotland
- Guy Laban
- Institute of Neuroscience and Psychology, Department of Psychology, University of Glasgow, Glasgow, Scotland
- Emily S Cross
- Institute of Neuroscience and Psychology, Department of Psychology, University of Glasgow, Glasgow, Scotland; Department of Cognitive Science, Macquarie University, Sydney, Australia

23
Sebo S, Dong LL, Chang N, Lewkowicz M, Schutzman M, Scassellati B. The Influence of Robot Verbal Support on Human Team Members: Encouraging Outgroup Contributions and Suppressing Ingroup Supportive Behavior. Front Psychol 2021; 11:590181. [PMID: 33424708] [PMCID: PMC7793683] [DOI: 10.3389/fpsyg.2020.590181]
Abstract
As teams of people increasingly incorporate robot members, it is essential to consider how a robot's actions may influence the team's social dynamics and interactions. In this work, we investigated the effects of verbal support from a robot (e.g., “good idea Salim,” “yeah”) on human team members' interactions related to psychological safety and inclusion. We conducted a between-subjects experiment (N = 39 groups, 117 participants) where the robot team member either (A) gave verbal support or (B) did not give verbal support to the human team members of a human-robot team comprised of 2 human ingroup members, 1 human outgroup member, and 1 robot. We found that targeted support from the robot (e.g., “good idea George”) had a positive effect on outgroup members, who increased their verbal participation after receiving targeted support from the robot. When comparing groups that did and did not have verbal support from the robot, we found that outgroup members received fewer verbal backchannels from ingroup members if their group had robot verbal support. These results suggest that verbal support from a robot may have some direct benefits to outgroup members but may also reduce the obligation ingroup members feel to support the verbal contributions of outgroup members.
Affiliation(s)
- Sarah Sebo
- Department of Computer Science, University of Chicago, Chicago, IL, United States; Department of Computer Science, Yale University, New Haven, CT, United States
- Ling Liang Dong
- Department of Computer Science, Yale University, New Haven, CT, United States
- Nicholas Chang
- Department of Computer Science, Yale University, New Haven, CT, United States
- Michal Lewkowicz
- Department of Computer Science, Yale University, New Haven, CT, United States
- Michael Schutzman
- Department of Computer Science, Yale University, New Haven, CT, United States
- Brian Scassellati
- Department of Computer Science, Yale University, New Haven, CT, United States

24
Mikawa M, Chen H, Fujisawa M. Face Memorization Using AIM Model for Mobile Robot and Its Application to Name Calling Function. SENSORS 2020; 20:s20226629. [PMID: 33228069] [PMCID: PMC7699396] [DOI: 10.3390/s20226629]
Abstract
We are developing a social mobile robot with a name-calling function based on a face memorization system. Calling a person by her/his name is considered an important function for a social robot, and name calling can give the person a friendly impression of the robot. Our face memorization system has the following features: (1) When the robot detects a stranger, it stores her/his face images and name after obtaining her/his permission. (2) The robot can call a person whose face it has memorized by her/his name. (3) The robot system has a sleep–wake function: the face classifier is re-trained in a REM sleep state, or the execution frequencies of information processes are reduced, when the robot has nothing to do, for example, when no person is around it. In this paper, we confirmed the performance of these functions and conducted an experiment with research participants to evaluate the impression made by the name-calling function. The experimental results revealed the validity and effectiveness of the proposed face memorization system.
25
Bonnefon JF, Rahwan I. Machine Thinking, Fast and Slow. Trends Cogn Sci 2020; 24:1019-1027. [PMID: 33129719] [DOI: 10.1016/j.tics.2020.09.007]
Abstract
Machines do not 'think fast and slow' in the sense that humans do in dual-process models of cognition. However, the people who create the machines may attempt to emulate or simulate these fast and slow modes of thinking, which will in turn affect the way end users relate to these machines. In this opinion article we consider the complex interplay in the way various stakeholders (engineers, user experience designers, regulators, ethicists, and end users) can be inspired, challenged, or misled by the analogy between the fast and slow thinking of humans and the Fast and Slow Thinking of machines.
Affiliation(s)
- Jean-François Bonnefon
- Toulouse School of Economics (TSM-R), CNRS, Université Toulouse Capitole, Toulouse, France
- Iyad Rahwan
- Center for Humans and Machines, Max Planck Institute for Human Development, Berlin, Germany

26
Sætra HS. The foundations of a policy for the use of social robots in care. TECHNOLOGY IN SOCIETY 2020; 63:101383. [PMID: 32921851] [PMCID: PMC7474838] [DOI: 10.1016/j.techsoc.2020.101383]
Abstract
Should we deploy social robots in care settings? This question, asked from a policy standpoint, requires that we understand the potential benefits and downsides of deploying social robots in care situations. Potential benefits could include increased efficiency, increased welfare, physiological and psychological benefits, and experienced satisfaction. There are, however, important objections to the use of social robots in care. These include the possibility that relations with robots can displace human contact, that these relations could be harmful, that robot care is undignified and disrespectful, and that social robots are deceptive. I propose a framework for evaluating all these arguments in terms of three aspects of care: structure, process, and outcome. I then highlight the main ethical considerations that have to be made in order to untangle the web of pros and cons of social robots in care, as these pros and cons relate to the trade-offs regarding quantity and quality of care, process and outcome, and objective and subjective outcomes.
27
Shirado H, Christakis NA. Network Engineering Using Autonomous Agents Increases Cooperation in Human Groups. iScience 2020; 23:101438. [PMID: 32823053] [PMCID: PMC7452167] [DOI: 10.1016/j.isci.2020.101438]
Abstract
Cooperation in human groups is challenging, and various mechanisms are required to sustain it, although it nevertheless usually decays over time. Here, we perform theoretically informed experiments involving networks of humans (1,024 subjects in 64 networks) playing a public-goods game to which we sometimes added autonomous agents (bots) programmed to use only local knowledge. We show that cooperation can not only be stabilized, but even promoted, when the bots intervene in the partner selections made by the humans, re-shaping social connections locally within a larger group. Cooperation rates increased from 60.4% at baseline to 79.4% at the end. This network-intervention strategy outperformed other strategies, such as adding bots playing tit-for-tat. We also confirm that even a single bot can foster cooperation in human groups by using a mixed strategy designed to support the development of cooperative clusters. Simple artificial intelligence can increase the cooperation of groups.
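The mechanics described in this abstract can be sketched abstractly. The following Python toy is an illustration under assumed payoff parameters (benefit b, cost c), not the authors' actual experiment code: it computes networked public-goods payoffs and implements a bot that, using only local knowledge, drops a tie to a defecting neighbor and adds a tie to a cooperating neighbor-of-neighbor.

```python
import random

def payoffs(graph, coop, b=4, c=1):
    """Networked cooperation payoffs: each node receives benefit b per
    cooperating neighbor, and each cooperating node pays cost c per link.
    `graph` is a dict mapping node -> set of neighbors."""
    pay = {n: 0.0 for n in graph}
    for n in graph:
        for m in graph[n]:
            if coop[m]:
                pay[n] += b  # benefit from a cooperating neighbor
            if coop[n]:
                pay[n] -= c  # cost of cooperating toward that neighbor
    return pay

def bot_rewire(graph, coop, bot):
    """Local-knowledge intervention: the bot drops one tie to a defecting
    neighbor and links instead to a cooperating neighbor-of-neighbor,
    if both exist. Mutates and returns `graph`."""
    defectors = [m for m in graph[bot] if not coop[m]]
    candidates = sorted({k for m in graph[bot] for k in graph[m]
                         if coop[k] and k != bot and k not in graph[bot]})
    if defectors and candidates:
        drop = random.choice(defectors)
        add = random.choice(candidates)
        graph[bot].discard(drop); graph[drop].discard(bot)
        graph[bot].add(add); graph[add].add(bot)
    return graph
```

On a three-node line 0–1–2 where node 1 defects, a bot at node 0 would cut its tie to 1 and link to the cooperator 2, locally concentrating ties among cooperators, which is the qualitative effect the study reports at group scale.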
Affiliation(s)
- Hirokazu Shirado
- School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Nicholas A Christakis
- Yale Institute for Network Science, Yale University, New Haven, CT 06520, USA; Department of Sociology, Yale University, New Haven, CT 06520, USA; Department of Ecology & Evolutionary Biology, Yale University, New Haven, CT 06511, USA; Department of Biomedical Engineering, Yale University, New Haven, CT 06520, USA

28