1
|
Murawski A, Ramirez-Zohfeld V, Mell J, Tschoe M, Schierer A, Olvera C, Brett J, Gratch J, Lindquist LA. NegotiAge: Development and pilot testing of an artificial intelligence-based family caregiver negotiation program. J Am Geriatr Soc 2024; 72:1112-1121. [PMID: 38217356 PMCID: PMC11018462 DOI: 10.1111/jgs.18775] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2023] [Revised: 12/02/2023] [Accepted: 12/23/2023] [Indexed: 01/15/2024]
Abstract
BACKGROUND Family caregivers of people with Alzheimer's disease experience conflicts as they navigate health care but lack training to resolve these disputes. We sought to develop and pilot test an artificial-intelligence negotiation training program, NegotiAge, for family caregivers. METHODS We convened negotiation experts, a geriatrician, a social worker, and community-based family caregivers. Content matter experts created short videos to teach negotiation skills. Caregivers generated dialogue surrounding conflicts. Computer scientists utilized the dialogue with the Interactive Arbitration Guide Online (IAGO) platform to develop avatar-based agents (e.g., sibling, older adult, physician) for caregivers to practice negotiating. Pilot testing was conducted with family caregivers to assess usability (USE) and satisfaction (open-ended questions with thematic analysis). RESULTS Development: With NegotiAge, caregivers progress through didactic material, then receive scenarios to negotiate (e.g., physician recommends gastric tube, sibling disagrees with home support, older adult refusing support). Caregivers negotiate in real-time with avatars who are designed to act like humans, including emotional tactics and irrational behaviors. Caregivers send/receive offers, using tactics until either mutual agreement or time expires. Immediate feedback is generated for the user to improve skills training. Pilot testing: Family caregivers (n = 12) completed the program and survey. USE questionnaire (Likert scale 1-7) subset scores revealed: (1) Useful-Mean 5.69 (SD 0.76); (2) Ease-Mean 5.24 (SD 0.96); (3) Learn-Mean 5.69 (SD 0.74); (4) Satisfy-Mean 5.62 (SD 1.10). Items that received over 80% agreements were: It helps me be more effective; It helps me be more productive; It is useful; It gives me more control over the activities in my life; It makes the things I want to accomplish easier to get done. 
Participants were highly satisfied and found NegotiAge fun to use (91.7%), with 100% who would recommend it to a friend. CONCLUSION NegotiAge is an Artificial-Intelligent Caregiver Negotiation Program, that is usable and feasible for family caregivers to become familiar with negotiating conflicts commonly seen in health care.
Collapse
Affiliation(s)
- Alaine Murawski
- Division of Geriatrics; Northwestern University, Feinberg School of Medicine; Chicago, IL, USA
| | - Vanessa Ramirez-Zohfeld
- Division of Geriatrics; Northwestern University, Feinberg School of Medicine; Chicago, IL, USA
| | - Johnathan Mell
- University of Central Florida, Department of Computer Science; Orlando, FL, USA
| | - Marianne Tschoe
- Division of Geriatrics; Northwestern University, Feinberg School of Medicine; Chicago, IL, USA
| | - Allison Schierer
- Division of Geriatrics; Northwestern University, Feinberg School of Medicine; Chicago, IL, USA
| | - Charles Olvera
- Division of Geriatrics; Northwestern University, Feinberg School of Medicine; Chicago, IL, USA
| | - Jeanne Brett
- Northwestern University, Kellogg School of Management; Evanston, IL USA
| | - Jonathan Gratch
- University of Southern California, Viterbi School of Engineering; Los Angeles, CA, USA
| | - Lee A. Lindquist
- Division of Geriatrics; Northwestern University, Feinberg School of Medicine; Chicago, IL, USA
| |
Collapse
|
2
|
Kappas A, Gratch J. These Aren't The Droids You Are Looking for: Promises and Challenges for the Intersection of Affective Science and Robotics/AI. Affect Sci 2023; 4:580-585. [PMID: 37744970 PMCID: PMC10514249 DOI: 10.1007/s42761-023-00211-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/06/2023] [Accepted: 07/19/2023] [Indexed: 09/26/2023]
Abstract
AI research focused on interactions with humans, particularly in the form of robots or virtual agents, has expanded in the last two decades to include concepts related to affective processes. Affective computing is an emerging field that deals with issues such as how the diagnosis of affective states of users can be used to improve such interactions, also with a view to demonstrate affective behavior towards the user. This type of research often is based on two beliefs: (1) artificial emotional intelligence will improve human computer interaction (or more specifically human robot interaction), and (2) we understand the role of affective behavior in human interaction sufficiently to tell artificial systems what to do. However, within affective science the focus of research is often to test a particular assumption, such as "smiles affect liking." Such focus does not provide the information necessary to synthesize affective behavior in long dynamic and real-time interactions. In consequence, theories do not play a large role in the development of artificial affective systems by engineers, but self-learning systems develop their behavior out of large corpora of recorded interactions. The status quo is characterized by measurement issues, theoretical lacunae regarding prevalence and functions of affective behavior in interaction, and underpowered studies that cannot provide the solid empirical foundation for further theoretical developments. This contribution will highlight some of these challenges and point towards next steps to create a rapprochement between engineers and affective scientists with a view to improving theory and solid applications.
Collapse
Affiliation(s)
- Arvid Kappas
- Constructor University, Campus Ring 1, 28759 Bremen, Germany
| | - Jonathan Gratch
- Institute for Creative Technologies, University of Southern California, Los Angeles, CA USA
| |
Collapse
|
3
|
Gratch J. The promise and peril of interactive embodied agents for studying non-verbal communication: a machine learning perspective. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210475. [PMID: 36871588 PMCID: PMC9985969 DOI: 10.1098/rstb.2021.0475] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2022] [Accepted: 01/27/2023] [Indexed: 03/07/2023] Open
Abstract
In face-to-face interactions, parties rapidly react and adapt to each other's words, movements and expressions. Any science of face-to-face interaction must develop approaches to hypothesize and rigorously test mechanisms that explain such interdependent behaviour. Yet conventional experimental designs often sacrifice interactivity to establish experimental control. Interactive virtual and robotic agents have been offered as a way to study true interactivity while enforcing a measure of experimental control by allowing participants to interact with realistic but carefully controlled partners. But as researchers increasingly turn to machine learning to add realism to such agents, they may unintentionally distort the very interactivity they seek to illuminate, particularly when investigating the role of non-verbal signals such as emotion or active-listening behaviours. Here I discuss some of the methodological challenges that may arise when machine learning is used to model the behaviour of interaction partners. By articulating and explicitly considering these commitments, researchers can transform 'unintentional distortions' into valuable methodological tools that yield new insights and better contextualize existing experimental findings that rely on learning technology. This article is part of a discussion meeting issue 'Face2face: advancing the science of social interaction'.
Collapse
Affiliation(s)
- Jonathan Gratch
- Department of Computer Science, University of Southern California, Los Angeles, CA 90292, USA
| |
Collapse
|
4
|
Murawski A, Ramirez-Zohfeld V, Schierer A, Olvera C, Mell J, Gratch J, Brett J, Lindquist LA. Transforming a Negotiation Framework to Resolve Conflicts among Older Adults and Family Caregivers. Geriatrics (Basel) 2023; 8:36. [PMID: 36960991 PMCID: PMC10037562 DOI: 10.3390/geriatrics8020036] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2022] [Revised: 02/18/2023] [Accepted: 03/03/2023] [Indexed: 03/11/2023] Open
Abstract
BACKGROUND Family caregivers of older people with Alzheimer's dementia (PWD) often need to advocate and resolve health-related conflicts (e.g., determining treatment necessity, billing errors, and home health extensions). As they deal with these health system conflicts, family caregivers experience unnecessary frustration, anxiety, and stress. The goal of this research was to apply a negotiation framework to resolve real-world family caregiver-older adult conflicts. METHODS We convened an interdisciplinary team of national community-based family caregivers, social workers, geriatricians, and negotiation experts (n = 9; Illinois, Florida, New York, and California) to examine the applicability of negotiation and conflict management frameworks to three older adult-caregiver conflicts (i.e., caregiver-older adult, caregiver-provider, and caregiver-caregiver). The panel of caregivers provided scenarios and dialogue describing conflicts they experienced in these three settings. A qualitative analysis was then performed grouping the responses into a framework matrix. RESULTS Upon presenting the three conflicts to the caregivers, 96 responses (caregiver-senior), 75 responses (caregiver-caregiver), and 80 responses (caregiver-provider) were generated. A thematic analysis showed that the statements and responses fit the interest-rights-power (IRP) negotiation framework. DISCUSSION The interests-rights-power (IRP) framework, used in business negotiations, provided insight into how caregivers experienced conflict with older adults, providers, and other caregivers. Future research is needed to examine applying the IRP framework in the training of caregivers of older people with Alzheimer's dementia.
Collapse
Affiliation(s)
- Alaine Murawski
- Division of Geriatrics, Feinberg School of Medicine, Northwestern University, Chicago, IL 60208, USA
| | - Vanessa Ramirez-Zohfeld
- Division of Geriatrics, Feinberg School of Medicine, Northwestern University, Chicago, IL 60208, USA
| | - Allison Schierer
- Division of Geriatrics, Feinberg School of Medicine, Northwestern University, Chicago, IL 60208, USA
| | - Charles Olvera
- Division of Geriatrics, Feinberg School of Medicine, Northwestern University, Chicago, IL 60208, USA
| | - Johnathan Mell
- School of Computer Science, University of Central Florida, Orlando, FL 32816, USA
| | - Jonathan Gratch
- Institute of Creative Technologies, University of Southern California, Los Angeles, CA 90007, USA
| | - Jeanne Brett
- Kellogg School of Business, Northwestern University, Evanston, IL 60208, USA
| | - Lee A. Lindquist
- Division of Geriatrics, Feinberg School of Medicine, Northwestern University, Chicago, IL 60208, USA
| |
Collapse
|
5
|
Gratch J, Fast NJ. The power to harm: AI assistants pave the way to unethical behavior. Curr Opin Psychol 2022; 47:101382. [PMID: 35830764 DOI: 10.1016/j.copsyc.2022.101382] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2022] [Revised: 05/20/2022] [Accepted: 06/01/2022] [Indexed: 11/03/2022]
Abstract
Advances in artificial intelligence (AI) enable new ways of exercising and experiencing power by automating interpersonal tasks such as interviewing and hiring workers, managing and evaluating work, setting compensation, and negotiating deals. As these techniques become more sophisticated, they increasingly support personalization where users can "tell" their AI assistants not only what to do, but how to do it: in effect, dictating the ethical values that govern the assistant's behavior. Importantly, these new forms of power could bypass existing social and regulatory checks on unethical behavior by introducing a new agent into the equation. Organization research suggests that acting through human agents (i.e., the problem of indirect agency) can undermine ethical forecasting such that actors believe they are acting ethically, yet a) show less benevolence for the recipients of their power, b) receive less blame for ethical lapses, and c) anticipate less retribution for unethical behavior. We review a series of studies illustrating how, across a wide range of social tasks, people may behave less ethically and be more willing to deceive when acting through AI agents. We conclude by examining boundary conditions and discussing potential directions for future research.
Collapse
|
6
|
|
7
|
Dukes D, Abrams K, Adolphs R, Ahmed ME, Beatty A, Berridge KC, Broomhall S, Brosch T, Campos JJ, Clay Z, Clément F, Cunningham WA, Damasio A, Damasio H, D’Arms J, Davidson JW, de Gelder B, Deonna J, de Sousa R, Ekman P, Ellsworth PC, Fehr E, Fischer A, Foolen A, Frevert U, Grandjean D, Gratch J, Greenberg L, Greenspan P, Gross JJ, Halperin E, Kappas A, Keltner D, Knutson B, Konstan D, Kret ME, LeDoux JE, Lerner JS, Levenson RW, Loewenstein G, Manstead ASR, Maroney TA, Moors A, Niedenthal P, Parkinson B, Pavlidis L, Pelachaud C, Pollak SD, Pourtois G, Roettger-Roessler B, Russell JA, Sauter D, Scarantino A, Scherer KR, Stearns P, Stets JE, Tappolet C, Teroni F, Tsai J, Turner J, Van Reekum C, Vuilleumier P, Wharton T, Sander D. The rise of affectivism. Nat Hum Behav 2021; 5:816-820. [PMID: 34112980 PMCID: PMC8319089 DOI: 10.1038/s41562-021-01130-8] [Citation(s) in RCA: 37] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/30/2023]
Abstract
Research over the past decades has demonstrated the explanatory power of emotions, feelings, motivations, moods, and other affective processes when trying to understand and predict how we think and behave. In this consensus article, we ask: has the increasingly recognized impact of affective phenomena ushered in a new era, the era of affectivism?
Collapse
Affiliation(s)
- Daniel Dukes
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland,Department of Special Education, University of Fribourg, Fribourg, Switzerland,;
| | - Kathryn Abrams
- Berkeley Law School, University of California, Berkeley, Berkeley, CA, USA
| | - Ralph Adolphs
- Division of Humanities and Social Sciences, California Institute of Technology, Pasadena, CA, USA
| | - Mohammed E. Ahmed
- Department of Computer Science, University of Houston, Houston, TX, USA
| | - Andrew Beatty
- Department of Anthropology, Brunel University London, London, UK
| | - Kent C. Berridge
- Department of Psychology, University of Michigan, Ann Arbor, MI, USA
| | - Susan Broomhall
- Australian Research Council Centre of Excellence for History of Emotions, Australian Catholic University, Perth, Western Australia, Australia
| | - Tobias Brosch
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland,Department of Psychology, FPSE, University of Geneva, Geneva, Switzerland
| | - Joseph J. Campos
- Institute of Human Development, University of California, Berkeley, Berkeley, CA,USA
| | - Zanna Clay
- Department of Psychology, Durham University, Durham, UK
| | - Fabrice Clément
- Cognitive Science Centre, University of Neuchâtel, Neuchâtel, Switzerland
| | | | - Antonio Damasio
- Brain and Creativity Institute, University of Southern California, Los Angeles, CA, USA
| | - Hanna Damasio
- Dornsife Cognitive Neuroscience Imaging Center, University of Southern California, Los Angeles, CA, USA
| | - Justin D’Arms
- Department of Philosophy, Ohio State University, Columbus, OH, USA
| | - Jane W. Davidson
- Australian Research Council Centre of Excellence for History of Emotions, University of Melbourne, Melbourne, Victoria, Australia
| | - Beatrice de Gelder
- Department of Cognitive Neuroscience, Maastricht University, Maastricht, The Netherlands,Department of Computer Science, University College London, London, UK
| | - Julien Deonna
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland,Department of Philosophy, University of Geneva, Geneva, Switzerland
| | - Ronnie de Sousa
- Department of Philosophy, University of Toronto, Toronto, Ontario, Canada
| | - Paul Ekman
- Department of Psychology, University of California, San Francisco, San Francisco, CA, USA,Paul Ekman Group, San Francisco, CA, USA
| | | | - Ernst Fehr
- Department of Economics, University of Zurich, Zurich, Switzerland
| | - Agneta Fischer
- Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands
| | - Ad Foolen
- Centre for Language Studies, Radboud University, Nijmegen, The Netherlands
| | - Ute Frevert
- Max Planck Institute for Human Development, Berlin, Germany
| | - Didier Grandjean
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland,Department of Psychology, FPSE, University of Geneva, Geneva, Switzerland
| | - Jonathan Gratch
- Institute for Creative Technologies, University of Southern California, Playa Vista, CA, USA
| | - Leslie Greenberg
- Department of Psychology, York University, Toronto, Ontario, Canada
| | | | - James J. Gross
- Department of Psychology, Stanford University, Stanford, CA, USA
| | - Eran Halperin
- Psychology Department, Hebrew University of Jerusalem, Jerusalem, Israel
| | - Arvid Kappas
- Department of Psychology and Methods, Jacobs University Bremen, Bremen, Germany
| | - Dacher Keltner
- Department of Psychology, University of California, Berkeley, Berkeley, CA, USA
| | - Brian Knutson
- Department of Psychology, Stanford University, Stanford, CA, USA
| | - David Konstan
- Department of Classics, New York University, New York, NY, USA
| | - Mariska E. Kret
- Cognitive Psychology Unit, Institute of Psychology, Leiden University, Leiden, The Netherlands
| | - Joseph E. LeDoux
- Center for Neural Science, New York University, New York, NY, USA
| | - Jennifer S. Lerner
- Harvard Kennedy School and Department of Psychology, Harvard University, Cambridge, MA, USA
| | - Robert W. Levenson
- Department of Psychology, University of California, Berkeley, Berkeley, CA, USA
| | - George Loewenstein
- Department of Social and Decision Sciences, Carnegie Mellon University, Pittsburgh, PA, USA
| | | | - Terry A. Maroney
- Vanderbilt University Law School, Vanderbilt University, Nashville, TN, USA
| | - Agnes Moors
- Department of Psychology, KU Leuven, Leuven, Belgium
| | - Paula Niedenthal
- Department of Psychology, University of Wisconsin-Madison, Madison, WI, USA
| | - Brian Parkinson
- Department of Experimental Psychology, University of Oxford, Oxford, UK
| | - loannis Pavlidis
- Department of Computer Science, University of Houston, Houston, TX, USA
| | - Catherine Pelachaud
- CNRS-Institut des Systèmes Intelligents et de Robotique, Sorbonne University, Paris, France
| | - Seth D. Pollak
- Department of Psychology, University of Wisconsin-Madison, Madison, WI, USA
| | - Gilles Pourtois
- Department of Experimental, Clinical and Health Psychology, Ghent University, Ghent, Belgium
| | | | - James A. Russell
- Department of Psychology and Neuroscience, Boston College, Boston, MA, USA
| | - Disa Sauter
- Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands
| | | | - Klaus R. Scherer
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland,Department of Psychology, University of Munich, Munich, Germany
| | - Peter Stearns
- Department of History, George Mason University, Fairfax, VA, USA
| | - Jan E. Stets
- Department of Sociology, University of California, Riverside, Riverside, CA, USA
| | - Christine Tappolet
- Département de Philosophie, Université de Montreal, Montréal, Québec, Canada
| | - Fabrice Teroni
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland,Department of Philosophy, University of Geneva, Geneva, Switzerland
| | - Jeanne Tsai
- Department of Psychology, Stanford University, Stanford, CA, USA
| | - Jonathan Turner
- Department of Sociology, University of California, Riverside, Riverside, CA, USA
| | - Carien Van Reekum
- School of Psychology and Clinical Language Sciences, University of Reading, Reading UK
| | - Patrik Vuilleumier
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland,Department of Neuroscience, University Medical School, University of Geneva, Geneva, Switzerland
| | - Tim Wharton
- School of Humanities, University of Brighton, Brighton, UK
| | - David Sander
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland,Department of Psychology, FPSE, University of Geneva, Geneva, Switzerland,;
| |
Collapse
|
8
|
Abstract
As autonomous machines, such as automated vehicles (AVs) and robots, become pervasive in society, they will inevitably face moral dilemmas where they must make decisions that risk injuring humans. However, prior research has framed these dilemmas in starkly simple terms, i.e., framing decisions as life and death and neglecting the influence of risk of injury to the involved parties on the outcome. Here, we focus on this gap and present experimental work that systematically studies the effect of risk of injury on the decisions people make in these dilemmas. In four experiments, participants were asked to program their AVs to either save five pedestrians, which we refer to as the utilitarian choice, or save the driver, which we refer to as the nonutilitarian choice. The results indicate that most participants made the utilitarian choice but that this choice was moderated in important ways by perceived risk to the driver and risk to the pedestrians. As a second contribution, we demonstrate the value of formulating AV moral dilemmas in a game-theoretic framework that considers the possible influence of others’ behavior. In the fourth experiment, we show that participants were more (less) likely to make the utilitarian choice, the more utilitarian (nonutilitarian) other drivers behaved; furthermore, unlike the game-theoretic prediction that decision-makers inevitably converge to nonutilitarianism, we found significant evidence of utilitarianism. We discuss theoretical implications for our understanding of human decision-making in moral dilemmas and practical guidelines for the design of autonomous machines that solve these dilemmas while, at the same time, being likely to be adopted in practice.
Collapse
Affiliation(s)
- Celso M de Melo
- CCDC US Army Research Laboratory, Playa Vista, CA, United States
| | - Stacy Marsella
- College of Computer and Information Science, Northeastern University, Boston, MA, United States
| | - Jonathan Gratch
- Institute for Creative Technologies, University of Southern, Playa Vista, CA, United States
| |
Collapse
|
9
|
Abstract
Negotiation is the complex social process by which multiple parties come to mutual agreement over a series of issues. As such, it has proven to be a key challenge problem for designing adequately social AIs that can effectively navigate this space. Artificial AI agents that are capable of negotiating must be capable of realizing policies and strategies that govern offer acceptances, offer generation, preference elicitation, and more. But the next generation of agents must also adapt to reflect their users’ experiences.
The best human negotiators tend to have honed their craft through hours of practice and experience. But, not all negotiators agree on which strategic tactics to use, and endorsement of deceptive tactics in particular is a controversial topic for many negotiators. We examine the ways in which deceptive tactics are used and endorsed in non-repeated human negotiation and show that prior experience plays a key role in governing what tactics are seen as acceptable or useful in negotiation. Previous work has indicated that people that negotiate through artificial agent representatives may be more inclined to fairness than those people that negotiate directly. We present a series of three user studies that challenge this initial assumption and expand on this picture by examining the role of past experience.
This work constructs a new scale for measuring endorsement of manipulative negotiation tactics and introduces its use to artificial intelligence research. It continues by presenting the results of a series of three studies that examine how negotiating experience can change what negotiation tactics and strategies human endorse. Study #1 looks at human endorsement of deceptive techniques based on prior negotiating experience as well as representative effects. Study #2 further characterizes the negativity of prior experience in relation to endorsement of deceptive techniques. Finally, in Study #3, we show that the lessons learned from the empirical observations in Study #1 and #2 can in fact be induced—by designing agents that provide a specific type of negative experience, human endorsement of deception can be predictably manipulated.
Collapse
|
10
|
Rychlowska M, van der Schalk J, Gratch J, Breitinger E, Manstead AS. Beyond actions: Reparatory effects of regret in intergroup trust games. Journal of Experimental Social Psychology 2019. [DOI: 10.1016/j.jesp.2019.01.006] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
|
11
|
Abstract
Recent times have seen an emergence of intelligent machines that act autonomously on our behalf, such as autonomous vehicles. Despite promises of increased efficiency, it is not clear whether this paradigm shift will change how we decide when our self-interest (e.g., comfort) is pitted against the collective interest (e.g., environment). Here we show that acting through machines changes the way people solve these social dilemmas and we present experimental evidence showing that participants program their autonomous vehicles to act more cooperatively than if they were driving themselves. We show that this happens because programming causes selfish short-term rewards to become less salient, leading to considerations of broader societal goals. We also show that the programmed behavior is influenced by past experience. Finally, we report evidence that the effect generalizes beyond the domain of autonomous vehicles. We discuss implications for designing autonomous machines that contribute to a more cooperative society.
Collapse
Affiliation(s)
- Celso M de Melo
- Sensors and Electron Devices Directorate, US Army Research Laboratory, Playa Vista, CA 90094-2536;
| | - Stacy Marsella
- College of Computer and Information Science, Northeastern University, Boston, MA 02115
| | - Jonathan Gratch
- Institute for Creative Technologies, University of Southern California, Playa Vista, CA 90094-2536
| |
Collapse
|
12
|
Chu VC, Lucas GM, Lei S, Mozgai S, Khooshabeh P, Gratch J. Emotion Regulation in the Prisoner's Dilemma: Effects of Reappraisal on Behavioral Measures and Cardiovascular Measures of Challenge and Threat. Front Hum Neurosci 2019; 13:50. [PMID: 30837855 PMCID: PMC6382736 DOI: 10.3389/fnhum.2019.00050] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2018] [Accepted: 01/30/2019] [Indexed: 11/19/2022] Open
Abstract
The current study examines cooperation and cardiovascular responses in individuals that were defected on by their opponent in the first round of an iterated Prisoner’s Dilemma. In this scenario, participants were either primed with the emotion regulation strategy of reappraisal or no emotion regulation strategy, and their opponent either expressed an amused smile or a polite smile after the results were presented. We found that cooperation behavior decreased in the no emotion regulation group when the opponent expressed an amused smile compared to a polite smile. In the cardiovascular measures, we found significant differences between the emotion regulation conditions using the biopsychosocial (BPS) model of challenge and threat. However, the cardiovascular measures of participants instructed with the reappraisal strategy were only weakly comparable with a threat state of the BPS model, which involves decreased blood flow and perception of greater task demands than resources to cope with those demands. Conversely, the cardiovascular measures of participants without an emotion regulation were only weakly comparable with a challenge state of the BPS model, which involves increased blood flow and perception of having enough or more resources to cope with task demands.
Collapse
Affiliation(s)
- Veronica C Chu
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, United States
| | - Gale M Lucas
- Institute for Creative Technologies, University of Southern California, Playa Vista, CA, United States
| | - Su Lei
- Institute for Creative Technologies, University of Southern California, Playa Vista, CA, United States
| | - Sharon Mozgai
- Institute for Creative Technologies, University of Southern California, Playa Vista, CA, United States
| | | | - Jonathan Gratch
- Institute for Creative Technologies, University of Southern California, Playa Vista, CA, United States
| |
Collapse
|
13
|
Knott BA, Gratch J, Cangelosi A, Caverlee J. ACM Transactions on Interactive Intelligent Systems (TiiS) Special Issue on Trust and Influence in Intelligent Human-Machine Interaction. ACM T INTERACT INTEL 2018. [DOI: 10.1145/3281451] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
|
14
|
|
15
|
Lucas GM, Rizzo A, Gratch J, Scherer S, Stratou G, Boberg J, Morency LP. Reporting Mental Health Symptoms: Breaking Down Barriers to Care with Virtual Human Interviewers. Front Robot AI 2017. [DOI: 10.3389/frobt.2017.00051] [Citation(s) in RCA: 107] [Impact Index Per Article: 15.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/01/2023] Open
|
16. Khashe S, Lucas G, Becerik-Gerber B, Gratch J. Buildings with persona: Towards effective building-occupant communication. Computers in Human Behavior 2017. [DOI: 10.1016/j.chb.2017.05.040]
17.
Abstract
Affective computing (AC) adopts a computational approach to study affect. We highlight the AC approach towards automated affect measures that jointly model machine-readable physiological/behavioral signals with affect estimates as reported by humans or experimentally elicited. We describe the conceptual and computational foundations of the approach, followed by two case studies: one on discrimination between genuine and faked expressions of pain in the lab, and the second on measuring nonbasic affect in the wild. We discuss applications of the measures, analyze measurement accuracy and generalizability, and highlight advances afforded by computational tipping points, such as big data, wearable sensing, crowdsourcing, and deep learning. We conclude by advocating for increasing synergies between AC and affective science and offer suggestions in that direction.
Affiliation(s)
- Sidney D’Mello: Department of Computer Science and Department of Psychology, University of Notre Dame, USA
- Arvid Kappas: Department of Psychology, Jacobs University, Germany
- Jonathan Gratch: Institute for Creative Technologies and Computer Science Department, University of Southern California, USA
18. Rizzo A, Lucas G, Gratch J, Stratou G, Morency LP, Chavez K, Shilling R, Scherer S. Automatic Behavior Analysis During a Clinical Interview with a Virtual Human. Stud Health Technol Inform 2016; 220:316-322. [PMID: 27046598]
Abstract
SimSensei is a Virtual Human (VH) interviewing platform that uses off-the-shelf sensors (i.e., webcams, Microsoft Kinect, and a microphone) to capture and interpret real-time audiovisual behavioral signals from users interacting with the VH system. The system was specifically designed for clinical interviewing and health care support by providing a face-to-face interaction between a user and a VH that can automatically react to the inferred state of the user through analysis of behavioral signals gleaned from the user's facial expressions, body gestures, and vocal parameters. Akin to how non-verbal behavioral signals have an impact on human-to-human interaction and communication, SimSensei aims to capture and infer user state from signals generated from the user's non-verbal communication, to improve engagement between a VH and a user, and to quantify user state from the data captured across a 20-minute interview. Results from a sample of service members (SMs) who were interviewed before and after a deployment to Afghanistan indicate that SMs reveal more PTSD symptoms to the VH than they report on the Post Deployment Health Assessment. Pre/post-deployment facial expression analysis indicated more sad expressions and fewer happy expressions at post-deployment.
Affiliation(s)
- Albert Rizzo: University of Southern California, Institute for Creative Technologies
- Gale Lucas: University of Southern California, Institute for Creative Technologies
- Jonathan Gratch: University of Southern California, Institute for Creative Technologies
- Giota Stratou: University of Southern California, Institute for Creative Technologies
- Stefan Scherer: University of Southern California, Institute for Creative Technologies
19.
20.
21. Khooshabeh P, Dehghani M, Nazarian A, Gratch J. The cultural influence model: when accented natural language spoken by virtual characters matters. AI & Soc 2014. [DOI: 10.1007/s00146-014-0568-1]
22.
23. de Melo CM, Carnevale PJ, Read SJ, Gratch J. Reading people's minds from emotion expressions in interdependent decision making. J Pers Soc Psychol 2013; 106:73-88. [PMID: 24079297] [DOI: 10.1037/a0034251]
Abstract
How do people make inferences about other people's minds from their emotion displays? The ability to infer others' beliefs, desires, and intentions from their facial expressions should be especially important in interdependent decision making when people make decisions from beliefs about the others' intention to cooperate. Five experiments tested the general proposition that people follow principles of appraisal when making inferences from emotion displays, in context. Experiment 1 revealed that the same emotion display produced opposite effects depending on context: When the other was competitive, a smile on the other's face evoked a more negative response than when the other was cooperative. Experiment 2 revealed that the essential information from emotion displays was derived from appraisals (e.g., Is the current state of affairs conducive to my goals? Who is to blame for it?); facial displays of emotion had the same impact on people's decision making as textual expressions of the corresponding appraisals. Experiments 3, 4, and 5 used multiple mediation analyses and a causal-chain design: Results supported the proposition that beliefs about others' appraisals mediate the effects of emotion displays on expectations about others' intentions. We suggest a model based on appraisal theories of emotion that posits an inferential mechanism whereby people retrieve, from emotion expressions, information about others' appraisals, which then leads to inferences about others' mental states. This work has implications for the design of algorithms that drive agent behavior in human-agent strategic interaction, an emerging domain at the interface of computer science and social psychology.
Affiliation(s)
- Celso M de Melo: Marshall School of Business, University of Southern California
- Stephen J Read: Department of Psychology, University of Southern California
- Jonathan Gratch: Institute for Creative Technologies, University of Southern California
24. Hartholt A, Traum D, Marsella SC, Shapiro A, Stratou G, Leuski A, Morency LP, Gratch J. All Together Now. Intelligent Virtual Agents 2013. [DOI: 10.1007/978-3-642-40415-3_33]
25. Gratch J, Morency LP, Scherer S, Stratou G, Boberg J, Koenig S, Adamson T, Rizzo A. User-state sensing for virtual health agents and telehealth applications. Stud Health Technol Inform 2013; 184:151-157. [PMID: 23400148]
Abstract
Nonverbal behaviors play a crucial role in shaping outcomes in face-to-face clinical interactions. Experienced clinicians use nonverbals to foster rapport and "read" their clients to inform diagnoses. The rise of telemedicine and virtual health agents creates new opportunities, but it also strips away much of this nonverbal channel. Recent advances in low-cost computer vision and sensing technologies have the potential to address this challenge by learning to recognize nonverbal cues from large datasets of clinical interactions. These techniques can enhance both telemedicine and the emerging technology of virtual health agents. This article describes our current research in addressing these challenges in the domain of PTSD and depression screening for U.S. Veterans. We describe our general approach and report on our initial contribution: the creation of a large dataset of clinical interview data that facilitates the training of user-state sensing technology.
26.
Affiliation(s)
- Jonathan Gratch: Department of Computer Science, University of Southern California, USA
27.
Abstract
Social causality is the inference an entity makes about the social behavior of other entities and of itself. Besides physical cause and effect, social causality involves reasoning about the epistemic states of agents and coercive circumstances. Based on such inference, responsibility judgment is the process whereby one singles out individuals to assign responsibility, credit, or blame for multi-agent activities. Social causality and responsibility judgment are a key aspect of social intelligence, and a model of them facilitates the design and development of a variety of multi-agent interactive systems. Based on psychological attribution theory, this paper presents a domain-independent computational model that automates the social inference and judgment process according to an agent's causal knowledge and observations of interaction. We conduct experimental studies to empirically validate the computational model. The experimental results show that our model predicts human judgments of social attributions and makes inferences consistent with what most people do in their judgments. Therefore, the proposed model can be generically incorporated into an intelligent system to augment its social and cognitive functionality.
28. Kang SH, Gratch J. Socially anxious people reveal more personal information with virtual counselors that talk about themselves using intimate human back stories. Stud Health Technol Inform 2012; 181:202-206. [PMID: 22954856]
Abstract
In this paper, we describe our findings from research designed to explore the effect of virtual human counselors' self-disclosure using intimate human back stories on real human clients' social responses in psychological counseling sessions. To investigate this subject, we designed an experiment involving two conditions of the counselors' self-disclosure: human back stories and computer back stories. We then measured socially anxious users' verbal self-disclosure. The results demonstrated that highly anxious users revealed more personal information than less anxious users when they interacted with virtual counselors who disclosed intimate information about themselves using human back stories. Furthermore, we found a greater inclination toward self-disclosure among highly anxious users following interaction with virtual counselors who employed human back stories rather than computer back stories. In addition, a further analysis of socially anxious users' feelings of rapport demonstrated that virtual counselors elicited more rapport with highly anxious users than with less anxious users when the counselors employed human back stories. This outcome was not found in users' interactions with counselors who employed computer back stories.
Affiliation(s)
- Sin-Hwa Kang: Institute for Creative Technologies, University of Southern California, Playa Vista, CA, USA
29. Kulms P, Krämer NC, Gratch J, Kang SH. It’s in Their Eyes: A Study on Female and Male Virtual Humans’ Gaze. Intelligent Virtual Agents 2011. [DOI: 10.1007/978-3-642-23974-8_9]
30.
31. Kang SH, Gratch J. People like virtual counselors that highly-disclose about themselves. Stud Health Technol Inform 2011; 167:143-148. [PMID: 21685657]
Abstract
In this paper, we describe our findings from research designed to explore the effect of self-disclosure between virtual human counselors (interviewers) and human users (interviewees) on users' social responses in counseling sessions. To investigate this subject, we designed an experiment involving three conditions of self-disclosure: high-disclosure, low-disclosure, and non-disclosure. We measured users' sense of co-presence and social attraction to virtual counselors. The results demonstrated that users reported more co-presence and social attraction to virtual humans who disclosed highly intimate information about themselves than to virtual humans who disclosed less intimate or no information about themselves. In addition, a further analysis of users' verbal self-disclosure showed that users revealed a medium level of personal information more often when interacting with virtual humans that highly-disclosed about themselves than when interacting with virtual humans disclosing less intimate or no information about themselves.
Affiliation(s)
- Sin-Hwa Kang: Institute for Creative Technologies, University of Southern California, Playa Vista, CA, USA
32.
Abstract
The rapid development of new technologies and media and widespread access to the Internet are changing how people teach and learn. Recognizing the potential of technology, schools and universities are placing more content online, from fully deliverable courses to course catalogs, course registration, and college admissions. People are able to gain access to a multitude of information with one click. Online learning environments range from authentic, real-time environments to simulations, as well as 2D and 3D virtual environments. This paper explores the use of a 2-dimensional, narrative-based, virtual learning environment (VLE) created by doctoral students to orient potential students to their university departments’ degree programs, faculty, and course offerings. After exploring the environment, participants were surveyed about their experiences. Findings include validation of the instrument, possible correlations relating to learning through games, engagement, and game design. Emerging themes and suggestions for future research are presented in this paper.
33
|
von der Pütten AM, Krämer NC, Gratch J, Kang SH. “It doesn’t matter what you are!” Explaining social effects of agents and avatars. Computers in Human Behavior 2010. [DOI: 10.1016/j.chb.2010.06.012] [Citation(s) in RCA: 116] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
34
|
Huang L, Morency LP, Gratch J. Learning Backchannel Prediction Model from Parasocial Consensus Sampling: A Subjective Evaluation. Intelligent Virtual Agents 2010. [DOI: 10.1007/978-3-642-15892-6_17] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
|
35
|
Wang N, Johnson WL, Gratch J. Facial Expressions and Politeness Effect in Foreign Language Training System. Intelligent Tutoring Systems 2010. [DOI: 10.1007/978-3-642-13388-6_21] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
|
36.
37.
38.
39.
40.
41.
42. Kenny P, Parsons TD, Gratch J, Leuski A, Rizzo AA. Virtual Patients for Clinical Therapist Skills Training. Intelligent Virtual Agents 2007. [DOI: 10.1007/978-3-540-74997-4_19]
43. Jonsdottir GR, Gratch J, Fast E, Thórisson KR. Fluid Semantic Back-Channel Feedback in Dialogue: Challenges and Progress. Intelligent Virtual Agents 2007. [DOI: 10.1007/978-3-540-74997-4_15]
44.
45.
46.
47.
Abstract
Although most scheduling problems are NP-hard, domain-specific techniques perform well in practice but are quite expensive to construct. In adaptive problem solving, domain-specific knowledge is acquired automatically for a general problem solver with a flexible control architecture. In this approach, a learning system explores a space of possible heuristic methods for one well suited to the eccentricities of the given domain and problem distribution. In this article, we discuss an application of the approach to scheduling satellite communications. Using problem distributions based on actual mission requirements, our approach identifies strategies that not only decrease the amount of CPU time required to produce schedules, but also increase the percentage of problems that are solvable within computational resource limitations.
48
|
DeJong GF, Gratch J. Learning search control knowledge: An explanation-based approach. ARTIF INTELL 1991. [DOI: 10.1016/0004-3702(91)90093-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
|