1. Tanaka T, Haruno M. Feedback from an avatar facilitates risk-taking by modulating the amygdala response to feedback uncertainty. PLoS Biol 2025; 23:e3003122. [PMID: 40265226; PMCID: PMC12015647; DOI: 10.1371/journal.pbio.3003122]
Abstract
With the rise of cyberspace technologies, communication through avatars has become increasingly common. However, the cognitive and neural mechanisms underlying behavioral changes induced by avatar interactions remain poorly understood, particularly when avatars serve as communication partners. To address this gap and uncover the biological mechanisms involved, we conducted behavioral (n = 28) and functional magnetic resonance imaging (fMRI) (n = 51) experiments using a simple gambling task. Participants received dynamic facial-expression feedback from either a human observer presented as an avatar or a real human face, based on the outcome (win or no-win) of each gambling trial. Our results showed that expecting avatar feedback significantly increased gambling behavior in both behavioral and fMRI settings. Computational modeling revealed that differences in risk-taking behavior between the avatar and human conditions were associated with differential valuation of feedback uncertainty. Furthermore, we found that the amygdala encodes this differential valuation of feedback uncertainty, with a negative response to feedback uncertainty playing a key role in the choice of the gambling option. Additionally, individual differences in the behavioral and neural valuation of feedback uncertainty correlated with a questionnaire score measuring emotional consideration of another person's internal states. These results demonstrate the facilitation of risk-taking behavior by avatar feedback and its underlying cognitive and neural mechanisms, providing deeper biological insights into risk-taking behavior and implications for human social interactions using avatars.
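The abstract does not specify the authors' model, but the idea of condition-dependent valuation of feedback uncertainty can be illustrated with a minimal sketch, assuming a softmax choice rule and an uncertainty weight (here called kappa) that differs between avatar and human feedback. All parameter names and numbers below are illustrative, not the study's actual model or data.

```python
import numpy as np

def p_gamble(win_prob, win_amount, sure_amount, kappa, beta):
    """Probability of choosing the gamble over the sure option.

    kappa : assumed weight on feedback uncertainty (the abstract suggests
            this valuation differs between avatar and human feedback).
    beta  : softmax inverse temperature.
    """
    # Expected value of each option.
    ev_gamble = win_prob * win_amount
    ev_sure = sure_amount

    # Illustrative proxy for feedback uncertainty: the Bernoulli variance
    # of the win/no-win outcome that the observer will react to.
    feedback_uncertainty = win_prob * (1.0 - win_prob)

    # Subjective value adds a (possibly negative) uncertainty term.
    v_gamble = ev_gamble + kappa * feedback_uncertainty
    v_sure = ev_sure

    # Softmax (logistic) choice rule.
    return 1.0 / (1.0 + np.exp(-beta * (v_gamble - v_sure)))

# A more negative kappa (stronger aversion to feedback uncertainty)
# lowers the gambling rate; a less negative or positive kappa raises it.
print(p_gamble(0.5, 100, 40, kappa=-20.0, beta=0.1))
print(p_gamble(0.5, 100, 40, kappa=5.0, beta=0.1))
```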
Affiliation(s)
- Toshiko Tanaka: Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology (NICT), Suita, Japan
- Masahiko Haruno: Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology (NICT), Suita, Japan; Graduate School of Frontier Biosciences, Osaka University, Suita, Japan
2. Li BJ, Zhang H. Exploring the links between type and content of virtual background use during videoconferencing and videoconference fatigue. Front Psychol 2024; 15:1408481. [PMID: 39364086; PMCID: PMC11446745; DOI: 10.3389/fpsyg.2024.1408481]
Abstract
The popularity of remote working in recent years has led to a rise in the use of videoconferencing tools. However, these communication tools have also given rise to a phenomenon known as videoconference fatigue (VF). Using the limited capacity model of motivated mediated message processing and impression management theory as the theoretical framework, this study explores how different types and content of virtual backgrounds in videoconferencing influence people's VF and well-being. A survey of 610 users of videoconferencing tools revealed significant variations in the content and type of virtual backgrounds used during videoconferences. Our findings highlight three main points: first, there is a significant relationship between the use of virtual backgrounds and VF; second, pairwise comparisons showed that the type of virtual background significantly influences the amount of VF experienced by users; third, the content of virtual backgrounds also significantly impacts the level of VF experienced by users. These results suggest that careful selection of virtual backgrounds can mitigate VF and improve user well-being. Theoretical and practical implications are discussed.
Affiliation(s)
- Benjamin J Li: Wee Kim Wee School of Communication and Information, College of Humanities, Arts, and Social Sciences, Nanyang Technological University, Singapore, Singapore
- Heng Zhang: Wee Kim Wee School of Communication and Information, College of Humanities, Arts, and Social Sciences, Nanyang Technological University, Singapore, Singapore
3. Kiuchi K, Umehara H, Irizawa K, Kang X, Nakataki M, Yoshida M, Numata S, Matsumoto K. An Exploratory Study of the Potential of Online Counseling for University Students by a Human-Operated Avatar Counselor. Healthcare (Basel) 2024; 12:1287. [PMID: 38998822; PMCID: PMC11241672; DOI: 10.3390/healthcare12131287]
Abstract
Recently, the use of digital technologies, such as avatars and virtual reality, has been increasingly explored to address university students' mental health issues. However, there is limited research on the advantages and disadvantages of counselors using avatars in online video counseling. Herein, 25 university students took part in a pilot online counseling session with an avatar controlled by a human counselor; they were asked about their emotional experiences and impressions of the avatar and provided qualitative feedback on their communication experience. Positive emotions during the session were associated with impressions of the avatar's intelligence and likeability. The anthropomorphism, animacy, likeability, and intelligence impressions of the avatar were interrelated, indicating that the avatar's smile and the counselor's expertise in empathy and approval may have contributed to these impressions. However, no associations were observed between participant experiences and their prior communication with avatars, their gender, or the perceived gender of the avatar. Accordingly, recommendations for future practice and research are provided. Accumulating practical and empirical findings on the effectiveness of human-operated avatar counselors is crucial for addressing university students' mental health issues.
Affiliation(s)
- Keita Kiuchi: Japan National Institute of Occupational Safety and Health, Japan Organization of Occupational Health and Safety, Kawasaki 214-8585, Japan
- Hidehiro Umehara: Graduate School of Biomedical Sciences, Department of Psychiatry, Tokushima University, Tokushima 770-0042, Japan
- Koushi Irizawa: Graduate School of Biomedical Sciences, Department of Psychiatry, Tokushima University, Tokushima 770-0042, Japan
- Xin Kang: Graduate School of Technology, Industrial and Social Sciences, Tokushima University, Tokushima 770-8506, Japan
- Masahito Nakataki: Graduate School of Biomedical Sciences, Department of Psychiatry, Tokushima University, Tokushima 770-0042, Japan
- Minoru Yoshida: Graduate School of Technology, Industrial and Social Sciences, Tokushima University, Tokushima 770-8506, Japan
- Shusuke Numata: Graduate School of Biomedical Sciences, Department of Psychiatry, Tokushima University, Tokushima 770-0042, Japan
- Kazuyuki Matsumoto: Graduate School of Technology, Industrial and Social Sciences, Tokushima University, Tokushima 770-8506, Japan
4. Giovannelli A, Thomas J, Lane L, Rodrigues F, Bowman DA. Gestures vs. Emojis: Comparing Non-Verbal Reaction Visualizations for Immersive Collaboration. IEEE Trans Vis Comput Graph 2023; 29:4772-4781. [PMID: 37782597; DOI: 10.1109/tvcg.2023.3320254]
Abstract
Collaborative virtual environments afford new capabilities in telepresence applications, allowing participants to co-inhabit an environment and interact while being embodied via avatars. However, shared content within these environments often draws collaborators' attention away from the non-verbal cues conveyed by their peers, resulting in less effective communication. Exaggerated gestures, abstract visuals, and combinations of the two have the potential to improve the effectiveness of communication within these environments compared with familiar, natural non-verbal visualizations. We designed and conducted a user study in which we evaluated the impact of these different non-verbal visualizations on users' identification time, understanding, and perception. We found that exaggerated gestures generally perform better than non-exaggerated gestures, abstract visuals are an effective means of conveying intentional reactions, and the combination of gestures with abstract visuals provides some benefits over either alone.
5. Wei S, Freeman D, Rovira A. A randomised controlled test of emotional attributes of a virtual coach within a virtual reality (VR) mental health treatment. Sci Rep 2023; 13:11517. [PMID: 37460586; PMCID: PMC10352334; DOI: 10.1038/s41598-023-38499-7]
Abstract
We set out to test whether positive non-verbal behaviours of a virtual coach can enhance people's engagement in automated virtual reality therapy. 120 individuals scoring highly for fear of heights participated. In a two-by-two, between-groups, randomised design, participants met a virtual coach that varied in warmth of facial expression (with/without) and affirmative nods (with/without). The virtual coach provided a consultation about treating fear of heights. Participants rated the therapeutic alliance, treatment credibility, and treatment expectancy. Both warm facial expressions (group difference = 7.44 [3.25, 11.62], p = 0.001, ηp² = 0.10) and affirmative nods (group difference = 4.36 [0.21, 8.58], p = 0.040, ηp² = 0.04) by the virtual coach independently increased therapeutic alliance. Affirmative nods increased treatment credibility (group difference = 1.76 [0.34, 3.11], p = 0.015, ηp² = 0.05) and expectancy (group difference = 2.28 [0.45, 4.12], p = 0.015, ηp² = 0.05), but warm facial expressions did not increase treatment credibility (group difference = 0.64 [-0.75, 2.02], p = 0.363, ηp² = 0.01) or expectancy (group difference = 0.36 [-1.48, 2.20], p = 0.700, ηp² = 0.001). There were no significant interactions between head nods and facial expressions for therapeutic alliance (p = 0.403, ηp² = 0.01), credibility (p = 0.072, ηp² = 0.03), or expectancy (p = 0.275, ηp² = 0.01). Our results demonstrate that in the development of automated VR therapies there is likely to be therapeutic value in detailed consideration of the animations of virtual coaches.
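For readers less familiar with this kind of analysis, effect sizes of the form reported above can be obtained from a two-way between-subjects ANOVA with partial eta squared computed from the sums of squares. The sketch below uses simulated stand-in data and illustrative column names; it is not the study's data or analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Simulated stand-in data: 120 participants in a 2x2 between-groups design
# (warm facial expression x affirmative nods) rating therapeutic alliance.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "warmth": np.repeat(["warm", "neutral"], 60),
    "nods": np.tile(np.repeat(["nods", "no_nods"], 30), 2),
})
df["alliance"] = (
    50
    + 7 * (df["warmth"] == "warm")
    + 4 * (df["nods"] == "nods")
    + rng.normal(0, 10, len(df))
)

# Two-way between-subjects ANOVA.
model = ols("alliance ~ C(warmth) * C(nods)", data=df).fit()
table = sm.stats.anova_lm(model, typ=2)

# Partial eta squared per effect: SS_effect / (SS_effect + SS_error).
# (The value computed for the Residual row itself is not meaningful.)
ss_error = table.loc["Residual", "sum_sq"]
table["eta_p2"] = table["sum_sq"] / (table["sum_sq"] + ss_error)
print(table)
```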
Affiliation(s)
- Shu Wei: Department of Psychiatry, University of Oxford, Warneford Hospital, Oxford, OX3 7JX, UK
- Daniel Freeman: Department of Experimental Psychology, University of Oxford, Oxford, UK; Oxford Health NHS Foundation Trust, Oxford, UK
- Aitor Rovira: Department of Experimental Psychology, University of Oxford, Oxford, UK; Oxford Health NHS Foundation Trust, Oxford, UK
6. Esposito A, Amorese T, Cuciniello M, Esposito AM, Cordasco G. Do you like me? Behavioral and physical features for socially and emotionally engaging interactive systems. Front Comput Sci 2023. [DOI: 10.3389/fcomp.2023.1138501]
Abstract
With the aim of giving an overview of the most recent discoveries in the field of socially engaging interactive systems, the present paper discusses features affecting users' acceptance of virtual agents, robots, and chatbots. In addition, questionnaires used in several investigations to assess the acceptance of virtual agents, robots, and chatbots (voice only) are discussed and reported in the Supplementary material to make them available to the scientific community. These questionnaires were developed by the authors as a scientific contribution to the H2020 project EMPATHIC (http://www.empathic-project.eu/), the project Menhir (https://menhir-project.eu/), and the Italian-funded projects SIROBOTICS (https://www.exprivia.it/it-tile-6009-si-robotics/) and ANDROIDS (https://www.psicologia.unicampania.it/android-project) to guide the design and implementation of the promised assistive interactive dialog systems. They aimed to quantitatively evaluate Virtual Agents Acceptance (VAAQ), Robot Acceptance (RAQ), and Synthetic Virtual Agent Voice Acceptance (VAVAQ).
7. Zhang A, Patrick Rau PL. Tools or peers? Impacts of anthropomorphism level and social role on emotional attachment and disclosure tendency towards intelligent agents. Comput Human Behav 2023. [DOI: 10.1016/j.chb.2022.107415]
8. Study of the characteristics of ear animations used to convey information and emotion in remote communication without web camera. Comput Hum Behav Rep 2022; 8:100239. [PMID: 36267806; PMCID: PMC9556881; DOI: 10.1016/j.chbr.2022.100239]
Abstract
The use of remote communication has grown globally due to the COVID-19 outbreak. In some remote communication, meeting participants use audio only, with their web cameras turned off, resulting in a lack of nonverbal information. In this study, we defined an "ear animation" as an avatar composed of a simple face-like body with no facial features and animated ear-like parts extending from this body. The purpose of this study was to design the ear animation and evaluate user impressions of it as nonverbal information. The dependent variables were how well information and emotion were conveyed; the independent variables were the presentation condition (ear animations with no sound, ear animations presented simultaneously with simple voice, or voice only) and the kind of content conveyed by the ear animations ("agreement", "skepticism", or "disagreement"). We compared these conditions using a two-way repeated-measures ANOVA. The results showed that ear animations presented simultaneously with voice, combining relevant movement forms, have the potential to be a new way of conveying nonverbal information.
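As a reading aid, a 3 × 3 within-subjects design of this kind can be analysed roughly as sketched below. The factor levels mirror the abstract, but the ratings, effect magnitudes, and column names are simulated assumptions, not the study's data or code.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Simulated stand-in data: presentation (animation only / animation + voice /
# voice only) x content (agreement / skepticism / disagreement),
# one "information conveyed" rating per subject and cell.
rng = np.random.default_rng(1)
presentations = ["animation_only", "animation_voice", "voice_only"]
contents = ["agreement", "skepticism", "disagreement"]

rows = []
for s in range(1, 21):
    for p in presentations:
        for c in contents:
            rating = 4 + (p == "animation_voice") + rng.normal(0, 1)
            rows.append({"subject": s, "presentation": p,
                         "content": c, "rating": rating})
df = pd.DataFrame(rows)

# Two-way repeated-measures ANOVA (both factors within subjects).
res = AnovaRM(df, depvar="rating", subject="subject",
              within=["presentation", "content"]).fit()
print(res)
```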
9. Park M, Suk HJ. The characteristics of facial emotions expressed in Memojis. Comput Hum Behav Rep 2022. [DOI: 10.1016/j.chbr.2022.100241]
10. Lobbestael J, Cima MJ. Virtual Reality for Aggression Assessment: The Development and Preliminary Results of Two Virtual Reality Tasks to Assess Reactive and Proactive Aggression in Males. Brain Sci 2021; 11:1653. [PMID: 34942955; PMCID: PMC8699434; DOI: 10.3390/brainsci11121653]
Abstract
Validly measuring aggression is challenging because self-reports are plagued by biased answer tendencies, and behavioral measures by ethical concerns and low ecological validity. The current study therefore introduces a novel virtual reality (VR) aggression assessment tool that differentially assesses reactive and proactive aggression. Two VR tasks were developed, one in an alley environment (N = 24, all male, mean age = 23.88, 83.3% students) and an improved second one in a bar (N = 50, all male, mean age = 22.54, 90% students). In this bar VR task, participants were randomly assigned either to the reactive condition, where they were provoked by a cheating and insulting dart-player, or to the proactive condition, where they could earn extra money by aggressing. Participants' levels of self-reported aggression and psychopathy were assessed, after which they engaged in either the reactive or proactive VR task. Changes in affect and blood pressure were also measured. Aggression in the reactive VR task largely showed convergent validity: it correlated positively with self-reported aggression and with the total and fearless-dominance factor scores of psychopathy, and there was a trend relationship with increased systolic blood pressure. The proactive aggression variant of our VR bar paradigm received less support for its validity and needs more refinement. It can be concluded that VR is a potentially promising tool to experimentally induce and assess (reactive) aggression, with the potential to provide aggression researchers and clinicians with a realistic and modifiable aggression assessment environment.
Affiliation(s)
- Jill Lobbestael: Department of Clinical Psychological Science, Faculty of Psychology and Neuroscience, Maastricht University, 6211 Maastricht, The Netherlands
- Maaike J. Cima: Department Developmental Psychopathology, Brain Science Institute, Radboud University, 6525 Nijmegen, The Netherlands
11. Entrepreneurship and Innovation Events during the COVID-19 Pandemic: The User Preferences of VirBELA Virtual 3D Platform at the SHIFT Event Organized in Finland. Sustainability 2021. [DOI: 10.3390/su13073802]
Abstract
The COVID-19 pandemic brought abrupt changes for international events that promote entrepreneurship and innovation. Usually, such events bring together thousands of participants to provide them with information about ongoing and emerging trends in their fields, to network with old and new colleagues, and to get ideas that can develop into innovations. In 2020, most such events were cancelled. A few events were organized virtually, that is, without participants physically coming together. Compared with physical face-to-face events, virtual events reduce travel-related emissions and consumption, thereby supporting sustainability. This article studies the SHIFT entrepreneurship and innovation event, held virtually in October 2020 and organized in Finland. For this article, the author gathered data about user preferences by surveying participants, speakers, presenters, and organizers, almost all of whom were first-time users of VirBELA's 3D virtual platform. Furthermore, participant observation and interviews via avatars were conducted during the event. At the virtual event, 68% of respondents talked with former acquaintances, 68% talked with new acquaintances, and 53% expressed the opinion that using the virtual platform can support the emergence of innovations. Virtual entrepreneurship and innovation events have the potential to support networking, novel ideas, and thus innovations, but issues of trust and confidentiality raised concerns among some participants.
12. Hancock JT, Bailenson JN. The Social Impact of Deepfakes. Cyberpsychol Behav Soc Netw 2021; 24:149-152. [PMID: 33760669; DOI: 10.1089/cyber.2021.29208.jth]
13. Oh Kruzic C, Kruzic D, Herrera F, Bailenson J. Facial expressions contribute more than body movements to conversational outcomes in avatar-mediated virtual environments. Sci Rep 2020; 10:20626. [PMID: 33244081; PMCID: PMC7692542; DOI: 10.1038/s41598-020-76672-4]
Abstract
This study focuses on the individual and joint contributions of two nonverbal channels (i.e., face and upper body) in avatar-mediated virtual environments. 140 dyads were randomly assigned to communicate with each other via platforms that differentially activated or deactivated facial and bodily nonverbal cues. The availability of facial expressions had a positive effect on interpersonal outcomes. More specifically, dyads that were able to see their partner's facial movements mapped onto their avatars liked each other more, formed more accurate impressions about their partners, and described their interaction experiences more positively compared to those unable to see facial movements. However, the latter was only true when the partner's bodily gestures were also available, not when only facial movements were available. Dyads showed greater nonverbal synchrony when they could see their partner's bodily and facial movements. This study also employed machine learning to explore whether nonverbal cues could predict interpersonal attraction. These classifiers predicted high versus low interpersonal attraction at an accuracy rate of 65%. These findings highlight the relative significance of facial cues compared to bodily cues for interpersonal outcomes in virtual environments and lend insight into the potential of automatically tracked nonverbal cues to predict interpersonal attitudes.
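The abstract does not detail the classification pipeline, so the sketch below only illustrates the general approach of predicting high versus low attraction from per-dyad nonverbal features with a cross-validated classifier. The features, classifier choice, and data are assumptions, not the study's actual method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Simulated stand-in features: per-dyad summaries of tracked nonverbal
# behavior (e.g., facial-movement energy, gesture rate, nonverbal synchrony).
rng = np.random.default_rng(2)
n_dyads = 140
X = rng.normal(size=(n_dyads, 3))           # columns: [facial, body, synchrony]
signal = 0.8 * X[:, 0] + 0.4 * X[:, 2]      # facial cues weighted more heavily
y = (signal + rng.normal(0, 1.5, n_dyads) > 0).astype(int)  # high vs. low attraction

# Cross-validated binary classifier, summarised as mean accuracy.
clf = make_pipeline(StandardScaler(), LogisticRegression())
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"mean accuracy: {scores.mean():.2f}")
```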
Affiliation(s)
- Catherine Oh Kruzic: Virtual Human Interaction Lab, Department of Communication, Stanford University, 450 Serra Mall, Stanford, CA, 94305, USA
- David Kruzic: Virtual Human Interaction Lab, Department of Communication, Stanford University, 450 Serra Mall, Stanford, CA, 94305, USA
- Fernanda Herrera: Virtual Human Interaction Lab, Department of Communication, Stanford University, 450 Serra Mall, Stanford, CA, 94305, USA
- Jeremy Bailenson: Virtual Human Interaction Lab, Department of Communication, Stanford University, 450 Serra Mall, Stanford, CA, 94305, USA
14. Differential Facial Articulacy in Robots and Humans Elicit Different Levels of Responsiveness, Empathy, and Projected Feelings. Robotics 2020. [DOI: 10.3390/robotics9040092]
Abstract
Life-like humanoid robots are on the rise, aiming at communicative purposes that resemble humanlike conversation. In human social interaction, the facial expression serves important communicative functions. We examined whether a robot's face is similarly important in human-robot communication. Based on emotion research and neuropsychological insights into the parallel processing of emotions, we argue that greater plasticity in the robot's face elicits higher affective responsivity, more closely resembling human-to-human responsiveness than a more static face does. We conducted a 3 (facial plasticity: human vs. facially flexible robot vs. facially static robot) × 2 (treatment: affectionate vs. maltreated) between-subjects experiment. Participants (N = 265; mean age = 31.5) were measured for their emotional responsiveness, empathy, and attribution of feelings to the robot. Responsiveness toward the robots was empathically and emotionally less intense than toward the human but followed similar patterns. Intensities of feelings and attributions (e.g., pain upon maltreatment) differed significantly with facial articulacy. Theoretical implications for underlying processes in human-robot communication are discussed. We theorize that the precedence of emotion and affect over cognitive reflection, which are processed in parallel, triggers the experience of ‘because I feel, I believe it's real,' despite awareness of communicating with a robot. By evoking emotional responsiveness, the cognitive awareness that ‘it is just a robot' fades into the background and no longer appears relevant.
15. Wikström V, Martikainen S, Falcon M, Ruistola J, Saarikivi K. Collaborative block design task for assessing pair performance in virtual reality and reality. Heliyon 2020; 6:e04823. [PMID: 32984580; PMCID: PMC7494474; DOI: 10.1016/j.heliyon.2020.e04823]
Abstract
Collaborative problem solving is more important than ever as the problems we try to solve become increasingly complex. Meanwhile, personal and professional communication has moved from face-to-face to computer-mediated environments, but there is little understanding of how the characteristics of these environments affect the quality of interaction and joint problem solving. To develop this understanding, methods are needed for measuring the success of collaboration. For this purpose, we created a collaborative block design task intended to evaluate and quantify pair performance. In this task, participants need to share information to complete visuospatial puzzles. Two versions of the task are described: a physical version and one that can be completed in virtual reality. A preliminary study was conducted with the physical version (N = 18 pairs), and the results were used to develop the task for a second study in virtual reality (N = 31 pairs). Performance measures were developed for the task, and we found that pair performance was normally distributed and positively associated with visuospatial skills, but not with other participant-specific background factors. The task specifications are released for the research community to apply and adapt in the study of computer-mediated social interaction.
Affiliation(s)
- Valtteri Wikström: Cognitive Brain Research Unit, University of Helsinki, Helsinki, Finland
- Silja Martikainen: Cognitive Brain Research Unit, University of Helsinki, Helsinki, Finland
- Mari Falcon: Cognitive Brain Research Unit, University of Helsinki, Helsinki, Finland
- Juha Ruistola: Glue Collaboration, Fake Production Ltd, Helsinki, Finland
- Katri Saarikivi: Cognitive Brain Research Unit, University of Helsinki, Helsinki, Finland
16. Moser I, Chiquet S, Strahm SK, Mast FW, Bergamin P. Group Decision-Making in Multi-User Immersive Virtual Reality. Cyberpsychol Behav Soc Netw 2020; 23:846-853. [PMID: 32856952; PMCID: PMC7757615; DOI: 10.1089/cyber.2020.0065]
Abstract
Head-mounted displays enable social interactions in immersive virtual environments. However, it is yet unclear whether the technology is also suitable for collaborative work between remote group members. Previous research comparing group performance in nonimmersive computer-mediated communication and face-to-face (FtF) interaction yielded inconsistent results. For this reason, we set out to compare multi-user immersive virtual reality (IVR), video conferencing (VC), and FtF interaction in a group decision task. Furthermore, we examined whether the conditions differed with respect to cognitive load and social presence. Using the hidden profile paradigm, we tested 174 participants in a fictional personnel selection case. Discussion quality in IVR did not differ from VC and FtF interaction. All conditions showed the typical bias for discussing information that was provided for all participants (i.e., shared information) compared with information that was only disclosed to individual participants (i.e., unshared information). Furthermore, we found that IVR groups showed the same probability of solving the task correctly. Social presence in IVR was reduced compared with FtF interaction; however, we found no differences in cognitive load. In sum, our results imply that IVR can effectuate efficient group behavior in a modern working environment that is characterized by a growing demand for remote collaboration.
Affiliation(s)
- Ivan Moser: Institute for Research in Open, Distance and eLearning, Swiss Distance University of Applied Sciences, Brig, Switzerland
- Sandra Chiquet: Department of Psychology, University of Bern, Bern, Switzerland
- Sebastian K Strahm: Department of Psychology, University of Bern, Bern, Switzerland; Faculty of Psychology, Swiss Distance University, Brig, Switzerland
- Fred W Mast: Department of Psychology, University of Bern, Bern, Switzerland
- Per Bergamin: Institute for Research in Open, Distance and eLearning, Swiss Distance University of Applied Sciences, Brig, Switzerland
17. Gonzalez-Franco M, Steed A, Hoogendyk S, Ofek E. Using Facial Animation to Increase the Enfacement Illusion and Avatar Self-Identification. IEEE Trans Vis Comput Graph 2020; 26:2023-2029. [PMID: 32070973; DOI: 10.1109/tvcg.2020.2973075]
Abstract
Through avatar embodiment in Virtual Reality (VR) we can achieve the illusion that an avatar is substituting for our body: the avatar moves as we move and we see it from a first-person perspective. However, self-identification, the process of identifying a representation as being oneself, poses new challenges because a key determinant is that we see and have agency over our own face. Providing control over the face is hard with current HMD technologies because face tracking is either cumbersome or error prone. However, limited animation is easily achieved based on speaking. We investigate the level of avatar enfacement, that is, believing that a picture of a face is one's own face, with three levels of facial animation: (i) one in which the facial expressions of the avatars are static, (ii) one in which we implement lip-sync motion, and (iii) one in which the avatar presents lip-sync plus additional facial animations, with blinks, designed by a professional animator. We measure self-identification using a face morphing tool that morphs from the face of the participant to the face of a gender-matched avatar. We find that self-identification with avatars can be increased through pre-baked animations even when these are neither photorealistic nor resemble the participant.
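The paper's morphing tool is not reproduced here; the sketch below is a deliberately simplified stand-in (a linear cross-dissolve rather than landmark-based warping) showing how a morph continuum can yield a self-identification readout. The function names and the threshold rule are illustrative assumptions.

```python
import numpy as np

def morph(participant_face: np.ndarray, avatar_face: np.ndarray,
          alpha: float) -> np.ndarray:
    """Cross-dissolve between two aligned face images.

    alpha = 0 returns the participant's face, alpha = 1 the avatar's.
    Real morphing tools typically also warp facial landmarks; this
    simple blend only stands in for that step.
    """
    return (1.0 - alpha) * participant_face + alpha * avatar_face

def self_identification_point(responses: dict) -> float:
    """Smallest morph level no longer judged as one's own face
    ("is this you?" -> False). A higher value indicates stronger
    self-identification with the avatar."""
    rejected = [a for a, is_me in sorted(responses.items()) if not is_me]
    return rejected[0] if rejected else 1.0

# Example: a participant accepts morphs up to 60% avatar as "me".
responses = {0.0: True, 0.2: True, 0.4: True, 0.6: True, 0.8: False, 1.0: False}
print(self_identification_point(responses))  # 0.8
```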
18. Corwin AI, Erickson-Davis C. Experiencing presence. HAU: Journal of Ethnographic Theory 2020. [DOI: 10.1086/708542]
19. Herrera F, Bailenson J, Weisz E, Ogle E, Zaki J. Building long-term empathy: A large-scale comparison of traditional and virtual reality perspective-taking. PLoS One 2018; 13:e0204494. [PMID: 30332407; PMCID: PMC6192572; DOI: 10.1371/journal.pone.0204494]
Abstract
Virtual Reality (VR) has been increasingly referred to as the “ultimate empathy machine” since it allows users to experience any situation from any point of view. However, empirical evidence supporting the claim that VR is a more effective method of eliciting empathy than traditional perspective-taking is limited. Two experiments were conducted in order to compare the short- and long-term effects of a traditional perspective-taking task and a VR perspective-taking task (Study 1), and to explore the role of technological immersion across different types of mediated perspective-taking tasks (Study 2). Results of Study 1 show that over the course of eight weeks participants in both conditions reported feeling empathetic and connected to the homeless at similar rates; however, participants who became homeless in VR had more positive, longer-lasting attitudes toward the homeless and signed a petition supporting the homeless at a significantly higher rate than participants who performed a traditional perspective-taking task. Study 2 compared three types of perspective-taking tasks with different levels of immersion (traditional vs. desktop computer vs. VR) and a control condition (in which participants received fact-driven information about the homeless). Results show that participants who performed any type of perspective-taking task reported feeling more empathetic and connected to the homeless than participants who only received information. Replicating the results of Study 1, there was no difference in self-report measures across the perspective-taking conditions; however, a significantly higher number of participants in the VR condition signed a petition supporting affordable housing for the homeless compared with the traditional and less immersive conditions. We discuss the theoretical and practical implications of these findings.
Affiliation(s)
- Fernanda Herrera: Department of Communication, Stanford University, Stanford, California, United States of America
- Jeremy Bailenson: Department of Communication, Stanford University, Stanford, California, United States of America
- Erika Weisz: Department of Psychology, Stanford University, Stanford, California, United States of America
- Elise Ogle: Department of Communication, Stanford University, Stanford, California, United States of America
- Jamil Zaki: Department of Psychology, Stanford University, Stanford, California, United States of America
20. Oh CS, Bailenson JN, Welch GF. A Systematic Review of Social Presence: Definition, Antecedents, and Implications. Front Robot AI 2018; 5:114. [PMID: 33500993; PMCID: PMC7805699; DOI: 10.3389/frobt.2018.00114]
Abstract
Social presence, or the feeling of being there with a "real" person, is a crucial component of interactions that take place in virtual reality. This paper reviews the concept, antecedents, and implications of social presence, with a focus on the literature regarding the predictors of social presence. The article begins by exploring the concept of social presence, distinguishing it from two other dimensions of presence: telepresence and self-presence. After establishing the definition of social presence, the article offers a systematic review of 233 separate findings identified from 152 studies that investigate the factors (i.e., immersive qualities, contextual differences, and individual psychological traits) that predict social presence. Finally, the paper discusses the implications of heightened social presence and when it does and does not enhance one's experience in a virtual environment.
Affiliation(s)
- Catherine S. Oh: Virtual Human Interaction Lab, Department of Communication, Stanford University, Stanford, CA, United States
- Jeremy N. Bailenson: Virtual Human Interaction Lab, Department of Communication, Stanford University, Stanford, CA, United States
- Gregory F. Welch: College of Nursing, Department of Computer Science, Institute for Simulation & Training (Synthetic Reality Lab), University of Central Florida, Orlando, FL, United States
21. Plusquellec P, Denault V. The 1000 Most Cited Papers on Visible Nonverbal Behavior: A Bibliometric Analysis. J Nonverbal Behav 2018. [DOI: 10.1007/s10919-018-0280-9]