1. Shu L, Barradas VR, Qin Z, Koike Y. Facial expression recognition through muscle synergies and estimation of facial keypoint displacements through a skin-musculoskeletal model using facial sEMG signals. Front Bioeng Biotechnol 2025; 13:1490919. PMID: 40013307; PMCID: PMC11861201; DOI: 10.3389/fbioe.2025.1490919.
Abstract
The development of facial expression recognition (FER) and facial expression generation (FEG) systems is essential for enhancing human-robot interaction (HRI). The facial action coding system is widely used in FER and FEG tasks, as it offers a framework relating the action of facial muscles and the resulting facial motions to the execution of facial expressions. However, most FER and FEG studies are based on measuring and analyzing facial motions, leaving the facial muscle component relatively unexplored. This study introduces a novel framework that uses surface electromyography (sEMG) signals from facial muscles to recognize facial expressions and estimate the displacement of facial keypoints during the execution of the expressions. For the facial expression recognition task, we studied the coordination patterns of seven muscles, expressed as three muscle synergies extracted through non-negative matrix factorization, during the execution of six basic facial expressions. Muscle synergies are groups of muscles that show coordinated patterns of activity, as measured by their sEMG signals, and are hypothesized to form the building blocks of human motor control. We then trained two classifiers for the facial expressions, based on features extracted from the sEMG signals and on the synergy activation coefficients of the extracted muscle synergies, respectively. Both classifiers outperformed other systems that use sEMG to classify facial expressions, although the synergy-based classifier performed marginally worse than the sEMG-based one (classification accuracy: synergy-based 97.4%, sEMG-based 99.2%). However, the extracted muscle synergies revealed common coordination patterns across different facial expressions, allowing a low-dimensional quantitative visualization of the muscle control strategies involved in human facial expression generation. We also developed a skin-musculoskeletal model enhanced by linear regression (SMSM-LRM) to estimate the displacement of facial keypoints during the execution of a facial expression based on sEMG signals. Our proposed approach achieved relatively high fidelity in estimating these displacements (NRMSE 0.067). We propose that the identified muscle synergies could be used in combination with the SMSM-LRM model to generate motor commands and trajectories for desired facial displacements, potentially enabling the generation of more natural facial expressions in social robotics and virtual reality.
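The synergy extraction step lends itself to a short illustration. Below is a minimal sketch of extracting muscle synergies from sEMG envelopes with non-negative matrix factorization; the 7-muscle, 3-synergy setup follows the abstract, while the random data, preprocessing, and the VAF criterion are illustrative assumptions rather than the authors' exact pipeline.

```python
# Sketch: muscle synergy extraction from sEMG envelopes via NMF.
# The 7-muscle / 3-synergy setup mirrors the abstract; the random data,
# preprocessing, and VAF check are illustrative assumptions.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Stand-in for rectified, low-pass-filtered sEMG envelopes:
# rows = time samples, columns = 7 facial muscles (values must be non-negative).
emg_envelopes = rng.random((1000, 7))

# Factorize E (time x muscles) ~= H (time x synergies) @ W (synergies x muscles).
# H holds the synergy activation coefficients used as classifier features;
# W holds the muscle weightings that define each synergy.
nmf = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
H = nmf.fit_transform(emg_envelopes)
W = nmf.components_

# Variance accounted for (VAF): a common criterion for judging how well a
# small number of synergies reconstructs the original signals.
reconstruction = H @ W
vaf = 1.0 - np.sum((emg_envelopes - reconstruction) ** 2) / np.sum(emg_envelopes**2)
print(f"VAF with 3 synergies: {vaf:.3f}")
```

A classifier for the six expressions would then be trained on rows of H (synergy-based) or on features of the raw envelopes (sEMG-based), as the abstract describes.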
Affiliation(s)
- Lun Shu: Department of Information and Communications Engineering, Institute of Science Tokyo, Yokohama, Japan
- Victor R. Barradas: Institute of Integrated Research, Institute of Science Tokyo, Yokohama, Japan
- Zixuan Qin: Department of Information and Communications Engineering, Institute of Science Tokyo, Yokohama, Japan
- Yasuharu Koike: Institute of Integrated Research, Institute of Science Tokyo, Yokohama, Japan
2. Fiedler ML, Wolf E, Döllinger N, Mal D, Botsch M, Latoschik ME, Wienrich C. From Avatars to Agents: Self-Related Cues Through Embodiment and Personalization Affect Body Perception in Virtual Reality. IEEE Trans Vis Comput Graph 2024; 30:7386-7396. PMID: 39269805; DOI: 10.1109/tvcg.2024.3456211.
Abstract
Our work investigates the influence of self-related cues in the design of virtual humans on body perception in virtual reality. In a 2 × 2 mixed design, 64 participants faced photorealistic virtual humans either as a motion-synchronized embodied avatar or as an autonomously moving agent, appearing subsequently with a personalized and a generic texture. Our results reveal that self-related cues through embodiment and personalization yield individual and complementary increases in participants' sense of embodiment and self-identification with the virtual human. Different body weight modification and estimation tasks further showed an impact of both factors on participants' body weight perception. Additional analyses revealed that participants' body mass index predicted body weight estimations in all conditions and that participants' self-esteem and body shape concerns correlated with different body weight perception results. Hence, we demonstrate the occurrence of double standards through induced self-related cues in virtual human perception, especially through embodiment.
3. Wu L, Chen KB. Gender Swap in Virtual Reality for Supporting Inclusion and Implications in the Workplace. IISE Trans Occup Ergon Hum Factors 2024:1-13. PMID: 39470378; DOI: 10.1080/24725838.2024.2419130.
Abstract
Occupational Applications: We explored the potential impacts of a virtual gender swap on perceptions of sexual harassment, a harmful behavior that lacks respect and inclusivity. Given that perceptions of harassing behaviors can vary, and that gender may influence one's interpretation of such behaviors, we implemented a gender swap in virtual reality (VR) to examine changes in sensitivity to harassment across genders. Participants rated harassing behaviors as more inappropriate when embodying female avatars, regardless of their own gender. Our results suggest that gender swap in VR may raise awareness and narrow the gender gap in harassment perceptions, showing the potential of VR-based interventions as immersive workplace training to address biases and promote inclusivity within diversity, equity, and inclusion training. Our study also shows the potential of VR to simulate diverse scenarios and perspectives for tailored training experiences that cater to the specific needs and challenges of different occupational settings.
Affiliation(s)
- Linfeng Wu: Department of Manufacturing and Industrial Engineering, University of Texas Rio Grande Valley, Edinburg, TX, USA
- Karen B. Chen: Department of Industrial and Systems Engineering, North Carolina State University, Raleigh, NC, USA
4. Miura S, Fukumoto R, Okamura N, Fujie MG, Sugano S. Visual Illusion Created by a Striped Pattern Through Augmented Reality for the Prevention of Tumbling on Stairs. IEEE Trans Vis Comput Graph 2024; 30:5466-5477. PMID: 37450363; DOI: 10.1109/tvcg.2023.3295425.
Abstract
A fall on stairs can be a dangerous accident. An important indicator of falling risk is foot clearance: the height of the foot when ascending stairs or the distance of the foot from the step when descending. We developed an augmented reality system with a holographic lens that uses a visual illusion to improve foot clearance on stairs. The system draws a vertical striped pattern on the stair riser as the participant ascends, creating the illusion that the steps are higher than they actually are, and draws a horizontal striped pattern on the stair tread as the participant descends, creating the illusion of narrower steps. We experimentally evaluated the accuracy of the system and fitted a model to determine the appropriate stripe thickness. Finally, participants ascended and descended stairs before, during, and after using the augmented reality system. Foot clearance improved significantly, not only while participants used the system but also afterward, compared with before.
5. Park JH, Lee SH, Lee SW. Towards EEG-based Talking-face Generation for Brain Signal-driven Dynamic Communication. Annu Int Conf IEEE Eng Med Biol Soc 2024; 2024:1-5. PMID: 40039782; DOI: 10.1109/embc53108.2024.10781922.
Abstract
Research on decoding speech or generating images from human brain activity holds intriguing potential, both as a neuroprosthesis for patients and as an innovative communication tool for general users. However, previous studies have been constrained to generating fragmented or abstract outputs, rendering them less applicable as an alternative form of communication. In this paper, we propose an integrated framework that synthesizes speech from non-invasive speech-related brain signals and generates a talking face that performs lip-sync using intermediate input decoded from the brain signals. For realistic and dynamic brain signal-mediated communication, we generated a personalized talking face by utilizing various forms of target data, such as a real face or an avatar. Additionally, we performed a denoising process to enhance the quality of the voices synthesized from brain signals and to minimize unnecessary facial movements caused by the noise. As a result, clear and natural talking faces, applicable to both real faces and avatars, could be generated from noisy brain signals, enabling dynamic communication. These findings contribute to the advancement of brain signal-driven face-to-face communication through the provision of integrated speech and visual interfaces, and represent a significant step towards a more intuitive and dynamic brain-computer interface communication system.
6. Kasahara S, Kumasaki N, Shimizu K. Investigating the impact of motion visual synchrony on self face recognition using real time morphing. Sci Rep 2024; 14:13090. PMID: 38849381; PMCID: PMC11161490; DOI: 10.1038/s41598-024-63233-2.
Abstract
Face recognition is a crucial aspect of self-image and social interactions. Previous studies have focused on static images to explore the boundary of self-face recognition. Our research, however, investigates the dynamics of face recognition in contexts involving motor-visual synchrony. We first validated our morphing face metrics for self-face recognition. We then conducted an experiment using state-of-the-art video processing techniques for real-time face identity morphing during facial movement. We examined self-face recognition boundaries under three conditions: synchronous, asynchronous, and static facial movements. Our findings revealed that participants recognized a narrower self-face boundary with moving facial images compared to static ones, with no significant differences between synchronous and asynchronous movements. The direction of morphing consistently biased the recognized self-face boundary. These results suggest that while motor information of the face is vital for self-face recognition, it does not rely on movement synchronization, and the sense of agency over facial movements does not affect facial identity judgment. Our methodology offers a new approach to exploring the 'self-face boundary in action', allowing for an independent examination of motion and identity.
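The morphing logic behind such a boundary measurement is simple to illustrate. The sketch below blends a self-face and an other-face image at a morph level alpha and estimates the 50% recognition boundary from simulated yes/no responses; the pixelwise cross-dissolve and the interpolation are illustrative assumptions, not the authors' real-time identity-morphing pipeline.

```python
# Sketch: self-face boundary estimation over a morph continuum.
# The pixelwise cross-dissolve and the simulated responses are illustrative
# assumptions; the study used real-time identity morphing of moving faces.
import numpy as np

def morph(self_img: np.ndarray, other_img: np.ndarray, alpha: float) -> np.ndarray:
    """Blend two aligned face images; alpha=0 -> self, alpha=1 -> other."""
    return (1.0 - alpha) * self_img + alpha * other_img

# Simulated proportion of "this is me" responses at each morph level.
alphas = np.linspace(0.0, 1.0, 11)
p_self = np.array([1.0, 1.0, 0.95, 0.9, 0.8, 0.6, 0.35, 0.2, 0.1, 0.0, 0.0])

# Boundary = morph level where "self" responses cross 50%, by linear
# interpolation between the two neighboring levels.
idx = int(np.argmax(p_self < 0.5))            # first mostly-"not me" level
a0, a1, p0, p1 = alphas[idx - 1], alphas[idx], p_self[idx - 1], p_self[idx]
boundary = a0 + (p0 - 0.5) / (p0 - p1) * (a1 - a0)
print(f"Estimated self-face boundary at morph level {boundary:.2f}")
```

Comparing such boundary estimates between static, synchronous, and asynchronous moving-face conditions is the kind of contrast the study reports.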
Affiliation(s)
- Shunichi Kasahara: Sony Computer Science Laboratories, Inc., Tokyo, 141-0022, Japan; Okinawa Institute of Science and Technology Graduate University, Okinawa, 904-0412, Japan
- Nanako Kumasaki: Sony Computer Science Laboratories, Inc., Tokyo, 141-0022, Japan
- Kye Shimizu: Sony Computer Science Laboratories, Inc., Tokyo, 141-0022, Japan
7. Do TD, Protko CI, McMahan RP. Stepping into the Right Shoes: The Effects of User-Matched Avatar Ethnicity and Gender on Sense of Embodiment in Virtual Reality. IEEE Trans Vis Comput Graph 2024; 30:2434-2443. PMID: 38437125; DOI: 10.1109/tvcg.2024.3372067.
Abstract
In many consumer virtual reality (VR) applications, users embody predefined characters that offer minimal customization options, frequently emphasizing storytelling over user choice. We explore whether matching a user's physical characteristics, specifically ethnicity and gender, with their virtual self-avatar affects their sense of embodiment in VR. We conducted a 2 × 2 within-subjects experiment (n = 32) with a diverse user population to explore the impact of matching or not matching a user's self-avatar to their ethnicity and gender on their sense of embodiment. Our results indicate that matching the ethnicity of the user and their self-avatar significantly enhances the sense of embodiment regardless of gender, extending across various aspects, including appearance, response, and ownership. We also found that matching gender significantly enhanced ownership, suggesting that this aspect is influenced by matching both ethnicity and gender. Interestingly, matching ethnicity specifically affects self-location, while matching gender specifically affects body ownership.
8. Combe T, Fribourg R, Detto L, Normand JM. Exploring the Influence of Virtual Avatar Heads in Mixed Reality on Social Presence, Performance and User Experience in Collaborative Tasks. IEEE Trans Vis Comput Graph 2024; 30:2206-2216. PMID: 38437082; DOI: 10.1109/tvcg.2024.3372051.
Abstract
In Mixed Reality (MR), users' heads are largely (if not completely) occluded by the MR Head-Mounted Display (HMD) they are wearing. As a consequence, facial expressions and other communication cues cannot be seen when interacting locally. In this paper, we investigate how displaying virtual avatars' heads on top of the (HMD-occluded) heads of participants in a Video See-Through (VST) Mixed Reality local collaborative task could improve collaboration as well as social presence. We hypothesized that virtual heads would convey communicative cues (such as eye direction or facial expressions) otherwise hidden by the MR HMDs and lead to better collaboration and social presence. We conducted a between-subjects study (n = 88) with two independent variables: the type of avatar (CartoonAvatar/RealisticAvatar/NoAvatar) and the level of facial expression provided (HighExpr/LowExpr). The experiment involved two dyadic communication tasks: (i) a "20 questions" game, in which one participant asks questions to guess a hidden word known by the other participant, and (ii) an urban planning problem, in which participants have to solve a puzzle by collaborating. Each pair of participants performed both tasks using a specific type of avatar and facial animation. Our results indicate that while adding an avatar's head does not necessarily improve social presence, the amount of facial expression provided through the social interaction does have an impact. Moreover, participants rated their performance higher when observing a realistic avatar but rated the cartoon avatars as less uncanny. Taken together, our results contribute to a better understanding of the role of partial avatars in local MR collaboration and pave the way for further research exploring collaboration in different scenarios, with different avatar types or MR setups.
9. Provenzano L, Gohlke H, Saetta G, Bufalari I, Lenggenhager B, Lesur MR. Fluid face but not gender: Enfacement illusion through digital face filters does not affect gender identity. PLoS One 2024; 19:e0295342. PMID: 38568979; PMCID: PMC10990241; DOI: 10.1371/journal.pone.0295342.
Abstract
It has been shown that observing a face being touched or moving in synchrony with our own face increases self-identification with that face, which can alter both cognitive and affective processes. The induction of this phenomenon, termed the enfacement illusion, has often relied on laboratory tools that are unavailable to a large audience. However, digital face filter applications are nowadays in regular use and might provide an interesting tool for studying similar mechanisms in a wider population. Digital filters can render our faces in real time while changing important facial features, for example, making them more masculine or feminine according to normative standards. Recent literature using full-body illusions has shown that participants' own gender identity shifts when they embody a differently gendered avatar. Here we studied whether participants' filtered faces, observed while moving in synchrony with their own face, can induce an enfacement illusion and, if so, modulate their gender identity. We collected data from 35 female and 33 male participants who observed a stereotypically gender-mismatched version of themselves moving either synchronously or asynchronously with their own face on a screen. Our findings showed a successful induction of the enfacement illusion in the synchronous condition, according to a questionnaire addressing feelings of ownership, agency, and perceived similarity. However, we found no evidence of gender identity being modulated, in either explicit or implicit measures of gender identification. We discuss the distinction between full-body and facial processing and the relevance of studying widely accessible devices that may impact the sense of a bodily self and our cognition, emotion, and behaviour.
Affiliation(s)
- Luca Provenzano: Center for Life Nano- & Neuro-Science, Italian Institute of Technology, Rome, Italy
- Hanna Gohlke: Department of Psychology, University of Zurich, Zurich, Switzerland
- Gianluca Saetta: Department of Psychology, University of Zurich, Zurich, Switzerland; Professorship for Social Brain Sciences, Department of Humanities, Social and Political Sciences, ETH Zurich, Zurich, Switzerland
- Ilaria Bufalari: Department of Psychology of Developmental and Socialization Processes, Sapienza University of Rome, Rome, Italy
- Marte Roel Lesur: Department of Psychology, University of Zurich, Zurich, Switzerland; Department of Computer Science and Engineering, Universidad Carlos III de Madrid, Madrid, Spain
10. Kim S, Kim E. Emergence of the Metaverse and Psychiatric Concerns in Children and Adolescents. Soa Chongsonyon Chongsin Uihak 2023; 34:215-221. PMID: 37841490; PMCID: PMC10568191; DOI: 10.5765/jkacap.230047.
Abstract
Advancements in digital technology have led to increased use of digital devices among teenagers. The coronavirus disease 2019 pandemic and the subsequent implementation of social distancing policies further accelerated this change. Against this backdrop, a new concept called the metaverse has emerged: a virtual reality universe in which individuals can meet, socialize, work, play, be entertained, and create. This review provides an overview of the concept and main features of the metaverse, along with examples of its use in the real world. It also explains the unique developmental characteristics of childhood and adolescence, as well as the possible negative influences of the metaverse on them, including addiction, antisocial behavior, cyberbullying, and identity confusion. Because the metaverse is a relatively new concept, the review concludes with several suggestions for future research.
Affiliation(s)
- Soyeon Kim: Department of Psychiatry, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Korea; Institute of Behavioral Sciences in Medicine, Yonsei University College of Medicine, Seoul, Korea
- Eunjoo Kim: Department of Psychiatry, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Korea; Institute of Behavioral Sciences in Medicine, Yonsei University College of Medicine, Seoul, Korea
11. Weidner F, Boettcher G, Arboleda SA, Diao C, Sinani L, Kunert C, Gerhardt C, Broll W, Raake A. A Systematic Review on the Visualization of Avatars and Agents in AR & VR displayed using Head-Mounted Displays. IEEE Trans Vis Comput Graph 2023; 29:2596-2606. PMID: 37027741; DOI: 10.1109/tvcg.2023.3247072.
Abstract
Augmented Reality (AR) and Virtual Reality (VR) are pushing from the labs towards consumers, especially with social applications. These applications require visual representations of humans and intelligent entities. However, displaying and animating photorealistic models comes at a high technical cost, while low-fidelity representations may evoke eeriness and degrade the overall experience. Thus, it is important to carefully select what kind of avatar to display. This article investigates the effects of rendering style and visible body parts in AR and VR through a systematic literature review. We analyzed 72 papers that compare various avatar representations. Our analysis includes an outline of the research published between 2015 and 2022 on the topic of avatars and agents in AR and VR displayed using head-mounted displays, covering aspects like visible body parts (e.g., hands only, hands and head, full-body) and rendering style (e.g., abstract, cartoon, realistic); an overview of collected objective and subjective measures (e.g., task performance, presence, user experience, body ownership); and a classification of tasks where avatars and agents were used into task domains (physical activity, hand interaction, communication, game-like scenarios, and education/training). We discuss and synthesize our results within the context of today's AR and VR ecosystem, provide guidelines for practitioners, and identify promising research opportunities to encourage future work on avatars and agents in AR/VR environments.
12. La Rocca S, Gobbo S, Tosi G, Fiora E, Daini R. Look at me now! Enfacement illusion over computer-generated faces. Front Hum Neurosci 2023; 17:1026196. PMID: 36968788; PMCID: PMC10034087; DOI: 10.3389/fnhum.2023.1026196.
Abstract
According to embodied cognition research, one’s bodily self-perception can be illusory and temporarily shifted toward an external body. Similarly, the so-called “enfacement illusion,” induced by synchronous multisensory stimulation of the self-face and an external face, can result in implicit and explicit changes in the bodily self. The present study aimed to verify (i) whether an enfacement illusion can be elicited over computer-generated faces and (ii) which multisensory stimulation condition is most effective. A total of 23 participants were asked to look at a gender-matched avatar in three synchronous experimental conditions and three asynchronous control conditions (one for each stimulation: visuotactile, visuomotor, and simple exposure). After each condition, participants completed a questionnaire assessing both embodiment and enfacement sensations to address different facets of the illusion. Results suggest a stronger effect of synchronous than asynchronous stimulation, and the difference was more pronounced for the embodiment items of the questionnaire. We also found a greater effect of visuotactile and visuomotor stimulation compared with the simple exposure condition. These findings support the enfacement illusion as a new paradigm for investigating the ownership of different face identities and the specific role of visuotactile and visuomotor stimulation with virtual reality stimuli.
Affiliation(s)
- Stefania La Rocca (corresponding author): Department of Psychology, University of Milano-Bicocca, Milan, Italy; MiBTec, Mind and Behavior Technological Center, University of Milano-Bicocca, Milan, Italy
- Silvia Gobbo: Department of Psychology, University of Milano-Bicocca, Milan, Italy; MiBTec, Mind and Behavior Technological Center, University of Milano-Bicocca, Milan, Italy
- Giorgia Tosi: MiBTec, Mind and Behavior Technological Center, University of Milano-Bicocca, Milan, Italy; Department of History, Society and Human Studies, University of Salento, Lecce, Italy
- Elisa Fiora: Department of Psychology, University of Milano-Bicocca, Milan, Italy
- Roberta Daini: Department of Psychology, University of Milano-Bicocca, Milan, Italy; MiBTec, Mind and Behavior Technological Center, University of Milano-Bicocca, Milan, Italy
13. Grewe CM, Liu T, Hildebrandt A, Zachow S. The Open Virtual Mirror Framework for enfacement illusions: Enhancing the sense of agency with avatars that imitate facial expressions. Behav Res Methods 2023; 55:867-882. PMID: 35501531; PMCID: PMC10027650; DOI: 10.3758/s13428-021-01761-9.
Abstract
Enfacement illusions are traditionally elicited by visuo-tactile stimulation, but more active paradigms become possible through the use of virtual reality techniques. For instance, virtual mirrors have recently been proposed to induce enfacement by visuo-motor stimulation. In a virtual mirror experiment, participants interact with an avatar that imitates their facial movements. The active control over the avatar greatly enhances the sense of agency, which is an important ingredient for successful enfacement illusion induction. Due to technological challenges, most virtual mirrors so far have been limited to imitating the participant's head pose, i.e., its location and rotation. However, stronger experiences of agency can be expected from an increase in the avatar's mimicking abilities. Here we present a new open-source framework for virtual mirror experiments, which we call the Open Virtual Mirror Framework (OVMF). The OVMF can track and imitate a large range of facial movements, including pose and expressions. It has been designed to run on standard computer hardware and easily interfaces with existing toolboxes for psychological experimentation, while satisfying the requirements of a tightly controlled experimental setup. Further, it is designed to enable convenient extension of its core functionality, so that it can be flexibly adjusted to many different experimental paradigms. We demonstrate the usage of the OVMF and experimentally validate its ability to elicit experiences of agency over an avatar, concluding that the OVMF can serve as a reference for future experiments and has high potential to stimulate new directions in enfacement research and beyond.
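To make the virtual-mirror idea concrete, here is a minimal tracking loop written under stated assumptions: MediaPipe Face Mesh stands in for the framework's tracker, and drive_avatar is a hypothetical placeholder for the avatar renderer. The OVMF's actual architecture and APIs differ; this only illustrates the track-and-imitate principle.

```python
# Sketch of a virtual-mirror loop: track the participant's face each frame
# and retarget the landmarks onto an avatar. MediaPipe stands in for the
# tracker; drive_avatar() is a hypothetical placeholder for the renderer.
import cv2
import mediapipe as mp

def drive_avatar(landmarks) -> None:
    """Hypothetical stand-in: map tracked landmarks to avatar pose/expression."""
    pass

face_mesh = mp.solutions.face_mesh.FaceMesh(
    max_num_faces=1, refine_landmarks=True,  # dense mesh incl. eyes and lips
)
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        # Synchronous condition: imitate immediately. An asynchronous control
        # condition would buffer the landmarks and replay them with a delay.
        drive_avatar(results.multi_face_landmarks[0])
cap.release()
```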
Affiliation(s)
- C. Martin Grewe: Computational Diagnosis and Therapy Planning Group, Department of Visual and Data-Centric Computing, Zuse Institute Berlin, Takustraße 14, 14195 Berlin, Germany
- Tuo Liu: Department of Psychology, Carl von Ossietzky Universität Oldenburg, Ammerländer Heerstr. 114-118, 26129 Oldenburg, Germany
- Andrea Hildebrandt: Department of Psychology, Carl von Ossietzky Universität Oldenburg, Ammerländer Heerstr. 114-118, 26129 Oldenburg, Germany
- Stefan Zachow: Computational Diagnosis and Therapy Planning Group, Department of Visual and Data-Centric Computing, Zuse Institute Berlin, Takustraße 14, 14195 Berlin, Germany
14. Bottiroli S, Matamala-Gomez M, Allena M, Guaschino E, Ghiotto N, De Icco R, Sances G, Tassorelli C. The Virtual "Enfacement Illusion" on Pain Perception in Patients Suffering from Chronic Migraine: A Study Protocol for a Randomized Controlled Trial. J Clin Med 2022; 11:6876. PMID: 36431353; PMCID: PMC9699363; DOI: 10.3390/jcm11226876.
Abstract
Background: Given the limited efficacy, tolerability, and accessibility of pharmacological treatments for chronic migraine (CM), new complementary strategies have gained increasing attention. Body ownership illusions have been proposed as a non-pharmacological strategy for pain relief. Here, we illustrate a protocol for evaluating the efficacy of the enfacement illusion of a happy face, observed through an immersive virtual reality (VR) system, in decreasing pain perception in CM. Method: The study is a double-blind randomized controlled trial with two arms, involving 100 female CM patients assigned to either the experimental group or the control group. The experimental group will be exposed to the enfacement illusion, whereas the control group will be exposed to a pleasant immersive virtual environment. Both arms of the trial will consist of three VR sessions (20 min each). At baseline and at the end of the intervention, patients will fill in questionnaires based on behavioral measures related to their emotional and psychological state and their body satisfaction. Before and after each VR session, the level of pain, body image perception, and affective state will be assessed. Discussion: This study will provide knowledge regarding the relationship between internal body representation and pain perception, supporting the effectiveness of the enfacement illusion as a cognitive behavioral intervention in CM.
Affiliation(s)
- Sara Bottiroli: Faculty of Law, Giustino Fortunato University, 82100 Benevento, Italy; Headache Science and Neurorehabilitation Center, IRCCS Mondino Foundation, 27100 Pavia, Italy
- Marta Matamala-Gomez: Mind and Behavior Technological Center, Department of Psychology, University of Milano-Bicocca, 20126 Milan, Italy
- Marta Allena: Headache Science and Neurorehabilitation Center, IRCCS Mondino Foundation, 27100 Pavia, Italy
- Elena Guaschino: Headache Science and Neurorehabilitation Center, IRCCS Mondino Foundation, 27100 Pavia, Italy
- Natascia Ghiotto: Headache Science and Neurorehabilitation Center, IRCCS Mondino Foundation, 27100 Pavia, Italy
- Roberto De Icco: Headache Science and Neurorehabilitation Center, IRCCS Mondino Foundation, 27100 Pavia, Italy; Department of Brain and Behavioral Sciences, University of Pavia, 27100 Pavia, Italy
- Grazia Sances: Headache Science and Neurorehabilitation Center, IRCCS Mondino Foundation, 27100 Pavia, Italy
- Cristina Tassorelli: Headache Science and Neurorehabilitation Center, IRCCS Mondino Foundation, 27100 Pavia, Italy; Department of Brain and Behavioral Sciences, University of Pavia, 27100 Pavia, Italy
15. Intelligent Animation Creation Method Based on Spatial Separation Perception Algorithm. Comput Intell Neurosci 2022; 2022:4999478. PMID: 36172324; PMCID: PMC9512613; DOI: 10.1155/2022/4999478.
Abstract
In computer group animation, artificial-life methods overcome the defects of traditional animation creation techniques and greatly improve animation creation efficiency. However, as the character modeling technology used in these methods grows more complex, the coupling between the models of the animation system also increases, making animation creation increasingly difficult. In particular, as the number of characters increases, the computation grows rapidly and nonlinearly, which severely limits real-time animation creation and the wide application of the method. In this paper, we study and implement the design of the animation character model and its implementation technology, analyze and design a group animation character model, and design a spatial separation perception algorithm that effectively reduces the design difficulty of the character biomechanical model, reduces the amount of computation, and ensures the real-time performance of large-scale group animation creation. The approach reduces the coupling between animation system models without degrading the animation effect or real-time performance, reduces the computational load, and meets the real-time requirements of large-scale group animation creation.
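The abstract's key computational claim, that perception among n characters scales nonlinearly (naive pairwise checks are O(n²)), is conventionally addressed with spatial partitioning. Below is a minimal generic sketch of a uniform-grid neighbor query in that spirit; the paper's specific algorithm is not given here, and the cell size and 2D setup are assumptions.

```python
# Sketch: uniform spatial grid so each character's perception query only
# inspects nearby cells instead of all n characters (O(n^2) -> roughly O(n)).
# Generic illustration of space separation; cell size and 2D are assumptions.
from collections import defaultdict
import math

CELL = 5.0  # grid cell size, chosen to match the perception radius

def build_grid(positions):
    """Hash each character index into the grid cell containing it."""
    grid = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        grid[(int(x // CELL), int(y // CELL))].append(i)
    return grid

def perceived_neighbors(i, positions, grid, radius=CELL):
    """Characters within `radius` of character i, checking only 9 cells."""
    x, y = positions[i]
    cx, cy = int(x // CELL), int(y // CELL)
    hits = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for j in grid.get((cx + dx, cy + dy), []):
                if j != i and math.dist(positions[i], positions[j]) <= radius:
                    hits.append(j)
    return hits

# Usage: rebuild the grid once per animation frame, then query per character.
positions = [(1.0, 2.0), (3.5, 2.5), (40.0, 40.0)]
grid = build_grid(positions)
print(perceived_neighbors(0, positions, grid))  # -> [1]
```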
16. Kammler-Sucker KI, Löffler A, Kleinböhl D, Flor H. Exploring Virtual Doppelgangers as Movement Models to Enhance Voluntary Imitation. IEEE Trans Neural Syst Rehabil Eng 2021; 29:2173-2182. PMID: 34653005; DOI: 10.1109/tnsre.2021.3120795.
Abstract
Virtual reality (VR) setups offer the possibility to investigate interactions between model and observer characteristics in imitation behavior, such as in the chameleon effect of automatic mimicry. We tested the hypothesis that perceived affiliative characteristics of a virtual model, such as similarity to the observer and likability, facilitate observers' engagement in voluntary motor imitation. In a within-subjects design, participants were exposed to four virtual characters of different degrees of realism and observer similarity (avatar numbers AN = 1-4), ranging from an abstract stickperson to a personalized doppelganger avatar designed from 3D scans of the observer. The characters performed different trunk movements, and participants were asked to imitate them. We defined functional ranges of motion (ROM) for spinal extension (bending backward, BB), lateral flexion (bending sideward, BS), and rotation in the horizontal plane (RH), based on shoulder marker trajectories, as behavioral indicators of imitation. Participants' ratings of avatar appearance, characteristics, and embodiment/enfacement were recorded in an Autonomous Avatar Questionnaire (AAQ) and factorized into three sum scales based on an explorative analysis. Linear mixed effects models revealed that for lateral flexion (BS), a facilitating influence of avatar type on ROM was mediated by perceived identificatory avatar properties such as avatar likability, avatar-observer similarity, and other affiliative characteristics (AAQ1). This suggests that maximizing model-observer similarity with a virtual doppelganger may be useful in observational modeling, which could be used to modify maladaptive motor behaviors in patients with chronic back pain.
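As a concrete reading of the ROM measure, the sketch below computes the angular excursion of the shoulder axis in the horizontal plane (the RH movement) from two marker trajectories. The marker format and angle convention are illustrative assumptions; the paper's BB and BS measures would use analogous projections in other planes.

```python
# Sketch: functional range of motion (ROM) for horizontal rotation (RH),
# taken as the yaw excursion of the shoulder axis over a trial.
# Marker layout (T x 3 arrays) and the angle convention are assumptions.
import numpy as np

def horizontal_rotation_rom(l_shoulder: np.ndarray, r_shoulder: np.ndarray) -> float:
    """ROM in degrees from left/right shoulder marker trajectories (T, 3)."""
    axis = r_shoulder - l_shoulder                    # shoulder axis per frame
    yaw = np.arctan2(axis[:, 1], axis[:, 0])          # heading in the x-y plane
    yaw = np.unwrap(yaw)                              # avoid +/-180 deg jumps
    return float(np.degrees(yaw.max() - yaw.min()))   # max angular excursion

# Usage with synthetic markers rotating ~30 degrees about the vertical axis:
t = np.linspace(0, np.radians(30), 100)
r = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
l = -r
print(f"RH ROM: {horizontal_rotation_rom(l, r):.1f} deg")  # ~30.0
```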
17. Wen X, Wang M, Richardt C, Chen ZY, Hu SM. Photorealistic Audio-driven Video Portraits. IEEE Trans Vis Comput Graph 2020; 26:3457-3466. PMID: 32941145; DOI: 10.1109/tvcg.2020.3023573.
Abstract
Video portraits are common in a variety of applications, such as videoconferencing, news broadcasting, and virtual education and training. We present a novel method to synthesize photorealistic video portraits for an input portrait video, automatically driven by a person's voice. The main challenge in this task is hallucinating plausible, photorealistic facial expressions from input speech audio. To address this challenge, we employ a parametric 3D face model represented by geometry, facial expression, illumination, etc., and learn a mapping from audio features to model parameters. The input source audio is first represented as a high-dimensional feature, which is used to predict the facial expression parameters of the 3D face model. We then replace the expression parameters computed from the original target video with the predicted ones and rerender the reenacted face. Finally, we generate a photorealistic video portrait from the reenacted synthetic face sequence via a neural face renderer. One appealing feature of our approach is its generalization capability across various input speech audio, including synthetic speech audio from text-to-speech software. Extensive experimental results show that our approach outperforms previous general-purpose audio-driven video portrait methods, including a user study demonstrating that our results are rated as more realistic than those of previous methods.
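The core mapping, audio features to facial expression parameters, can be sketched as a small regression network. The dimensions (e.g., a 256-D audio feature and 64 expression coefficients per frame) and the architecture are illustrative assumptions; the paper's actual feature extraction, model, and neural renderer are more involved.

```python
# Sketch: learned mapping from per-frame audio features to expression
# parameters of a parametric 3D face model. Dimensions and architecture
# are illustrative assumptions, not the paper's exact model.
import torch
import torch.nn as nn

class Audio2Expression(nn.Module):
    def __init__(self, audio_dim: int = 256, expr_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(audio_dim, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, expr_dim),  # expression coefficients per frame
        )

    def forward(self, audio_feats: torch.Tensor) -> torch.Tensor:
        # audio_feats: (batch, frames, audio_dim) -> (batch, frames, expr_dim)
        return self.net(audio_feats)

model = Audio2Expression()
pred_expr = model(torch.randn(1, 100, 256))  # 100 frames of predicted parameters
# In the described pipeline, these predictions replace the expression
# parameters fitted to the target video before neural rerendering.
print(pred_expr.shape)  # torch.Size([1, 100, 64])
```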