1.
González-Gualda LM, Vicente-Querol MA, García AS, Molina JP, Latorre JM, Fernández-Sotos P, Fernández-Caballero A. An exploratory study of the effect of age and gender on face scanning during affect recognition in immersive virtual reality. Sci Rep 2024; 14:5553. [PMID: 38448515] [PMCID: PMC10918108] [DOI: 10.1038/s41598-024-55774-3]
Abstract
A person with impaired emotion recognition is unable to correctly identify the facial expressions of other individuals. The aim of the present study is to assess eye gaze and facial emotion recognition in a healthy population using dynamic avatars in immersive virtual reality (IVR). For the first time, the viewing of each area of interest (AOI) of the face in IVR is studied by gender and age. This work in healthy people is conducted to assess the future usefulness of IVR in patients with deficits in the recognition of facial expressions. Seventy-four healthy volunteers participated in the study. The materials used were a laptop computer, a game controller, and a head-mounted display. Dynamic virtual faces randomly representing the six basic emotions plus the neutral expression were used as stimuli. After the virtual human represented an emotion, a response panel was displayed with the seven possible options. Besides storing hits and misses, the software internally divided the faces into different AOIs and recorded how long participants looked at each one. Regarding overall response accuracy, hits decreased from the youngest to the middle-aged and older adults. All three groups spent the highest percentage of time looking at the eyes, with younger adults showing the highest percentage. Notably, attention to the face compared to the background decreased with age. Moreover, hit rates for women and men were remarkably similar; in fact, there were no statistically significant differences between them. In general, men paid more attention to the eyes than women, whereas women paid more attention to the forehead and mouth. In contrast to previous work, our study indicates that there are no differences between men and women in facial emotion recognition. In line with previous work, the percentage of face-viewing time is higher for younger adults than for older adults. However, contrary to earlier studies, older adults look more at the eyes than at the mouth. Consistent with other studies, the eyes are the AOI with the highest percentage of viewing time. For men, the most viewed AOI is the eyes for all emotions, in both hits and misses. Women look more at the eyes for all emotions except joy, fear, and anger on hits; on misses, they look more at the eyes for all emotions except surprise and fear.
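The per-AOI viewing-time percentages reported above can be sketched as a simple aggregation of gaze samples. This is not the study's software; it is a minimal illustration assuming hypothetical AOI labels and a fixed eye-tracker sampling rate (so sample counts are proportional to viewing time):

```python
from collections import Counter

def aoi_viewing_percentages(gaze_samples):
    """Percentage of viewing time per area of interest (AOI).

    gaze_samples: sequence of AOI labels (e.g. "eyes", "mouth",
    "forehead", "background"), one label per eye-tracker sample taken
    at a fixed rate, so counts are proportional to dwell time.
    """
    counts = Counter(gaze_samples)
    total = sum(counts.values())
    return {aoi: 100.0 * n / total for aoi, n in counts.items()}

# Toy trace: 6 samples on the eyes, 2 on the mouth, 1 each elsewhere.
samples = ["eyes"] * 6 + ["mouth"] * 2 + ["forehead"] + ["background"]
print(aoi_viewing_percentages(samples))
```

With per-sample labels from the head-mounted display's gaze ray, this kind of tally is enough to reproduce statistics such as "percentage of time looking at the eyes" per participant group.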
Affiliation(s)
- Luz M González-Gualda
  - Servicio de Salud de Castilla-La Mancha, Complejo Hospitalario Universitario de Albacete, Servicio de Salud Mental, 02004, Albacete, Spain
- Miguel A Vicente-Querol
  - Neurocognition and Emotion Unit, Instituto de Investigación en Informática de Albacete, 02071, Albacete, Spain
- Arturo S García
  - Neurocognition and Emotion Unit, Instituto de Investigación en Informática de Albacete, 02071, Albacete, Spain
  - Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, 02071, Albacete, Spain
- José P Molina
  - Neurocognition and Emotion Unit, Instituto de Investigación en Informática de Albacete, 02071, Albacete, Spain
  - Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, 02071, Albacete, Spain
- José M Latorre
  - Departamento de Psicología, Universidad de Castilla-La Mancha, 02071, Albacete, Spain
- Patricia Fernández-Sotos
  - Servicio de Salud de Castilla-La Mancha, Complejo Hospitalario Universitario de Albacete, Servicio de Salud Mental, 02004, Albacete, Spain
  - CIBERSAM-ISCIII (Biomedical Research Networking Centre in Mental Health, Instituto de Salud Carlos III), 28016, Madrid, Spain
- Antonio Fernández-Caballero
  - Neurocognition and Emotion Unit, Instituto de Investigación en Informática de Albacete, 02071, Albacete, Spain
  - Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, 02071, Albacete, Spain
  - CIBERSAM-ISCIII (Biomedical Research Networking Centre in Mental Health, Instituto de Salud Carlos III), 28016, Madrid, Spain
2.
Martarelli CS, Chiquet S, Ertl M. Keeping track of reality: embedding visual memory in natural behaviour. Memory 2023; 31:1295-1305. [PMID: 37727126] [DOI: 10.1080/09658211.2023.2260148]
Abstract
Since immersive virtual reality (IVR) emerged as a research method in the 1980s, the focus has been on the similarities between IVR and actual reality. In this vein, it has been suggested that IVR methodology might fill the gap between laboratory studies and real life. IVR allows for high internal validity (i.e., a high degree of experimental control and experimental replicability), as well as high external validity by letting participants engage with the environment in an almost natural manner. Despite internal validity being crucial to experimental designs, external validity also matters in terms of the generalizability of results. In this paper, we first highlight and summarise the similarities and differences between IVR, desktop situations (both non-immersive VR and computer experiments), and reality. In the second step, we propose that IVR is a promising tool for visual memory research in terms of investigating the representation of visual information embedded in natural behaviour. We encourage researchers to carry out experiments on both two-dimensional computer screens and in immersive virtual environments to investigate visual memory and validate and replicate the findings. IVR is valuable because of its potential to improve theoretical understanding and increase the psychological relevance of the findings.
Affiliation(s)
- Sandra Chiquet
  - Faculty of Psychology, UniDistance Suisse, Brig, Switzerland
- Matthias Ertl
  - Department of Psychology, University of Bern, Bern, Switzerland
3.
Vicente-Querol MA, Fernández-Caballero A, González P, González-Gualda LM, Fernández-Sotos P, Molina JP, García AS. Effect of Action Units, Viewpoint and Immersion on Emotion Recognition Using Dynamic Virtual Faces. Int J Neural Syst 2023; 33:2350053. [PMID: 37746831] [DOI: 10.1142/s0129065723500533]
Abstract
Facial affect recognition is a critical skill in human interactions that is often impaired in psychiatric disorders. To address this challenge, tests have been developed to measure and train this skill. Recently, virtual human (VH) and virtual reality (VR) technologies have emerged as novel tools for this purpose. This study investigates the unique contributions of different factors in the communication and perception of emotions conveyed by VHs. Specifically, it examines the effects of the use of action units (AUs) in virtual faces, the positioning of the VH (frontal or mid-profile), and the level of immersion in the VR environment (desktop screen versus immersive VR). Thirty-six healthy subjects participated in each condition. Dynamic virtual faces (DVFs), VHs with facial animations, were used to represent the six basic emotions and the neutral expression. The results highlight the important role of the accurate implementation of AUs in virtual faces for emotion recognition. Furthermore, it is observed that frontal views outperform mid-profile views in both test conditions, while immersive VR shows a slight improvement in emotion recognition. This study provides novel insights into the influence of these factors on emotion perception and advances the understanding and application of these technologies for effective facial emotion recognition training.
Affiliation(s)
- Miguel A Vicente-Querol
  - Instituto de Investigación en Informática, Universidad de Castilla-La Mancha, Albacete 02071, Spain
- Antonio Fernández-Caballero
  - Instituto de Investigación en Informática, Universidad de Castilla-La Mancha, Albacete 02071, Spain
  - Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, Albacete 02071, Spain
  - Biomedical Research Networking Centre in Mental Health, Instituto de Salud Carlos III, Madrid 28029, Spain
- Pascual González
  - Instituto de Investigación en Informática, Universidad de Castilla-La Mancha, Albacete 02071, Spain
  - Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, Albacete 02071, Spain
  - Biomedical Research Networking Centre in Mental Health, Instituto de Salud Carlos III, Madrid 28029, Spain
- Luz M González-Gualda
  - Servicio de Salud Mental, Complejo Hospitalario Universitario de Albacete, Albacete 02004, Spain
- Patricia Fernández-Sotos
  - Biomedical Research Networking Centre in Mental Health, Instituto de Salud Carlos III, Madrid 28029, Spain
  - Servicio de Salud Mental, Complejo Hospitalario Universitario de Albacete, Albacete 02004, Spain
- José P Molina
  - Instituto de Investigación en Informática, Universidad de Castilla-La Mancha, Albacete 02071, Spain
  - Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, Albacete 02071, Spain
- Arturo S García
  - Instituto de Investigación en Informática, Universidad de Castilla-La Mancha, Albacete 02071, Spain
  - Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, Albacete 02071, Spain
4.
Mascaró-Oliver M, Amengual-Alcover E, Roig-Maimó MF, Mas-Sansó R. UIBVFEDPlus-Light: Virtual facial expression dataset with lighting. PLoS One 2023; 18:e0287006. [PMID: 37773958] [PMCID: PMC10540961] [DOI: 10.1371/journal.pone.0287006]
Abstract
It is well known that lighting conditions have an important influence on the automatic recognition of human expressions. Although the impact of lighting on the perception of emotions has been studied in different works, databases of facial expressions do not consider intentional lighting. In this work, a new database of facial expressions performed by virtual characters with four different lighting configurations is presented. This database, named UIBVFEDPlus-Light, is an extension of the previously published UIBVFED virtual facial expression dataset. It includes 100 characters, four lighting configurations, and a software application that allows one to interactively visualize the expressions and manage their intensity and lighting condition. An experience of use is also described to show how this work can raise new challenges for facial expression and emotion recognition techniques under usual lighting environments, thus opening new study perspectives in this area.
Affiliation(s)
- Miquel Mascaró-Oliver
  - Department of Mathematics and Computer Science, University of the Balearic Islands, Palma de Mallorca, Spain
- Esperança Amengual-Alcover
  - Department of Mathematics and Computer Science, University of the Balearic Islands, Palma de Mallorca, Spain
- Maria Francesca Roig-Maimó
  - Department of Mathematics and Computer Science, University of the Balearic Islands, Palma de Mallorca, Spain
- Ramon Mas-Sansó
  - Department of Mathematics and Computer Science, University of the Balearic Islands, Palma de Mallorca, Spain
5.
Rodríguez-Guidonet I, Andrade-Pino P, Monfort-Vinuesa C, Rincon E. Avatar-Based Strategies for Breast Cancer Patients: A Systematic Review. Cancers (Basel) 2023; 15:4031. [PMID: 37627059] [PMCID: PMC10452070] [DOI: 10.3390/cancers15164031]
Abstract
There is a lack of studies determining whether avatar-based protocols are an efficient and accurate strategy to improve psychological well-being in oncology patients, even though this represents a growing field of research. To the best of our knowledge, this is the first systematic review addressing the effectiveness of avatar-based treatments in enhancing quality of life (QoL) and psychological well-being in breast cancer patients. The purpose of this study was to review the scientific literature on studies involving avatar-based technology and breast cancer patients in order to answer the following questions. (1) Are avatar-based strategies useful for improving QoL and psychological well-being (anxiety and depression symptoms) in breast cancer patients? (2) What is the best way to develop avatar-based protocols for breast cancer patients? We conducted a systematic review of the peer-reviewed literature from EBSCO, Ovid, PubMed, Scopus, and Web of Science (WOS), following the PRISMA statements and using "avatar + breast cancer" or "avatar + cancer" as keywords. Studies published in either English or Spanish that addressed QoL and psychological well-being in breast cancer patients were reviewed. The results will contribute to developing innovative avatar-based strategies focused on breast cancer patients.
Affiliation(s)
- Paula Andrade-Pino
  - Psycho-Technology Lab, Universidad San Pablo-CEU, CEU Universities, 28005 Madrid, Spain
- Carlos Monfort-Vinuesa
  - Psycho-Technology Lab, Universidad San Pablo-CEU, CEU Universities, 28005 Madrid, Spain
  - Departamento de Psicología y Pedagogía, Facultad de Medicina, Universidad San Pablo-CEU, CEU Universities, Urbanización Montepríncipe, 28005 Madrid, Spain
  - Departamento de Medicina Interna, HM Hospital, Universidad San Pablo-CEU, CEU Universities, 28005 Madrid, Spain
- Esther Rincon
  - Psycho-Technology Lab, Universidad San Pablo-CEU, CEU Universities, 28005 Madrid, Spain
  - Departamento de Psicología y Pedagogía, Facultad de Medicina, Universidad San Pablo-CEU, CEU Universities, Urbanización Montepríncipe, 28005 Madrid, Spain
6.
UIBVFED-Mask: A Dataset for Comparing Facial Expressions with and without Face Masks. Data 2023. [DOI: 10.3390/data8010017]
Abstract
Since the COVID-19 pandemic, the use of face masks has become common practice in many situations. Partial occlusion of the face by masks poses new challenges for facial expression recognition because significant facial information is lost. Consequently, the identification and classification of facial expressions can be negatively affected, particularly when using neural networks. This paper presents a new dataset of virtual characters, with and without face masks, with identical geometric information and spatial location. This novelty should allow researchers to better characterize the information lost to mask occlusion.
7.
del Castillo Torres G, Roig-Maimó MF, Mascaró-Oliver M, Amengual-Alcover E, Mas-Sansó R. Understanding How CNNs Recognize Facial Expressions: A Case Study with LIME and CEM. Sensors (Basel) 2022; 23:131. [PMID: 36616728] [PMCID: PMC9824600] [DOI: 10.3390/s23010131]
Abstract
Recognizing facial expressions has been a persistent goal in the scientific community. Since the rise of artificial intelligence, convolutional neural networks (CNNs) have become a popular way to recognize facial expressions, as images can be used directly as input. Current CNN models can achieve high recognition rates, but they give no clue about their reasoning process. Explainable artificial intelligence (XAI) has been developed to help interpret the results obtained by machine learning models. When dealing with images, one of the most widely used XAI techniques is LIME, which highlights the areas of the image that contribute to a classification. The CEM method appeared as an alternative to LIME, providing explanations in a way that is natural for human classification: besides highlighting what is sufficient to justify a classification, it also identifies what should be absent to maintain it and to distinguish it from another classification. This study presents the results of comparing LIME and CEM applied to complex images such as facial expression images. While CEM could be used to explain the results on images described with a reduced number of features, LIME would be the method of choice when dealing with images described with a huge number of features.
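The core idea behind LIME can be illustrated independently of any library: perturb interpretable regions of the input on and off, query the classifier, and fit a weighted linear surrogate whose coefficients attribute the prediction to each region. The sketch below is not the paper's implementation or the `lime` package API; it is a minimal, self-contained illustration over hypothetical image regions (e.g. superpixels), with a toy classifier standing in for a CNN:

```python
import numpy as np

def lime_style_attribution(predict, n_regions, n_samples=1000, seed=0):
    """Minimal LIME-style sketch: estimate how much each image region
    contributes to a classifier's score.

    predict: function mapping a binary on/off mask over regions
             (shape (n_regions,)) to a scalar class score.
    Returns one weight per region; a larger weight means a stronger
    positive contribution to the classification.
    """
    rng = np.random.default_rng(seed)
    # Randomly switch regions (e.g. superpixels) on and off.
    masks = rng.integers(0, 2, size=(n_samples, n_regions))
    scores = np.array([predict(m) for m in masks])
    # Weight perturbed samples by similarity to the original
    # (all-regions-on) image: more regions kept -> higher weight.
    weights = np.sqrt(masks.mean(axis=1))
    X = masks * weights[:, None]
    y = scores * weights
    # Fit the weighted linear surrogate; its coefficients are the
    # per-region attributions that LIME visualizes as highlighted areas.
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Toy "classifier": region 0 (say, the mouth) drives the score.
toy_predict = lambda mask: 2.0 * mask[0] + 0.1 * mask[1]
print(lime_style_attribution(toy_predict, n_regions=3, n_samples=500))
```

In the real method the regions come from a superpixel segmentation of the face image and `predict` is the CNN's class probability, which is what makes the number of interpretable features per image a practical concern when choosing between LIME and CEM.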
8.
García AS, Fernández-Sotos P, González P, Navarro E, Rodriguez-Jimenez R, Fernández-Caballero A. Behavioral intention of mental health practitioners toward the adoption of virtual humans in affect recognition training. Front Psychol 2022; 13:934880. [PMCID: PMC9600723] [DOI: 10.3389/fpsyg.2022.934880]
Abstract
This paper explores the key factors influencing mental health professionals' behavioral intention to adopt virtual humans as a means of affect recognition training. Therapies targeting social cognition deficits are in high demand, given that these deficits are related to a loss of functioning and quality of life in several neuropsychiatric conditions such as schizophrenia, autism spectrum disorders, affective disorders, and acquired brain injury. Therefore, developing new therapies would greatly improve the quality of life of this large cohort of patients. A questionnaire based on the second revision of the Unified Theory of Acceptance and Use of Technology (UTAUT2) questionnaire was used for this study. One hundred and twenty-four mental health professionals responded to the questionnaire after viewing a video presentation of the system. The results confirmed that mental health professionals showed a positive intention to use virtual reality tools to train affect recognition, as they allow manipulation of social interaction with patients. Further studies should be conducted with therapists from other countries to strengthen these conclusions.
Affiliation(s)
- Arturo S. García
  - Unidad Multidisciplinar de Investigación de la Neurocognición y Emoción en Entornos Virtuales y Reales, Instituto de Investigación en Informática de Albacete, Albacete, Spain
  - Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, Albacete, Spain
- Patricia Fernández-Sotos
  - Servicio de Salud Mental, Complejo Hospitalario Universitario de Albacete, Albacete, Spain
  - Biomedical Research Networking Center in Mental Health (CIBERSAM), Madrid, Spain
- Pascual González
  - Unidad Multidisciplinar de Investigación de la Neurocognición y Emoción en Entornos Virtuales y Reales, Instituto de Investigación en Informática de Albacete, Albacete, Spain
  - Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, Albacete, Spain
  - Biomedical Research Networking Center in Mental Health (CIBERSAM), Madrid, Spain
- Elena Navarro
  - Unidad Multidisciplinar de Investigación de la Neurocognición y Emoción en Entornos Virtuales y Reales, Instituto de Investigación en Informática de Albacete, Albacete, Spain
  - Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, Albacete, Spain
  - Biomedical Research Networking Center in Mental Health (CIBERSAM), Madrid, Spain
- Roberto Rodriguez-Jimenez
  - Biomedical Research Networking Center in Mental Health (CIBERSAM), Madrid, Spain
  - Cognición y Psicosis, Area de Neurociencias y Salud Mental, Instituto de Investigación Sanitaria Hospital 12 de Octubre (imas12), Madrid, Spain
  - CogPsy-Group, Universidad Complutense de Madrid, Madrid, Spain
- Antonio Fernández-Caballero
  - Unidad Multidisciplinar de Investigación de la Neurocognición y Emoción en Entornos Virtuales y Reales, Instituto de Investigación en Informática de Albacete, Albacete, Spain
  - Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, Albacete, Spain
  - Biomedical Research Networking Center in Mental Health (CIBERSAM), Madrid, Spain
  - *Correspondence: Antonio Fernández-Caballero
9.
Vicente-Querol MA, Fernandez-Caballero A, Molina JP, Gonzalez-Gualda LM, Fernandez-Sotos P, Garcia AS. Facial Affect Recognition in Immersive Virtual Reality: Where Is the Participant Looking? Int J Neural Syst 2022; 32:2250029. [DOI: 10.1142/s0129065722500290]
10.
He D, Cao S, Le Y, Wang M, Chen Y, Qian B. Virtual Reality Technology in Cognitive Rehabilitation Application: A Bibliometric Analysis. JMIR Serious Games 2022; 10:e38315. [PMID: 36260388] [PMCID: PMC9631168] [DOI: 10.2196/38315]
Abstract
Background: In recent years, with the development of computer science and medical science, virtual reality (VR) technology has become a promising tool for improving cognitive function, and research on VR-based cognitive training has garnered increasing attention.
Objective: This study aimed to investigate the application status, research hot spots, and emerging trends of VR in cognitive rehabilitation over the past 20 years.
Methods: Articles on VR-based cognitive rehabilitation from 2001 to 2021 were retrieved from the Web of Science Core Collection. CiteSpace software was used for the visual analysis of authors and countries or regions, and Scimago Graphica software was used for the geographic visualization of publishing countries or regions. Keywords were clustered using the gCLUTO software.
Results: A total of 1259 papers were included. In recent years, research on the application of VR in cognitive rehabilitation has been widely conducted, and the annual number of relevant publications has shown a positive trend. The main research areas include neuroscience and neurology, psychology, computer science, and rehabilitation. The United States ranked first with 328 papers, and Italy ranked second with 140 papers. Giuseppe Riva, an Italian academic, was the most prolific author with 29 publications. The most frequently cited reference was "Using Virtual Reality to Characterize Episodic Memory Profiles in Amnestic Mild Cognitive Impairment and Alzheimer's Disease: Influence of Active and Passive Encoding." The most common keywords used by researchers include "virtual reality," "cognition," "rehabilitation," "performance," and "older adult." The largest source of research funding is the public sector in the United States.
Conclusions: The bibliometric analysis provided an overview of the application of VR in cognitive rehabilitation, which can be integrated into multiple disciplines. We conclude that, in the context of the COVID-19 pandemic, the development of VR-based telerehabilitation is crucial, and many problems still need to be addressed, such as the lack of consensus on treatment methods and the existence of safety hazards.
Affiliation(s)
- Danni He
  - School of Nursing, Hangzhou Normal University, Hangzhou, China
- Shihua Cao
  - Nursing Department, Hangzhou Normal University Qianjiang College, Hangzhou, China
- Yuchao Le
  - School of Nursing, Hangzhou Normal University, Hangzhou, China
- Mengxin Wang
  - School of Nursing, Hangzhou Normal University, Hangzhou, China
- Yanfei Chen
  - School of Nursing, Hangzhou Normal University, Hangzhou, China
- Beiying Qian
  - School of Nursing, Hangzhou Normal University, Hangzhou, China