1. Osorio Quero C, Durini D, Rangel-Magdaleno J, Martinez-Carranza J, Ramos-Garcia R. Enhancing 3D human pose estimation with NIR single-pixel imaging and time-of-flight technology: a deep learning approach. J Opt Soc Am A Opt Image Sci Vis 2024;41:414-423. PMID: 38437432; DOI: 10.1364/josaa.499933.
Abstract
The extraction of 3D human pose and body shape details from a single monocular image is a significant challenge in computer vision. Traditional methods use RGB images, but these are constrained by varying lighting and occlusions. However, cutting-edge developments in imaging technologies have introduced new techniques, such as single-pixel imaging (SPI), that can surmount these hurdles. In the near-infrared (NIR) spectrum, SPI demonstrates impressive capabilities in capturing a 3D human pose. This wavelength range can penetrate clothing and is less influenced by lighting variations than visible light, providing a reliable means of accurately capturing body shape and pose data, even in difficult settings. In this work, we explore the use of an SPI camera operating in the NIR (850-1550 nm) with time-of-flight (TOF) as a solution for detecting humans in nighttime environments. The proposed system uses a vision transformer (ViT) model to detect and extract the characteristic features of humans, which are then integrated into the SMPL-X 3D body model through deep-learning-based body shape regression. To evaluate the efficacy of NIR-SPI 3D image reconstruction, we constructed a laboratory scenario that simulates nighttime conditions, enabling us to test the feasibility of employing NIR-SPI as a vision sensor in outdoor environments. By assessing the results obtained from this setup, we aim to demonstrate the potential of NIR-SPI as an effective tool for detecting humans in nighttime scenarios and capturing their accurate 3D body pose and shape.
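To make the pipeline concrete, here is a minimal sketch of the feature-extraction-plus-regression stage described in the abstract, assuming a torchvision ViT backbone and a plain MLP head; the output sizes (10 SMPL-X shape betas, 63 body-pose values) and all names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16

class PoseShapeRegressor(nn.Module):
    """ViT features from a reconstructed NIR-SPI image -> SMPL-X parameters."""
    def __init__(self, n_betas: int = 10, n_pose: int = 63):  # hypothetical sizes
        super().__init__()
        self.n_betas = n_betas
        self.backbone = vit_b_16(weights=None)   # stand-in for the paper's ViT
        self.backbone.heads = nn.Identity()      # expose the 768-d token feature
        self.head = nn.Sequential(
            nn.Linear(768, 256), nn.ReLU(),
            nn.Linear(256, n_betas + n_pose),    # shape betas + body-pose angles
        )

    def forward(self, img: torch.Tensor):        # img: (B, 3, 224, 224)
        feats = self.backbone(img)
        out = self.head(feats)
        return out[:, :self.n_betas], out[:, self.n_betas:]

betas, pose = PoseShapeRegressor()(torch.randn(1, 3, 224, 224))
print(betas.shape, pose.shape)  # torch.Size([1, 10]) torch.Size([1, 63])
```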
2. Zhu A, Boonipat T, Cherukuri S, Bite U. Defining Standard Values for FaceReader Facial Expression Software Output. Aesthetic Plast Surg 2024;48:785-792. PMID: 37460734; DOI: 10.1007/s00266-023-03468-y.
Abstract
BACKGROUND FaceReader is a validated software package that uses computer vision technology for facial expression recognition; it has become increasingly popular in academic research as a way to expedite, scale, and reduce the cost of facial emotion analysis. In this study, we compare FaceReader analysis with human evaluator interpretation in order to define standard values for the software output. METHODS Randomly generated facial images produced by generative adversarial networks were analyzed using FaceReader and by survey participants (n = 496). The age, facial emotion, and intensity of emotion as determined by the software and by the survey participants were recorded, analyzed, and compared. RESULTS Eighty randomly generated images (20 children, 20 young adults, 20 middle-aged adults, and 20 elderly adults; 38 male and 42 female) were included. The most common expression identified by FaceReader agreed strongly with the primary emotion detected by surveyors (κ = 0.77, 95% CI = 0.64-0.91). By age group, agreement was fair in children (κ = 0.40, 95% CI = 0.078-0.72), perfect in young adults (κ = 1.0, 95% CI = 1.0-1.0), strong in middle-aged adults (κ = 0.79, 95% CI = 0.53-1.0), and near perfect in elderly adults (κ = 0.90, 95% CI = 0.7-1.0). CONCLUSIONS We provide the first study defining the expected average values generated by FaceReader in generally smiling images, which can serve as a standard in future studies. LEVEL OF EVIDENCE IV This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.
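As a worked illustration of the agreement statistic reported above, the following sketch computes Cohen's kappa with scikit-learn; the label vectors are invented toy data, not the study's.

```python
from sklearn.metrics import cohen_kappa_score

# Toy data: most-common FaceReader label vs. modal surveyor label per image
facereader = ["happy", "happy", "neutral", "sad", "happy", "angry"]
surveyors  = ["happy", "happy", "neutral", "happy", "happy", "angry"]

kappa = cohen_kappa_score(facereader, surveyors)
print(f"Cohen's kappa = {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance level
```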
Affiliation(s)
- Agnes Zhu
- Mayo Clinic Alix School of Medicine, 200 1st St SW, Rochester, MN 55905, USA
- Sai Cherukuri
- Department of Plastic Surgery, Mayo Clinic, Rochester, MN, USA
- Uldis Bite
- Department of Plastic Surgery, Mayo Clinic, Rochester, MN, USA
3. Mishra C, Verdonschot R, Hagoort P, Skantze G. Real-time emotion generation in human-robot dialogue using large language models. Front Robot AI 2023;10:1271610. PMID: 38106543; PMCID: PMC10722897; DOI: 10.3389/frobt.2023.1271610.
Abstract
Affective behaviors enable social robots not only to establish better connections with humans but also to express their internal states. It is well established that emotions are important for signaling understanding in Human-Robot Interaction (HRI). This work harnesses the power of Large Language Models (LLMs) and proposes an approach to control the affective behavior of robots. By framing emotion appraisal as an Emotion Recognition in Conversation (ERC) task, we used GPT-3.5 to predict the emotion of a robot's turn in real time from the dialogue history of the ongoing conversation. The robot signaled the predicted emotion using facial expressions. The model was evaluated in a within-subjects user study (N = 47) in which model-driven emotion generation was compared against conditions where the robot displayed no emotions and where it displayed incongruent emotions. Participants interacted with the robot by playing a card sorting game specifically designed to evoke emotions. The results indicated that the emotions were reliably generated by the LLM and that participants were able to perceive the robot's emotions. The robot was perceived as significantly more human-like and emotionally appropriate, and elicited a more positive impression, when it displayed congruent model-driven facial expressions. Participants also scored significantly better in the card sorting game when the robot displayed congruent facial expressions. From a technical perspective, the study shows that LLMs can be used to control the affective behavior of robots reliably in real time. These results could also inform novel human-robot interactions, making robots more effective in roles where emotional interaction is important, such as therapy, companionship, or customer service.
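A minimal sketch of the prompt-based ERC step described above, assuming the OpenAI Python client; the prompt wording and the label set are hypothetical, not the authors' exact setup.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

EMOTIONS = ["joy", "sadness", "anger", "surprise", "fear", "neutral"]  # assumed label set

def predict_robot_emotion(dialogue_history: list[str]) -> str:
    """Ask the LLM which emotion the robot's next turn should convey."""
    transcript = "\n".join(dialogue_history)
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": f"Given the dialogue, answer with one word from: {', '.join(EMOTIONS)}."},
            {"role": "user", "content": transcript},
        ],
    )
    return resp.choices[0].message.content.strip().lower()

print(predict_robot_emotion(["User: I just won the game!", "Robot: That is wonderful!"]))
```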
Affiliation(s)
- Chinmaya Mishra
- Furhat Robotics AB, Stockholm, Sweden
- Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Peter Hagoort
- Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
- Gabriel Skantze
- Furhat Robotics AB, Stockholm, Sweden
- Division of Speech, Music and Hearing, KTH Royal Institute of Technology, Stockholm, Sweden
4. Cascella M, Vitale VN, Mariani F, Iuorio M, Cutugno F. Development of a binary classifier model from extended facial codes toward video-based pain recognition in cancer patients. Scand J Pain 2023;23:638-645. PMID: 37665749; DOI: 10.1515/sjpain-2023-0011.
Abstract
OBJECTIVES Automatic Pain Assessment (APA) relies on objective methods to evaluate the severity of pain and other pain-related characteristics. Facial expressions are the most investigated pain-behavior features for APA. We constructed a binary classifier model for discriminating between the absence and presence of pain through video analysis. METHODS A brief interview lasting approximately two minutes was conducted with cancer patients, and video recordings were taken during the session. The Delaware Pain Database and the UNBC-McMaster Shoulder Pain dataset were used for training. A set of 17 Action Units (AUs) was adopted. For each image, the OpenFace toolkit was used to extract the considered AUs. The collected data were grouped and split into train and test sets: 80% of the data was used for training and the remaining 20% for validation. For continuous estimation, the entire patient video, with frame prediction values of 0 (no pain) or 1 (pain), was imported into an annotator (ELAN 6.4). The developed neural network classifier consists of two dense layers. The first layer contains 17 nodes associated with the facial AUs extracted by OpenFace for each image. The output layer produces a classification label of "pain" (1) or "no pain" (0). RESULTS The classifier obtained an accuracy of ~94% after about 400 training epochs. The Area Under the ROC Curve (AUROC) was approximately 0.98. CONCLUSIONS This study demonstrated that a binary classifier model developed from selected AUs can be an effective tool for evaluating cancer pain. The implementation of an APA classifier can be useful for detecting potential pain fluctuations. In the context of APA research, further investigations are necessary to refine the process and, particularly, to combine these data with multi-parameter analyses such as speech analysis, text analysis, and physiological measurements.
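The abstract fully specifies the classifier's shape (17 AU inputs, two dense layers, one sigmoid output), so a Keras sketch is easy to reconstruct; the activation choice, optimizer, and toy data below are assumptions, not the authors' exact configuration.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# 17 OpenFace AU intensities per frame -> "pain" (1) / "no pain" (0)
model = keras.Sequential([
    layers.Input(shape=(17,)),
    layers.Dense(17, activation="relu"),    # first dense layer: one node per AU
    layers.Dense(1, activation="sigmoid"),  # output: probability of pain
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy", keras.metrics.AUC()])

# Toy stand-in features; the study trains on Delaware / UNBC-McMaster AU data
X = np.random.rand(200, 17).astype("float32")
y = np.random.randint(0, 2, size=200)
model.fit(X, y, epochs=10, validation_split=0.2, verbose=0)
```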
Affiliation(s)
- Marco Cascella
- Department of Anesthesia and Pain Medicine, Istituto Nazionale Tumori, IRCCS - Fondazione G Pascale, Naples, Italy
- Fabio Mariani
- DIETI, University of Naples "Federico II", Naples, Italy
- Manuel Iuorio
- DIETI, University of Naples "Federico II", Naples, Italy
5. Korcsok B, Korondi P. How do you do the things that you do? Ethological approach to the description of robot behaviour. Biol Futur 2023;74:253-279. PMID: 37812380; DOI: 10.1007/s42977-023-00178-z.
Abstract
A detailed description of the behaviour of the interacting parties is becoming more and more important in human-robot interaction (HRI), especially in social robotics (SR). With the rise in the number of publications, there is a substantial need for objective and comprehensive descriptions of implemented robot behaviours to ensure the comparability and reproducibility of studies. Ethograms and the meticulous analysis of behaviour were introduced long ago in animal behaviour research (cf. ethology). Adopting this method in SR and HRI can ensure the desired clarity over robot behaviours while also providing added benefits during robot development, behaviour modelling, and the analysis of HRI experiments. We provide an overview of the possible uses and advantages of ethograms in HRI and propose a general framework for describing behaviour that can be adapted to the requirements of specific studies.
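As an illustration of what a machine-readable ethogram might look like, here is a small sketch; the behaviour codes, fields, and definitions are invented for illustration, not taken from the paper's framework.

```python
from dataclasses import dataclass

@dataclass
class BehaviourEvent:
    """One occurrence of a coded behaviour in an HRI session."""
    code: str        # ethogram entry, e.g. "GAZE_AT_HUMAN"
    start_s: float   # onset, seconds from session start
    end_s: float     # offset

# Hypothetical ethogram: code -> operational definition
ETHOGRAM = {
    "GAZE_AT_HUMAN": "Robot's head oriented toward the participant's face",
    "APPROACH": "Robot reduces distance to the participant",
    "VOCALIZE": "Robot emits a non-speech sound",
}

log = [BehaviourEvent("GAZE_AT_HUMAN", 0.0, 2.4),
       BehaviourEvent("APPROACH", 2.4, 5.1)]
total_gaze = sum(e.end_s - e.start_s for e in log if e.code == "GAZE_AT_HUMAN")
print(f"Time spent gazing at human: {total_gaze:.1f} s")
```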
Affiliation(s)
- Beáta Korcsok
- ELKH-ELTE Comparative Ethology Research Group, Budapest, Hungary.
- Department of Mechatronics, Optics and Mechanical Engineering Informatics, Faculty of Mechanical Engineering, Budapest University of Technology and Economics, Budapest, Hungary.
- Péter Korondi
- Department of Mechatronics, Faculty of Engineering, University of Debrecen, Debrecen, Hungary
6. Zhao W, Liu Q, Zhang X, Song X, Zhang Z, Qing P, Liu X, Zhu S, Yang W, Kendrick KM. Differential responses in the mirror neuron system during imitation of individual emotional facial expressions and association with autistic traits. Neuroimage 2023;277:120263. PMID: 37399932; DOI: 10.1016/j.neuroimage.2023.120263.
Abstract
The mirror neuron system (MNS), including the inferior frontal gyrus (IFG), inferior parietal lobule (IPL), and superior temporal sulcus (STS), plays an important role in action representation and imitation and may be dysfunctional in autism spectrum disorder (ASD). However, it is not clear how these three regions respond and interact during the imitation of different basic facial expressions, or whether the pattern of responses is influenced by autistic traits. We therefore conducted a natural facial expression (happiness, anger, sadness, and fear) imitation task in 100 healthy male subjects, in which expression intensity was measured using facial emotion recognition software (FaceReader) and MNS responses were recorded using functional near-infrared spectroscopy (fNIRS). Autistic traits were measured using the Autism Spectrum Quotient questionnaire. Results showed that imitation of happy expressions produced the highest expression intensity but a small deactivation in MNS responses, suggesting a lower processing requirement compared with other expressions. A cosine similarity analysis indicated a distinct pattern of MNS responses during imitation of each facial expression, with functional intra-hemispheric connectivity between the left IPL and left STS being significantly higher during happy expressions than during the other expressions, while inter-hemispheric connectivity between the left and right IPL differed between imitation of fearful and sad expressions. Furthermore, functional connectivity changes during imitation of each different expression could reliably predict autistic trait scores. Overall, the results provide evidence for distinct patterns of functional connectivity changes between MNS regions during imitation of different emotions, which are also associated with autistic traits.
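For readers unfamiliar with the cosine similarity analysis mentioned above, a worked toy example (one value per hypothetical fNIRS channel; not the study's data or pipeline):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy fNIRS response patterns across hypothetical MNS channels
happy = np.array([0.2, -0.1, 0.4, 0.3])
fear  = np.array([0.5,  0.2, 0.1, 0.4])
print(f"similarity(happy, fear) = {cosine_similarity(happy, fear):.2f}")
```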
Affiliation(s)
- Weihua Zhao
- The Center of Psychosomatic Medicine, Sichuan Provincial Center for Mental Health, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu, 611731, China; Institute of Electronic and Information Engineering of UESTC in Guangdong, Dongguan, 523808, China
- Qi Liu
- The Center of Psychosomatic Medicine, Sichuan Provincial Center for Mental Health, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Xiaolu Zhang
- The Center of Psychosomatic Medicine, Sichuan Provincial Center for Mental Health, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Xinwei Song
- The Center of Psychosomatic Medicine, Sichuan Provincial Center for Mental Health, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Zhao Zhang
- The Center of Psychosomatic Medicine, Sichuan Provincial Center for Mental Health, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Peng Qing
- The Center of Psychosomatic Medicine, Sichuan Provincial Center for Mental Health, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Xiaolong Liu
- Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu, 610066, China
- Siyu Zhu
- The Center of Psychosomatic Medicine, Sichuan Provincial Center for Mental Health, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Wenxu Yang
- Chengdu Women's and Children's Central Hospital, School of Medicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Keith M Kendrick
- The Center of Psychosomatic Medicine, Sichuan Provincial Center for Mental Health, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu, 611731, China.
7. Hebel NSD, Boonipat T, Lin J, Shapiro D, Bite U. Artificial Intelligence in Surgical Evaluation: A Study of Facial Rejuvenation Techniques. Aesthet Surg J Open Forum 2023;5:ojad032. PMID: 37228317; PMCID: PMC10205049; DOI: 10.1093/asjof/ojad032.
Abstract
Background Aesthetic facial surgeries have historically relied on subjective analysis to determine success, which limits objective comparison of surgical outcomes. Objectives This case study exemplifies the use of artificial intelligence software to objectively analyze facial rejuvenation techniques, with the aim of reducing subjective bias. Methods Retrospectively, all patients who underwent facial rejuvenation surgery with concomitant procedures from 2015 to 2017 were included (n = 32). Patients were categorized into Groups A to C: Group A, superficial musculoaponeurotic system (SMAS) plication facelift (n = 10); Group B, SMASectomy facelift (n = 7); and Group C, high SMAS facelift (n = 15). Neutral-repose images taken preoperatively and postoperatively (average >3 months) were analyzed using artificial intelligence for emotion and action unit alterations. Results Postoperatively, Group A showed a 0.84% decrease in happiness and a 6.87% decrease in anger (P > .1). Group B showed a 0.77% increase in happiness and a 1.91% increase in anger (P > .1). Neither Group A nor Group B showed any discernible action unit patterns. In Group C, the average intensity of the lip corner puller AU increased from 0% to 18.7%. This correlated with an average increase in detected happiness from 1.03% to 13.17% (P = .008). Conversely, average detected anger decreased from 14.66% to 0.63% (P = .032). Conclusions This study provides the first proof of concept for the use of a machine learning software application to objectively assess aesthetic surgical outcomes in facial rejuvenation. Due to limitations in patient heterogeneity, this study does not claim one technique's superiority but serves as a conceptual foundation for future investigation. Level of Evidence 4
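As a worked illustration of the pre/post comparison reported above, a paired t-test sketch with SciPy; the percentage values are invented toy data, not the study's measurements.

```python
import numpy as np
from scipy.stats import ttest_rel

# Toy pre/post detected-happiness percentages for one surgical group
pre  = np.array([1.0, 0.5, 2.1, 0.8, 1.6])
post = np.array([12.4, 9.8, 15.2, 11.0, 14.1])

t, p = ttest_rel(pre, post)  # paired comparison: same patients before and after
print(f"paired t = {t:.2f}, p = {p:.4f}")
```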
Affiliation(s)
- Uldis Bite
- Corresponding Author: Dr Uldis Bite, Division of Plastic Surgery, Mayo Clinic, 200 First Street SW, Rochester, MN 55905, USA
8. Mai HN, Win TT, Tong MS, Lee CH, Lee KB, Kim SY, Lee HW, Lee DH. Three-dimensional morphometric analysis of facial units in virtual smiling facial images with different smile expressions. J Adv Prosthodont 2023;15:1-10. PMID: 36908751; PMCID: PMC9992697; DOI: 10.4047/jap.2023.15.1.1.
Abstract
PURPOSE The accuracy of image matching between resting and smiling facial models is affected by the stability of the reference surfaces. This study aimed to investigate morphometric variations in subdivided facial units during resting, posed smiling, and spontaneous smiling. MATERIALS AND METHODS The posed and spontaneous smiling faces of 33 adults were digitized and registered to the resting faces. Morphological changes of subdivided facial units in the forehead (upper and lower central, upper and lower lateral, and temple), nasal (dorsum, tip, lateral wall, and alar lobules), and chin (central and lateral) regions were assessed by measuring the 3D mesh deviations between the smiling and resting facial models. One-way analysis of variance, Duncan post hoc tests, and Student's t-test were used to determine differences among the groups (α = .05). RESULTS The smallest morphometric changes were observed at the upper and central forehead and nasal dorsum, while the largest deviation was found at the nasal alar lobules in both the posed and spontaneous smiles (P < .001). The spontaneous smile generally resulted in larger facial unit changes than the posed smile, with significant differences observed at the alar lobules, central chin, and lateral chin units (P < .001). CONCLUSION The upper and central forehead and nasal dorsum are reliable areas for image matching between resting and smiling 3D facial images. The central chin area can be considered an additional reference area for posed smiles; however, special caution should be taken when selecting this area as a reference for spontaneous smiles.
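A minimal sketch of the mesh-deviation measurement described in the methods, assuming registered point clouds and a nearest-neighbour distance; the paper's actual software, correspondence method, and units may differ.

```python
import numpy as np
from scipy.spatial import cKDTree

def unit_deviation(rest_pts: np.ndarray, smile_pts: np.ndarray) -> float:
    """Mean closest-point distance from smiling to resting vertices."""
    tree = cKDTree(rest_pts)
    dists, _ = tree.query(smile_pts)   # nearest resting vertex per smiling vertex
    return float(dists.mean())

# Toy vertex clouds for one facial unit (already registered to a common frame)
rest  = np.random.rand(500, 3) * 10
smile = rest + np.random.normal(scale=0.3, size=rest.shape)
print(f"mean deviation: {unit_deviation(rest, smile):.2f} (same units as input)")
```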
Affiliation(s)
- Hang-Nga Mai
- Institute for Translational Research in Dentistry, School of Dentistry, Kyungpook National University, Daegu, Republic of Korea; Dental School of Hanoi University of Business and Technology, Hanoi, Vietnam
- Thaw Thaw Win
- Department of Prosthodontics, School of Dentistry, Kyungpook National University, Daegu, Republic of Korea
- Minh Son Tong
- School of Dentistry, Hanoi Medical University, Hanoi, Vietnam
- Cheong-Hee Lee
- Department of Prosthodontics, School of Dentistry, Kyungpook National University, Daegu, Republic of Korea
- Kyu-Bok Lee
- Department of Prosthodontics, School of Dentistry, Kyungpook National University, Daegu, Republic of Korea
- So-Yeun Kim
- Department of Prosthodontics, School of Dentistry, Kyungpook National University, Daegu, Republic of Korea
- Hyun-Woo Lee
- Department of Oral and Maxillofacial Surgery, Uijeongbu Eulji Medical Center, Eulji University School of Dentistry, Uijeongbu, Republic of Korea
- Du-Hyeong Lee
- Institute for Translational Research in Dentistry, School of Dentistry, Kyungpook National University, Daegu, Republic of Korea; Department of Prosthodontics, School of Dentistry, Kyungpook National University, Daegu, Republic of Korea
9. Alhameed M, Jeribi F, Elnaim BME, Hossain MA, Abdelhag ME. Pandemic disease detection through wireless communication using infrared image based on deep learning. Math Biosci Eng 2023;20:1083-1105. PMID: 36650803; DOI: 10.3934/mbe.2023050.
Abstract
Rapid diagnostic testing for diseases such as COVID-19 is a significant issue. The routine virus test is the reverse transcriptase-polymerase chain reaction (RT-PCR). However, such a test takes longer to complete because it follows a serial testing method, and it carries a high false-negative ratio (FNR). Moreover, RT-PCR test kits can be in short supply. Therefore, alternative procedures for the quick and accurate diagnosis of patients are urgently needed to deal with these pandemics. Infrared imaging is self-sufficient for detecting these diseases by measuring temperature at the initial stage. CT scans and other pathological tests are valuable aspects of evaluating a patient with a suspected pandemic infection; however, a patient's radiological findings may not be identifiable initially. Therefore, we include an Artificial Intelligence (AI) algorithm-based Machine Intelligence (MI) system in this proposal to combine CT scan findings with all other tests, symptoms, and history to quickly diagnose a patient with positive symptoms of current and future pandemic diseases. Initially, the system collects information from an infrared camera of the patient's facial regions to measure temperature, keeps it as a record, and completes further actions. We divided the face into eight classes and twelve regions for temperature measurement. A database named patient-info-mask is maintained. While collecting sample data, we incorporate a wireless network using a cloudlet server to make processing more accessible with minimal infrastructure. The system uses deep learning approaches: we propose convolutional neural networks (CNNs) to cross-verify the collected data. For better results, we incorporated ten-fold cross-validation into the synthesis method, which made our estimation more accurate and efficient. We achieved 3.29% greater accuracy by incorporating the "decision-tree-level synthesis method" and the "ten-fold validation method", which demonstrates the robustness of our proposed method.
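As an illustration of the ten-fold cross-validation step mentioned above, a compact scikit-learn sketch; the features, labels, and the random-forest stand-in (the paper uses CNNs) are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Toy stand-in: per-region facial temperatures (12 regions) -> suspect / normal
X = np.random.normal(loc=36.8, scale=0.6, size=(300, 12))
y = (X.mean(axis=1) > 37.2).astype(int)   # synthetic "elevated temperature" label

scores = cross_val_score(RandomForestClassifier(), X, y, cv=10)  # ten-fold CV
print(f"mean accuracy over 10 folds: {scores.mean():.3f}")
```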
Affiliation(s)
- Fathe Jeribi
- College of CS & IT, Jazan University, Jazan, Saudi Arabia
10. Emotion Recognition of Down Syndrome People Based on the Evaluation of Artificial Intelligence and Statistical Analysis Methods. Symmetry (Basel) 2022. DOI: 10.3390/sym14122492.
Abstract
This article presents a study evaluating different techniques for automatically recognizing the basic emotions of people with Down syndrome (anger, happiness, sadness, surprise, and neutrality), together with a statistical analysis based on the Facial Action Coding System to determine the symmetry of the Action Units present in each emotion and to identify the facial features that characterize this group of people. First, a dataset of facial images of people with Down syndrome, classified according to their emotions, was built. Then, the characteristics of the facial micro-expressions (Action Units) present in the emotions of the target group were evaluated through statistical analysis. This analysis uses the intensity values of the most representative exclusive Action Units to classify people's emotions. Subsequently, the collected dataset was evaluated using machine learning and deep learning techniques for emotion recognition. Among the supervised learning techniques tested, the Support Vector Machine obtained the best precision, with a value of 66.20%. For the deep learning methods, the mini-Xception convolutional neural network, trained to recognize the emotions of people with typical development, obtained an accuracy of 74.8%.
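A minimal sketch of the SVM stage described above, assuming AU intensity vectors as features; the data, label coding, and kernel choice are illustrative, not the study's.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy stand-in: AU intensity vectors (e.g., 17 AUs) -> one of 5 emotions
X = np.random.rand(500, 17)
y = np.random.randint(0, 5, size=500)   # anger..neutrality as integer codes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.3f}")
```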
11. Othmani A, Zeghina AO, Muzammel M. A Model of Normality Inspired Deep Learning Framework for Depression Relapse Prediction Using Audiovisual Data. Comput Methods Programs Biomed 2022;226:107132. PMID: 36183638; DOI: 10.1016/j.cmpb.2022.107132.
Abstract
BACKGROUND Depression (Major Depressive Disorder) is one of the most common mental illnesses. According to the World Health Organization, more than 300 million people in the world are affected. A first depressive episode can resolve through spontaneous remission within 6 to 12 months. It has been shown that depression affects speech production and facial expressions. Although numerous studies in the literature address depression recognition using audiovisual cues, depression relapse prediction using audiovisual cues has not been studied. METHOD In this paper, we propose a deep learning-based approach for depression recognition and depression relapse prediction using audiovisual data. For greater versatility and reusability, the proposed approach is based on a Model of Normality-inspired framework, in which we define depression relapse by the closeness of a subject's audiovisual patterns, after a symptom-free period, to the audiovisual patterns of depressed subjects. A Model of Normality is a distance-based anomaly detection approach that computes a distance of normality between the deep audiovisual encoding of a test sample and a representation learned from audiovisual encodings of anomaly-free data. RESULTS The proposed approach shows very promising results, with an accuracy of 87.4% and an F1-score of 82.3% for relapse/depression prediction using a Leave-One-Subject-Out training strategy on the DAIC-WOZ dataset. CONCLUSION The proposed Model of Normality-based framework is accurate in detecting depression and in predicting depression relapse. A prospective monitoring system is proposed for assisting depressed patients. The framework is easily extensible, and other modalities will be integrated in future work.
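A minimal sketch of the Model of Normality idea as described in the abstract: compute a distance between a test encoding and a representation learned from anomaly-free data, then threshold it. The mean-embedding representation, Euclidean distance, and 95th-percentile threshold are simplifying assumptions, not the paper's exact design.

```python
import numpy as np

# Learn a "normal" (anomaly-free) representation: here, just the mean embedding
normal_encodings = np.random.rand(200, 128)       # toy audiovisual encodings
centroid = normal_encodings.mean(axis=0)

def normality_distance(encoding: np.ndarray) -> float:
    """Distance of a test encoding to the learned normal representation."""
    return float(np.linalg.norm(encoding - centroid))

# Calibrate a decision threshold on the anomaly-free data
threshold = np.percentile([normality_distance(e) for e in normal_encodings], 95)

test = np.random.rand(128)
print("relapse suspected" if normality_distance(test) > threshold else "normal")
```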
Affiliation(s)
- Alice Othmani
- Université Paris-Est Créteil (UPEC), LISSI, Vitry sur Seine 94400, France.
- Muhammad Muzammel
- Université Paris-Est Créteil (UPEC), LISSI, Vitry sur Seine 94400, France
12. Domínguez-Oliva A, Mota-Rojas D, Hernández-Avalos I, Mora-Medina P, Olmos-Hernández A, Verduzco-Mendoza A, Casas-Alvarado A, Whittaker AL. The neurobiology of pain and facial movements in rodents: Clinical applications and current research. Front Vet Sci 2022;9:1016720. PMID: 36246319; PMCID: PMC9556725; DOI: 10.3389/fvets.2022.1016720.
Abstract
One of the most controversial aspects of the use of animals in science is the production of pain, which is a central ethical concern. The activation of neural pathways involved in the pain response has physiological, endocrine, and behavioral consequences that can affect both the health and welfare of the animals and the validity of research. Preventing these consequences requires an understanding of the nociception process, of pain itself, and of how assessment can be performed using validated, non-invasive methods. The study of facial expressions related to pain has received considerable attention, with the finding that certain movements of the facial muscles (called facial action units) are associated with the presence and intensity of pain. This review, focused on rodents, discusses the neurobiology of facial expressions, clinical applications, and current research designed to better understand pain and the nociceptive pathway as a strategy for implementing refinement in biomedical research.
Affiliation(s)
- Adriana Domínguez-Oliva
- Master in Science Program “Maestría en Ciencias Agropecuarias”, Universidad Autónoma Metropolitana, Mexico City, Mexico
- Daniel Mota-Rojas
- Neurophysiology, Behavior and Animal Welfare Assessment, DPAA, Universidad Autónoma Metropolitana, Mexico City, Mexico
- Ismael Hernández-Avalos
- Facultad de Estudios Superiores Cuautitlán, Universidad Nacional Autónoma de México, Cuautitlán Izcalli, Mexico
- Patricia Mora-Medina
- Facultad de Estudios Superiores Cuautitlán, Universidad Nacional Autónoma de México, Cuautitlán Izcalli, Mexico
- Adriana Olmos-Hernández
- Division of Biotechnology-Bioterio and Experimental Surgery, Instituto Nacional de Rehabilitación Luis Guillermo Ibarra Ibarra, Mexico City, Mexico
- Antonio Verduzco-Mendoza
- Division of Biotechnology-Bioterio and Experimental Surgery, Instituto Nacional de Rehabilitación Luis Guillermo Ibarra Ibarra, Mexico City, Mexico
- Alejandro Casas-Alvarado
- Neurophysiology, Behavior and Animal Welfare Assessment, DPAA, Universidad Autónoma Metropolitana, Mexico City, Mexico
- Alexandra L. Whittaker
- School of Animal and Veterinary Sciences, The University of Adelaide, Roseworthy, SA, Australia
13. Ullah U, Lee JS, An CH, Lee H, Park SY, Baek RH, Choi HC. A Review of Multi-Modal Learning from the Text-Guided Visual Processing Viewpoint. Sensors (Basel) 2022;22:6816. PMID: 36146161; PMCID: PMC9503702; DOI: 10.3390/s22186816.
Abstract
For decades, co-relating different data domains to attain the maximum potential of machines has driven research, especially in neural networks. Text and visual data (images and videos) are two such distinct data domains, each with an extensive research history. Recently, using natural language to process 2D or 3D images and videos with the immense power of neural nets has shown a promising future. Despite the diverse range of remarkable work in this field, notably in the past few years, rapid improvements have also opened new challenges for researchers. Moreover, the connection between these two domains has mainly been explored through GANs, limiting the horizons of this field. This review analyzes Text-to-Image (T2I) synthesis within the broader picture of Text-guided Visual output (T2Vo), with the primary goal of highlighting gaps through a more comprehensive taxonomy. We broadly categorize text-guided visual output into three main divisions and meaningful subdivisions by critically examining an extensive body of literature from top-tier computer vision venues and closely related fields, such as machine learning and human-computer interaction, focusing on state-of-the-art models with a comparative analysis. This study follows previous surveys on T2I, adding value by analogously evaluating the diverse range of existing methods, including different generative models and several types of visual output, critically examining various approaches, highlighting their shortcomings, and suggesting future directions for research.
Affiliation(s)
- Ubaid Ullah
- Intelligent Computer Vision Software Laboratory (ICVSLab), Department of Electronic Engineering, Yeungnam University, 280 Daehak-Ro, Gyeongsan 38541, Gyeongbuk, Korea
- Jeong-Sik Lee
- Intelligent Computer Vision Software Laboratory (ICVSLab), Department of Electronic Engineering, Yeungnam University, 280 Daehak-Ro, Gyeongsan 38541, Gyeongbuk, Korea
- Chang-Hyeon An
- Intelligent Computer Vision Software Laboratory (ICVSLab), Department of Electronic Engineering, Yeungnam University, 280 Daehak-Ro, Gyeongsan 38541, Gyeongbuk, Korea
- Hyeonjin Lee
- Intelligent Computer Vision Software Laboratory (ICVSLab), Department of Electronic Engineering, Yeungnam University, 280 Daehak-Ro, Gyeongsan 38541, Gyeongbuk, Korea
- Su-Yeong Park
- Intelligent Computer Vision Software Laboratory (ICVSLab), Department of Electronic Engineering, Yeungnam University, 280 Daehak-Ro, Gyeongsan 38541, Gyeongbuk, Korea
- Rock-Hyun Baek
- Department of Electrical Engineering, Pohang University of Science and Technology, Pohang 37673, Korea
- Hyun-Chul Choi
- Intelligent Computer Vision Software Laboratory (ICVSLab), Department of Electronic Engineering, Yeungnam University, 280 Daehak-Ro, Gyeongsan 38541, Gyeongbuk, Korea
14. Test–Retest Reliability in Automated Emotional Facial Expression Analysis: Exploring FaceReader 8.0 on Data from Typically Developing Children and Children with Autism. Appl Sci (Basel) 2022. DOI: 10.3390/app12157759.
Abstract
Automated emotional facial expression analysis (AEFEA) is used widely in applied research, including the development of screening/diagnostic systems for atypical human neurodevelopmental conditions. The validity of AEFEA systems has been systematically studied, but their test–retest reliability has not been researched thus far. We explored the test–retest reliability of a specific AEFEA software package, Noldus FaceReader 8.0 (FR8; Noldus Information Technology). We collected intensity estimates for 8 emotions through repeated FR8 analyses of facial video recordings of 60 children: 31 typically developing children and 29 children with autism spectrum disorder. Test–retest reliability was imperfect in 20% of cases, affecting a substantial proportion of data points; however, the test–retest differences were small. This shows that the test–retest reliability of FR8 is high but not perfect. A proportion of cases that initially failed to show perfect test–retest reliability reached it in a subsequent analysis by FR8, suggesting that repeated analyses by FR8 can, in some cases, lead to the "stabilization" of emotion intensity datasets. Under ANOVA, the test–retest differences did not influence the pattern of cross-emotion and cross-group effects and interactions. Our study does not question the validity of previous results gained with AEFEA technology, but it shows that further exploration of the test–retest reliability of AEFEA systems is desirable.
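As a toy illustration of the kind of test–retest comparison described above (two analysis runs over the same recording, with a minority of frames differing slightly), not the study's data or procedure:

```python
import numpy as np

# Two analysis runs over the same recording: per-frame "happy" intensities
run1 = np.random.rand(1000)
run2 = run1.copy()
idx = np.random.choice(1000, size=200, replace=False)   # 20% of frames differ
run2[idx] += np.random.normal(scale=0.01, size=200)     # but only slightly

identical = np.mean(run1 == run2)
max_abs_diff = np.max(np.abs(run1 - run2))
print(f"identical frames: {identical:.1%}, largest difference: {max_abs_diff:.4f}")
```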
15. Boonipat T, Hebel N, Zhu A, Lin J, Shapiro D. Using Artificial Intelligence to Analyze Emotion and Facial Action Units Following Facial Rejuvenation Surgery. J Plast Reconstr Aesthet Surg 2022;75:3628-3651. DOI: 10.1016/j.bjps.2022.08.007.
16. Galler M, Grendstad ÅR, Ares G, Varela P. Capturing food-elicited emotions: Facial decoding of children's implicit and explicit responses to tasted samples. Food Qual Prefer 2022. DOI: 10.1016/j.foodqual.2022.104551.
17. Rojas M, Ponce P, Molina A. Development of a Sensing Platform Based on Hands-Free Interfaces for Controlling Electronic Devices. Front Hum Neurosci 2022;16:867377. PMID: 35754778; PMCID: PMC9231433; DOI: 10.3389/fnhum.2022.867377.
Abstract
Hands-free interfaces are essential for people with limited mobility to interact with biomedical or electronic devices. However, there are not enough sensing platforms that can quickly tailor an interface to users with disabilities. This article therefore proposes a sensing platform that patients with mobility impairments can use to manipulate electronic devices, thereby increasing their independence. A new sensing scheme is developed using three hands-free signals as inputs: voice commands, head movements, and eye gestures. These signals are obtained using non-invasive sensors: a microphone for the speech commands, an accelerometer to detect inertial head movements, and infrared oculography to register eye gestures. The signals are processed and received as the user's commands by an output unit, which provides several communication ports for sending control signals to other devices. The interaction methods are intuitive and could extend the boundaries for people with disabilities to manipulate local or remote digital systems. As a case study, two volunteers with severe disabilities used the sensing platform to steer a power wheelchair. Participants performed 15 common skills for wheelchair users, and their capacities were evaluated according to a standard test. Using head control, volunteers A and B obtained 93.3% and 86.6%, respectively; using voice control, they obtained 63.3% and 66.6%, respectively. These results show that the end-users achieved high performance, completing most of the skills with the head-movement interface, whereas they were not able to complete most of the skills with voice control. These results provide valuable information for tailoring the sensing platform to end-user needs.
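One way to picture the output unit's role is a dispatch table from (sensor, event) pairs to device commands; the sketch below is purely hypothetical, with invented event names and commands, and is not the paper's firmware.

```python
from typing import Callable

# Hypothetical wheelchair commands
def forward() -> None:   print("wheelchair: forward")
def stop() -> None:      print("wheelchair: stop")
def turn_left() -> None: print("wheelchair: turn left")

# (input source, recognized event) -> command
COMMANDS: dict[tuple[str, str], Callable[[], None]] = {
    ("voice", "go"):         forward,
    ("voice", "stop"):       stop,
    ("head",  "tilt_left"):  turn_left,
    ("eye",   "blink_long"): stop,      # redundant stop gesture for safety
}

def dispatch(source: str, event: str) -> None:
    action = COMMANDS.get((source, event))
    if action:
        action()

dispatch("head", "tilt_left")
dispatch("voice", "stop")
```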
Affiliation(s)
- Mario Rojas
- Tecnologico de Monterrey, School of Engineering and Sciences, Mexico City, Mexico
- Pedro Ponce
- Tecnologico de Monterrey, School of Engineering and Sciences, Mexico City, Mexico
- Arturo Molina
- Tecnologico de Monterrey, School of Engineering and Sciences, Mexico City, Mexico
18. Steinmair D, Löffler-Stastka H. Personalized treatment - which interaction ingredients should be focused to capture the unconscious. World J Clin Cases 2022;10:2053-2062. PMID: 35321177; PMCID: PMC8895185; DOI: 10.12998/wjcc.v10.i7.2053.
Abstract
A recent meta-analysis revealed that mental health and baseline psychological impairment affect quality of life and outcomes in different chronic conditions. The implementation of mental health care in physical care services is still insufficient, so interdisciplinary communication across treatment providers is essential. The standardized language provided by the Diagnostic and Statistical Manual favors clear conceptualization. However, this approach might not focus on the individual, as thinking in categories might impede recognizing the continuum from healthy to diseased. Psychoanalytic theory is concerned with an individual's unconscious conflictual wishes and motivations, manifested through enactments such as psychic symptoms or (maladaptive) behavior, with long-term consequences if not considered. Such modifiable internal and external factors are often inadequately treated, yet together with the constraints of the chronic physical condition, they determine the degrees of freedom for a self-determined existence. The effect of therapeutic interventions, and especially therapy adherence, relies on a solid therapeutic relationship. Outcome and process research continues to investigate the mechanisms of change in psychotherapeutic treatments, with psychoanalysis focusing on attachment problems. This article examines existing knowledge about the mechanism of change in psychoanalysis in light of current trends emerging from psychotherapy research. A clinical example is discussed, and further directions for research are given. The theoretical frame in psychoanalytic therapies is the affect-cognitive interface. Subliminal affect perception is enabled via awareness of subjective meanings in oneself and the other; shaping this awareness is the main point of intervention. The interactional ingredients (the patient's inherent bioenvironmental history meeting the clinician) are relevant variables. Several intrinsic, subliminal parameters relevant to changing behavior are observed. Therapeutic interventions aim at supporting the internalization of the superego's functions and at making this ability available in moments of self-reflection. By supporting mentalization abilities, a better understanding of oneself and higher self-regulation (including emotional regulation) can lead to better judgments (application of formal logic and abstract thinking), thus facilitating enduring behavior change with presumably positive effects on mental and physical health.
Affiliation(s)
- Dagmar Steinmair
- Department of Psychoanalysis and Psychotherapy, Medical University Vienna, Vienna 1090, Austria
- Henriette Löffler-Stastka
- Department of Psychoanalysis and Psychotherapy, Medical University Vienna, Vienna 1090, Austria
19. Hossain MA, Assiri B. Facial expression recognition based on active region of interest using deep learning and parallelism. PeerJ Comput Sci 2022;8:e894. PMID: 35494822; PMCID: PMC9044208; DOI: 10.7717/peerj-cs.894.
Abstract
Automatic facial expression tracking has become an emergent topic during the last few decades. It is a challenging problem that impacts many fields, such as virtual reality, security surveillance, driver safety, homeland security, human-computer interaction, and medical applications. A remarkable cost-efficiency can be achieved by considering only some areas of a face, termed Active Regions of Interest (AROIs). This work proposes a facial expression recognition framework that investigates five types of facial expressions: neutral, happiness, fear, surprise, and disgust. First, a pose estimation method is incorporated, along with an approach to rotate the face to achieve a normalized pose. Second, the whole face image is segmented into four classes and eight regions. Third, only four AROIs are identified from the segmented regions: the nose tip, right eye, left eye, and lips. Fourth, an info-image-data-mask database is maintained for classification and used to store records of images; this database is the mixture of all images gained after introducing a ten-fold cross-validation technique using a Convolutional Neural Network. Correlations of variances and standard deviations are computed based on the identified images. To minimize the required processing time for both training and testing, a parallelism technique is introduced in which each AROI is classified individually, with all classifiers running in parallel. Fifth, a decision-tree-level synthesis-based framework is proposed to coordinate the results of the parallel classification, which helps to improve the recognition accuracy. Finally, experimentation on both independent and synthesized databases is used to evaluate the performance of the proposed technique. By incorporating the proposed synthesis method, we obtain 94.499%, 95.439%, and 98.26% accuracy with the CK+ image sets and 92.463%, 93.318%, and 94.423% with the JAFFE image sets. The overall recognition accuracy is 95.27%, and the decision-level synthesis method yields 2.8% higher accuracy. Moreover, with the incorporation of parallelism, processing is three times faster. This accuracy demonstrates the robustness of the proposed scheme.
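A minimal sketch of the parallel per-AROI classification with a fused decision; the per-region "classifier" is a stub (the paper uses CNNs), and simple majority voting stands in for the paper's decision-tree-level synthesis.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

AROIS = ["nose_tip", "right_eye", "left_eye", "lips"]

def classify_region(region: str) -> str:
    """Stand-in per-region classifier; the paper trains a CNN per AROI."""
    votes = {"nose_tip": "happiness", "right_eye": "happiness",
             "left_eye": "surprise", "lips": "happiness"}
    return votes[region]

# Classify the four AROIs in parallel, then fuse the per-region predictions
with ThreadPoolExecutor(max_workers=len(AROIS)) as pool:
    predictions = list(pool.map(classify_region, AROIS))

label, count = Counter(predictions).most_common(1)[0]
print(f"fused prediction: {label} ({count}/{len(AROIS)} regions agree)")
```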
Affiliation(s)
- Mohammad Alamgir Hossain
- Department of Computer Science, College of Computer Science & Information Technology, Jazan University, Jazan, Kingdom of Saudi Arabia
- Basem Assiri
- Department of Computer Science, College of Computer Science & Information Technology, Jazan University, Jazan, Kingdom of Saudi Arabia
20. Lewandowska A, Rejer I, Bortko K, Jankowski J. Eye-Tracker Study of Influence of Affective Disruptive Content on User's Visual Attention and Emotional State. Sensors (Basel) 2022;22:547. PMID: 35062508; PMCID: PMC8780667; DOI: 10.3390/s22020547.
Abstract
When reading interesting content or searching for information on a website, the appearance of a pop-up advertisement in the middle of the screen is perceived as irritating by the recipient. Interrupted cognitive processes are considered unwanted by the user but desired by advertising providers: diverting visual attention away from the main content is intended to focus the user on the disruptive content that appears. Is the attempt to reach the user by any means justified? In this study, we examined the impact of pop-up emotional content on user reactions. For this purpose, a cognitive experiment was designed in which a text-reading task was interrupted by two types of affective pictures: positive and negative ones. To measure the changes in user reactions, an eye tracker (for analysis of eye movements and changes in gaze points) and the iMotions platform (for analysis of facial muscle movements) were used. The results confirm the impact of the type of emotional content on users' reactions during cognitive process interruptions and indicate that the negative impact of such interruptions on the user can be reduced. The negative content evoked a lower cognitive load, narrower visual attention, and lower irritation compared with the positive content. These results offer insight into how to provide more efficient Internet advertising.
21.
Abstract
Human emotion recognition is an active research area in artificial intelligence and has made substantial progress over the past few years. Many recent works focus mainly on facial regions to infer human affect, while the surrounding context information is not effectively utilized. In this paper, we propose a new deep network that effectively recognizes human emotions using a novel global-local attention mechanism. Our network is designed to extract features from both facial and context regions independently, then learn them together using the attention module. In this way, both the facial and the contextual information is used to infer human emotions, enhancing the discrimination of the classifier. Intensive experiments show that our method surpasses current state-of-the-art methods on recent emotion datasets by a fair margin. Qualitatively, our global-local attention module extracts more meaningful attention maps than previous methods. The source code and trained model of our network are available at https://github.com/minhnhatvt/glamor-net.
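A minimal sketch of a global-local attention fusion in the spirit described above, assuming pre-extracted facial and context feature vectors; this is an illustrative assumption, not the GLAMOR-Net code linked in the abstract.

```python
import torch
import torch.nn as nn

class GlobalLocalAttention(nn.Module):
    """Fuse a facial (local) and a context (global) feature via learned weights."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.score = nn.Linear(dim, 1)   # one attention score per feature vector

    def forward(self, face_feat: torch.Tensor, ctx_feat: torch.Tensor):
        stacked = torch.stack([face_feat, ctx_feat], dim=1)  # (B, 2, dim)
        weights = torch.softmax(self.score(stacked), dim=1)  # (B, 2, 1)
        return (weights * stacked).sum(dim=1)                # (B, dim)

fused = GlobalLocalAttention()(torch.randn(4, 256), torch.randn(4, 256))
print(fused.shape)  # torch.Size([4, 256])
```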
22. Wang Y, Zhang J, Lee H. An Online Experiment During COVID-19: Testing the Influences of Autonomy Support Toward Emotions and Academic Persistence. Front Psychol 2021;12:747209. PMID: 34707547; PMCID: PMC8542910; DOI: 10.3389/fpsyg.2021.747209.
Abstract
Students' academic persistence is a critical component of effective online learning. Promoting students' academic persistence could potentially alleviate learning loss or dropout, especially during challenging times like the COVID-19 pandemic. Previous research indicated that different emotions and autonomy support can influence students' academic persistence. However, few studies have examined the multidimensionality of persistence using an experimental design with students' real-time emotions. Using an experimental design and the Contain Intelligent Facial Expression Recognition System (CIFERS), this research explored the dynamic associations among real-time emotions (joy and anxiety), autonomy support (having a choice vs. no choice), self-perceived persistence, self-reliance persistence, and help-seeking persistence. In total, 177 college students participated in this study online via Zoom during the COVID-19 university closure. The results revealed that having a choice and a high intensity of joy could promote students' self-reliance persistence, but not their help-seeking persistence. Interestingly, students who perceived themselves as more persistent experienced more joy during the experiment. Theoretical and practical implications for facilitating students' academic persistence are discussed.
Affiliation(s)
- Yurou Wang
- Department of Educational Studies in Psychology, Research Methodology, and Counseling, The University of Alabama, Tuscaloosa, AL, United States
- Jihong Zhang
- Department of Psychological and Quantitative Foundations, The University of Iowa, Iowa City, IA, United States
- Halim Lee
- Department of Educational Studies in Psychology, Research Methodology, and Counseling, The University of Alabama, Tuscaloosa, AL, United States
23. Bremhorst A, Mills DS, Würbel H, Riemer S. Evaluating the accuracy of facial expressions as emotion indicators across contexts in dogs. Anim Cogn 2021;25:121-136. PMID: 34338869; PMCID: PMC8904359; DOI: 10.1007/s10071-021-01532-1.
Abstract
Facial expressions potentially serve as indicators of animal emotions if they are consistently present across situations that (likely) elicit the same emotional state. In a previous study, we used the Dog Facial Action Coding System (DogFACS) to identify facial expressions in dogs associated with conditions presumably eliciting positive anticipation (expectation of a food reward) and frustration (prevention of access to the food). Our first aim here was to identify facial expressions of positive anticipation and frustration in dogs that are context-independent (and thus have potential as emotion indicators) and to distinguish them from expressions that are reward-specific (and thus might relate to a motivational state associated with the expected reward). Therefore, we tested a new sample of 28 dogs with a similar set-up designed to induce positive anticipation (positive condition) and frustration (negative condition) in two reward contexts: food and toys. The previous results were replicated: Ears adductor was associated with the positive condition and Ears flattener, Blink, Lips part, Jaw drop, and Nose lick with the negative condition. Four additional facial actions were also more common in the negative condition. All actions except the Upper lip raiser were independent of reward type. Our second aim was to assess basic measures of diagnostic accuracy for the potential emotion indicators. Ears flattener and Ears downward had relatively high sensitivity but low specificity, whereas the opposite was the case for the other negative correlates. Ears adductor had excellent specificity but low sensitivity. If the identified facial expressions were to be used individually as diagnostic indicators, none would allow consistent correct classifications of the associated emotion. Diagnostic accuracy measures are an essential feature for validity assessments of potential indicators of animal emotion.
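As a worked illustration of the diagnostic accuracy measures discussed above, a sensitivity/specificity computation on invented indicator data (not the study's observations):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Toy labels: 1 = frustration condition, 0 = positive-anticipation condition
truth     = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0])
ears_flat = np.array([1, 1, 0, 1, 0, 0, 1, 1, 1, 0])  # indicator observed?

tn, fp, fn, tp = confusion_matrix(truth, ears_flat).ravel()
sensitivity = tp / (tp + fn)   # flags the emotion when it is present
specificity = tn / (tn + fp)   # stays silent when it is absent
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```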
Affiliation(s)
- A Bremhorst
- Division of Animal Welfare, DCR-VPHI, Vetsuisse Faculty, University of Bern, 3012, Bern, Switzerland.
- School of Life Sciences, University of Lincoln, Lincoln, LN6 7DL, UK.
- Graduate School for Cellular and Biomedical Sciences (GCB), University of Bern, 3012, Bern, Switzerland.
- D S Mills
- School of Life Sciences, University of Lincoln, Lincoln, LN6 7DL, UK
- H Würbel
- Division of Animal Welfare, DCR-VPHI, Vetsuisse Faculty, University of Bern, 3012, Bern, Switzerland
- S Riemer
- Division of Animal Welfare, DCR-VPHI, Vetsuisse Faculty, University of Bern, 3012, Bern, Switzerland
24. Clark EA, Duncan SE, Hamilton LM, Bell MA, Lahne J, Gallagher DL, O'Keefe SF. Characterizing consumer emotional response to milk packaging guides packaging material selection. Food Qual Prefer 2021. DOI: 10.1016/j.foodqual.2020.103984.