1. Lalk C, Targan K, Steinbrenner T, Schaffrath J, Eberhardt S, Schwartz B, Vehlen A, Lutz W, Rubel J. Employing large language models for emotion detection in psychotherapy transcripts. Front Psychiatry 2025; 16:1504306. PMID: 40417271; PMCID: PMC12098529; DOI: 10.3389/fpsyt.2025.1504306.
Abstract
Purpose In psychotherapy, emotions play an important role both through their association with symptom severity and through their effects on the therapeutic relationship. In this analysis, we aim to train a large language model (LLM) to detect emotions in German speech and to apply it to a corpus of psychotherapy transcripts, with the goal of identifying the emotions most important for predicting symptom severity and therapeutic alliance. Methods We employed a publicly available dataset labeled with 28 emotions and translated it into German. A pre-trained LLM was then fine-tuned on this dataset for emotion classification. We applied the fine-tuned model to a dataset of 553 psychotherapy sessions from 124 patients. Using machine learning (ML) and explainable artificial intelligence (AI), we predicted symptom severity and alliance from the detected emotions. Results Our fine-tuned model achieved modest classification performance (F1macro = 0.45, accuracy = 0.41, kappa = 0.42) across the 28 emotions. Incorporating all emotions, our ML model showed satisfactory performance for the prediction of symptom severity (r = .50; 95% CI: .42, .57) and moderate performance for the prediction of alliance scores (r = .20; 95% CI: .06, .32). The most important emotions for predicting symptom severity were approval, anger, and fear; the most important emotions for predicting alliance were curiosity, confusion, and surprise. Conclusions Although the classification results were only moderate, the model performed well, especially for the prediction of symptom severity. The results confirm the role of negative emotions in predicting symptom severity and highlight the role of positive emotions in fostering a good alliance. Future directions include improving the labeled dataset, particularly with regard to domain specificity and the incorporation of context information; other modalities and Natural Language Processing (NLP)-based alliance assessment could also be integrated.
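As an illustration of the fine-tuning step described above, the sketch below trains a pretrained German transformer for 28-way emotion classification with Hugging Face Transformers. It is a minimal, hedged example: the base model name ("bert-base-german-cased"), the toy utterances, and the label indices are placeholders, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): fine-tuning a pretrained German
# transformer for 28-way emotion classification.
import torch
from torch.optim import AdamW
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-german-cased"   # assumed base model, not from the paper
NUM_EMOTIONS = 28                       # e.g. a GoEmotions-style label set

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=NUM_EMOTIONS
)

# Toy training pairs: (German utterance, emotion index); indices are arbitrary.
examples = [("Ich habe solche Angst vor der nächsten Woche.", 14),
            ("Das hat mich wirklich gefreut!", 17)]
texts, labels = zip(*examples)

batch = tokenizer(list(texts), padding=True, truncation=True, return_tensors="pt")
batch["labels"] = torch.tensor(labels)

optimizer = AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):                      # a few illustrative optimization steps
    out = model(**batch)                # cross-entropy loss over 28 classes
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Inference on a new transcript segment.
model.eval()
with torch.no_grad():
    logits = model(**tokenizer("Ich weiß nicht, ob das etwas bringt.",
                               return_tensors="pt")).logits
print(logits.argmax(dim=-1).item())     # predicted emotion index
```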
Affiliation(s)
- Christopher Lalk
- Department of Psychology, Osnabrück University, Osnabrück, Germany
- Kim Targan
- Department of Psychology, Osnabrück University, Osnabrück, Germany
- Jana Schaffrath
- Department of Psychology, University of Trier, Trier, Germany
- Brian Schwartz
- Department of Psychology, University of Trier, Trier, Germany
- Antonia Vehlen
- Department of Psychology, University of Trier, Trier, Germany
- Wolfgang Lutz
- Department of Psychology, University of Trier, Trier, Germany
- Julian Rubel
- Department of Psychology, Osnabrück University, Osnabrück, Germany
2. Maza A, Goizueta S, Dolores Navarro M, Noé E, Ferri J, Naranjo V, Llorens R. EEG-based responses of patients with disorders of consciousness and healthy controls to familiar and non-familiar emotional videos. Clin Neurophysiol 2024; 168:104-120. PMID: 39486289; DOI: 10.1016/j.clinph.2024.10.010.
Abstract
OBJECTIVE To investigate the differences in the brain responses of healthy controls (HC) and patients with disorders of consciousness (DOC) to familiar and non-familiar audiovisual stimuli, and their consistency with clinical progress. METHODS EEG responses of 19 HC and 19 patients with DOC were recorded while they watched emotionally-valenced familiar and non-familiar videos. Differential entropy of the EEG recordings was used to train machine learning models aimed at distinguishing brain responses to each stimulus type. The consistency of brain responses with the clinical progress of the patients was also evaluated. RESULTS Models trained using data from HC outperformed those for patients. However, the performance of the models for patients was not influenced by their clinical condition. The models were successfully trained for over 75% of participants, regardless of their clinical condition. More than 75% of patients whose Coma Recovery Scale-Revised (CRS-R) scores increased post-study displayed distinguishable brain responses to both stimuli. CONCLUSIONS Responses to emotionally-valenced stimuli enabled the training of classifiers that were sensitive to the familiarity of the stimuli, regardless of the clinical condition of the participants, and that were consistent with their clinical progress in most cases. SIGNIFICANCE EEG responses are sensitive to the familiarity of emotionally-valenced stimuli in HC and patients with DOC.
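The differential-entropy pipeline summarized above can be sketched as follows. This is not the authors' code: the sampling rate, band edges, synthetic epochs, and choice of SVM classifier are illustrative assumptions, with differential entropy computed under the usual Gaussian approximation.

```python
# Minimal sketch: band-pass each channel, compute DE = 0.5*ln(2*pi*e*var) per
# band, and train a classifier to separate responses to two stimulus types.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

FS = 256                                   # sampling rate (Hz), assumed
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def de_features(epoch):
    """epoch: (n_channels, n_samples) -> differential entropy per channel/band."""
    feats = []
    for low, high in BANDS.values():
        b, a = butter(4, [low / (FS / 2), high / (FS / 2)], btype="band")
        filtered = filtfilt(b, a, epoch, axis=1)
        var = filtered.var(axis=1)
        feats.append(0.5 * np.log(2 * np.pi * np.e * var))
    return np.concatenate(feats)

rng = np.random.default_rng(0)
epochs = rng.standard_normal((80, 19, FS * 4))      # 80 epochs, 19 channels, 4 s
labels = rng.integers(0, 2, 80)                     # familiar vs non-familiar

X = np.array([de_features(e) for e in epochs])
print(cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean())
```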
Affiliation(s)
- Anny Maza
- Institute for Human-Centered Technology Research, Universitat Politècnica de València, Camino de Vera s/n, Valencia 46011, Spain
- Sandra Goizueta
- Institute for Human-Centered Technology Research, Universitat Politècnica de València, Camino de Vera s/n, Valencia 46011, Spain
- María Dolores Navarro
- IRENEA. Instituto de Rehabilitación Neurológica, Fundación Hospitales Vithas, València, Spain
- Enrique Noé
- IRENEA. Instituto de Rehabilitación Neurológica, Fundación Hospitales Vithas, València, Spain
- Joan Ferri
- IRENEA. Instituto de Rehabilitación Neurológica, Fundación Hospitales Vithas, València, Spain
- Valery Naranjo
- Institute for Human-Centered Technology Research, Universitat Politècnica de València, Camino de Vera s/n, Valencia 46011, Spain
- Roberto Llorens
- Institute for Human-Centered Technology Research, Universitat Politècnica de València, Camino de Vera s/n, Valencia 46011, Spain
3. Li L, Gui X, Huang G, Zhang L, Wan F, Han X, Wang J, Ni D, Liang Z, Zhang Z. Decoded EEG neurofeedback-guided cognitive reappraisal training for emotion regulation. Cogn Neurodyn 2024; 18:2659-2673. PMID: 39555250; PMCID: PMC11564442; DOI: 10.1007/s11571-024-10108-x.
Abstract
Neurofeedback, when combined with cognitive reappraisal, offers promising potential for emotion regulation training. However, prior studies have predominantly relied on functional magnetic resonance imaging, which could impede its clinical feasibility. Furthermore, these studies have primarily focused on reducing negative emotions while overlooking the importance of enhancing positive emotions. In our current study, we developed a novel electroencephalogram (EEG) neurofeedback-guided cognitive reappraisal training protocol for emotion regulation. We recruited forty-two healthy subjects (20 females; 22.4 ± 2.2 years old) who were randomly assigned to either the neurofeedback group or the control group. We evaluated the efficacy of this newly proposed neurofeedback training approach in regulating emotions evoked by pictures with different valence levels (low positive and high negative). Initially, we trained an EEG-based emotion decoding model for each individual using offline data. During the training process, we calculated the subjects' real-time self-regulation performance based on the decoded emotional states and fed it back to the subjects as feedback signals. Our results indicate that the proposed decoded EEG neurofeedback-guided cognitive reappraisal training protocol significantly enhanced emotion regulation performance for stimuli with low positive valence. Additionally, wavelet energy and differential entropy features in the high-frequency band played a crucial role in emotion classification and were associated with neural plasticity changes induced by emotion regulation. These findings validate the beneficial effects of the proposed EEG neurofeedback protocol and offer insights into the neural mechanisms underlying its training effects. This novel decoded neurofeedback training protocol presents a promising cost-effective and non-invasive treatment technique for emotion-related mental disorders.
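A minimal sketch of the wavelet-energy features mentioned above is given below; the wavelet family ('db4'), decomposition level, sampling rate, and toy signal are assumptions, not the study's settings.

```python
# Minimal sketch (not the study's pipeline): wavelet energy per decomposition
# level for one EEG channel, a feature family the authors report as informative
# in the high-frequency band.
import numpy as np
import pywt

fs = 250                                  # assumed sampling rate (Hz)
signal = np.random.default_rng(1).standard_normal(fs * 4)   # 4 s toy epoch

coeffs = pywt.wavedec(signal, wavelet="db4", level=5)        # [cA5, cD5, ..., cD1]
energies = np.array([np.sum(c ** 2) for c in coeffs])
relative_energy = energies / energies.sum()

# cD1/cD2 hold the highest-frequency detail; their relative energy could serve
# as a "high-frequency band" feature alongside differential entropy.
print(dict(zip(["cA5", "cD5", "cD4", "cD3", "cD2", "cD1"],
               relative_energy.round(3))))
```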
Affiliation(s)
- Linling Li
- School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, 518060 China
- International Health Science Innovation Center, Medical School, Shenzhen University, Shenzhen, 518060 China
- Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging, Shenzhen, 518060 China
- Xueying Gui
- School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, 518060 China
- International Health Science Innovation Center, Medical School, Shenzhen University, Shenzhen, 518060 China
- Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging, Shenzhen, 518060 China
- Gan Huang
- School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, 518060 China
- International Health Science Innovation Center, Medical School, Shenzhen University, Shenzhen, 518060 China
- Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging, Shenzhen, 518060 China
- Li Zhang
- School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, 518060 China
- International Health Science Innovation Center, Medical School, Shenzhen University, Shenzhen, 518060 China
- Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging, Shenzhen, 518060 China
- Feng Wan
- Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Macau, China
- Xue Han
- Department of Mental Health, Shenzhen Nanshan Center for Chronic Disease Control, Shenzhen, 518060 China
- Jianhong Wang
- Shenzhen Mental Health Center, Shenzhen Kangning Hospital, Shenzhen, 518060 China
- Dong Ni
- School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, 518060 China
- International Health Science Innovation Center, Medical School, Shenzhen University, Shenzhen, 518060 China
- Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging, Shenzhen, 518060 China
- Zhen Liang
- School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, 518060 China
- International Health Science Innovation Center, Medical School, Shenzhen University, Shenzhen, 518060 China
- Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging, Shenzhen, 518060 China
- Zhiguo Zhang
- School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, 518060 China
- Peng Cheng Laboratory, Shenzhen, 518060 China
4. Wang Y, Chen CB, Imamura T, Tapia IE, Somers VK, Zee PC, Lim DC. A novel methodology for emotion recognition through 62-lead EEG signals: multilevel heterogeneous recurrence analysis. Front Physiol 2024; 15:1425582. PMID: 39119215; PMCID: PMC11306145; DOI: 10.3389/fphys.2024.1425582.
Abstract
Objective Recognizing emotions from electroencephalography (EEG) signals is a challenging task due to the complex, nonlinear, and nonstationary characteristics of brain activity. Traditional methods often fail to capture these subtle dynamics, while deep learning approaches lack explainability. In this research, we introduce a novel three-phase methodology integrating manifold embedding, multilevel heterogeneous recurrence analysis (MHRA), and ensemble learning to address these limitations in EEG-based emotion recognition. Approach The proposed methodology was evaluated using the SJTU-SEED IV database. We first applied uniform manifold approximation and projection (UMAP) for manifold embedding of the 62-lead EEG signals into a lower-dimensional space. We then developed MHRA to characterize the complex recurrence dynamics of brain activity across multiple transition levels. Finally, we employed tree-based ensemble learning methods to classify four emotions (neutral, sad, fear, happy) based on the extracted MHRA features. Main results Our approach achieved high performance, with an accuracy of 0.7885 and an AUC of 0.7552, outperforming existing methods on the same dataset. Additionally, our methodology provided the most consistent recognition performance across different emotions. Sensitivity analysis revealed specific MHRA metrics that were strongly associated with each emotion, offering valuable insights into the underlying neural dynamics. Significance This study presents a novel framework for EEG-based emotion recognition that effectively captures the complex nonlinear and nonstationary dynamics of brain activity while maintaining explainability. The proposed methodology offers significant potential for advancing our understanding of emotional processing and developing more reliable emotion recognition systems with broad applications in healthcare and beyond.
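The sketch below is a heavily simplified stand-in for the proposed pipeline: it embeds windowed features with UMAP, derives a basic recurrence-rate descriptor from the embedded trajectory, and classifies with a tree ensemble. The full multilevel heterogeneous recurrence analysis is not reproduced, and all shapes, thresholds, and data are toy assumptions.

```python
# Simplified sketch, not the MHRA method itself: UMAP manifold embedding per
# trial, a basic recurrence-rate feature, and a random-forest classifier.
import numpy as np
import umap                                # from the umap-learn package
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n_trials, n_channels, n_windows = 40, 62, 50
trials = rng.standard_normal((n_trials, n_windows, n_channels))  # windowed features
labels = rng.integers(0, 4, n_trials)      # neutral / sad / fear / happy

def recurrence_rate(traj, eps):
    """Fraction of state pairs closer than eps in the embedded space."""
    d = np.linalg.norm(traj[:, None, :] - traj[None, :, :], axis=-1)
    return (d < eps).mean()

features = []
for trial in trials:
    emb = umap.UMAP(n_components=3, random_state=0).fit_transform(trial)
    features.append([recurrence_rate(emb, eps) for eps in (0.5, 1.0, 2.0)])

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(np.array(features), labels)
```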
Affiliation(s)
- Yujie Wang
- Department of Industrial and Systems Engineering, University of Miami, Coral Gables, FL, United States
- Cheng-Bang Chen
- Department of Industrial and Systems Engineering, University of Miami, Coral Gables, FL, United States
- Toshihiro Imamura
- Division of Sleep Medicine, Department of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Division of Pulmonary and Sleep Medicine, Children’s Hospital of Philadelphia, Philadelphia, PA, United States
- Ignacio E. Tapia
- Division of Pediatric Pulmonology, Miller School of Medicine, University of Miami, Miami, FL, United States
- Virend K. Somers
- Department of Cardiovascular Medicine, Mayo Clinic, Rochester, MN, United States
- Phyllis C. Zee
- Center for Circadian and Sleep Medicine, Department of Neurology, Feinberg School of Medicine, Northwestern University, Chicago, IL, United States
- Diane C. Lim
- Department of Medicine, Miami VA Medical Center, Miami, FL, United States
- Department of Medicine, Miller School of Medicine, University of Miami, Miami, FL, United States
5. Carbone F, Bondi E, Massalha Y, Anastasi A, Ferro A, Pizzolante M, Schiena G, Maddalena Bianchi AM, Gaggioli A, Mazzocut-Mis M, Chirico A, Brambilla P, Maggioni E. Exploring Brain Activity During Awe-Inducing Virtual Reality Experiences: a Multi-Metric EEG Frequency Analysis. Annu Int Conf IEEE Eng Med Biol Soc 2024; 2024:1-4. PMID: 40039568; DOI: 10.1109/embc53108.2024.10782046.
Abstract
This study presents a novel investigation into the neural underpinnings of the complex emotion of awe, made possible by an innovative experimental setup that integrates nature-based Virtual Reality scenarios (nVRs) with concurrent recording of electroencephalography (EEG) signals. The noninvasive EEG technique captures brain electrical activity in real time and therefore holds great promise for elucidating the neural dynamics associated with complex emotional experiences. A group of 15 healthy volunteers participated in the study; EEG recordings were performed at baseline (closed-eyes resting state without VR) and during the participants' navigation within four immersive nVRs, three designed to elicit the profound feeling of awe and one serving as a reference. To unveil the neural underpinnings of awe experiences, linear and nonlinear frequency analyses, Power Spectral Density (PSD) and Power Spectral Entropy (PSE), were computed for each 2-second EEG epoch within four main EEG frequency bands. The Friedman test was applied to each channel to compare (i) awe-inducing vs. reference nVRs, highlighting the effect of awe, and (ii) the reference nVR vs. the baseline condition, highlighting the effect of VR. The Friedman test results (p<0.01) showed that both PSD and PSE captured similar patterns in the reference nVR vs. baseline comparison. In contrast, greater differences between the two methods were found in the awe-inducing vs. reference nVR comparison, showing PSD and PSE changes that were specific to each awe-inducing nVR. The VR-EEG experimental setup, combined with linear and nonlinear EEG analysis methodologies, enabled a comprehensive investigation of the frequency-specific brain activity underlying diverse awe-inducing experimental conditions. Our findings provide valuable insights into the neural underpinnings of awe experiences, confirming the potential of immersive VR in emotional neuroscience.
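The per-band PSD/PSE computation and Friedman comparison described above can be sketched roughly as follows (synthetic data; band edges, epoch length, and Welch settings are assumptions).

```python
# Minimal sketch: per-band Power Spectral Density and Power Spectral Entropy for
# 2-s epochs, compared across conditions with the Friedman test.
import numpy as np
from scipy.signal import welch
from scipy.stats import friedmanchisquare

fs = 500
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}
rng = np.random.default_rng(3)

def band_psd_pse(epoch, low, high):
    f, pxx = welch(epoch, fs=fs, nperseg=fs)        # 1 Hz frequency resolution
    mask = (f >= low) & (f < high)
    p = pxx[mask] / pxx[mask].sum()                 # normalised in-band spectrum
    psd = pxx[mask].mean()
    pse = -np.sum(p * np.log(p))                    # power spectral entropy
    return psd, pse

# One channel, 15 subjects, 4 conditions (3 awe-inducing nVRs + reference).
epochs = rng.standard_normal((15, 4, 2 * fs))
alpha_pse = np.array([[band_psd_pse(e, *bands["alpha"])[1] for e in subj]
                      for subj in epochs])          # shape (15, 4)

stat, p = friedmanchisquare(*alpha_pse.T)           # one sample set per condition
print(f"Friedman chi2={stat:.2f}, p={p:.3f}")
```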
6. Ahmadzadeh Nobari Azar N, Cavus N, Esmaili P, Sekeroglu B, Aşır S. Detecting emotions through EEG signals based on modified convolutional fuzzy neural network. Sci Rep 2024; 14:10371. PMID: 38710806; DOI: 10.1038/s41598-024-60977-9.
Abstract
Emotion is a human sense that can influence an individual's quality of life in both positive and negative ways. The ability to distinguish different types of emotion can help researchers estimate the current situation of patients or the probability of future disease. Recognizing emotions from facial images is problematic because people can conceal their feelings by modifying their facial expressions. This has led researchers to consider electroencephalography (EEG) signals for more accurate emotion detection. However, the complexity of EEG recordings and of data analysis using conventional machine learning algorithms has caused inconsistent emotion recognition. Therefore, utilizing hybrid deep learning models and other techniques has become common due to their ability to analyze complicated data and achieve higher performance by integrating diverse features of the models. At the same time, researchers prioritize models with fewer parameters that still achieve the highest average accuracy. This study improves the Convolutional Fuzzy Neural Network (CFNN) for emotion recognition from EEG signals to achieve a reliable detection system. Initially, pre-processing and feature extraction phases are implemented to obtain noiseless and informative data. Then, the CFNN with a modified architecture is trained to classify emotions. Several parametric and comparative experiments are performed. The proposed model achieved reliable performance for emotion recognition, with average accuracies of 98.21% and 98.08% for valence (pleasantness) and arousal (intensity), respectively, outperforming state-of-the-art methods.
Affiliation(s)
- Nasim Ahmadzadeh Nobari Azar
- Department of Biomedical Engineering, Near East University, 99138, Nicosia, Cyprus
- Computer Information Systems Research and Technology Center, Near East University, Nicosia, 99138, Turkey
- Nadire Cavus
- Department of Computer Information Systems, Near East University, 99138, Nicosia, Cyprus
- Computer Information Systems Research and Technology Center, Near East University, Nicosia, 99138, Turkey
- Parvaneh Esmaili
- Department of Computer Engineering, Cyprus International University, 99258, Nicosia, Cyprus
- Boran Sekeroglu
- Software Engineering Department, World Peace University, Nicosia, Turkey
- Süleyman Aşır
- Department of Biomedical Engineering, Near East University, 99138, Nicosia, Cyprus
- Center for Science and Technology and Engineering, Near East University, Nicosia, 99138, Turkey
7. Kawaguchi T, Ono K, Hikawa H. Electroencephalogram-Based Facial Gesture Recognition Using Self-Organizing Map. Sensors (Basel) 2024; 24:2741. PMID: 38732846; PMCID: PMC11085705; DOI: 10.3390/s24092741.
Abstract
Brain-computer interfaces (BCIs) allow information to be transmitted directly from the human brain to a computer, enhancing the ability of human brain activity to interact with the environment. In particular, BCI-based control systems are highly desirable because they can control equipment used by people with disabilities, such as wheelchairs and prosthetic legs. BCIs make use of electroencephalograms (EEGs) to decode the human brain's status. This paper presents an EEG-based facial gesture recognition method based on a self-organizing map (SOM). The proposed method uses the α, β, and θ power bands of the EEG signals as gesture features, and the SOM-Hebb classifier is utilized to classify the feature vectors. We used the proposed method to develop an online facial gesture recognition system, with facial gestures defined by combining facial movements that are easy to detect in EEG signals. The recognition accuracy of the system was examined through experiments and ranged from 76.90% to 97.57%, depending on the number of gestures recognized. The lowest accuracy (76.90%) occurred when recognizing seven gestures, though this is still quite accurate compared with other EEG-based recognition systems. The online recognition system was implemented in MATLAB and took 5.7 s to complete the recognition flow.
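A hedged sketch of a SOM-based gesture classifier on band-power features is shown below, using the minisom package with majority-vote node labelling in place of the paper's SOM-Hebb classifier; feature dimensions and data are toy assumptions.

```python
# Minimal sketch (not the SOM-Hebb system itself): classify gestures from
# alpha/beta/theta band-power vectors with a self-organizing map (minisom),
# assigning each map node the majority label of the training samples it wins.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(4)
n_samples, n_features, n_gestures = 200, 42, 5      # e.g. 14 channels x 3 bands
X = rng.random((n_samples, n_features))             # toy band-power features
y = rng.integers(0, n_gestures, n_samples)

som = MiniSom(8, 8, n_features, sigma=1.5, learning_rate=0.5, random_seed=0)
som.train_random(X, 2000)

label_map = som.labels_map(X, y)                    # node -> Counter of labels

def predict(sample):
    node = som.winner(sample)
    counts = label_map.get(node)
    return counts.most_common(1)[0][0] if counts else -1   # -1: unassigned node

preds = np.array([predict(x) for x in X])
print("training accuracy:", (preds == y).mean())
```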
Affiliation(s)
- Hiroomi Hikawa
- Faculty of Engineering Science, Kansai University, Osaka 564-8680, Japan
8. Roso A, Aubert A, Cambos S, Vial F, Schäfer J, Belin M, Gabriel D, Bize C. Contribution of cosmetic ingredients and skin care textures to emotions. Int J Cosmet Sci 2024; 46:262-283. PMID: 37914390; DOI: 10.1111/ics.12928.
Abstract
OBJECTIVE Emotions play an important role in consumers' perception of a sensory experience. The objective of this work was to investigate the ability of basic skin care formulas (i.e., without interference from odour, colour, or packaging) and pillar ingredients (i.e., emollients and rheology modifiers) to elicit emotions. Another objective was to track, as claimed by neurocosmetics, the possible ability of formulas to trigger emotions through their direct biochemical effects on the skin. METHODS Standard methodologies were employed, combining subjective and behavioural parameters (i.e., verbatim reports, prosody, and gesture). The Sense and Story methodology, based on a collection of metaphoric verbatim reports, was conducted after an induction phase. In addition, an experimental electrophysiological real-time visualization method was trialled for the first time in cosmetics. Finally, the ability of formulations with emotional benefits to modulate the release of neuropeptides by sensory neurons was evaluated on a 3D human model (epidermis co-cultured with sensory neurons). RESULTS Skin care formulas were shown to play a role in emotional potential and in the types of emotion generated, whereas changing one ingredient mostly acted on the intensity of the emotions. Verbatim reports provided contrasting answers depending on the protocol, highlighting the value of non-verbal approaches for detecting subtle effects. The in vitro model substantiated the physiological effects of skin care formulas with emotional potential on human skin sensory neuron activity. CONCLUSION Emotions were affected by the change in ingredients and were better captured through non-verbal methods.
Affiliation(s)
- Alicia Roso
- Seppic Research & Innovation, Castres, France
- Arnaud Aubert
- University of Tours, Tours, France
- Emospin, Tours, France
- Francis Vial
- Emospin, Tours, France
- Spincontrol, Tours, France
- Damien Gabriel
- INSERM CIC-1431, Centre d'Investigation Clinique, Besançon, France
- Laboratoire de Recherches Intégratives en Neurosciences et Psychologie Cognitive (UR LINC), Université Franche-Comté, Besançon, France
- Plateforme de neuroimagerie et neuromodulation Neuraxess, CHU Besançon/Université Franche-Comté, Besançon, France
- Cécile Bize
- Seppic Research & Innovation, Castres, France
9. Zhao Q, Ye Z, Deng Y, Chen J, Chen J, Liu D, Ye X, Huan C. An advance in novel intelligent sensory technologies: From an implicit-tracking perspective of food perception. Compr Rev Food Sci Food Saf 2024; 23:e13327. PMID: 38517017; DOI: 10.1111/1541-4337.13327.
Abstract
Food sensory evaluation mainly includes explicit and implicit measurement methods. Implicit measures of consumer perception are gaining significant attention in food sensory and consumer science as they provide effective, subconscious, objective analysis. A wide range of advanced technologies are now available for analyzing physiological and psychological responses, including facial analysis technology, neuroimaging technology, autonomic nervous system technology, and behavioral pattern measurement. However, researchers in the food field often lack systematic knowledge of these multidisciplinary technologies and struggle with interpreting their results. In order to bridge this gap, this review systematically describes the principles and highlights the applications in food sensory and consumer science of facial analysis technologies such as eye tracking, facial electromyography, and automatic facial expression analysis, as well as neuroimaging technologies like electroencephalography, magnetoencephalography, functional magnetic resonance imaging, and functional near-infrared spectroscopy. Furthermore, we critically compare and discuss these advanced implicit techniques in the context of food sensory research and then accordingly propose prospects. Ultimately, we conclude that implicit measures should be complemented by traditional explicit measures to capture responses beyond preference. Facial analysis technologies offer a more objective reflection of sensory perception and attitudes toward food, whereas neuroimaging techniques provide valuable insight into the implicit physiological responses during food consumption. To enhance the interpretability and generalizability of implicit measurement results, further sensory studies are needed. Looking ahead, the combination of different methodological techniques in real-life situations holds promise for consumer sensory science in the field of food research.
Affiliation(s)
- Qian Zhao
- College of Biosystems Engineering and Food Science, National-Local Joint Engineering Research Center of Intelligent Food Technology and Equipment, Fuli Institute of Food Science, Zhejiang Key Laboratory for Agro-Food Processing, Zhejiang International Scientific and Technological Cooperation Base of Health Food Manufacturing and Quality Control, Zhejiang University, Hangzhou, China
- Innovation Center of Yangtze River Delta, Zhejiang University, Jiaxing, China
- Zhiyue Ye
- College of Biosystems Engineering and Food Science, National-Local Joint Engineering Research Center of Intelligent Food Technology and Equipment, Fuli Institute of Food Science, Zhejiang Key Laboratory for Agro-Food Processing, Zhejiang International Scientific and Technological Cooperation Base of Health Food Manufacturing and Quality Control, Zhejiang University, Hangzhou, China
- Innovation Center of Yangtze River Delta, Zhejiang University, Jiaxing, China
- Yong Deng
- College of Biosystems Engineering and Food Science, National-Local Joint Engineering Research Center of Intelligent Food Technology and Equipment, Fuli Institute of Food Science, Zhejiang Key Laboratory for Agro-Food Processing, Zhejiang International Scientific and Technological Cooperation Base of Health Food Manufacturing and Quality Control, Zhejiang University, Hangzhou, China
- Innovation Center of Yangtze River Delta, Zhejiang University, Jiaxing, China
- Jin Chen
- College of Biosystems Engineering and Food Science, National-Local Joint Engineering Research Center of Intelligent Food Technology and Equipment, Fuli Institute of Food Science, Zhejiang Key Laboratory for Agro-Food Processing, Zhejiang International Scientific and Technological Cooperation Base of Health Food Manufacturing and Quality Control, Zhejiang University, Hangzhou, China
- Jianle Chen
- College of Biosystems Engineering and Food Science, National-Local Joint Engineering Research Center of Intelligent Food Technology and Equipment, Fuli Institute of Food Science, Zhejiang Key Laboratory for Agro-Food Processing, Zhejiang International Scientific and Technological Cooperation Base of Health Food Manufacturing and Quality Control, Zhejiang University, Hangzhou, China
- Zhongyuan Institute, Zhejiang University, Zhengzhou, China
- Ningbo Innovation Center, Zhejiang University, Ningbo, China
- Donghong Liu
- College of Biosystems Engineering and Food Science, National-Local Joint Engineering Research Center of Intelligent Food Technology and Equipment, Fuli Institute of Food Science, Zhejiang Key Laboratory for Agro-Food Processing, Zhejiang International Scientific and Technological Cooperation Base of Health Food Manufacturing and Quality Control, Zhejiang University, Hangzhou, China
- Innovation Center of Yangtze River Delta, Zhejiang University, Jiaxing, China
- Zhongyuan Institute, Zhejiang University, Zhengzhou, China
- Ningbo Innovation Center, Zhejiang University, Ningbo, China
- Xingqian Ye
- College of Biosystems Engineering and Food Science, National-Local Joint Engineering Research Center of Intelligent Food Technology and Equipment, Fuli Institute of Food Science, Zhejiang Key Laboratory for Agro-Food Processing, Zhejiang International Scientific and Technological Cooperation Base of Health Food Manufacturing and Quality Control, Zhejiang University, Hangzhou, China
- Zhongyuan Institute, Zhejiang University, Zhengzhou, China
- Ningbo Innovation Center, Zhejiang University, Ningbo, China
- Cheng Huan
- College of Biosystems Engineering and Food Science, National-Local Joint Engineering Research Center of Intelligent Food Technology and Equipment, Fuli Institute of Food Science, Zhejiang Key Laboratory for Agro-Food Processing, Zhejiang International Scientific and Technological Cooperation Base of Health Food Manufacturing and Quality Control, Zhejiang University, Hangzhou, China
- Innovation Center of Yangtze River Delta, Zhejiang University, Jiaxing, China
- Zhongyuan Institute, Zhejiang University, Zhengzhou, China
- Ningbo Innovation Center, Zhejiang University, Ningbo, China
10. Xie S, Lei L, Sun J, Xu J. [Research on emotion recognition method based on IWOA-ELM algorithm for electroencephalogram]. Sheng Wu Yi Xue Gong Cheng Xue Za Zhi 2024; 41:1-8. PMID: 38403598; PMCID: PMC10894732; DOI: 10.7507/1001-5515.202303010.
Abstract
Emotion is a crucial physiological attribute in humans, and emotion recognition technology can significantly assist individuals in self-awareness. To address the challenge of large differences in electroencephalogram (EEG) signals among subjects, we introduce a novel mechanism into the traditional whale optimization algorithm (WOA) to expedite its optimization and convergence. The improved whale optimization algorithm (IWOA) was then applied to search for the optimal training configuration of the extreme learning machine (ELM) model, encompassing the best feature set, training parameters, and EEG channels. By testing 24 common EEG emotion features, we concluded that the optimal EEG emotion features exhibit a certain level of subject specificity while also showing some commonality across subjects. The proposed method achieved an average recognition accuracy of 92.19% in EEG emotion recognition, significantly reducing the manual tuning workload and offering higher accuracy with shorter training times than the control method. It outperformed existing methods, providing superior performance and a novel perspective for decoding EEG signals, thereby contributing to the field of EEG-based emotion research.
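A basic extreme learning machine, the classifier at the core of the approach above, can be sketched as follows; the IWOA search over features, channels, and parameters is omitted, and all shapes are illustrative.

```python
# Minimal sketch of a basic extreme learning machine (ELM): random hidden layer,
# closed-form least-squares output weights.
import numpy as np

rng = np.random.default_rng(5)

class ELM:
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y_onehot):
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ y_onehot      # least-squares readout
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta).argmax(axis=1)

X = rng.standard_normal((300, 24))                    # e.g. 24 EEG emotion features
y = rng.integers(0, 3, 300)
Y = np.eye(3)[y]                                      # one-hot targets

model = ELM(n_hidden=80).fit(X, Y)
print("training accuracy:", (model.predict(X) == y).mean())
```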
Affiliation(s)
- Songyun Xie
- School of Electronics and Information, Northwestern Polytechnical University, Xi'an 710129, P. R. China
- Lingjun Lei
- Medical Research Institute, Northwestern Polytechnical University, Xi'an 710129, P. R. China
- Jiang Sun
- School of Electronics and Information, Northwestern Polytechnical University, Xi'an 710129, P. R. China
- Jian Xu
- School of Electronics and Information, Northwestern Polytechnical University, Xi'an 710129, P. R. China
11. Obukhov A, Krasnyanskiy M, Volkov A, Nazarova A, Teselkin D, Patutin K, Zajceva D. Method for Assessing the Influence of Phobic Stimuli in Virtual Simulators. J Imaging 2023; 9:195. PMID: 37888302; PMCID: PMC10607658; DOI: 10.3390/jimaging9100195.
Abstract
In organizing professional training, the assessment of the trainee's reaction and state in stressful situations is of great importance. Phobic reactions are a specific type of stress reaction that is rarely taken into account when developing virtual simulators, yet they are a risk factor in the workplace. A method for evaluating the impact of various phobic stimuli on the quality of training is considered, which takes into account the time, accuracy, and speed of performing professional tasks, as well as characteristics of the electroencephalogram (amplitude, power, coherence, Hurst exponent, and degree of interhemispheric asymmetry). To evaluate the impact of phobias during the experimental research, participants in the experimental group performed exercises in different environments: under normal conditions and under the influence of acrophobic and arachnophobic stimuli. The participants were divided into subgroups using clustering algorithms and an expert neurologist, after which the subgroup metrics were compared. The research conducted partially confirms our hypotheses about the negative impact of phobic effects on some participants in the experimental group. A relationship between the reaction to a phobia and the characteristics of brain activity was revealed, and characteristics of the electroencephalogram signal were considered as metrics for detecting a phobic reaction.
Affiliation(s)
- Artem Obukhov
- The Laboratory of Medical VR Simulator Systems for Training, Diagnostics and Rehabilitation, Tambov State Technical University, Tambov 392000, Russia; (M.K.); (A.V.); (A.N.); (D.T.); (K.P.); (D.Z.)
12. Mazzacane S, Coccagna M, Manzella F, Pagliarini G, Sironi VA, Gatti A, Caselli E, Sciavicco G. Towards an objective theory of subjective liking: A first step in understanding the sense of beauty. PLoS One 2023; 18:e0287513. PMID: 37352316; PMCID: PMC10289447; DOI: 10.1371/journal.pone.0287513.
Abstract
The study of electroencephalogram signals recorded from subjects during an experience is a way to understand the brain processes that underlie their physical and emotional involvement. Such signals take the form of time series, and their analysis can benefit from techniques specific to this kind of data. Neuroaesthetics, as defined by Zeki in 1999, is the scientific approach to the study of aesthetic perceptions of art, music, or any other experience that can give rise to aesthetic judgments, such as liking or disliking a painting. Starting from a proprietary dataset of 248 trials from 16 subjects exposed to art paintings in a real ecological context, this paper analyses the application of a novel symbolic machine learning technique specifically designed to extract information from unstructured data and express it in the form of logical rules. Our purpose is to extract qualitative and quantitative logical rules that relate the voltage at specific frequencies and electrodes and that, within the limits of the experiment, may help explain the brain processes driving liking or disliking experiences in human subjects.
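As an illustrative stand-in (not the paper's proprietary symbolic learner), the sketch below extracts human-readable conjunctive rules linking band-specific electrode powers to liking vs. disliking via a shallow decision tree; feature names and data are invented for the example.

```python
# Illustrative stand-in: rule extraction from EEG-derived features with a
# shallow decision tree; each root-to-leaf path reads as a logical rule.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(6)
electrodes = ["Fz", "Cz", "Pz", "O1", "O2"]
bands = ["theta", "alpha", "beta"]
feature_names = [f"{e}_{b}_power" for e in electrodes for b in bands]

X = rng.standard_normal((248, len(feature_names)))   # 248 trials, toy features
y = rng.integers(0, 2, 248)                          # 0 = dislike, 1 = like

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))
# Each printed path is a conjunctive rule, e.g.
# "Pz_alpha_power <= 0.42 AND O1_beta_power > 1.10 -> like".
```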
Affiliation(s)
- S. Mazzacane
- CIAS Interdepartmental Research Center (Dept. of Architecture, Dept. of Chemical, Pharmaceutical and Agricultural Sciences), University of Ferrara, Ferrara, Italy
- M. Coccagna
- CIAS Interdepartmental Research Center (Dept. of Architecture, Dept. of Chemical, Pharmaceutical and Agricultural Sciences), University of Ferrara, Ferrara, Italy
- F. Manzella
- Dept. of Mathematics and Computer Science, University of Ferrara, Ferrara, Italy
- G. Pagliarini
- Dept. of Mathematics and Computer Science, University of Ferrara, Ferrara, Italy
- V. A. Sironi
- CESPEB Research Center, Neuroaesthetic Laboratory, University Bicocca, Milan, Italy
- A. Gatti
- Dept. of Humanistic Studies, University of Ferrara, Ferrara, Italy
- E. Caselli
- CIAS Interdepartmental Research Center (Dept. of Architecture, Dept. of Chemical, Pharmaceutical and Agricultural Sciences), University of Ferrara, Ferrara, Italy
- G. Sciavicco
- Dept. of Mathematics and Computer Science, University of Ferrara, Ferrara, Italy
13. Zong J, Xiong X, Zhou J, Ji Y, Zhou D, Zhang Q. FCAN-XGBoost: A Novel Hybrid Model for EEG Emotion Recognition. Sensors (Basel) 2023; 23:5680. PMID: 37420845; DOI: 10.3390/s23125680.
Abstract
In recent years, artificial intelligence (AI) technology has promoted the development of electroencephalogram (EEG) emotion recognition. However, existing methods often overlook the computational cost of EEG emotion recognition, and there is still room for improvement in the accuracy of EEG emotion recognition. In this study, we propose a novel EEG emotion recognition algorithm called FCAN-XGBoost, which is a fusion of two algorithms, FCAN and XGBoost. The FCAN module is a feature attention network (FANet) that we have proposed for the first time, which processes the differential entropy (DE) and power spectral density (PSD) features extracted from the four frequency bands of the EEG signal and performs feature fusion and deep feature extraction. Finally, the deep features are fed into the eXtreme Gradient Boosting (XGBoost) algorithm to classify the four emotions. We evaluated the proposed method on the DEAP and DREAMER datasets and achieved a four-category emotion recognition accuracy of 95.26% and 94.05%, respectively. Additionally, our proposed method reduces the computational cost of EEG emotion recognition by at least 75.45% for computation time and 67.51% for memory occupation. The performance of FCAN-XGBoost outperforms the state-of-the-art four-category model and reduces computational costs without losing classification performance compared with other models.
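The final classification stage might look like the sketch below, assuming DE and PSD features per channel and band have already been extracted; the attention-based FCAN module is omitted and all shapes are toy assumptions.

```python
# Minimal sketch: fuse per-band DE and PSD feature sets and classify four
# emotions with XGBoost (the deep feature-attention stage is not reproduced).
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n_trials, n_channels, n_bands = 400, 32, 4
de = rng.standard_normal((n_trials, n_channels * n_bands))
psd = rng.standard_normal((n_trials, n_channels * n_bands))
X = np.hstack([de, psd])                      # simple feature-level fusion
y = rng.integers(0, 4, n_trials)              # four emotion classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                    objective="multi:softprob", eval_metric="mlogloss")
clf.fit(X_tr, y_tr)
print("test accuracy:", (clf.predict(X_te) == y_te).mean())
```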
Affiliation(s)
- Jing Zong
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
- Xin Xiong
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
- Jianhua Zhou
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
- Ying Ji
- Graduate School, Kunming Medical University, Kunming 650500, China
- Diao Zhou
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
- Qi Zhang
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
14. Moontaha S, Schumann FEF, Arnrich B. Online Learning for Wearable EEG-Based Emotion Classification. Sensors (Basel) 2023; 23:2387. PMID: 36904590; PMCID: PMC10007607; DOI: 10.3390/s23052387.
Abstract
Giving emotional intelligence to machines can facilitate the early detection and prediction of mental diseases and symptoms. Electroencephalography (EEG)-based emotion recognition is widely applied because it measures electrical correlates directly from the brain rather than indirect measurements of other physiological responses initiated by the brain. We therefore used non-invasive and portable EEG sensors to develop a real-time emotion classification pipeline. The pipeline trains separate binary classifiers for the valence and arousal dimensions from an incoming EEG data stream, achieving a 23.9% (arousal) and 25.8% (valence) higher F1-score on the state-of-the-art AMIGOS dataset than previous work. The pipeline was then applied to a curated dataset from 15 participants, collected with two consumer-grade EEG devices while they watched 16 short emotional videos in a controlled environment. Mean F1-scores of 87% (arousal) and 82% (valence) were achieved for an immediate-label setting. Additionally, the pipeline proved fast enough to make predictions in real time in a live scenario with delayed labels while being continuously updated. The significant discrepancy between the readily available labels and the classification scores motivates future work that includes more data. The pipeline is then ready to be used for real-time applications of emotion classification.
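An online classifier of the kind described above, updated incrementally as labelled EEG windows arrive, can be sketched with scikit-learn's partial_fit; the feature dimensionality and simulated stream are assumptions.

```python
# Minimal sketch of an online (incrementally updated) valence classifier,
# standing in for the streaming pipeline described above.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(8)
n_features = 64                               # e.g. band powers from a headset
clf = SGDClassifier(loss="log_loss")          # logistic regression, SGD updates
classes = np.array([0, 1])                    # low / high valence

for step in range(100):                       # simulated incoming EEG windows
    X_batch = rng.standard_normal((8, n_features))
    y_batch = rng.integers(0, 2, 8)           # (possibly delayed) labels
    pred = clf.predict(X_batch) if step else None   # predict before updating
    clf.partial_fit(X_batch, y_batch, classes=classes)
```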
15. Bouazizi S, benmohamed E, Ltifi H. Decision-making based on an improved visual analytics approach for emotion prediction. Intelligent Decision Technologies 2023. DOI: 10.3233/idt-220263.
Abstract
The visual analytics approach supports informed and effective decision-making. It allows decision-makers to visually interact with large amounts of data and to computationally learn valuable hidden patterns in those data, which improves decision quality. In this article, we introduce an enhanced visual analytics model combining cognitive, visualization-based analysis with data-mining-based automatic analysis. As emotions are strongly related to human behaviour and society, emotion prediction is widely relevant to decision-making activities. Unlike speech and facial expression modalities, EEG (electroencephalogram) has the advantage of recording information about the internal emotional state that is not always translated into perceptible external manifestations. For this reason, we applied the proposed cognitive approach to EEG data to demonstrate its efficiency in predicting emotional reactions to films. For the automatic analysis, we developed an Echo State Network (ESN) technique, considered an efficient machine learning solution due to its straightforward training procedure and high modelling ability for time-series problems. Finally, utility and usability tests were performed to evaluate the developed prototype.
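A minimal echo state network, the automatic-analysis component named above, can be sketched as follows (toy data; reservoir size, spectral radius, and washout length are assumed hyperparameters).

```python
# Minimal echo state network sketch: a fixed random reservoir driven by an EEG
# feature sequence, with a ridge-regression readout trained on the states.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(9)
n_in, n_res, T = 16, 200, 500                 # inputs, reservoir units, steps

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius to 0.9

u = rng.standard_normal((T, n_in))            # toy EEG feature sequence
y = rng.integers(0, 2, T)                     # toy per-step emotion target

states = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W_in @ u[t] + W @ x)          # leaky integration omitted
    states[t] = x

readout = Ridge(alpha=1.0).fit(states[100:], y[100:])   # discard washout
print("readout R^2:", readout.score(states[100:], y[100:]))
```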
Affiliation(s)
- Samar Bouazizi
- Research Groups in Intelligent Machines, National Engineering School of Sfax, University of Sfax, Sfax, Tunisia
- Computer Sciences and Mathematics Department, Faculty of sciences and technology of Sidi Bouzid, University of Kairouan, Kairouan, Tunisia
- Emna benmohamed
- Research Groups in Intelligent Machines, National Engineering School of Sfax, University of Sfax, Sfax, Tunisia
- Hela Ltifi
- Research Groups in Intelligent Machines, National Engineering School of Sfax, University of Sfax, Sfax, Tunisia
- Computer Sciences and Mathematics Department, Faculty of sciences and technology of Sidi Bouzid, University of Kairouan, Kairouan, Tunisia
16. Yuvaraj R, Thagavel P, Thomas J, Fogarty J, Ali F. Comprehensive Analysis of Feature Extraction Methods for Emotion Recognition from Multichannel EEG Recordings. Sensors (Basel) 2023; 23:915. PMID: 36679710; PMCID: PMC9867328; DOI: 10.3390/s23020915.
Abstract
Advances in signal processing and machine learning have expedited electroencephalogram (EEG)-based emotion recognition research, and numerous EEG signal features have been investigated to detect or characterize human emotions. However, most studies in this area have used relatively small monocentric data and focused on a limited range of EEG features, making it difficult to compare the utility of different sets of EEG features for emotion recognition. This study addressed this gap by comparing the classification accuracy (performance) of a comprehensive range of EEG feature sets for identifying emotional states in terms of valence and arousal. The classification accuracy of five EEG feature sets was investigated: statistical features, fractal dimension (FD), Hjorth parameters, higher order spectra (HOS), and features derived using wavelet analysis. Performance was evaluated using two classifier methods, support vector machine (SVM) and classification and regression tree (CART), across five independent and publicly available datasets linking EEG to emotional states: MAHNOB-HCI, DEAP, SEED, AMIGOS, and DREAMER. The FD-CART feature-classification method attained the best mean classification accuracy for valence (85.06%) and arousal (84.55%) across the five datasets. The stability of these findings across the five different datasets also indicates that FD features derived from EEG data are reliable for emotion recognition. The results may inform the development of an online feature extraction framework, enabling real-time EEG-based emotion recognition systems.
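One of the compared feature sets, the Hjorth parameters, is simple enough to sketch directly; the toy epoch below stands in for a real EEG segment.

```python
# Minimal sketch of one compared feature set: Hjorth parameters (activity,
# mobility, complexity) computed per channel from a toy epoch.
import numpy as np

def hjorth(x):
    """Return activity, mobility, complexity of a 1-D signal."""
    dx = np.diff(x)
    ddx = np.diff(dx)
    var_x, var_dx, var_ddx = np.var(x), np.var(dx), np.var(ddx)
    activity = var_x
    mobility = np.sqrt(var_dx / var_x)
    complexity = np.sqrt(var_ddx / var_dx) / mobility
    return activity, mobility, complexity

epoch = np.random.default_rng(10).standard_normal((32, 1024))  # 32 channels
features = np.array([hjorth(ch) for ch in epoch])              # shape (32, 3)
# Flattened, these become one of the feature sets fed to SVM or CART.
print(features.shape)
```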
Affiliation(s)
- Rajamanickam Yuvaraj
- National Institute of Education, Nanyang Technological University, Singapore 637616, Singapore
- Prasanth Thagavel
- Interdisciplinary Graduate School, Nanyang Technological University, Singapore 639798, Singapore
- John Thomas
- Montreal Neurological Institute, McGill University, Montreal, QC H3A 2B4, Canada
- Jack Fogarty
- National Institute of Education, Nanyang Technological University, Singapore 637616, Singapore
- Farhan Ali
- National Institute of Education, Nanyang Technological University, Singapore 637616, Singapore
17. Zhong MY, Yang QY, Liu Y, Zhen BY, Zhao FD, Xie BB. EEG emotion recognition based on TQWT-features and hybrid convolutional recurrent neural network. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104211.
18. Wang B, Kang Y, Huo D, Feng G, Zhang J, Li J. EEG diagnosis of depression based on multi-channel data fusion and clipping augmentation and convolutional neural network. Front Physiol 2022; 13:1029298. PMID: 36338469; PMCID: PMC9632488; DOI: 10.3389/fphys.2022.1029298.
Abstract
Depression is a mental disorder that often goes undetected; most patients with depressive symptoms do not know that they are suffering from depression. Since the 2019 novel coronavirus pandemic, the number of patients with depression has increased rapidly. There are two traditional approaches to diagnosing depression. In the first, professional psychiatrists make the diagnosis, but this is not conducive to large-scale depression screening. In the second, electroencephalography (EEG) is used to record neuronal activity, and features of the EEG are extracted using manual or traditional machine learning methods to diagnose the state and type of depression. Although this method achieves good results, it does not fully utilize the multi-channel information of the EEG. To address this problem, an EEG diagnosis method for depression based on multi-channel data fusion, clipping augmentation, and a convolutional neural network is proposed. First, the multi-channel EEG data are transformed into 2D images after multi-channel fusion (MCF) and multi-scale clipping (MSC) augmentation. Second, they are used to train a multi-channel convolutional neural network (MCNN). Finally, the trained model is loaded into the detection device to classify input EEG signals. The experimental results show that the combination of MCF and MSC can make full use of the information contained in the single-sensor records and significantly improve the classification accuracy and clustering effect of depression diagnosis. The method has the advantages of low complexity and good robustness in signal processing and feature extraction, which is beneficial to the wide application of detection systems.
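The MCF/MSC idea, fusing channels into a 2-D array and augmenting it by multi-scale temporal cropping before CNN training, can be sketched roughly as below; shapes, scales, and crop counts are assumptions, not the authors' settings.

```python
# Illustrative sketch: fuse multi-channel EEG into a 2-D "image" (channels x
# time) and augment it by multi-scale cropping along the time axis.
import numpy as np

rng = np.random.default_rng(11)
eeg = rng.standard_normal((16, 2000))          # 16 channels, 2000 samples

def multi_scale_crops(image, scales=(0.5, 0.75, 1.0), crops_per_scale=4):
    """Return cropped (channels x window) segments at several temporal scales."""
    n_ch, n_t = image.shape
    out = []
    for s in scales:
        win = int(n_t * s)
        for _ in range(crops_per_scale):
            start = rng.integers(0, n_t - win + 1)
            out.append(image[:, start:start + win])
    return out

crops = multi_scale_crops(eeg)
print(len(crops), crops[0].shape, crops[-1].shape)   # 12 augmented samples
```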
Affiliation(s)
- Baiyang Wang
- School of Information Science and Engineering, Linyi University, Linyi, China
- Yuyun Kang
- School of Logistics, Linyi University, Linyi, China
- Dongyue Huo
- School of Information Science and Engineering, Linyi University, Linyi, China
- Guifang Feng
- School of Life Science, Linyi University, Linyi, China
- International College, Philippine Christian University, Manila, Philippines
- Jiawei Zhang
- Linyi Trade Logistics Science and Technology Industry Research Institute, Linyi, China
- Jiadong Li
- School of Logistics, Linyi University, Linyi, China
19. Tsentidou G, Moraitou D, Tsolaki M. Emotion Recognition in a Health Continuum: Comparison of Healthy Adults of Advancing Age, Community Dwelling Adults Bearing Vascular Risk Factors and People Diagnosed with Mild Cognitive Impairment. Int J Environ Res Public Health 2022; 19:13366. PMID: 36293946; PMCID: PMC9602834; DOI: 10.3390/ijerph192013366.
Abstract
The identification of basic emotions plays an important role in social relationships and in behaviors linked to survival. In neurodegenerative conditions such as Alzheimer's disease (AD), the ability to recognize emotions may already be impaired at early stages of the disease, such as the stage of Mild Cognitive Impairment (MCI). However, as regards vascular pathologies related to cognitive impairment, very little is known about emotion recognition in people bearing vascular risk factors (VRF). Therefore, the aim of the present study was to examine emotion recognition ability in the health continuum "healthy advancing age - advancing age with VRF - MCI". The sample consisted of 106 adults divided into three diagnostic groups: 43 adults with MCI, 41 adults bearing one or more VRF, and 22 healthy controls of advancing age (HC). Since the HC were more educated and younger than the other two groups, age group and level of education were taken into account in the statistical analyses. A dynamic visual test was administered to examine recognition of basic emotions and emotionally neutral conditions. The results showed only a significant diagnostic group x educational level interaction for total emotion recognition ability, F(4, 28.910) = 4.117, p = 0.004, η² = 0.166. A high educational level seems to contribute to high emotion recognition performance both in healthy adults of advancing age and in adults bearing vascular risk factors, whereas a medium educational level appears to play the same role only in healthy adults. Neither educational level appears to help people with MCI enhance their significantly lower emotion recognition ability.
Affiliation(s)
- Glykeria Tsentidou
- Laboratory of Psychology, Department of Experimental and Cognitive Psychology, School of Psychology, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
- Laboratory of Neurodegenerative Diseases, Center for Interdisciplinary Research and Innovation (CIRI), Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
- Despina Moraitou
- Laboratory of Psychology, Department of Experimental and Cognitive Psychology, School of Psychology, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
- Laboratory of Neurodegenerative Diseases, Center for Interdisciplinary Research and Innovation (CIRI), Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
- Greek Association of Alzheimer’s Disease and Related Disorders (GAADRD), 54643 Thessaloniki, Greece
- Magdalini Tsolaki
- Laboratory of Neurodegenerative Diseases, Center for Interdisciplinary Research and Innovation (CIRI), Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
- Greek Association of Alzheimer’s Disease and Related Disorders (GAADRD), 54643 Thessaloniki, Greece
20. Machine Learning Models for Classification of Human Emotions Using Multivariate Brain Signals. Computers 2022. DOI: 10.3390/computers11100152.
Abstract
Humans can portray expressions contrary to their emotional state of mind; therefore, it is difficult to judge a person's real emotional state simply from their physical appearance. Although researchers are working on facial expression analysis, voice recognition, and gesture recognition, the accuracy levels of such analyses are much lower and the results are not reliable. Hence, a more realistic emotion detector becomes vital. Electroencephalogram (EEG) signals remain neutral to the external appearance and behavior of the human and help ensure an accurate analysis of the state of mind. EEG signals from various electrodes in different scalp regions are studied for performance. Hence, EEG has gained attention over time as a means of obtaining accurate results for the classification of emotional states in human beings, both for human-machine interaction and for designing a program in which an individual could perform a self-analysis of his or her emotional state. In the proposed scheme, we extract power spectral densities of multivariate EEG signals from different sections of the brain. From the extracted power spectral density (PSD), the features that provide better discrimination are selected and classified using long short-term memory (LSTM) and bi-directional long short-term memory (Bi-LSTM) networks. A 2-D emotion model is considered, and region-based classification is studied for the frontal, parietal, temporal, and occipital regions, considering positive and negative emotions. The performance was compared with our previous models based on an artificial neural network (ANN), support vector machine (SVM), K-nearest neighbor (K-NN), and LSTM; an accuracy of 94.95% was obtained using Bi-LSTM with four prefrontal electrodes.
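A Bi-LSTM classifier over sequences of PSD feature vectors, as described above, might be sketched as follows in Keras; the shapes, toy data, and binary positive/negative target are assumptions.

```python
# Minimal sketch: a Bi-LSTM over sequences of per-window PSD feature vectors
# (e.g. from prefrontal electrodes) for binary emotion classification.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(12)
n_trials, n_windows, n_features = 200, 20, 16   # e.g. 4 electrodes x 4 bands
X = rng.standard_normal((n_trials, n_windows, n_features)).astype("float32")
y = rng.integers(0, 2, n_trials)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_windows, n_features)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=16, verbose=0)
```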
Collapse
|
21
|
Jin Z, Jin Y, Chen Z. Empirical mode decomposition using deep learning model for financial market forecasting. PeerJ Comput Sci 2022; 8:e1076. [PMID: 36262133 PMCID: PMC9575866 DOI: 10.7717/peerj-cs.1076] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2022] [Accepted: 08/08/2022] [Indexed: 06/16/2023]
Abstract
Financial market forecasting is an essential component of financial systems; however, predicting financial market trends is challenging because the underlying information is noisy and non-stationary. Deep learning is renowned for extracting useful abstract features from large volumes of raw data without relying on prior knowledge, which makes it attractive for forecasting financial transactions. This article proposes a deep learning model that autonomously mines the statistical regularities of the data and guides financial market transactions, based on empirical mode decomposition (EMD) combined with back-propagation neural networks (BPNN). Using the characteristic time scales of the data, the intrinsic wave patterns were obtained and decomposed. Financial market transaction data were analyzed, optimized using particle swarm optimization (PSO), and predicted. Decomposing the nonlinear and non-stationary financial time series in this way can improve prediction accuracy. The deep learning predictive model, built on the analysis of massive financial trading data, can forecast the future trend of financial market prices and generate a trading signal when a specified confidence level is reached. The empirical results show that the EMD-based deep learning model has excellent predictive performance.
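The sketch below illustrates an EMD-then-predict pipeline in the spirit of this abstract, using the third-party PyEMD package and a plain feedforward network. The window size and network layout are assumptions, and the paper's PSO-based optimization step is omitted here.

```python
# Sketch of an EMD-based forecasting pipeline; not the authors' exact model.
import numpy as np
from PyEMD import EMD                      # pip install EMD-signal (assumed dependency)
from sklearn.neural_network import MLPRegressor

prices = np.cumsum(np.random.randn(500)) + 100.0   # stand-in for a real price series

# 1) Decompose the series into intrinsic mode functions (IMFs).
imfs = EMD()(prices)                       # shape: (n_imfs, len(prices))

# 2) Build supervised samples: predict the next price from a window of IMF values.
window = 10
X = np.stack([imfs[:, t - window:t].ravel() for t in range(window, len(prices) - 1)])
y = prices[window + 1:]

# 3) Fit a back-propagation network on the decomposed features.
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
model.fit(X[:-50], y[:-50])
print("held-out R^2:", model.score(X[-50:], y[-50:]))
```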
Collapse
Affiliation(s)
- Zebin Jin
- College of Management, Ocean University of China, Qingdao, Shandong, China
| | - Yixiao Jin
- Shanghai Yingcai Information Technology Ltd., Fengxian, Shanghai, China
| | | |
Collapse
|
22
|
Kim S, Kim TS, Lee WH. Accelerating 3D Convolutional Neural Network with Channel Bottleneck Module for EEG-Based Emotion Recognition. SENSORS (BASEL, SWITZERLAND) 2022; 22:s22186813. [PMID: 36146160 PMCID: PMC9500982 DOI: 10.3390/s22186813] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/29/2022] [Revised: 08/28/2022] [Accepted: 09/06/2022] [Indexed: 05/07/2023]
Abstract
Deep learning-based emotion recognition using EEG has received increasing attention in recent years. Existing studies on emotion recognition vary widely in their methods, including the choice of deep learning approach and the type of input features. Although deep learning models for EEG-based emotion recognition can deliver superior accuracy, this comes at the cost of high computational complexity. Here, we propose a novel 3D convolutional neural network with a channel bottleneck module (CNN-BN) for EEG-based emotion recognition, with the aim of accelerating the CNN computation without a significant loss in classification accuracy. To this end, we constructed a 3D spatiotemporal representation of EEG signals as the input to our proposed model. Our CNN-BN model extracts spatiotemporal EEG features that effectively exploit the spatial and temporal information in EEG. We evaluated the performance of the CNN-BN model on the valence and arousal classification tasks. Our proposed CNN-BN model achieved an average accuracy of 99.1% and 99.5% for valence and arousal, respectively, on the DEAP dataset, while significantly reducing the number of parameters by 93.08% and FLOPs by 94.94%. The CNN-BN model, with fewer parameters and based on the 3D EEG spatiotemporal representation, outperforms state-of-the-art models. With its better parameter efficiency, the proposed CNN-BN model has excellent potential for accelerating CNN-based emotion recognition without losing classification performance.
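As an illustration of the channel-bottleneck idea, the PyTorch sketch below squeezes and restores the channel dimension around a cheaper 3x3x3 convolution. Channel counts and the input shape are assumptions, not the authors' architecture.

```python
# Sketch: a channel-bottleneck block for a 3D CNN, in the spirit of CNN-BN.
import torch
import torch.nn as nn

class ChannelBottleneck3D(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        squeezed = channels // reduction
        self.block = nn.Sequential(
            nn.Conv3d(channels, squeezed, kernel_size=1),            # squeeze channels
            nn.BatchNorm3d(squeezed), nn.ReLU(inplace=True),
            nn.Conv3d(squeezed, squeezed, kernel_size=3, padding=1), # cheap spatial conv
            nn.BatchNorm3d(squeezed), nn.ReLU(inplace=True),
            nn.Conv3d(squeezed, channels, kernel_size=1),            # restore channels
            nn.BatchNorm3d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.block(x))   # residual connection keeps gradients stable

# Example: a batch of 3D spatiotemporal EEG representations (batch, channels, T, H, W).
x = torch.randn(2, 32, 8, 9, 9)
y = ChannelBottleneck3D(32)(x)
print(y.shape)   # torch.Size([2, 32, 8, 9, 9])
```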
Collapse
Affiliation(s)
- Sungkyu Kim
- Department of Software Convergence, Kyung Hee University, Yongin 17104, Korea
| | - Tae-Seong Kim
- Department of Biomedical Engineering, Kyung Hee University, Yongin 17104, Korea
| | - Won Hee Lee
- Department of Software Convergence, Kyung Hee University, Yongin 17104, Korea
- Correspondence: ; Tel.: +82-31-201-3750
| |
Collapse
|
23
|
A Preliminary Investigation on Frequency Dependant Cues for Human Emotions. ACOUSTICS 2022. [DOI: 10.3390/acoustics4020028] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
Recent advances in Human-Computer Interaction and Artificial Intelligence have significantly increased the importance of identifying human emotions from different sensory cues. Hence, understanding the underlying relationships between emotions and sensory cues has become a subject of study in many fields, including Acoustics, Psychology, Psychiatry, Neuroscience, and Biochemistry. This work is a preliminary step towards investigating cues for human emotion at a fundamental level by aiming to establish relationships between tonal frequencies of sound and emotions. To that end, an online perception test was conducted in which participants were asked to rate the perceived emotions corresponding to each tone. The results show that a crossover point for four primary emotions lies in the frequency range of 417–440 Hz, supporting the hypothesis that the frequency range of 432–440 Hz is neutral from a human emotion perspective. It is also observed that the frequency-dependent relationships between the emotion pairs Happy–Sad and Anger–Calm are approximately mirror-symmetric in nature.
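A short Python sketch of how pure tones in the reported 417–440 Hz range could be generated for such a perception test follows; the duration, amplitude, and WAV output are arbitrary assumptions.

```python
# Sketch: generate a pure tone at a frequency inside the reported neutral range.
import numpy as np
from scipy.io import wavfile

fs = 44100                       # sampling rate (Hz)
freq = 432.0                     # tone frequency inside the reported 432-440 Hz range
duration = 2.0                   # seconds
t = np.linspace(0.0, duration, int(fs * duration), endpoint=False)
tone = 0.5 * np.sin(2.0 * np.pi * freq * t)

# Short fade-in/out to avoid audible clicks at the edges.
ramp = int(0.01 * fs)
envelope = np.ones_like(tone)
envelope[:ramp] = np.linspace(0.0, 1.0, ramp)
envelope[-ramp:] = np.linspace(1.0, 0.0, ramp)

wavfile.write("tone_432hz.wav", fs, (tone * envelope * 32767).astype(np.int16))
```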
Collapse
|
24
|
Towards Knowledge-Based Tourism Chinese Question Answering System. MATHEMATICS 2022. [DOI: 10.3390/math10040664] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
With the rapid development of the tourism industry, various travel websites are emerging. A tourism question answering system mines the large amount of information on these travel websites to answer tourism questions, which is critical for providing a competitive travel experience. In this paper, we propose a framework that automatically constructs a tourism knowledge graph from a series of travel websites covering tourist attractions in Zhejiang province, China. Backed by this domain-specific knowledge base, we developed a tourism question answering system that also incorporates the underlying knowledge of a large-scale language model such as BERT. Experiments on real-world datasets demonstrate that the proposed method outperforms the baseline on various metrics. We also examine the effectiveness of each question answering component in detail, including query intent recognition and answer generation.
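The sketch below illustrates the query-intent-recognition step with a generic BERT checkpoint from Hugging Face Transformers; the checkpoint name, intent labels, and example question are stand-ins, not the authors' setup, and the classification head is untrained here.

```python
# Sketch: intent classification for a Chinese tourism question with a BERT backbone.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

labels = ["opening_hours", "ticket_price", "location", "description"]  # hypothetical intents
checkpoint = "bert-base-chinese"                                        # assumed base model

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=len(labels))

question = "西湖的门票多少钱？"   # "How much is a ticket to West Lake?"
inputs = tokenizer(question, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits
print(labels[logits.argmax(dim=-1).item()])   # predicted intent (for illustration only)
```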
Collapse
|