1. Goshvarpour A, Goshvarpour A. Lemniscate of Bernoulli's map quantifiers: innovative measures for EEG emotion recognition. Cogn Neurodyn 2024; 18:1061-1077. [PMID: 38826652] [PMCID: PMC11143135] [DOI: 10.1007/s11571-023-09968-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Received: 06/21/2022] [Revised: 03/18/2023] [Accepted: 04/05/2023] [Indexed: 06/04/2024]
Abstract
Thanks to the advent of affective computing, designing an automatic human emotion recognition system for clinical and non-clinical applications has attracted the attention of many researchers. Currently, multi-channel electroencephalogram (EEG)-based emotion recognition is a fundamental but challenging problem. This study set out to develop a new scheme for automated EEG affect recognition. An innovative nonlinear feature engineering approach was presented based on the Lemniscate of Bernoulli's Map (LBM), which belongs to the family of chaotic maps, in line with the EEG's nonlinear nature. As far as the authors know, LBM has not previously been utilized for biological signal analysis. Next, the map was characterized using several graphical indices. The feature vector was fed to a feature selection algorithm while evaluating the role of the feature vector dimension on emotion recognition rates. Finally, the efficiency of the features for emotion recognition was appraised using two conventional classifiers and validated on the Database for Emotion Analysis using Physiological signals (DEAP) and SJTU Emotion EEG Dataset-IV (SEED-IV) benchmark databases. The experimental results showed a maximum accuracy of 92.16% for DEAP and 90.7% for SEED-IV. Achieving higher recognition rates than state-of-the-art EEG emotion recognition systems suggests that the proposed LBM-based method could have potential both in characterizing bio-signal dynamics and in detecting affect-deficit disorders.
Affiliation(s)
- Atefeh Goshvarpour
- Department of Biomedical Engineering, Faculty of Electrical Engineering, Sahand University of Technology, Tabriz, Iran
- Ateke Goshvarpour
- Department of Biomedical Engineering, Imam Reza International University, Mashhad, Razavi Khorasan, Iran
- Health Technology Research Center, Imam Reza International University, Mashhad, Razavi Khorasan, Iran
2. Thunström AO, Carlsen HK, Ali L, Larson T, Hellström A, Steingrimsson S. Usability Comparison Among Healthy Participants of an Anthropomorphic Digital Human and a Text-Based Chatbot as a Responder to Questions on Mental Health: Randomized Controlled Trial. JMIR Hum Factors 2024; 11:e54581. [PMID: 38683664] [DOI: 10.2196/54581] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 11/15/2023] [Revised: 01/27/2024] [Accepted: 02/18/2024] [Indexed: 05/01/2024]
Abstract
BACKGROUND The use of chatbots in mental health support has increased exponentially in recent years, with studies showing that they may be effective in treating mental health problems. More recently, visual avatars called digital humans have been introduced. Digital humans can use facial expressions as another dimension in human-computer interactions. It is important to study the differences in emotional response and usability preferences between text-based chatbots and digital humans for interacting with mental health services. OBJECTIVE This study aims to explore to what extent a digital human interface and a text-only chatbot interface differed in usability when tested by healthy participants, using BETSY (Behavior, Emotion, Therapy System, and You), which uses 2 distinct interfaces: a digital human with anthropomorphic features and a text-only user interface. We also set out to explore how chatbot-generated conversations on mental health (specific to each interface) affected self-reported feelings and biometrics. METHODS We explored to what extent a digital human with anthropomorphic features differed from a traditional text-only chatbot regarding perception of usability through the System Usability Scale, emotional reactions through electroencephalography, and feelings of closeness. Healthy participants (n=45) were randomized to 2 groups that used either a digital human with anthropomorphic features (n=25) or a text-only chatbot with no such features (n=20). The groups were compared by linear regression analysis and t tests. RESULTS No differences were observed between the text-only and digital human groups regarding demographic features. The mean System Usability Scale score was 75.34 (SD 10.01; range 57-90) for the text-only chatbot versus 64.80 (SD 14.14; range 40-90) for the digital human interface. Both groups scored their respective chatbot interfaces as average or above average in usability. Women were more likely to report feeling annoyed by BETSY. CONCLUSIONS The text-only chatbot was perceived as significantly more user-friendly than the digital human, although there were no significant differences in electroencephalography measurements. Male participants exhibited lower levels of annoyance with both interfaces, contrary to previously reported findings.
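The System Usability Scale means reported above follow the standard Brooke scoring rule: ten items rated 1-5, odd (positively worded) items contribute r − 1, even (negatively worded) items contribute 5 − r, and the 0-40 raw total is scaled by 2.5 to the familiar 0-100 range. A minimal sketch of that computation:

```python
def sus_score(responses):
    """Score a 10-item System Usability Scale questionnaire (items rated 1-5).

    Odd-numbered items are positively worded and contribute (r - 1);
    even-numbered items are negatively worded and contribute (5 - r).
    The raw 0-40 total is scaled by 2.5 to give a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    raw = sum((r - 1) if i % 2 == 1 else (5 - r)
              for i, r in enumerate(responses, start=1))
    return raw * 2.5
```

A fully neutral respondent (all items rated 3) scores exactly 50, which is why SUS values in the 60-75 range, as in this trial, are read as average-to-good usability.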
Affiliation(s)
- Almira Osmanovic Thunström
- Region Västra Götaland, Psychiatric Department, Sahlgrenska University Hospital, Gothenburg, Sweden
- Section of Psychiatry and Neurochemistry, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Hanne Krage Carlsen
- Section of Psychiatry and Neurochemistry, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Region Västra Götaland, Centre of Registers, Gothenburg, Sweden
- Lilas Ali
- Region Västra Götaland, Psychiatric Department, Sahlgrenska University Hospital, Gothenburg, Sweden
- Institute of Health Care Sciences, Centre for Person-Centred Care, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Centre for Person-Centred Care, University of Gothenburg, Gothenburg, Sweden
- Tomas Larson
- Region Västra Götaland, Psychiatric Department, Sahlgrenska University Hospital, Gothenburg, Sweden
- Section of Psychiatry and Neurochemistry, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Andreas Hellström
- Department of Technology Management and Economics, Chalmers University of Technology, Gothenburg, Sweden
- Steinn Steingrimsson
- Region Västra Götaland, Psychiatric Department, Sahlgrenska University Hospital, Gothenburg, Sweden
- Section of Psychiatry and Neurochemistry, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
3. Ju X, Li M, Tian W, Hu D. EEG-based emotion recognition using a temporal-difference minimizing neural network. Cogn Neurodyn 2024; 18:405-416. [PMID: 38699602] [PMCID: PMC11061074] [DOI: 10.1007/s11571-023-10004-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 02/10/2023] [Revised: 07/25/2023] [Accepted: 08/21/2023] [Indexed: 05/05/2024]
Abstract
Electroencephalogram (EEG) emotion recognition plays an important role in human-computer interaction. An increasing number of algorithms for emotion recognition have been proposed recently. However, it is still challenging to make efficient use of emotional activity knowledge. In this paper, based on prior knowledge that emotion varies slowly across time, we propose a temporal-difference minimizing neural network (TDMNN) for EEG emotion recognition. We use maximum mean discrepancy (MMD) technology to evaluate the difference in EEG features across time and minimize the difference by a multibranch convolutional recurrent network. State-of-the-art performances are achieved using the proposed method on the SEED, SEED-IV, DEAP and DREAMER datasets, demonstrating the effectiveness of including prior knowledge in EEG emotion recognition.
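The core quantity in the approach above, the maximum mean discrepancy between EEG features from adjacent time windows, can be illustrated with a plain-NumPy estimator. This is an illustrative sketch, not the authors' TDMNN code; the RBF kernel choice and the `gamma` bandwidth are assumptions:

```python
import numpy as np

def mmd2_rbf(X, Y, gamma=1.0):
    """Biased estimate of squared MMD between samples X (n, d) and Y (m, d)
    using an RBF kernel: mean k(X,X) + mean k(Y,Y) - 2 mean k(X,Y)."""
    def gram(A, B):
        # pairwise squared Euclidean distances -> RBF kernel matrix
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * d2)
    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()
```

Identical feature distributions give an MMD of zero; a shift in distribution gives a positive value, which is the quantity a temporal-difference-minimizing network would drive toward zero across consecutive windows of the same trial.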
Affiliation(s)
- Xiangyu Ju
- College of Intelligence Science and Technology, National University of Defense Technology, Changsha, China
- Ming Li
- College of Intelligence Science and Technology, National University of Defense Technology, Changsha, China
- Wenli Tian
- College of Intelligence Science and Technology, National University of Defense Technology, Changsha, China
- Dewen Hu
- College of Intelligence Science and Technology, National University of Defense Technology, Changsha, China
4. Chai X, Cao T, He Q, Wang N, Zhang X, Shan X, Lv Z, Tu W, Yang Y, Zhao J. Brain-computer interface digital prescription for neurological disorders. CNS Neurosci Ther 2024; 30:e14615. [PMID: 38358054] [PMCID: PMC10867871] [DOI: 10.1111/cns.14615] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 07/17/2023] [Revised: 12/13/2023] [Accepted: 01/09/2024] [Indexed: 02/16/2024]
Abstract
Neurological and psychiatric diseases can lead to motor, language, and emotional disorders as well as cognitive, hearing, or visual impairment. By decoding the intention of the brain in real time, a brain-computer interface (BCI) can assist in the diagnosis of diseases and can compensate for damaged function by interacting directly with the environment. It can also provide output signals in various forms, such as actual motion or tactile or visual feedback, to assist in rehabilitation training, and further intervention in brain disorders can be achieved by closed-loop neural modulation. In this article, we envision a future BCI digital prescription system for patients with different functional disorders and discuss the key contents of the prescription: the brain signals, the coding and decoding protocols, the interaction paradigms, and the assistive technology. We then discuss the details that need to be specially included in the digital prescription for different intervention technologies. The third part summarizes previous examples of intervention, focusing on how to select appropriate interaction paradigms for patients with different functional impairments. In the last part, we discuss the indicators and influencing factors in evaluating the therapeutic effect of BCI as an intervention.
Affiliation(s)
- Xiaoke Chai
- Brain Computer Interface Transitional Research Center, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- China National Center for Neurological Disorders, Beijing, China
- Translation Laboratory of Clinical Medicine, Chinese Institute for Brain Research & Beijing Tiantan Hospital, Beijing, China
- Tianqing Cao
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- China National Clinical Research Center for Neurological Diseases, Beijing, China
- Qiheng He
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- China National Clinical Research Center for Neurological Diseases, Beijing, China
- Nan Wang
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- China National Clinical Research Center for Neurological Diseases, Beijing, China
- Xuemin Zhang
- National Research Center for Rehabilitation Technical Aids, Beijing, China
- Xinying Shan
- National Research Center for Rehabilitation Technical Aids, Beijing, China
- Zeping Lv
- National Research Center for Rehabilitation Technical Aids, Beijing, China
- Wenjun Tu
- Translation Laboratory of Clinical Medicine, Chinese Institute for Brain Research & Beijing Tiantan Hospital, Beijing, China
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Yi Yang
- Brain Computer Interface Transitional Research Center, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- China National Center for Neurological Disorders, Beijing, China
- Translation Laboratory of Clinical Medicine, Chinese Institute for Brain Research & Beijing Tiantan Hospital, Beijing, China
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- China National Clinical Research Center for Neurological Diseases, Beijing, China
- National Research Center for Rehabilitation Technical Aids, Beijing, China
- Beijing Institute of Brain Disorders, Beijing, China
- Chinese Institute for Brain Research, Beijing, China
- Jizong Zhao
- Brain Computer Interface Transitional Research Center, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- China National Center for Neurological Disorders, Beijing, China
- Translation Laboratory of Clinical Medicine, Chinese Institute for Brain Research & Beijing Tiantan Hospital, Beijing, China
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- China National Clinical Research Center for Neurological Diseases, Beijing, China
5. Claret AF, Casali KR, Cunha TS, Moraes MC. Automatic Classification of Emotions Based on Cardiac Signals: A Systematic Literature Review. Ann Biomed Eng 2023; 51:2393-2414. [PMID: 37543539] [DOI: 10.1007/s10439-023-03341-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 11/09/2022] [Accepted: 07/28/2023] [Indexed: 08/07/2023]
Abstract
Emotions play a pivotal role in human cognition, exerting influence across diverse domains of individuals' lives. The widespread adoption of artificial intelligence and machine learning has spurred interest in systems capable of automatically recognizing and classifying emotions and affective states. However, the accurate identification of human emotions remains a formidable challenge, as they are influenced by various factors and accompanied by physiological changes. Numerous solutions have emerged to enable emotion recognition, leveraging the characterization of biological signals, including cardiac signals acquired from low-cost and wearable sensors. The objective of this work was to comprehensively investigate current trends in the field by conducting a Systematic Literature Review (SLR) focused specifically on the detection, recognition, and classification of emotions based on cardiac signals, to gain insights into the prevailing techniques for signal acquisition, the extracted features, the elicitation process, and the classification methods employed in these studies. An SLR was conducted using four research databases, and articles were assessed against the proposed research questions. Twenty-seven articles met the selection criteria and were assessed for the feasibility of using cardiac signals, acquired from low-cost and wearable devices, for emotion recognition. Several emotional elicitation methods were found in the literature, as were the algorithms applied for automatic classification and the key challenges associated with emotion recognition relying solely on cardiac signals. This study extends the current body of knowledge and enables future research by providing insights into suitable techniques for designing automatic emotion recognition applications. It emphasizes the importance of utilizing low-cost, wearable, and unobtrusive devices to acquire cardiac signals for accurate and accessible emotion recognition.
Affiliation(s)
- Anderson Faria Claret
- Institute of Science and Technology, Federal University of São Paulo, São José dos Campos, Brazil
- Karina Rabello Casali
- Institute of Science and Technology, Federal University of São Paulo, São José dos Campos, Brazil
- Tatiana Sousa Cunha
- Institute of Science and Technology, Federal University of São Paulo, São José dos Campos, Brazil
- Matheus Cardoso Moraes
- Institute of Science and Technology, Federal University of São Paulo, São José dos Campos, Brazil
6. Yousefi MR, Dehghani A, Taghaavifar H. Enhancing the accuracy of electroencephalogram-based emotion recognition through Long Short-Term Memory recurrent deep neural networks. Front Hum Neurosci 2023; 17:1174104. [PMID: 37881690] [PMCID: PMC10597690] [DOI: 10.3389/fnhum.2023.1174104] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Received: 02/25/2023] [Accepted: 09/25/2023] [Indexed: 10/27/2023]
Abstract
Introduction Emotions play a critical role in human communication, exerting a significant influence on brain function and behavior. One effective method of observing and analyzing these emotions is through electroencephalography (EEG) signals. Although numerous studies have been dedicated to emotion recognition (ER) using EEG signals, achieving improved recognition accuracy remains a challenging task. To address this challenge, this paper presents a deep-learning approach for ER using EEG signals. Background ER is a dynamic field of research with diverse practical applications in healthcare, human-computer interaction, and affective computing. In ER studies, EEG signals are frequently employed as they offer a non-invasive and cost-effective means of measuring brain activity. Nevertheless, accurately identifying emotions from EEG signals poses a significant challenge due to the intricate and non-linear nature of these signals. Methods The present study proposes a novel approach for ER that encompasses multiple stages, including feature extraction, feature selection (FS) employing clustering, and classification using Dual-LSTM. To conduct the experiments, the DEAP dataset was employed, wherein a clustering technique was applied to Hurst's view and statistical features during the FS phase. Ultimately, Dual-LSTM was employed for accurate ER. Results The proposed method achieved a remarkable accuracy of 97.5% in classifying emotions across arousal, valence, liking/disliking, dominance, and familiarity. This high level of accuracy serves as strong evidence for the effectiveness of the deep-learning approach to ER utilizing EEG signals. Conclusion The deep-learning approach proposed in this paper has shown promising results in emotion recognition using EEG signals. This method can be useful in various applications, such as developing more effective therapies for individuals with mood disorders or improving human-computer interaction by allowing machines to respond more intelligently to users' emotional states. However, further research is needed to validate the proposed method on larger datasets and to investigate its applicability to real-world scenarios.
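The feature-extraction stage above mentions Hurst-based features. A common way to estimate the Hurst exponent of an EEG segment is rescaled-range (R/S) analysis; the sketch below is a generic illustration of that estimator, not the paper's implementation, and the window-doubling scheme is an assumption:

```python
import numpy as np

def hurst_rs(x, min_window=8):
    """Estimate the Hurst exponent of a 1-D series by rescaled-range
    (R/S) analysis: regress log(mean R/S) on log(window size)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs_means = [], []
    w = min_window
    while w <= n // 2:
        ratios = []
        for start in range(0, n - w + 1, w):
            seg = x[start:start + w]
            dev = np.cumsum(seg - seg.mean())   # cumulative deviation from the mean
            r = dev.max() - dev.min()           # range of the deviation profile
            s = seg.std()                       # scale of the segment
            if s > 0:
                ratios.append(r / s)
        if ratios:
            sizes.append(w)
            rs_means.append(np.mean(ratios))
        w *= 2                                  # double the window each pass
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_means), 1)
    return slope
```

Values near 0.5 indicate an uncorrelated series, values toward 1 a persistent (trend-following) one, which is why such exponents are used as compact descriptors of EEG dynamics.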
Affiliation(s)
- Mohammad Reza Yousefi
- Department of Electrical Engineering, Najafabad Branch, Islamic Azad University, Najafabad, Iran
- Digital Processing and Machine Vision Research Center, Najafabad Branch, Islamic Azad University, Najafabad, Iran
- Amin Dehghani
- Department of Electrical Engineering, K. N. Toosi University of Technology, Tehran, Iran
- Hamid Taghaavifar
- Digital Processing and Machine Vision Research Center, Najafabad Branch, Islamic Azad University, Najafabad, Iran
7. Xia Y, Liu Y. EEG-Based Emotion Recognition with Consideration of Individual Difference. Sensors (Basel) 2023; 23:7749. [PMID: 37765808] [PMCID: PMC10535213] [DOI: 10.3390/s23187749] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/07/2023] [Revised: 09/05/2023] [Accepted: 09/06/2023] [Indexed: 09/29/2023]
Abstract
Electroencephalograms (EEGs) are often used for emotion recognition through trained EEG-to-emotion models. The training samples are EEG signals recorded while participants receive external emotional induction, labeled with the corresponding emotions. Individual differences, such as emotion intensity and response time, exist under the same external emotional inductions. These differences can reduce the accuracy of emotion classification models in practical applications. The brain-based emotion recognition model proposed in this paper is able to sufficiently account for these individual differences. The proposed model comprises an emotion classification module and an individual difference module (IDM). The emotion classification module captures the spatial and temporal features of the EEG data, while the IDM introduces personalized adjustments to specific emotional features by accounting for participant-specific variations as a form of interference. This approach aims to enhance the classification performance of EEG-based emotion recognition for diverse participants. The results of our comparative experiments indicate that the proposed method obtains a maximum accuracy of 96.43% for binary classification on DEAP data. Furthermore, it performs better in scenarios with significant individual differences, where it reaches a maximum accuracy of 98.92%.
Affiliation(s)
- Yuxiao Xia
- College of Automation, Qingdao University, Qingdao 266071, China
- Yinhua Liu
- Institute for Future, Qingdao University, Qingdao 266071, China
8. Eyvazpour R, Navi FFT, Shakeri E, Nikzad B, Heysieattalab S. Machine learning-based classifying of risk-takers and risk-aversive individuals using resting-state EEG data: A pilot feasibility study. Brain Behav 2023; 13:e3139. [PMID: 37366037] [PMCID: PMC10498077] [DOI: 10.1002/brb3.3139] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 02/15/2023] [Revised: 05/29/2023] [Accepted: 06/15/2023] [Indexed: 06/28/2023]
Abstract
BACKGROUND Decision-making is vital in interpersonal interactions and in a country's economic and political conditions. People, especially managers, have to make decisions in different risky situations. There has been growing interest in identifying managers' personality traits (i.e., risk-taking or risk-averse) in recent years. Although there are findings linking decision-making and brain activity, the implementation of an intelligent brain-based technique to predict risk-averse and risk-taking managers is still in doubt. METHODS This study proposes an electroencephalogram (EEG)-based intelligent system to distinguish risk-taking managers from risk-averse ones by recording the EEG signals of 30 managers. In particular, the wavelet transform, a time-frequency domain analysis method, was used on resting-state EEG data to extract statistical features. Then, a two-step statistical wrapper algorithm was used to select the appropriate features. The support vector machine classifier, a supervised learning method, was used to classify the two groups of managers using the chosen features. RESULTS Intersubject predictive performance could classify the two groups of managers with 74.42% accuracy, 76.16% sensitivity, 72.32% specificity, and 75% F1-measure, indicating that machine learning (ML) models can distinguish between risk-taking and risk-averse managers using the features extracted from the alpha frequency band in a 10 s analysis window. CONCLUSIONS The findings of this study demonstrate the potential of intelligent (ML-based) systems for distinguishing between risk-taking and risk-averse managers using biological signals.
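The pipeline described above (wavelet transform on resting-state EEG, then statistical features per band) can be illustrated with a small Haar-wavelet feature extractor. The Haar basis, the decomposition depth, and the particular statistics below are assumptions chosen for illustration, not the study's exact settings:

```python
import numpy as np

def haar_dwt(signal, levels=4):
    """Orthonormal Haar wavelet decomposition: returns the detail
    coefficients of each level plus the final approximation."""
    a = np.asarray(signal, dtype=float)
    details = []
    for _ in range(levels):
        if len(a) < 2:
            break
        a = a[: (len(a) // 2) * 2]              # drop an odd trailing sample
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)  # detail (high-pass) band
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)  # approximation (low-pass)
        details.append(d)
    return details, a

def wavelet_stat_features(signal, levels=4):
    """Per-band statistical features: mean absolute value, standard
    deviation, and energy, concatenated into one feature vector."""
    details, approx = haar_dwt(signal, levels)
    feats = []
    for band in details + [approx]:
        feats += [np.mean(np.abs(band)), np.std(band), np.sum(band ** 2)]
    return np.array(feats)
```

Because the transform is orthonormal, the band energies sum to the signal energy, so the per-band energy features partition the EEG power across scales before a feature selector and SVM are applied.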
Affiliation(s)
- Reza Eyvazpour
- Department of Biomedical Engineering, School of Electrical Engineering, Iran University of Science and Technology (IUST), Tehran, Iran
- Elmira Shakeri
- Department of Business Management, Faculty of Management and Accounting, Allameh Tabataba'i University, Tehran, Iran
- Behzad Nikzad
- Department of Cognitive Neuroscience, University of Tabriz, Tabriz, Iran
- Neurobioscience Division, Research Center of Bioscience and Biotechnology, University of Tabriz, Tabriz, Iran
Collapse
|
9
|
Su H, Qi W, Chen J, Yang C, Sandoval J, Laribi MA. Recent advancements in multimodal human-robot interaction. Front Neurorobot 2023; 17:1084000. [PMID: 37250671 PMCID: PMC10210148 DOI: 10.3389/fnbot.2023.1084000] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2022] [Accepted: 04/20/2023] [Indexed: 05/31/2023] Open
Abstract
Robotics has advanced significantly over the years, and human-robot interaction (HRI) now plays an important role in delivering the best user experience, cutting down on laborious tasks, and raising public acceptance of robots. New HRI approaches are necessary to promote the evolution of robots, with a more natural and flexible interaction manner clearly being the most crucial. As a newly emerging approach to HRI, multimodal HRI is a method for individuals to communicate with a robot using various modalities, including voice, image, text, eye movement, and touch, as well as bio-signals like EEG and ECG. It is a broad field closely related to cognitive science, ergonomics, multimedia technology, and virtual reality, with numerous applications springing up each year. However, little research has been done to summarize the current developments and future trends of HRI. To this end, this paper systematically reviews the state of the art of multimodal HRI and its applications by summarizing the latest research articles relevant to this field. Moreover, research developments in terms of the input signal and the output signal are also covered in this manuscript.
Affiliation(s)
- Hang Su
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Wen Qi
- School of Future Technology, South China University of Technology, Guangzhou, China
- Jiahao Chen
- State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Chenguang Yang
- Bristol Robotics Laboratory, University of the West of England, Bristol, United Kingdom
- Juan Sandoval
- Department of GMSC, Pprime Institute, CNRS, ENSMA, University of Poitiers, Poitiers, France
- Med Amine Laribi
- Department of GMSC, Pprime Institute, CNRS, ENSMA, University of Poitiers, Poitiers, France
10. Yadav H, Maini S. Electroencephalogram based brain-computer interface: Applications, challenges, and opportunities. Multimed Tools Appl 2023:1-45. [PMID: 37362726] [PMCID: PMC10157593] [DOI: 10.1007/s11042-023-15653-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 10/25/2021] [Revised: 07/17/2022] [Accepted: 04/22/2023] [Indexed: 06/28/2023]
Abstract
The brain-computer interface (BCI) is an exciting and emerging research area for researchers and scientists. It is a combination of software and hardware that allows a device to be operated mentally. This review emphasizes the significant stages in the BCI domain, current problems, and state-of-the-art findings. This article also covers how current results can contribute to new knowledge about BCI, an overview of BCI from its early developments to recent advancements, BCI applications, challenges, and future directions. The authors point to unresolved issues and express how BCI is valuable for analyzing the human brain. Humankind's dependence on machines has led to a new future in which BCI can play an essential role in improving the modern world.
Affiliation(s)
- Hitesh Yadav
- Department of Electrical and Instrumentation Engineering, Sant Longowal Institute of Engineering & Technology, Longowal, Punjab, India
- Surita Maini
- Department of Electrical and Instrumentation Engineering, Sant Longowal Institute of Engineering & Technology, Longowal, Punjab, India
11. Goshvarpour A, Goshvarpour A. Emotion Recognition Using a Novel Granger Causality Quantifier and Combined Electrodes of EEG. Brain Sci 2023; 13:759. [PMID: 37239231] [DOI: 10.3390/brainsci13050759] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/12/2023] [Revised: 04/30/2023] [Accepted: 05/02/2023] [Indexed: 05/28/2023]
Abstract
Electroencephalogram (EEG) connectivity patterns can reflect neural correlates of emotion. However, the necessity of evaluating the bulky data produced by multi-channel measurements increases the computational cost of the EEG network. To date, several approaches have been presented to pick the optimal cerebral channels, mainly depending on the available data. Consequently, reducing the number of channels has increased the risk of low data stability and reliability. Alternatively, this study suggests an electrode combination approach in which the brain is divided into six areas. After extracting EEG frequency bands, an innovative Granger causality-based measure was introduced to quantify brain connectivity patterns. The feature was subsequently subjected to a classification module to recognize valence-arousal dimensional emotions. The Database for Emotion Analysis Using Physiological Signals (DEAP) was used as a benchmark database to evaluate the scheme. The experimental results revealed a maximum accuracy of 89.55%. Additionally, EEG-based connectivity in the beta-frequency band was able to effectively classify dimensional emotions. In sum, combined EEG electrodes can efficiently replicate 32-channel EEG information.
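For context, the classical pairwise Granger causality on which the paper's innovative quantifier builds compares the residual variance of an autoregressive model of one channel with and without lagged values of another channel. The sketch below is the generic textbook estimator, not the authors' measure, and the model order is an arbitrary assumption:

```python
import numpy as np

def granger_causality(x, y, order=2):
    """Classic Granger causality from y to x: the log ratio of residual
    variances of an AR(order) model of x alone ("restricted") versus one
    that also includes lagged values of y ("full")."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    # lag matrices: row t holds [x[t-1], ..., x[t-order]] (likewise for y)
    Xl = np.array([x[t - order:t][::-1] for t in range(order, n)])
    Yl = np.array([y[t - order:t][::-1] for t in range(order, n)])
    target = x[order:]

    def resid_var(lags):
        design = np.column_stack([np.ones(len(lags)), lags])
        beta, *_ = np.linalg.lstsq(design, target, rcond=None)
        return np.var(target - design @ beta)

    v_restricted = resid_var(Xl)
    v_full = resid_var(np.hstack([Xl, Yl]))
    return np.log(v_restricted / max(v_full, 1e-300))
```

A value near zero means the second channel adds no predictive information; a clearly positive value means its past improves prediction of the first channel, which is the directed-connectivity notion exploited between the combined electrode areas.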
Affiliation(s)
- Atefeh Goshvarpour
- Department of Biomedical Engineering, Faculty of Electrical Engineering, Sahand University of Technology, Tabriz 51335-1996, Iran
- Ateke Goshvarpour
- Department of Biomedical Engineering, Imam Reza International University, Mashhad 91388-3186, Iran
12. Sajno E, Bartolotta S, Tuena C, Cipresso P, Pedroli E, Riva G. Machine learning in biosignals processing for mental health: A narrative review. Front Psychol 2023; 13:1066317. [PMID: 36710855] [PMCID: PMC9880193] [DOI: 10.3389/fpsyg.2022.1066317] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 10/10/2022] [Accepted: 12/16/2022] [Indexed: 01/15/2023]
Abstract
Machine Learning (ML) offers unique and powerful tools for mental health practitioners to improve evidence-based psychological interventions and diagnoses. Indeed, by detecting and analyzing different biosignals, it is possible to differentiate between typical and atypical functioning and to achieve a high level of personalization across all phases of mental health care. This narrative review aims to present a comprehensive overview of how ML algorithms can be used to infer psychological states from biosignals. After that, key examples of how they can be used in mental health clinical activity and research are illustrated. A description of the biosignals typically used to infer cognitive and emotional correlates (e.g., EEG and ECG) is provided, alongside their application in diagnostic precision medicine, affective computing, and brain-computer interfaces. The contents then focus on challenges and research questions related to ML applied to mental health and biosignals analysis, pointing out the advantages and possible drawbacks connected to the widespread application of AI in the medical and mental health fields. The integration of mental health research and ML data science will facilitate the transition to personalized and effective medicine; to do so, it is important that researchers from psychological and medical disciplines, health care professionals, and data scientists all share a common background and vision of the current research.
Affiliation(s)
- Elena Sajno
- Humane Technology Lab, Università Cattolica del Sacro Cuore, Milan, Italy; Department of Computer Science, University of Pisa, Pisa, Italy. *Correspondence: Elena Sajno ✉
- Sabrina Bartolotta
- ExperienceLab, Università Cattolica del Sacro Cuore, Milan, Italy; Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Cosimo Tuena
- Applied Technology for Neuro-Psychology Lab, IRCCS Istituto Auxologico Italiano, Milan, Italy
- Pietro Cipresso
- Applied Technology for Neuro-Psychology Lab, IRCCS Istituto Auxologico Italiano, Milan, Italy; Department of Psychology, University of Turin, Turin, Italy
- Elisa Pedroli
- Department of Psychology, eCampus University, Novedrate, Italy
- Giuseppe Riva
- Humane Technology Lab, Università Cattolica del Sacro Cuore, Milan, Italy; Applied Technology for Neuro-Psychology Lab, IRCCS Istituto Auxologico Italiano, Milan, Italy
|
13
|
Long-Term Exercise Assistance: Group and One-on-One Interactions between a Social Robot and Seniors. ROBOTICS 2023. [DOI: 10.3390/robotics12010009] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/11/2023] Open
Abstract
For older adults, regular exercises can provide both physical and mental benefits, increase their independence, and reduce the risks of diseases associated with aging. However, only a small portion of older adults regularly engage in physical activity. Therefore, it is important to promote exercise among older adults to help maintain overall health. In this paper, we present the first exploratory long-term human–robot interaction (HRI) study conducted at a local long-term care facility to investigate the benefits of one-on-one and group exercise interactions with an autonomous socially assistive robot and older adults. To provide targeted facilitation, our robot utilizes a unique emotion model that can adapt its assistive behaviors to users’ affect and track their progress towards exercise goals through repeated sessions using the Goal Attainment Scale (GAS), while also monitoring heart rate to prevent overexertion. Results of the study show that users had positive valence and high engagement towards the robot and were able to maintain their exercise performance throughout the study. Questionnaire results showed high robot acceptance for both types of interactions. However, users in the one-on-one sessions perceived the robot as more sociable and intelligent, and had more positive perception of the robot’s appearance and movements.
|
14
|
Identifying Thematics in a Brain-Computer Interface Research. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2023; 2023:2793211. [PMID: 36643889 PMCID: PMC9833923 DOI: 10.1155/2023/2793211] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/04/2022] [Revised: 12/21/2022] [Accepted: 12/24/2022] [Indexed: 01/05/2023]
Abstract
This umbrella review is motivated by the need to understand the shift in research themes on brain-computer interfacing (BCI); it determined that a shift has occurred away from themes that focus on medical advancement and system development towards applications that include education, marketing, gaming, safety, and security. The background of this review examined aspects of BCI categorisation, neuroimaging methods, brain control signal classification, applications, and ethics. The specific area of BCI software and hardware development was not examined. A search using One Search was undertaken and 92 BCI reviews were selected for inclusion. Publication demographics indicate the average number of authors on the review papers considered was 4.2 ± 1.8. The results also indicate a rapid increase in the number of BCI reviews from 2003, with only three reviews before that period: two in 1972 and one in 1996. While BCI authors were predominantly Euro-American in early reviews, this shifted to a more global authorship, which China dominated by 2020-2022. The review revealed six disciplines associated with BCI systems, grouped into two domains: the first covering life sciences and biomedicine (n = 42), neurosciences and neurology (n = 35), and rehabilitation (n = 20); the second centred on the theme of functionality: computer science (n = 20), engineering (n = 28), and technology (n = 38). There was a thematic shift from understanding brain function and modes of interfacing BCI systems to more applied research; novel areas of research identified surround artificial intelligence, including machine learning, pre-processing, and deep learning. As BCI systems become more invasive in the lives of "normal" individuals, it is expected that there will be a refocus and thematic shift towards increased research into ethical issues and the need for legal oversight in BCI applications.
|
15
|
Identifying Complex Emotions in Alexithymia Affected Adolescents Using Machine Learning Techniques. Diagnostics (Basel) 2022; 12:diagnostics12123188. [PMID: 36553197 PMCID: PMC9777297 DOI: 10.3390/diagnostics12123188] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2022] [Revised: 10/30/2022] [Accepted: 11/10/2022] [Indexed: 12/24/2022] Open
Abstract
Much scientific research focuses on enhancing automated systems to identify emotions, and thus relies on brain signals. This study focuses on how brain wave signals can be used to classify many emotional states of humans. Electroencephalography (EEG)-based affective computing predominantly focuses on emotion classification based on facial expression, speech recognition, and text-based recognition through multimodal stimuli. The proposed work aims to implement a methodology to identify and codify discrete complex emotions such as pleasure and grief in a rare psychological disorder known as alexithymia. This type of disorder is highly elicited in unstable, fragile countries such as South Sudan, Lebanon, and Mauritius. These countries are continuously affected by civil wars and disasters and are politically unstable, leading to very poor economies and education systems. This study focuses on an adolescent age group dataset built by recording physiological data when emotion is exhibited in a multimodal virtual environment. We conducted time-frequency analysis and computed amplitude time-series correlates, including frontal alpha asymmetry, using a complex Morlet wavelet. For data visualization, we used the UMAP technique to obtain a clear, distinct view of emotions. We performed 5-fold cross-validation along with 1 s window subjective classification on the dataset. We opted for traditional machine learning techniques for complex emotion labeling.
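The frontal alpha measure this abstract mentions can be illustrated with a minimal sketch. This is not the authors' pipeline (which uses a complex Morlet wavelet); it is a plain FFT-periodogram version of the standard frontal alpha asymmetry index, and the log-difference form, the 8-13 Hz band edges, and the left/right channel roles are common conventions assumed here, not details taken from the paper.

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Mean power of x in the [lo, hi] Hz band from a plain FFT periodogram."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    return psd[(freqs >= lo) & (freqs <= hi)].mean()

def frontal_alpha_asymmetry(left, right, fs, band=(8.0, 13.0)):
    """ln(right alpha power) - ln(left alpha power); sign conventions vary by study."""
    return np.log(band_power(right, fs, *band)) - np.log(band_power(left, fs, *band))

# Synthetic demo: the right channel carries a stronger 10 Hz alpha rhythm.
fs = 256
t = np.arange(4 * fs) / fs
rng = np.random.default_rng(0)
left = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
right = 3 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
faa = frontal_alpha_asymmetry(left, right, fs)
```

A feature like `faa`, computed per trial, is the kind of scalar correlate that could then feed the visualization and classification stages the abstract describes.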
|
16
|
Emsawas T, Morita T, Kimura T, Fukui KI, Numao M. Multi-Kernel Temporal and Spatial Convolution for EEG-Based Emotion Classification. SENSORS (BASEL, SWITZERLAND) 2022; 22:8250. [PMID: 36365948 PMCID: PMC9654218 DOI: 10.3390/s22218250] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/29/2022] [Revised: 10/22/2022] [Accepted: 10/23/2022] [Indexed: 06/16/2023]
Abstract
Deep learning using an end-to-end convolutional neural network (ConvNet) has been applied to several electroencephalography (EEG)-based brain-computer interface tasks to extract feature maps and classify the target output. However, EEG analysis remains challenging since it requires consideration of various architectural design components that influence the representational ability of the extracted features. This study proposes an EEG-based emotion classification model called the multi-kernel temporal and spatial convolution network (MultiT-S ConvNet). A multi-scale kernel is used in the model to learn various time resolutions, and separable convolutions are applied to find related spatial patterns. In addition, we enhanced both the temporal and spatial filters with a lightweight gating mechanism. To validate the performance and classification accuracy of MultiT-S ConvNet, we conduct subject-dependent and subject-independent experiments on EEG-based emotion datasets: DEAP and SEED. Compared with existing methods, MultiT-S ConvNet achieves higher accuracy with fewer trainable parameters. Moreover, the proposed multi-scale module in temporal filtering enables extracting a wide range of EEG representations, covering short- to long-wavelength components. This module could be further implemented in any EEG-based convolutional network, and its ability potentially improves the model's learning capacity.
Affiliation(s)
- Taweesak Emsawas
- Graduate School of Information Science and Technology, Osaka University, Osaka 565-0871, Japan
- Takashi Morita
- The Institute of Scientific and Industrial Research (ISIR), Osaka University, Osaka 567-0047, Japan
- Tsukasa Kimura
- The Institute of Scientific and Industrial Research (ISIR), Osaka University, Osaka 567-0047, Japan
- Ken-ichi Fukui
- The Institute of Scientific and Industrial Research (ISIR), Osaka University, Osaka 567-0047, Japan
- Masayuki Numao
- The Institute of Scientific and Industrial Research (ISIR), Osaka University, Osaka 567-0047, Japan
|
17
|
Fu Z, Zhang B, He X, Li Y, Wang H, Huang J. Emotion recognition based on multi-modal physiological signals and transfer learning. Front Neurosci 2022; 16:1000716. [PMID: 36161186 PMCID: PMC9493208 DOI: 10.3389/fnins.2022.1000716] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2022] [Accepted: 08/18/2022] [Indexed: 11/13/2022] Open
Abstract
In emotion recognition based on physiological signals, collecting enough labeled data from a single subject for training is time-consuming and expensive. Individual differences in physiological signals and their inherent noise significantly affect emotion recognition accuracy. To overcome inter-subject differences in physiological signals, we propose a joint probability domain adaptation with the bi-projection matrix algorithm (JPDA-BPM). The bi-projection matrix method fully considers the different feature distributions of the source and target domains. It can better project the source and target domains into the feature space, thereby increasing the algorithm's performance. We propose a substructure-based joint probability domain adaptation algorithm (SSJPDA) to overcome the noise effect in physiological signals. This method avoids the shortcomings that domain-level matching is too coarse and sample-level matching is susceptible to noise. To verify the effectiveness of the proposed transfer learning algorithm for emotion recognition based on physiological signals, we evaluated it on the Database for Emotion Analysis using Physiological signals (DEAP dataset). The experimental results show that the average recognition accuracy of the proposed SSJPDA-BPM algorithm on the multimodal fusion physiological data from the DEAP dataset is 63.6% and 64.4% for valence and arousal, respectively. Compared with joint probability domain adaptation (JPDA), valence and arousal recognition accuracy increased by 17.6% and 13.4%, respectively.
|
18
|
Li JW, Chen RJ, Barma S, Chen F, Pun SH, Mak PU, Wang LJ, Zeng XX, Ren JC, Zhao HM. An Approach to Emotion Recognition Using Brain Rhythm Sequencing and Asymmetric Features. Cognit Comput 2022; 14:2260-2273. [PMID: 36043053 PMCID: PMC9415250 DOI: 10.1007/s12559-022-10053-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2021] [Accepted: 08/14/2022] [Indexed: 11/26/2022]
Abstract
Emotion can be affected during self-isolation, and to avoid severe mood swings, emotional regulation is meaningful. To achieve this, efficiently recognizing emotion is a vital step, which can be realized through electroencephalography signals. Previously, inspired by knowledge of sequencing in bioinformatics, a method termed brain rhythm sequencing, which analyzes electroencephalography as a sequence consisting of the dominant rhythm, was proposed for seizure detection. In this work, with the help of similarity measure methods, asymmetric features are extracted from the sequences generated from different channel data. After evaluating all asymmetric features for emotion recognition, the optimal feature that yields remarkable accuracy is identified. Therefore, the classification task can be accomplished with a small amount of channel data. From a music emotion recognition experiment and the public DEAP dataset, the classification accuracies of various test sets are approximately 80–85% when employing an optimal feature extracted from one pair of symmetrical channels. Such performances are impressive when using fewer resources is a concern. Further investigation revealed that emotion recognition shows strongly individual characteristics, so an appropriate solution is to include subject-dependent properties. Compared to existing works, this method benefits the design of a portable emotion-aware device used during self-isolation, as fewer scalp sensors are needed. Hence, it would provide a novel way to realize emotional applications in the future.
Affiliation(s)
- Jia Wen Li
- School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou, 510665 China
- Guangxi Key Lab of Multi-source Information Mining & Security, Guangxi Normal University, Guilin, 541004 China
- Rong Jun Chen
- School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou, 510665 China
- Shovan Barma
- Department of Electronics and Communication Engineering, Indian Institute of Information Technology Guwahati, Guwahati, 781015 India
- Fei Chen
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, 518055 China
- Sio Hang Pun
- State Key Laboratory of Analog and Mixed-Signal VLSI, University of Macau, Macau, 999078 China
- Peng Un Mak
- Department of Electrical and Computer Engineering, University of Macau, Macau, 999078 China
- Lei Jun Wang
- School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou, 510665 China
- Xian Xian Zeng
- School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou, 510665 China
- Jin Chang Ren
- School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou, 510665 China
- National Subsea Centre, Robert Gordon University, Aberdeen, AB21 0BH UK
- Hui Min Zhao
- School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou, 510665 China
|
19
|
Torres-Cardona HF, Aguirre-Grisales C. Brain-Computer Music Interface, a bibliometric analysis. BRAIN-COMPUTER INTERFACES 2022. [DOI: 10.1080/2326263x.2022.2109313] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/16/2022]
|
20
|
EEG-Based Empathic Safe Cobot. MACHINES 2022. [DOI: 10.3390/machines10080603] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/10/2022]
Abstract
An empathic collaborative robot (cobot) was realized through the transmission of fear from a human agent to a robot agent. Such empathy was induced through an electroencephalographic (EEG) sensor worn by the human agent, thus realizing an empathic safe brain-computer interface (BCI). The empathic safe cobot reacts to the fear and in turn transmits it to the human agent, forming a social circle of empathy and safety. A first randomized, controlled experiment involved two groups of 50 healthy subjects (100 total subjects) to measure the EEG signal in the presence or absence of a frightening event. A second randomized, controlled experiment on two groups of 50 different healthy subjects (100 total subjects) exposed the subjects to comfortable and uncomfortable movements of the cobot while their EEG signal was acquired. A spike in the subjects' EEG signal was observed in the presence of uncomfortable movement. Questionnaires distributed to the subjects confirmed the results of the EEG signal measurements. In the controlled laboratory setting, all experiments were found to be statistically significant. In the first experiment, the peak EEG signal measured just after the activating event was greater than the resting EEG signal (p < 10⁻³). In the second experiment, the peak EEG signal measured just after the uncomfortable movement of the cobot was greater than the EEG signal measured under conditions of comfortable movement (p < 10⁻³). In conclusion, within the isolated and constrained experimental environment, the results were satisfactory.
|
21
|
Goshvarpour A, Goshvarpour A. Innovative Poincare's plot asymmetry descriptors for EEG emotion recognition. Cogn Neurodyn 2022; 16:545-559. [PMID: 35603058 PMCID: PMC9120274 DOI: 10.1007/s11571-021-09735-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2021] [Revised: 09/18/2021] [Accepted: 10/13/2021] [Indexed: 10/20/2022] Open
Abstract
Given the importance of emotion recognition in both medical and non-medical applications, designing an automatic system has captured the attention of several scholars. Currently, EEG-based emotion recognition holds a special position, although it has not yet reached the desired accuracy rates. This experiment intended to provide novel EEG asymmetry measures to improve emotion recognition rates. Four emotional states have been classified using the k-nearest neighbor (kNN), support vector machine, and Naïve Bayes classifiers. Feature selection has been performed, and the role of employing different numbers of top-ranked features on emotion recognition rates has been assessed. To validate the efficiency of the proposed scheme, two public databases, the SJTU Emotion EEG Dataset-IV (SEED-IV) and the Database for Emotion Analysis using Physiological signals (DEAP), were evaluated. The experimental results indicated that kNN outperformed the other classifiers with maximum accuracies of 95.49% and 98.63% using the SEED-IV and DEAP datasets, respectively. In conclusion, the proposed novel EEG-asymmetry measures make the framework superior to state-of-the-art EEG emotion recognition approaches.
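For orientation, the standard Poincaré plot descriptors that asymmetry measures of this kind build on can be sketched as follows. The paper's novel asymmetry descriptors themselves are not reproduced here; the `SD1`/`SD2` formulas below are only the textbook short-term/long-term variability definitions, not code from the paper.

```python
import numpy as np

def poincare_sd(x):
    """Textbook Poincare descriptors from the (x[n], x[n+1]) scatter plot.

    SD1: spread perpendicular to the identity line (short-term variability).
    SD2: spread along the identity line (longer-term variability).
    """
    x = np.asarray(x, dtype=float)
    sd1 = np.sqrt(np.var(np.diff(x)) / 2.0)      # from (x[n+1] - x[n]) / sqrt(2)
    sd2 = np.sqrt(np.var(x[1:] + x[:-1]) / 2.0)  # from (x[n+1] + x[n]) / sqrt(2)
    return sd1, sd2

# A slowly varying signal hugs the identity line, so SD2 sits well above SD1.
sig = np.sin(np.linspace(0.0, 8.0 * np.pi, 400))
sd1, sd2 = poincare_sd(sig)
```

Asymmetry descriptors such as those proposed in the paper then quantify how unevenly the points scatter above versus below that identity line.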
Affiliation(s)
- Atefeh Goshvarpour
- Department of Biomedical Engineering, Faculty of Electrical Engineering, Sahand University of Technology, Tabriz, Iran
- Ateke Goshvarpour
- Department of Biomedical Engineering, Imam Reza International University, Rezvan Campus, Phalestine Sq., Mashhad, Razavi Khorasan, Iran
|
22
|
Abstract
In education, it is critical to monitor students' attention and measure the extent to which students participate, as well as the differences in their levels and abilities. The overall goal of this study was to increase the quality of distance education. In particular, in order to craft an approach that will effectively augment online learning using objective measures of brain activity, we propose a brain–computer interface (BCI) system that uses electroencephalography (EEG) signals to detect students' attention during online classes. This system will aid teachers in objectively assessing student attention and engagement. To this end, experiments were conducted on a public dataset; we extracted power spectral density (PSD) features using a fast Fourier transform. Different attention indexes were calculated. Then, we built three different classification algorithms: k-nearest neighbors (KNN), support vector machine (SVM), and random forest (RF). Our proposed random forest classifier achieved a higher accuracy (96%) than KNN and SVM. Moreover, we compared our results to state-of-the-art attention-detection systems on the same dataset. Our findings revealed that the proposed RF approach can be used to effectively distinguish the attention state of a user.
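The PSD-plus-index step described above can be sketched in a few lines. The abstract does not name the attention indexes it computed, so the beta / (alpha + theta) engagement ratio below is an assumption (it is one commonly used attention index from the EEG literature), and the KNN/SVM/RF classifier stage is omitted.

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Band power from an FFT periodogram (the PSD-via-FFT step)."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    return psd[(freqs >= lo) & (freqs < hi)].sum()

def attention_index(x, fs):
    """beta / (alpha + theta): higher when fast activity dominates an epoch."""
    theta = band_power(x, fs, 4, 8)
    alpha = band_power(x, fs, 8, 13)
    beta = band_power(x, fs, 13, 30)
    return beta / (alpha + theta)

# Synthetic check: a beta-dominated epoch scores higher than a theta-dominated one.
fs = 128
t = np.arange(2 * fs) / fs
rng = np.random.default_rng(1)
focused = np.sin(2 * np.pi * 20 * t) + 0.1 * rng.standard_normal(t.size)
drowsy = np.sin(2 * np.pi * 6 * t) + 0.1 * rng.standard_normal(t.size)
```

Index values like these, computed per channel and epoch, would then form the feature vectors fed to the classifiers the study compares.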
|
23
|
Sparse representations of high dimensional neural data. Sci Rep 2022; 12:7295. [PMID: 35508638 PMCID: PMC9068763 DOI: 10.1038/s41598-022-10459-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2021] [Accepted: 04/01/2022] [Indexed: 11/08/2022] Open
Abstract
Conventional Vector Autoregressive (VAR) modelling methods applied to high dimensional neural time series data result in noisy solutions that are dense or have a large number of spurious coefficients. This reduces the speed and accuracy of auxiliary computations downstream and inflates the time required to compute functional connectivity networks by a factor that is at least inversely proportional to the true network density. As these noisy solutions have distorted coefficients, thresholding them as per some criterion, statistical or otherwise, does not alleviate the problem. Thus obtaining a sparse representation of such data is important since it provides an efficient representation of the data and facilitates its further analysis. We propose a fast Sparse Vector Autoregressive Greedy Search (SVARGS) method that works well for high dimensional data, even when the number of time points is relatively low, by incorporating only statistically significant coefficients. In numerical experiments, our methods show high accuracy in recovering the true sparse model. The relative absence of spurious coefficients permits accurate, stable and fast evaluation of derived quantities such as power spectrum, coherence and Granger causality. Consequently, sparse functional connectivity networks can be computed, in a reasonable time, from data comprising tens of thousands of channels/voxels. This enables a much higher resolution analysis of functional connectivity patterns and community structures in such large networks than is possible using existing time series methods. We apply our method to EEG data where computed network measures and community structures are used to distinguish emotional states as well as to ADHD fMRI data where it is used to distinguish children with ADHD from typically developing children.
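The idea of sparsifying a VAR fit can be illustrated with a toy sketch. Note that this is not SVARGS itself: SVARGS grows the model greedily and admits coefficients by statistical significance, whereas the stand-in below simply fits a dense VAR(1) by least squares and hard-thresholds small coefficients, with an arbitrary threshold chosen for the demo.

```python
import numpy as np

def sparse_var1(X, thresh=0.15):
    """Dense least-squares VAR(1) fit, then hard thresholding as a crude
    stand-in for significance-based coefficient selection."""
    past, future = X[:-1], X[1:]
    A, *_ = np.linalg.lstsq(past, future, rcond=None)  # solves future ~ past @ A
    A = A.T                                            # A[i, j]: channel j drives channel i
    A[np.abs(A) < thresh] = 0.0
    return A

# Ground truth: 3 channels; only the nonzero entries of A_true carry influence.
rng = np.random.default_rng(2)
A_true = np.array([[0.5, 0.0, 0.0],
                   [0.8, 0.3, 0.0],
                   [0.0, 0.0, 0.4]])
X = np.zeros((2000, 3))
for n in range(1, 2000):
    X[n] = A_true @ X[n - 1] + 0.1 * rng.standard_normal(3)
A_hat = sparse_var1(X)
```

Even this crude version recovers the support of `A_true`; the abstract's point is that avoiding the spurious entries of the dense fit is what makes downstream quantities (spectra, coherence, Granger causality) stable and fast at high channel counts.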
|
24
|
Värbu K, Muhammad N, Muhammad Y. Past, Present, and Future of EEG-Based BCI Applications. SENSORS (BASEL, SWITZERLAND) 2022; 22:3331. [PMID: 35591021 PMCID: PMC9101004 DOI: 10.3390/s22093331] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/23/2022] [Revised: 04/05/2022] [Accepted: 04/25/2022] [Indexed: 06/15/2023]
Abstract
An electroencephalography (EEG)-based brain-computer interface (BCI) is a system that provides a pathway between the brain and external devices by interpreting EEG. EEG-based BCI applications were initially developed for medical purposes, with the aim of facilitating the return of patients to normal life. Beyond that initial aim, EEG-based BCI applications have also gained increasing significance in the non-medical domain, improving the lives of healthy people, for instance, by making life more efficient and collaborative and by helping people develop themselves. The objective of this review is to give a systematic overview of the literature on EEG-based BCI applications from the period 2009 to 2019. The systematic literature review has been prepared based on three databases: PubMed, Web of Science, and Scopus. This review was conducted following the PRISMA model. In this review, 202 publications were selected based on specific eligibility criteria. The distribution of research between the medical and non-medical domains has been analyzed and further categorized into fields of research within the reviewed domains. The equipment used for gathering EEG data and the signal processing methods have also been reviewed. Additionally, current challenges in the field and possibilities for the future have been analyzed.
Affiliation(s)
- Kaido Värbu
- Institute of Computer Science, University of Tartu, 51009 Tartu, Estonia
- Naveed Muhammad
- Institute of Computer Science, University of Tartu, 51009 Tartu, Estonia
- Yar Muhammad
- Department of Computing & Games, School of Computing, Engineering & Digital Technologies, Teesside University, Middlesbrough TS1 3BX, UK
|
25
|
Generator-based Domain Adaptation Method with Knowledge Free for Cross-subject EEG Emotion Recognition. Cognit Comput 2022. [DOI: 10.1007/s12559-022-10016-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/03/2022]
|
26
|
Alsowail RA, Al-Shehari T. Techniques and countermeasures for preventing insider threats. PeerJ Comput Sci 2022; 8:e938. [PMID: 35494800 PMCID: PMC9044369 DOI: 10.7717/peerj-cs.938] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2021] [Accepted: 03/09/2022] [Indexed: 06/14/2023]
Abstract
With the wide use of technologies nowadays, various security issues have emerged. Public and private sectors both spend a large portion of their budgets to protect the confidentiality, integrity, and availability of their data from possible attacks. Among these attacks are insider attacks, which are more serious than external attacks, as insiders are authorized users who have legitimate access to sensitive assets of an organization. As a result, several studies in the literature have aimed to develop techniques and tools to detect and prevent various types of insider threats. This article reviews different techniques and countermeasures proposed to prevent insider attacks. A unified classification model is proposed to classify insider threat prevention approaches into two categories (biometric-based and asset-metric-based). The biometric-based category is further classified into physiological, behavioral, and physical approaches, while the asset-metric-based category is further classified into host, network, and combined approaches. This classification systematizes the reviewed approaches, which are validated with empirical results utilizing the grounded theory method for rigorous literature review. Additionally, the article compares and discusses significant theoretical and empirical factors that play a key role in the effectiveness of insider threat prevention approaches (e.g., datasets, feature domains, classification algorithms, evaluation metrics, real-world simulation, stability and scalability, etc.). Major challenges that need to be considered when deploying real-world insider threat prevention systems are also highlighted, and some research gaps and recommendations are presented for future research directions.
Affiliation(s)
- Rakan A. Alsowail
- Computer Skills, Self-Development Department, Deanship of Common First Year, King Saud University, Riyadh, Saudi Arabia
- Taher Al-Shehari
- Computer Skills, Self-Development Department, Deanship of Common First Year, King Saud University, Riyadh, Saudi Arabia
|
27
|
Emotion Self-Regulation in Neurotic Students: A Pilot Mindfulness-Based Intervention to Assess Its Effectiveness through Brain Signals and Behavioral Data. SENSORS 2022; 22:s22072703. [PMID: 35408317 PMCID: PMC9002961 DOI: 10.3390/s22072703] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/29/2021] [Revised: 02/24/2022] [Accepted: 03/04/2022] [Indexed: 12/13/2022]
Abstract
Neuroticism has recently received increased attention in the psychology field due to the finding of high implications of neuroticism on an individual's life and broader public health. This study aims to investigate the effect of a brief 6-week breathing-based mindfulness intervention (BMI) on undergraduate neurotic students' emotion regulation. We acquired data on their psychological states, physiological changes, and electroencephalogram (EEG), before and after BMI, in resting states and during tasks. Through behavioral analysis, we found the students' anxiety and stress levels significantly reduced after BMI, with p-values of 0.013 and 0.027, respectively. Furthermore, a significant difference in students' use of an emotion regulation strategy, namely suppression, was also shown. The EEG analysis demonstrated significant differences between students before and after BMI in resting states and during tasks. Fp1 and O2 channels were identified as the most significant channels in evaluating the effect of BMI. The potential of these channels for classifying (single-channel-based) before- and after-BMI conditions during eyes-open and eyes-closed baseline trials was displayed by good performance in terms of accuracy (~77%), sensitivity (76–80%), specificity (73–77%), and area under the curve (AUC) (0.66–0.8) obtained by k-nearest neighbor (KNN) and support vector machine (SVM) algorithms. Mindfulness can thus improve the self-regulation of the emotional state of neurotic students based on the psychometric and electrophysiological analyses conducted in this study.
|
28
|
Zhao H, Liu J, Shen Z, Yan J. SCC-MPGCN: Self-Attention Coherence Clustering Based on Multi-Pooling Graph Convolutional Network for EEG Emotion Recognition. J Neural Eng 2022; 19. [PMID: 35354132 DOI: 10.1088/1741-2552/ac6294] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2021] [Accepted: 03/29/2022] [Indexed: 11/12/2022]
Abstract
Emotion recognition with electroencephalography (EEG) has been widely studied using deep learning methods, but the topology of EEG channels is rarely exploited completely. In this paper, we propose a self-attention coherence clustering based on multi-pooling graph convolutional network (SCC-MPGCN) model for EEG emotion recognition. The adjacency matrix is constructed based on phase-locking value (PLV) to describe the intrinsic relationship between different EEG electrodes as graph signals. The Laplacian matrix of the graph is obtained from the adjacency matrix and then fed into the graph convolutional layers to learn generalized features. Moreover, we propose a novel graph coarsening method called self-attention coherence clustering (SCC), which uses coherence to cluster the nodes. The benefits are that global information can be obtained from the raw data and the dimensionality of the input can be reduced. Meanwhile, a multi-pooling graph convolutional network (MPGCN) block is introduced to learn generalized emotional-state features and tackle the problem of imbalanced dimensionality. A fully-connected layer and a softmax layer are adopted to derive the final prediction. We carry out extensive experiments on the DEAP dataset, and the experimental results show that the proposed method outperforms state-of-the-art methods under 10-fold cross-validation, achieving mean accuracies of 96.37%, 97.02%, and 96.72% on the valence, arousal, and dominance dimensions, respectively.
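The phase-locking value used here to build the adjacency matrix can be sketched without any toolbox. The FFT-based Hilbert transform and the unit-phasor averaging below are the standard PLV recipe, not code from the paper.

```python
import numpy as np

def instantaneous_phase(x):
    """Phase of the analytic signal, via an FFT-based Hilbert transform."""
    n = len(x)
    h = np.zeros(n)           # frequency-domain filter: keep DC, double positives
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.angle(np.fft.ifft(np.fft.fft(x) * h))

def plv(x, y):
    """Phase-locking value in [0, 1]; 1 means a constant phase lag."""
    dphi = instantaneous_phase(x) - instantaneous_phase(y)
    return np.abs(np.mean(np.exp(1j * dphi)))

# Channels sharing a 10 Hz rhythm lock despite a fixed lag; an unrelated one does not.
fs = 250
t = np.arange(2 * fs) / fs
a = np.sin(2 * np.pi * 10 * t)
b = np.sin(2 * np.pi * 10 * t + 0.8)   # same frequency, constant phase offset
c = np.sin(2 * np.pi * 17.3 * t)       # different frequency
```

Computing `plv` over every electrode pair would fill the adjacency matrix the abstract describes, which the graph convolutional layers then consume.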
Collapse
Affiliation(s)
- Huijuan Zhao
- Tiangong University, No. 399, Binshui West Road, Xiqing District, Tianjin 300387, CHINA
| | - Jingjin Liu
- Shantou University, No. 243, Daxue Road, Jinping District, Shantou, Guangdong 515063, CHINA
| | - Zhenqian Shen
- Tiangong University, No. 399, Binshui West Road, Xiqing District, Tianjin 300387, CHINA
| | - Jingwen Yan
- Shantou University, No. 243, Daxue Road, Jinping District, Shantou, Guangdong 515063, CHINA
| |
Collapse
|
29
|
Kumari N, Anwar S, Bhattacharjee V. Time series-dependent feature of EEG signals for improved visually evoked emotion classification using EmotionCapsNet. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-06942-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
30
|
Yu M, Xiao S, Hua M, Wang H, Chen X, Tian F, Li Y. EEG-based emotion recognition in an immersive virtual reality environment: From local activity to brain network features. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103349] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
|
31
|
Review on EEG-Based Authentication Technology. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2021:5229576. [PMID: 34976039 PMCID: PMC8720016 DOI: 10.1155/2021/5229576] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/08/2021] [Revised: 08/25/2021] [Accepted: 12/11/2021] [Indexed: 11/24/2022]
Abstract
With the rapid development of brain-computer interface technology, the EEG signal has attracted wide attention in recent years as a new biometric feature, offering a fresh solution to the security of brain-computer interfaces and the long-standing vulnerability of biometric authentication. This review analyzes the biometric properties of EEG signals and the latest research applying them in the authentication process. It introduces EEG-based authentication methods and, for the first time, systematically presents EEG-based biometric cryptosystems for authentication. In cryptography, the key is the core basis of authentication, and cryptographic techniques can effectively improve the security of biometric authentication and protect the biometric itself. The revocability of EEG-based biometric cryptosystems is an advantage that traditional biometric authentication lacks. Finally, the existing problems and future development directions of EEG-based identity authentication are discussed, providing a reference for related studies.
Collapse
|
32
|
Bilucaglia M, Duma GM, Mento G, Semenzato L, Tressoldi PE. Applying machine learning EEG signal classification to emotion‑related brain anticipatory activity. F1000Res 2021; 9:173. [PMID: 37899775 PMCID: PMC10603316 DOI: 10.12688/f1000research.22202.2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 10/11/2021] [Indexed: 10/31/2023] Open
Abstract
Machine learning approaches have been fruitfully applied to several neurophysiological signal classification problems. Considering the relevance of emotion in human cognition and behaviour, an important application of machine learning has been found in the field of emotion identification based on neurophysiological activity. Nonetheless, results in the literature vary widely depending on the neuronal activity measurement, the signal features, and the classifier type. The present work aims to provide new methodological insight into machine learning applied to emotion identification based on electrophysiological brain activity. To this end, we analysed previously recorded EEG activity measured while high- and low-arousal emotional stimuli (auditory and visual) were presented to a group of healthy participants. Our target signal to classify was the pre-stimulus-onset brain activity. Classification performance of three different classifiers (LDA, SVM and kNN) was compared using both spectral and temporal features. Furthermore, we also contrasted the performance of static and dynamic (time-evolving) approaches. The best static feature-classifier combination was the SVM with spectral features (51.8%), followed by LDA with spectral features (51.4%) and kNN with temporal features (51%). The best dynamic feature-classifier combination was the SVM with temporal features (63.8%), followed by kNN with temporal features (63.70%) and LDA with temporal features (63.68%). The results show a clear increase in classification accuracy with temporal dynamic features.
Collapse
Affiliation(s)
| | - Gian Marco Duma
- Department of Developmental and Social Psychology (DPSS), Università degli Studi di Padova, Padova, Italy
| | - Giovanni Mento
- Department of General Psychology, Università degli Studi di Padova, Padova, Italy
| | - Luca Semenzato
- Department of General Psychology, Università degli Studi di Padova, Padova, Italy
| | - Patrizio E. Tressoldi
- Science of Consciousness Research Group, Studium Patavinum, Università degli Studi di Padova, Padova, Italy
| |
Collapse
|
33
|
EEG Mental Stress Assessment Using Hybrid Multi-Domain Feature Sets of Functional Connectivity Network and Time-Frequency Features. SENSORS 2021; 21:s21186300. [PMID: 34577505 PMCID: PMC8473213 DOI: 10.3390/s21186300] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/09/2021] [Revised: 09/07/2021] [Accepted: 09/16/2021] [Indexed: 12/28/2022]
Abstract
Exposure to mental stress for long periods leads to serious accidents and health problems. To avoid negative consequences for health and safety, it is very important to detect mental stress in its early stages, i.e., while it is still limited to acute or episodic stress. In this study, we developed an experimental protocol to induce two different levels of stress by utilizing a mental arithmetic task with time pressure and negative feedback as the stressors. We assessed the stress levels of 22 healthy subjects using frontal electroencephalogram (EEG) signals, salivary alpha-amylase level (AAL), and multiple machine learning (ML) classifiers. The EEG signals were analyzed using a fusion of functional connectivity networks estimated by the Phase Locking Value (PLV) and temporal- and spectral-domain features. A total of 210 different features were extracted across all domains, and only the optimal multi-domain features were used for classification. We then quantified stress levels using statistical analysis and seven ML classifiers. Our results showed that the AAL level was significantly increased (p < 0.01) under the stress condition in all subjects. Likewise, the functional connectivity network demonstrated a significant decrease under stress (p < 0.05). Moreover, we achieved the highest stress classification accuracy of 93.2% using the Support Vector Machine (SVM) classifier; other classifiers produced relatively similar results.
Collapse
|
34
|
Long F, Zhao S, Wei X, Ng SC, Ni X, Chi A, Fang P, Zeng W, Wei B. Positive and Negative Emotion Classification Based on Multi-channel. Front Behav Neurosci 2021; 15:720451. [PMID: 34512288 PMCID: PMC8428531 DOI: 10.3389/fnbeh.2021.720451] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2021] [Accepted: 07/29/2021] [Indexed: 11/13/2022] Open
Abstract
In this study, EEG features of different emotions were extracted from multi-channel recordings and from forehead channels alone. EEG signals of 26 subjects were collected using the emotional-video evocation method. The results show that the band energy ratio and differential entropy can classify positive and negative emotions effectively, with the best results achieved by an SVM classifier. When only forehead-channel signals are used, the highest classification accuracy reaches 66%; when data from all channels are used, the highest accuracy of the model reaches 82%. After channel selection, the best model of this study is obtained, with an accuracy above 86%.
Collapse
Affiliation(s)
- Fangfang Long
- Department of Psychology, Nanjing University, Nanjing, China
| | - Shanguang Zhao
- Centre for Sport and Exercise Sciences, University of Malaya, Kuala Lumpur, Malaysia
| | - Xin Wei
- Institute of Social Psychology, School of Humanities and Social Sciences, Xi'an Jiaotong University, Xi'an, China.,Key & Core Technology Innovation Institute of the Greater Bay Area, Guangdong, China
| | - Siew-Cheok Ng
- Faculty of Engineering, University of Malaya, Kuala Lumpur, Malaysia
| | - Xiaoli Ni
- Institute of Social Psychology, School of Humanities and Social Sciences, Xi'an Jiaotong University, Xi'an, China
| | - Aiping Chi
- School of Sports, Shaanxi Normal University, Xi'an, China
| | - Peng Fang
- Department of the Psychology of Military Medicine, Air Force Medical University, Xi'an, China
| | - Weigang Zeng
- Key & Core Technology Innovation Institute of the Greater Bay Area, Guangdong, China
| | - Bokun Wei
- Xi'an Middle School of Shaanxi Province, Xi'an, China
| |
Collapse
|
35
|
Huang C, Xiao Y, Xu G. Predicting Human Intention-Behavior Through EEG Signal Analysis Using Multi-Scale CNN. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2021; 18:1722-1729. [PMID: 33226953 DOI: 10.1109/tcbb.2020.3039834] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
At present, the application of electroencephalogram (EEG) signal classification to human intention-behavior prediction has become a hot topic in the brain-computer interface (BCI) research field. In recent studies, the introduction of convolutional neural networks (CNN) has contributed to substantial improvements in EEG signal classification performance. However, a key challenge remains with existing CNN-based EEG signal classification methods: their accuracy is not yet satisfactory. This is because most existing methods only utilize the feature maps in the last layer of the CNN, which might miss local and detailed information needed for accurate classification. To address this challenge, this paper proposes a multi-scale CNN model-based EEG signal classification method. First, the EEG signals are preprocessed and converted to time-frequency images using the short-time Fourier transform (STFT). Then, a multi-scale CNN model that takes the converted time-frequency image as input is designed for EEG signal classification; in this model, both local and global information is taken into consideration. The performance of the proposed method is verified on benchmark data set 2b from BCI Competition IV. The experimental results show that the average accuracy of the proposed method is 73.9 percent, improving classification accuracy by 10.4, 5.5, and 16.2 percentage points over traditional methods including artificial neural networks, support vector machines, and stacked auto-encoders.
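The STFT preprocessing step described above (raw EEG to a time-frequency image) might look roughly like this; the sampling rate, window length, and synthetic test signal are assumptions, not values from the paper:

```python
import numpy as np
from scipy import signal

fs = 250                                   # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
t = np.arange(0, 4, 1 / fs)
# synthetic "EEG": a 10 Hz rhythm buried in noise
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# short-time Fourier transform -> magnitude time-frequency image for the CNN
f, tt, Z = signal.stft(x, fs=fs, nperseg=128, noverlap=96)
tf_image = np.abs(Z)                       # shape: (n_freq_bins, n_time_frames)
```

Each such image, rather than the raw trace, would become one input sample to a multi-scale CNN.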
Collapse
|
36
|
Aldayel M, Ykhlef M, Al-Nafjan A. Consumers’ Preference Recognition Based on Brain–Computer Interfaces: Advances, Trends, and Applications. ARABIAN JOURNAL FOR SCIENCE AND ENGINEERING 2021. [DOI: 10.1007/s13369-021-05695-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
|
37
|
Automatic subject-specific spatiotemporal feature selection for subject-independent affective BCI. PLoS One 2021; 16:e0253383. [PMID: 34437542 PMCID: PMC8389489 DOI: 10.1371/journal.pone.0253383] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2021] [Accepted: 06/04/2021] [Indexed: 11/20/2022] Open
Abstract
The dimensionality of the spatially distributed channels and the temporal resolution of electroencephalogram (EEG)-based brain-computer interfaces (BCI) undermine emotion recognition models. Thus, prior to modeling such data as the final stage of the learning pipeline, adequate preprocessing and the transformation and extraction of temporal (i.e., time-series signal) and spatial (i.e., electrode channel) features are essential for recognizing underlying human emotions. Conventionally, inter-subject variations are dealt with by avoiding the sources of variation (e.g., outliers) or turning the problem into a subject-dependent one. We address this issue by preserving and learning from individual particularities in response to affective stimuli. This paper investigates and proposes a subject-independent emotion recognition framework that mitigates subject-to-subject variability in such systems. Using an unsupervised feature selection algorithm, we reduce the feature space extracted from the time-series signals. For the spatial features, we propose a subject-specific unsupervised learning algorithm that learns from inter-channel co-activation online. We tested this framework on real EEG benchmarks, namely DEAP, MAHNOB-HCI, and DREAMER. We train and test the selection outcomes using nested cross-validation and a support vector machine (SVM), and compare our results with state-of-the-art subject-independent algorithms. Our results show enhanced performance, classifying human affect (i.e., valence and arousal) 16%–27% more accurately than other studies. This work not only outperforms other subject-independent studies reported in the literature but also proposes an online analysis solution for affect recognition.
Collapse
|
38
|
Mridha MF, Das SC, Kabir MM, Lima AA, Islam MR, Watanobe Y. Brain-Computer Interface: Advancement and Challenges. SENSORS 2021; 21:s21175746. [PMID: 34502636 PMCID: PMC8433803 DOI: 10.3390/s21175746] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/17/2021] [Revised: 08/15/2021] [Accepted: 08/20/2021] [Indexed: 02/04/2023]
Abstract
Brain-Computer Interface (BCI) is an advanced, multidisciplinary, and active research domain drawing on neuroscience, signal processing, biomedical sensors, hardware, and more. Over the last decades, much groundbreaking research has been conducted in this domain, yet no review covers the BCI domain completely. Hence, a comprehensive overview of the BCI domain is presented in this study. The study covers several applications of BCI and upholds the significance of the domain. Each element of BCI systems, including techniques, datasets, feature extraction methods, evaluation metrics, existing BCI algorithms, and classifiers, is then explained concisely. In addition, a brief overview of the technologies and hardware, mostly sensors, used in BCI is appended. Finally, the paper investigates several unsolved challenges of BCI and discusses possible solutions.
Collapse
Affiliation(s)
- M. F. Mridha
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh; (M.F.M.); (S.C.D.); (M.M.K.); (A.A.L.)
| | - Sujoy Chandra Das
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh; (M.F.M.); (S.C.D.); (M.M.K.); (A.A.L.)
| | - Muhammad Mohsin Kabir
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh; (M.F.M.); (S.C.D.); (M.M.K.); (A.A.L.)
| | - Aklima Akter Lima
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh; (M.F.M.); (S.C.D.); (M.M.K.); (A.A.L.)
| | - Md. Rashedul Islam
- Department of Computer Science and Engineering, University of Asia Pacific, Dhaka 1216, Bangladesh
- Correspondence:
| | - Yutaka Watanobe
- Department of Computer Science and Engineering, University of Aizu, Aizu-Wakamatsu 965-8580, Japan;
| |
Collapse
|
39
|
Tan X, Guo C, Jiang T, Fu K, Zhou N, Yuan J, Zhang G. A new semi-supervised algorithm combined with MCICA optimizing SVM for motion imagination EEG classification. INTELL DATA ANAL 2021. [DOI: 10.3233/ida-205188] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
This paper proposes a new semi-supervised algorithm that combines the Mutual-cross Imperial Competition Algorithm (MCICA) with a Support Vector Machine (SVM) for motion imagination EEG classification. It not only reduces the tedious and time-consuming training process and enhances the adaptability of the brain-computer interface (BCI), but also uses MCICA to optimize the SVM parameters during the semi-supervised process. The algorithm combines mutual information and cross-validation to construct the objective function for semi-supervised training, uses this objective function to build the semi-supervised MCICA model that optimizes the SVM parameters, and finally applies the selected optimal parameters to data set IVa of the 2005 BCI competition to verify its effectiveness. The results show that the proposed algorithm is effective in optimizing parameters and has good robustness and generalization in solving small-sample classification problems.
Collapse
Affiliation(s)
- Xuemin Tan
- College of Control Engineering, Chengdu University of Information Technology, Chengdu, Sichuan, China
| | - Chao Guo
- State Grid Chengdu Power Supply Company, Chengdu, Sichuan, China
| | - Tao Jiang
- College of Control Engineering, Chengdu University of Information Technology, Chengdu, Sichuan, China
| | - Kechang Fu
- College of Control Engineering, Chengdu University of Information Technology, Chengdu, Sichuan, China
| | - Nan Zhou
- College of Control Engineering, Chengdu University of Information Technology, Chengdu, Sichuan, China
| | - Jianying Yuan
- College of Control Engineering, Chengdu University of Information Technology, Chengdu, Sichuan, China
| | - Guoliang Zhang
- College of Control Engineering, Chengdu University of Information Technology, Chengdu, Sichuan, China
| |
Collapse
|
40
|
Ranjan R, Chandra Sahana B, Kumar Bhandari A. Ocular artifact elimination from electroencephalography signals: A systematic review. Biocybern Biomed Eng 2021. [DOI: 10.1016/j.bbe.2021.06.007] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/02/2023]
|
41
|
EEG data augmentation for emotion recognition with a multiple generator conditional Wasserstein GAN. COMPLEX INTELL SYST 2021. [DOI: 10.1007/s40747-021-00336-7] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Abstract
EEG-based emotion recognition has attracted substantial attention from researchers due to its extensive application prospects, and substantial progress has been made in feature extraction and classification modelling from EEG data. However, insufficient high-quality training data are available for building EEG-based emotion recognition models via machine learning or deep learning methods. The artificial generation of high-quality data is an effective approach to overcoming this problem. In this paper, a multi-generator conditional Wasserstein GAN method is proposed for the generation of high-quality artificial data that cover a more comprehensive distribution of the real data through the use of multiple generators. Experimental results demonstrate that the artificial data generated by the proposed model can effectively improve the performance of EEG-based emotion classification models.
Collapse
|
42
|
Asgher U, Khan MJ, Asif Nizami MH, Khalil K, Ahmad R, Ayaz Y, Naseer N. Motor Training Using Mental Workload (MWL) With an Assistive Soft Exoskeleton System: A Functional Near-Infrared Spectroscopy (fNIRS) Study for Brain-Machine Interface (BMI). Front Neurorobot 2021; 15:605751. [PMID: 33815084 PMCID: PMC8012849 DOI: 10.3389/fnbot.2021.605751] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2020] [Accepted: 02/05/2021] [Indexed: 11/24/2022] Open
Abstract
Mental workload is a neuroergonomic human factor widely used in planning system safety and in areas such as brain-machine interface (BMI), neurofeedback, and assistive technologies. Robotic prosthetics methodologies are employed to assist hemiplegic patients in performing routine activities. The design and operation of assistive technologies should interface easily with the brain using few protocols, so as to optimize mobility and autonomy. The possible answer to these design questions may lie in neuroergonomics coupled with BMI systems. In this study, two human factors are addressed: designing a lightweight wearable robotic exoskeleton hand to assist potential stroke patients, with an integrated portable brain interface driven by mental workload (MWL) signals acquired with a portable functional near-infrared spectroscopy (fNIRS) system. The system may generate command signals for operating a wearable robotic exoskeleton hand using two-state MWL signals. The fNIRS system records optical signals as changes in the concentration of oxygenated and deoxygenated hemoglobin (HbO and HbR) from the pre-frontal cortex (PFC) region of the brain. Fifteen participants took part in this study and performed hand-grasping tasks. Two-state MWL signals acquired from the PFC region of each participant's brain are segregated using a machine learning classifier (support vector machine, SVM) and used to operate the robotic exoskeleton hand. The maximum classification accuracy is 91.31%, using a combination of mean-slope features with an average information transfer rate (ITR) of 1.43. These results show the feasibility of a two-state MWL (fNIRS-based) robotic exoskeleton hand (BMI system) for assisting hemiplegic patients in physical grasping tasks.
Collapse
Affiliation(s)
- Umer Asgher
- School of Mechanical and Manufacturing Engineering (SMME), National University of Sciences and Technology (NUST), Islamabad, Pakistan
| | - Muhammad Jawad Khan
- School of Mechanical and Manufacturing Engineering (SMME), National University of Sciences and Technology (NUST), Islamabad, Pakistan
| | - Muhammad Hamza Asif Nizami
- School of Mechanical and Manufacturing Engineering (SMME), National University of Sciences and Technology (NUST), Islamabad, Pakistan
- Florida State University College of Engineering, Florida A&M University, Tallahassee, FL, United States
| | - Khurram Khalil
- School of Mechanical and Manufacturing Engineering (SMME), National University of Sciences and Technology (NUST), Islamabad, Pakistan
| | - Riaz Ahmad
- School of Mechanical and Manufacturing Engineering (SMME), National University of Sciences and Technology (NUST), Islamabad, Pakistan
- Directorate of Quality Assurance and International Collaboration, National University of Sciences and Technology (NUST), Islamabad, Pakistan
| | - Yasar Ayaz
- School of Mechanical and Manufacturing Engineering (SMME), National University of Sciences and Technology (NUST), Islamabad, Pakistan
- National Center of Artificial Intelligence (NCAI), National University of Sciences and Technology, Islamabad, Pakistan
| | - Noman Naseer
- Department of Mechatronics and Biomedical Engineering, Air University, Islamabad, Pakistan
| |
Collapse
|
43
|
Martínez-Tejada LA, Puertas-González A, Yoshimura N, Koike Y. Exploring EEG Characteristics to Identify Emotional Reactions under Videogame Scenarios. Brain Sci 2021; 11:brainsci11030378. [PMID: 33809797 PMCID: PMC8002589 DOI: 10.3390/brainsci11030378] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2021] [Revised: 03/10/2021] [Accepted: 03/12/2021] [Indexed: 11/28/2022] Open
Abstract
In this article we present a study of electroencephalography (EEG) traits for the emotion recognition process using a videogame as a stimulus tool, considering two kinds of emotion-related information: arousal-valence self-assessment answers from participants, and game events that represented positive and negative emotional experiences within the videogame context. We performed a statistical analysis using Spearman's correlation between the EEG traits and the emotional information. We found that EEG traits had strong correlations with arousal and valence scores; the common EEG traits with strong correlations belonged to the theta band of the central channels. We then implemented a regression algorithm with feature selection to predict arousal and valence scores from EEG traits, achieving better results for arousal regression than for valence regression. The EEG traits selected for arousal and valence regression belonged to the time domain (standard deviation, complexity, mobility, kurtosis, skewness) and the frequency domain (power spectral density (PSD) and differential entropy (DE) from the theta, alpha, beta, and gamma bands and the full EEG frequency spectrum). Regarding game events, we found that EEG traits related to the theta, alpha, and beta bands had strong correlations. In addition, distinctive event-related potentials were identified in the presence of both types of game events. Finally, we implemented a classification algorithm to discriminate between positive and negative events using EEG traits, obtaining good classification performance using only two traits related to the frequency domain on the theta band and the full EEG spectrum.
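The time-domain traits listed in this abstract (mobility, complexity, and so on) are the classical Hjorth parameters, and DE under a Gaussian assumption has a closed form; a minimal sketch (our own illustration, not the authors' pipeline):

```python
import numpy as np

def hjorth(x):
    """Hjorth activity, mobility, and complexity of a 1-D signal."""
    dx, ddx = np.diff(x), np.diff(x, n=2)
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / activity)
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

def differential_entropy(x):
    """DE of a signal assumed Gaussian: 0.5 * ln(2 * pi * e * variance)."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))
```

For a Gaussian signal the empirical DE converges to the closed-form value, which is why DE is often computed directly from band-filtered variance in EEG work.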
Collapse
Affiliation(s)
- Laura Alejandra Martínez-Tejada
- FIRST Institute of Innovative Research, Tokyo Institute of Technology, Yokohama, Kanagawa 226-8503, Japan; (N.Y.); (Y.K.)
- Correspondence:
| | - Alex Puertas-González
- System Engineering and Computation School, Universidad Pedagógica y Tecnológica de Colombia, Santiago de Tunja 150007, Colombia;
| | - Natsue Yoshimura
- FIRST Institute of Innovative Research, Tokyo Institute of Technology, Yokohama, Kanagawa 226-8503, Japan; (N.Y.); (Y.K.)
- Department of Advanced Neuroimaging, Integrative Brain Imaging Center, National Center of Neurology and Psychiatry, Kodaira, Tokyo 187-8551, Japan
- PRESTO, JST, Kawaguchi, Saitama 332-0012, Japan
- Neural Information Analysis Laboratories, ATR, Kyoto 619-0288, Japan
| | - Yasuharu Koike
- FIRST Institute of Innovative Research, Tokyo Institute of Technology, Yokohama, Kanagawa 226-8503, Japan; (N.Y.); (Y.K.)
- Department of Advanced Neuroimaging, Integrative Brain Imaging Center, National Center of Neurology and Psychiatry, Kodaira, Tokyo 187-8551, Japan
| |
Collapse
|
44
|
A Comparative Study of Window Size and Channel Arrangement on EEG-Emotion Recognition Using Deep CNN. SENSORS 2021; 21:s21051678. [PMID: 33804366 PMCID: PMC7957771 DOI: 10.3390/s21051678] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/19/2021] [Revised: 02/19/2021] [Accepted: 02/23/2021] [Indexed: 12/31/2022]
Abstract
Emotion recognition based on electroencephalograms has become an active research area, yet identifying emotions using only brainwaves is still very challenging, especially in the subject-independent task. Numerous studies have proposed methods to recognize emotions, including machine learning techniques like the convolutional neural network (CNN). Since the CNN has shown potential for generalization to unseen subjects, manipulating CNN hyperparameters like the window size and electrode order might be beneficial. To our knowledge, this is the first work to extensively observe the effect of parameter selection on the CNN. The temporal information in distinct window sizes was found to significantly affect recognition performance, and the CNN was more responsive to changing window sizes than the support vector machine. Classifying arousal achieved the best performance with a window size of ten seconds, obtaining 56.85% accuracy and a Matthews correlation coefficient (MCC) of 0.1369. Valence recognition performed best with a window length of eight seconds, at 73.34% accuracy and an MCC of 0.4669. Spatial information from varying the electrode orders had a small effect on classification. Overall, valence results showed markedly better performance than arousal results, perhaps influenced by features related to brain-activity asymmetry between the left and right hemispheres.
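The two quantities varied and reported in this abstract, the analysis window and the Matthews correlation coefficient, can be sketched as follows (window length and step are placeholders; this is not the paper's code):

```python
import numpy as np

def sliding_windows(x, fs, win_sec, step_sec):
    """Cut a 1-D signal into overlapping fixed-length windows."""
    win, step = int(win_sec * fs), int(step_sec * fs)
    return np.stack([x[i:i + win] for i in range(0, x.size - win + 1, step)])

def mcc(y_true, y_pred):
    """Matthews correlation coefficient for binary labels in {0, 1}."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom
```

MCC ranges from -1 to 1, with 0 at chance level, which is why values such as 0.1369 accompany the raw accuracies above.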
Collapse
|
45
|
Cheng J, Chen M, Li C, Liu Y, Song R, Liu A, Chen X. Emotion Recognition From Multi-Channel EEG via Deep Forest. IEEE J Biomed Health Inform 2021; 25:453-464. [PMID: 32750905 DOI: 10.1109/jbhi.2020.2995767] [Citation(s) in RCA: 46] [Impact Index Per Article: 15.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Recently, deep neural networks (DNNs) have been applied to emotion recognition tasks based on electroencephalography (EEG) and have achieved better performance than traditional algorithms. However, DNNs still suffer from having too many hyperparameters and requiring large amounts of training data. To overcome these shortcomings, in this article we propose a method for multi-channel EEG-based emotion recognition using deep forest. First, we account for the effect of the baseline signal by preprocessing the raw artifact-eliminated EEG signal with baseline removal. Second, we construct 2D frame sequences by taking the spatial position relationship across channels into account. Finally, the 2D frame sequences are input into a deep-forest classification model that mines the spatial and temporal information of EEG signals to classify emotions. The proposed method eliminates the need for the feature extraction of traditional methods, and the classification model is insensitive to hyperparameter settings, which greatly reduces the complexity of emotion recognition. To verify the feasibility of the proposed model, experiments were conducted on the public DEAP and DREAMER databases. On DEAP, the average accuracies reach 97.69% and 97.53% for valence and arousal, respectively; on DREAMER, the average accuracies reach 89.03%, 90.41%, and 89.89% for valence, arousal, and dominance, respectively. These results show that the proposed method exhibits higher accuracy than state-of-the-art methods.
|
46
|
Aldayel M, Ykhlef M, Al-Nafjan A. Recognition of Consumer Preference by Analysis and Classification EEG Signals. Front Hum Neurosci 2021; 14:604639. [PMID: 33519402 PMCID: PMC7838383 DOI: 10.3389/fnhum.2020.604639] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2020] [Accepted: 11/23/2020] [Indexed: 12/03/2022] Open
Abstract
Neuromarketing has gained attention as a bridge between conventional marketing studies and electroencephalography (EEG)-based brain-computer interface (BCI) research. It determines what customers actually want through preference prediction. The performance of EEG-based preference detection systems depends on a suitable selection of feature extraction techniques and machine learning algorithms. In this study, we examined preference detection on a neuromarketing dataset using different feature combinations of EEG indices and different algorithms for feature extraction and classification. For EEG feature extraction, we employed the discrete wavelet transform (DWT) and power spectral density (PSD), which were used to compute the EEG-based preference indices that enhance the accuracy of preference detection. Moreover, we compared deep learning with traditional classifiers such as k-nearest neighbors (KNN), support vector machine (SVM), and random forest (RF). We also studied the effect of preference indicators on the performance of the classification algorithms. Through rigorous offline analysis, we investigated computational-intelligence approaches to preference detection and classification. The proposed deep neural network (DNN) outperforms KNN and SVM in accuracy, precision, and recall; however, RF achieved results similar to those of the DNN on the same dataset.
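Both feature extractors named here are standard. A brief sketch, using SciPy's Welch estimator for band-limited PSD and a hand-rolled one-level Haar transform as a stand-in for whichever DWT family the authors actually used:

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, lo, hi):
    """Average Welch PSD of signal x in the [lo, hi] Hz band,
    e.g. alpha power = band_power(x, fs, 8, 13)."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 256))
    mask = (f >= lo) & (f <= hi)
    return pxx[mask].mean()

def haar_dwt(x):
    """One level of a Haar discrete wavelet transform:
    returns (approximation, detail) coefficient arrays.
    Deeper DWT levels just re-apply this to the approximation."""
    x = np.asarray(x, dtype=float)
    if len(x) % 2:
        x = x[:-1]                           # make length even
    a = (x[0::2] + x[1::2]) / np.sqrt(2)     # low-pass (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)     # high-pass (detail)
    return a, d
```

Band powers and wavelet-subband statistics (energy, entropy, etc.) are then stacked into the feature vector fed to the classifiers.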
Affiliation(s)
- Mashael Aldayel
- Information Technology Department, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia; Information System Department, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
- Mourad Ykhlef
- Information System Department, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
- Abeer Al-Nafjan
- Computer Science Department, College of Computer and Information Sciences, Imam Muhammad ibn Saud Islamic University, Riyadh, Saudi Arabia
|
47
|
Li W, Huan W, Hou B, Tian Y, Zhang Z, Song A. Can Emotion be Transferred? – A Review on Transfer Learning for EEG-Based Emotion Recognition. IEEE Trans Cogn Dev Syst 2021. [DOI: 10.1109/tcds.2021.3098842] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
48
|
Spezialetti M, Placidi G, Rossi S. Emotion Recognition for Human-Robot Interaction: Recent Advances and Future Perspectives. Front Robot AI 2020; 7:532279. [PMID: 33501307 PMCID: PMC7806093 DOI: 10.3389/frobt.2020.532279] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2020] [Accepted: 09/18/2020] [Indexed: 12/11/2022] Open
Abstract
A fascinating challenge in the field of human-robot interaction is the possibility to endow robots with emotional intelligence in order to make the interaction more intuitive, genuine, and natural. To achieve this, a critical point is the capability of the robot to infer and interpret human emotions. Emotion recognition has been widely explored in the broader fields of human-machine interaction and affective computing. Here, we report recent advances in emotion recognition, with particular regard to the human-robot interaction context. Our aim is to review the state of the art of currently adopted emotional models, interaction modalities, and classification strategies and offer our point of view on future developments and critical issues. We focus on facial expressions, body poses and kinematics, voice, brain activity, and peripheral physiological responses, also providing a list of available datasets containing data from these modalities.
Affiliation(s)
- Matteo Spezialetti
- PRISCA (Intelligent Robotics and Advanced Cognitive System Projects) Laboratory, Department of Electrical Engineering and Information Technology (DIETI), University of Naples Federico II, Naples, Italy
- Department of Information Engineering, Computer Science and Mathematics, University of L'Aquila, L'Aquila, Italy
- Giuseppe Placidi
- AVI (Acquisition, Analysis, Visualization & Imaging Laboratory) Laboratory, Department of Life, Health and Environmental Sciences (MESVA), University of L'Aquila, L'Aquila, Italy
- Silvia Rossi
- PRISCA (Intelligent Robotics and Advanced Cognitive System Projects) Laboratory, Department of Electrical Engineering and Information Technology (DIETI), University of Naples Federico II, Naples, Italy
|
49
|
Fraschini M, Meli M, Demuru M, Didaci L, Barberini L. EEG Fingerprints under Naturalistic Viewing Using a Portable Device. SENSORS (BASEL, SWITZERLAND) 2020; 20:E6565. [PMID: 33212929 PMCID: PMC7698321 DOI: 10.3390/s20226565] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/03/2020] [Revised: 11/13/2020] [Accepted: 11/16/2020] [Indexed: 06/11/2023]
Abstract
The electroencephalogram (EEG) has been proven to be a promising technique for personal identification and verification. Recently, the aperiodic component of the power spectrum was shown to outperform other commonly used EEG features. Beyond that, EEG characteristics may capture relevant features related to emotional states. In this work, we aim to understand whether the aperiodic component of the power spectrum, as previously shown for resting-state experimental paradigms, can also capture subject-specific EEG features under naturalistic stimuli. To answer this question, we performed an analysis using two freely available datasets containing EEG recordings from participants viewing film clips designed to trigger different emotional states. Our study confirms that the aperiodic component of the power spectrum, evaluated in terms of its offset and exponent parameters, can detect subject-specific features in the scalp EEG. In particular, our results show that system performance was significantly higher in the film-clip scenario than in resting state, suggesting that under naturalistic stimuli it is even easier to identify a subject. Consequently, we suggest a paradigm shift, from task-based or resting-state protocols to naturalistic stimuli, when assessing the performance of EEG-based biometric systems.
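The offset and exponent features of the aperiodic component can be illustrated with a straight-line fit to the power spectrum in log-log space. This is a deliberate simplification of specparam/FOOOF-style fitting (which additionally models oscillatory peaks), not the authors' pipeline, and the function name is ours:

```python
import numpy as np
from scipy.signal import welch

def aperiodic_fit(x, fs, f_lo=1.0, f_hi=40.0):
    """Estimate the aperiodic (1/f-like) offset and exponent of signal x
    by a linear fit in log-log space over [f_lo, f_hi] Hz:

        log10 P(f) ~ offset - exponent * log10 f

    Returns (offset, exponent), with the exponent as a positive number
    for a downward-sloping spectrum."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 1024))
    mask = (f >= f_lo) & (f <= f_hi)
    slope, offset = np.polyfit(np.log10(f[mask]), np.log10(pxx[mask]), 1)
    return offset, -slope
```

These two scalars per channel are exactly the kind of compact, subject-specific descriptor the study evaluates for biometric identification.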
Affiliation(s)
- Matteo Fraschini
- Department of Electrical and Electronic Engineering, University of Cagliari, 09123 Cagliari, Italy; (M.M.); (L.D.)
- Miro Meli
- Department of Electrical and Electronic Engineering, University of Cagliari, 09123 Cagliari, Italy
- Matteo Demuru
- Stichting Epilepsie Instellingen Nederland (SEIN), 2103SW Heemstede, The Netherlands
- Luca Didaci
- Department of Electrical and Electronic Engineering, University of Cagliari, 09123 Cagliari, Italy
- Luigi Barberini
- Department of Medical Sciences and Public Health, University of Cagliari, 09123 Cagliari, Italy
|
50
|
Puszta A, Pertich Á, Giricz Z, Nyujtó D, Bodosi B, Eördegh G, Nagy A. Predicting Stimulus Modality and Working Memory Load During Visual- and Audiovisual-Acquired Equivalence Learning. Front Hum Neurosci 2020; 14:569142. [PMID: 33132883 PMCID: PMC7578848 DOI: 10.3389/fnhum.2020.569142] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2020] [Accepted: 09/01/2020] [Indexed: 11/13/2022] Open
Abstract
Scholars have extensively studied the electroencephalography (EEG) correlates of associative working memory (WM) load. However, the effect of stimulus modality on EEG patterns within this process is less understood. To fill this gap, the present study re-analyzed EEG datasets recorded during visual and audiovisual equivalence-learning tasks from earlier studies. The number of associations to be maintained in WM (the WM load) was increased using the staircase method during the acquisition phase of the tasks. The support vector machine algorithm was employed to predict WM load and stimulus modality from the power, phase-connectivity, and cross-frequency coupling (CFC) values obtained during time segments with different WM loads in the visual and audiovisual tasks. Stimulus modality was predicted with high accuracy (>90%) from the power spectral density and from theta-beta CFC. In contrast, theta and alpha phase connectivity predicted WM load well (≥75% accuracy) but predicted stimulus modality only at chance level. Under low WM load, this connectivity was highest between the frontal and parieto-occipital channels. The results validate our findings from earlier studies that dissociated stimulus modality based on power spectra and CFC during equivalence learning, and emphasize the importance of alpha and theta frontoparietal connectivity in WM load.
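Band-limited phase connectivity of the kind analyzed here is commonly quantified with the phase-locking value (PLV); the abstract does not name the exact metric used, so the sketch below is one plausible choice, not necessarily the authors':

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(x, y, fs, lo, hi):
    """Phase-locking value between two channels in a frequency band,
    e.g. theta connectivity = plv(x, y, fs, 4, 8).

    PLV = |mean(exp(i * (phi_x - phi_y)))|, where the instantaneous
    phases phi come from the Hilbert transform of the band-passed
    signals; it ranges from 0 (no phase coupling) to 1 (perfect locking)."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    px = np.angle(hilbert(filtfilt(b, a, x)))
    py = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (px - py))))
```

Computing such values for every channel pair in the theta and alpha bands yields the connectivity features that, per the abstract, predicted WM load at ≥75% accuracy.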
Affiliation(s)
- András Puszta
- Department of Neuropsychology, Helgeland Hospital, Mosjøen, Norway; Department of Psychology, Faculty of Social Sciences, University of Oslo, Oslo, Norway; Department of Physiology, University of Szeged, Szeged, Hungary
- Ákos Pertich
- Department of Neuropsychology, Helgeland Hospital, Mosjøen, Norway
- Zsófia Giricz
- Department of Neuropsychology, Helgeland Hospital, Mosjøen, Norway
- Diána Nyujtó
- Department of Neuropsychology, Helgeland Hospital, Mosjøen, Norway
- Balázs Bodosi
- Department of Neuropsychology, Helgeland Hospital, Mosjøen, Norway
- Gabriella Eördegh
- Faculty of Health Sciences and Social Studies, University of Szeged, Szeged, Hungary
- Attila Nagy
- Department of Neuropsychology, Helgeland Hospital, Mosjøen, Norway
|