1. He X, Li Y, Xiao X, Li Y, Fang J, Zhou R. Multi-level cognitive state classification of learners using complex brain networks and interpretable machine learning. Cogn Neurodyn 2025; 19:5. PMID: 39758356; PMCID: PMC11699182; DOI: 10.1007/s11571-024-10203-z.
Abstract
Identifying the cognitive state can help educators understand the evolving thought processes of learners and is important for promoting the development of higher-order thinking skills (HOTS). Cognitive neuroscience research identifies cognitive states by designing experimental tasks and recording electroencephalography (EEG) signals during task performance. However, most previous studies concentrated on extracting features from individual channels in single-type tasks, ignoring the interconnections across channels. In this study, three learning activities (video watching, keyword extracting, and essay creating) were designed based on a revised Bloom's taxonomy and the Interactive-Constructive-Active-Passive framework and carried out with 31 college students, whose EEG signals were recorded while they were engaged in these activities. First, whole-brain network temporal dynamics were characterized by EEG microstate sequence analysis; these dynamics depend on the learning activity and the corresponding functional brain systems. Subsequently, the phase locking value was used to construct synchrony-based functional brain networks, and network characteristics were extracted and fed into different machine learning classifiers: Support Vector Machine, K-Nearest Neighbour, Random Forest, and eXtreme Gradient Boosting (XGBoost). XGBoost showed the best performance in classifying cognitive states, with an accuracy of 88.07%. Furthermore, SHapley Additive exPlanations (SHAP) was adopted to reveal the connections between brain regions that contributed to the classification of cognitive state; SHAP analysis showed that connections in the frontal, temporal, and central regions are most important for the high cognitive state. Collectively, this study may provide further evidence for educators to design cognition-guided instructional activities to enhance learners' HOTS.
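The connectivity-plus-classifier pipeline described above can be sketched as follows: a phase locking value (PLV) matrix is computed per trial, its upper-triangle entries serve as network features, an XGBoost classifier is trained, and SHAP attributes the prediction to individual connections. This is a minimal illustration, not the authors' code; the channel count, frequency band, synthetic data, and hyperparameters are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from xgboost import XGBClassifier
import shap

def plv_matrix(eeg, fs, band=(8.0, 13.0)):
    """eeg: (n_channels, n_samples) -> symmetric PLV connectivity matrix."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase = np.angle(hilbert(filtfilt(b, a, eeg, axis=1), axis=1))
    n = eeg.shape[0]
    plv = np.ones((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            plv[i, j] = plv[j, i] = np.abs(np.mean(np.exp(1j * (phase[i] - phase[j]))))
    return plv

# Synthetic stand-in data: 120 trials, 32 channels, 2 s at 250 Hz, 3 cognitive states
rng = np.random.default_rng(0)
iu = np.triu_indices(32, k=1)                                   # upper-triangle connections
X = np.stack([plv_matrix(rng.standard_normal((32, 500)), fs=250)[iu] for _ in range(120)])
y = rng.integers(0, 3, size=120)

clf = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1).fit(X, y)
shap_values = shap.TreeExplainer(clf).shap_values(X)            # attribution per connection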
Affiliation(s)
- Xiuling He
- National Engineering Research Center of Educational Big Data, Central China Normal University, Luoyu Road, Wuhan, 430079 Hubei China
- National Engineering Research Center for E-Learning, Central China Normal University, Luoyu Road, Wuhan, 430079 Hubei China
- Yue Li
- National Engineering Research Center of Educational Big Data, Central China Normal University, Luoyu Road, Wuhan, 430079 Hubei China
- National Engineering Research Center for E-Learning, Central China Normal University, Luoyu Road, Wuhan, 430079 Hubei China
- Xiong Xiao
- National Engineering Research Center of Educational Big Data, Central China Normal University, Luoyu Road, Wuhan, 430079 Hubei China
- National Engineering Research Center for E-Learning, Central China Normal University, Luoyu Road, Wuhan, 430079 Hubei China
- Yingting Li
- National Engineering Research Center of Educational Big Data, Central China Normal University, Luoyu Road, Wuhan, 430079 Hubei China
- National Engineering Research Center for E-Learning, Central China Normal University, Luoyu Road, Wuhan, 430079 Hubei China
- Jing Fang
- National Engineering Research Center of Educational Big Data, Central China Normal University, Luoyu Road, Wuhan, 430079 Hubei China
- National Engineering Research Center for E-Learning, Central China Normal University, Luoyu Road, Wuhan, 430079 Hubei China
- Ruijie Zhou
- National Engineering Research Center of Educational Big Data, Central China Normal University, Luoyu Road, Wuhan, 430079 Hubei China
- National Engineering Research Center for E-Learning, Central China Normal University, Luoyu Road, Wuhan, 430079 Hubei China
2. He J, Huang Z, Li Y, Shi J, Chen Y, Jiang C, Feng J. Single-channel attention classification algorithm based on robust Kalman filtering and norm-constrained ELM. Front Hum Neurosci 2025; 18:1481493. PMID: 39850073; PMCID: PMC11755414; DOI: 10.3389/fnhum.2024.1481493.
Abstract
Introduction: Attention classification based on EEG signals is crucial for brain-computer interface (BCI) applications. However, noise interference and real-time signal fluctuations hinder accuracy, especially in portable single-channel devices. This study proposes a robust Kalman filtering method combined with a norm-constrained extreme learning machine (ELM) to address these challenges.
Methods: The proposed method integrates Discrete Wavelet Transformation (DWT) and Independent Component Analysis (ICA) for noise removal, followed by a robust Kalman filter enhanced with convex optimization to preserve critical EEG components. The norm-constrained ELM employs L1/L2 regularization to improve generalization and classification performance. Experimental data were collected using a Schulte Grid paradigm and TGAM sensors, along with publicly available datasets for validation.
Results: The robust Kalman filter demonstrated superior denoising performance, achieving an average AUC of 0.8167 and a maximum AUC of 0.8678 on self-collected datasets, and an average AUC of 0.8344 with a maximum of 0.8950 on public datasets. The method outperformed traditional Kalman filtering, LMS adaptive filtering, and TGAM's eSense algorithm in both noise reduction and attention classification accuracy.
Discussion: The study highlights the effectiveness of combining advanced signal processing and machine learning techniques to improve the robustness and generalization of EEG-based attention classification. Limitations include the small sample size and limited demographic diversity, suggesting future research should expand participant groups and explore broader applications, such as mental health monitoring and neurofeedback.
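As a rough illustration of the classifier component, the sketch below implements an extreme learning machine whose output weights are fitted under an L2 norm penalty (the ridge closed-form solution) on synthetic attention features. The paper's full pipeline (DWT/ICA denoising, the robust Kalman filter, and the combined L1/L2 constraint) is not reproduced; the class name, hyperparameters, and data are assumptions.

```python
import numpy as np

class L2ConstrainedELM:
    def __init__(self, n_hidden=100, C=1.0, seed=0):
        self.n_hidden, self.C = n_hidden, C
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # Fixed random projection followed by a tanh nonlinearity
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = self._hidden(X)
        T = np.eye(int(y.max()) + 1)[y]                          # one-hot targets
        # Ridge (L2-penalized) output weights: beta = (H^T H + I / C)^(-1) H^T T
        self.beta = np.linalg.solve(H.T @ H + np.eye(self.n_hidden) / self.C, H.T @ T)
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)

# Synthetic single-channel attention features: 200 epochs x 20 band-power descriptors
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 20))
y = (X[:, 0] + 0.5 * rng.standard_normal(200) > 0).astype(int)   # attentive vs. inattentive
model = L2ConstrainedELM(n_hidden=64, C=10.0).fit(X[:150], y[:150])
print("held-out accuracy:", (model.predict(X[150:]) == y[150:]).mean())
```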
Affiliation(s)
- Jing He
- School of Management, Guilin University of Aerospace Technology, Guilin, China
- Zijun Huang
- School of Electronic Engineering and Automation, Guilin University of Electronic Technology, Guilin, China
- Yunde Li
- Biomedical and Artificial Intelligence Laboratory, Guilin University of Aerospace Technology, Guilin, China
- Jiangfeng Shi
- School of Automation Science and Engineering, South China University of Technology, Guangzhou, China
- Yehang Chen
- Biomedical and Artificial Intelligence Laboratory, Guilin University of Aerospace Technology, Guilin, China
- Chengliang Jiang
- Biomedical and Artificial Intelligence Laboratory, Guilin University of Aerospace Technology, Guilin, China
- Jin Feng
- Student Affairs Office, Guilin Normal College, Guilin, China
3. Chen Y, Wang W, Yan S, Wang Y, Zheng X, Lv C. Application of Electroencephalography Sensors and Artificial Intelligence in Automated Language Teaching. Sensors (Basel) 2024; 24:6969. PMID: 39517865; PMCID: PMC11548684; DOI: 10.3390/s24216969.
Abstract
This study developed an automated teaching assessment system for language learning based on electroencephalography (EEG) and differential large language models (LLMs), aimed at enhancing language instruction effectiveness by monitoring learners' cognitive states in real time and personalizing teaching content accordingly. Through a detailed experimental design, the paper validated the system's application in various teaching tasks. The results indicate that the system exhibited high precision, recall, and accuracy in teaching-effectiveness tests. Specifically, the method integrating differential LLMs with the EEG fusion module achieved a precision of 0.96, recall of 0.95, accuracy of 0.96, and an F1-score of 0.95, outperforming other automated teaching models. Additionally, ablation experiments further confirmed the critical role of the EEG fusion module in enhancing teaching quality and accuracy, providing valuable data support and a theoretical basis for future improvements in teaching methods and system design.
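For reference, the evaluation metrics reported above (precision, recall, accuracy, F1-score) can be computed with scikit-learn as below; the labels and predictions shown are illustrative placeholders, not the study's data.

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Illustrative ground truth and predictions (placeholders)
y_true = [1, 0, 1, 1, 0, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 0, 1, 1, 0]

print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("accuracy: ", accuracy_score(y_true, y_pred))
print("F1-score: ", f1_score(y_true, y_pred))
```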
Affiliation(s)
- Chunli Lv
- China Agricultural University, Beijing 100083, China; (Y.C.); (W.W.); (S.Y.); (Y.W.); (X.Z.)
4. Rueda-Castro V, Azofeifa JD, Chacon J, Caratozzolo P. Bridging minds and machines in Industry 5.0: neurobiological approach. Front Hum Neurosci 2024; 18:1427512. PMID: 39257699; PMCID: PMC11384584; DOI: 10.3389/fnhum.2024.1427512.
Abstract
Introduction: In the transition from Industry 4.0 to the forthcoming Industry 5.0, this research explores the fusion of a humanistic view with technological developments to redefine Continuing Engineering Education (CEE). Industry 5.0 introduces concepts such as biomanufacturing and human-centricity, embodying the integration of sustainability and resiliency principles in CEE and thereby shaping upskilling and reskilling initiatives for the future workforce. Sophisticated concepts such as the Human-Machine Interface and the Brain-Computer Interface (BCI) form a conceptual bridge toward the approaching Fifth Industrial Revolution, allowing one to understand human beings and the impact of their biological development across diverse and changing workplace settings.
Methods: Our research builds on recent studies of Knowledge, Skills, and Abilities taxonomies, linking these elements with dynamic labor-market profiles. This work integrates a biometric perspective to conceptualize and describe how cognitive abilities could be represented by linking a neuropsychological test with a biometric assessment. We administered the brief Neuropsychological Battery in Spanish (Neuropsi Breve) while 15 engineering students wore the Emotiv Insight device, allowing EEG collection to measure performance metrics such as attention, stress, engagement, and excitement.
Results: The findings illustrate a methodology that provides a first approach to the cognitive abilities of engineering students from neurobiological and behavioral perspectives. Two profiles were extracted from the results. The first describes the Neuropsi test areas, the most common mistakes, and the performance ratings of the student sample. The second shows the interaction between the EEG and the Neuropsi test, capturing the engineering students' cognitive and emotional states based on biometric levels.
Discussion: The study demonstrates the potential of integrating neurobiological assessment into engineering education, highlighting a significant advance in addressing the skills requirements of Industry 5.0. The results suggest that a comprehensive understanding of students' cognitive abilities is attainable and that educational interventions can be adapted by combining neuropsychological approaches with EEG data collection. Future work should further refine these evaluation methods, explore their applicability in different engineering disciplines, and investigate their long-term impact on workforce preparation and performance.
Affiliation(s)
- Jose Daniel Azofeifa
- Institute for the Future of Education, Tecnologico de Monterrey, Monterrey, Mexico
- Julian Chacon
- School of Engineering and Sciences, Tecnologico de Monterrey, Mexico City, Mexico
- Patricia Caratozzolo
- Institute for the Future of Education, Tecnologico de Monterrey, Monterrey, Mexico
- School of Engineering and Sciences, Tecnologico de Monterrey, Mexico City, Mexico
5. Kang H, Yang R, Song R, Yang C, Wang W. An Approach of Query Audience's Attention in Virtual Speech. Sensors (Basel) 2024; 24:5363. PMID: 39205057; PMCID: PMC11359125; DOI: 10.3390/s24165363.
Abstract
Virtual speeches are a popular format for remote multi-user communication, but they lack eye contact. This paper proposes a method for evaluating online audience attention based on gaze tracking. Our approach uses only webcams to capture the audience's head posture, gaze time, and other features, providing a low-cost method for attention monitoring with reference value across multiple domains. We also propose a set of evaluation indices for the audience's degree of attention, compensating for the fact that the speaker cannot gauge the audience's concentration through eye contact during online speeches. We selected 96 students for a 20 min group simulation session and used Spearman's correlation coefficient to analyze the correlation between our evaluation indicators and concentration. The results showed that each evaluation index has a significant correlation with the degree of attention (p = 0.01), and all the students in the focused group met the thresholds set by our evaluation indicators, while the students in the non-focused group failed to reach them. During the simulation, eye movement data and EEG signals were measured synchronously for the second group of students; the EEG results were consistent with the system's evaluation and confirmed its accuracy.
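The statistical step described above, correlating each gaze-based evaluation index with concentration, can be reproduced in outline with SciPy's Spearman rank correlation; the index and concentration values below are illustrative assumptions, not the study's measurements.

```python
from scipy.stats import spearmanr

# Illustrative values: one gaze-based evaluation index and a concentration score per student
gaze_index = [0.82, 0.61, 0.93, 0.55, 0.78, 0.40, 0.88, 0.67]
concentration = [0.80, 0.58, 0.90, 0.50, 0.75, 0.35, 0.85, 0.70]

rho, p_value = spearmanr(gaze_index, concentration)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.4f}")
```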
Affiliation(s)
- Wenqing Wang
- School of Automation, Xi’an University of Posts and Telecommunications, Xi’an 710061, China; (H.K.); (R.Y.); (R.S.); (C.Y.)
6. Aldayel M, Al-Nafjan A. A comprehensive exploration of machine learning techniques for EEG-based anxiety detection. PeerJ Comput Sci 2024; 10:e1829. PMID: 38435618; PMCID: PMC10909191; DOI: 10.7717/peerj-cs.1829.
Abstract
The performance of electroencephalogram (EEG)-based systems depends on the proper choice of feature extraction and machine learning algorithms. This study highlights the significance of selecting appropriate feature extraction and machine learning algorithms for EEG-based anxiety detection. We explored different annotation/labeling, feature extraction, and classification algorithms. Two measurements, the Hamilton Anxiety Rating Scale (HAM-A) and the Self-Assessment Manikin (SAM), were used to label anxiety states. For EEG feature extraction, we employed the discrete wavelet transform (DWT) and power spectral density (PSD). To improve the accuracy of anxiety detection, we compared ensemble learning methods such as random forest (RF), AdaBoost bagging, and gradient bagging with conventional classification algorithms, including linear discriminant analysis (LDA), support vector machine (SVM), and k-nearest neighbor (KNN) classifiers. We also evaluated the performance of the classifiers under the different labeling schemes (SAM and HAM-A) and feature extraction algorithms (PSD and DWT). Our findings demonstrated that HAM-A labeling and DWT-based features consistently yielded superior results across all classifiers. Specifically, the RF classifier achieved the highest accuracy of 87.5%, followed by the AdaBoost bagging classifier with an accuracy of 79%. The RF classifier outperformed the other classifiers in terms of accuracy, precision, and recall.
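A compact sketch of the best-performing combination reported above, DWT features with a random forest classifier, is given below. The wavelet family, decomposition level, summary statistics, and synthetic epochs are assumptions rather than the study's exact configuration.

```python
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def dwt_features(signal, wavelet="db4", level=4):
    """Mean absolute value, standard deviation, and energy of each DWT sub-band."""
    feats = []
    for c in pywt.wavedec(signal, wavelet, level=level):
        feats.extend([np.mean(np.abs(c)), np.std(c), np.sum(c ** 2)])
    return np.array(feats)

rng = np.random.default_rng(0)
signals = rng.standard_normal((100, 512))      # 100 single-channel EEG epochs (placeholder)
labels = rng.integers(0, 2, 100)               # anxious vs. non-anxious (placeholder)
X = np.stack([dwt_features(s) for s in signals])

scores = cross_val_score(RandomForestClassifier(n_estimators=200), X, labels, cv=5)
print("cross-validated accuracy:", scores.mean())
```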
Affiliation(s)
- Mashael Aldayel
- Information Technology Department, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
- Abeer Al-Nafjan
- Computer Science Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia
7. Rizvi SMA, Buriro AB, Ahmed I, Memon AA. Analyzing neural activity under prolonged mask usage through EEG. Brain Res 2024; 1822:148624. PMID: 37838190; DOI: 10.1016/j.brainres.2023.148624.
Abstract
During the recent COVID-19 period, masks were compulsory at workplaces and institutions as a preventive measure against multiple viral diseases, including coronavirus disease (COVID-19). However, the effects of prolonged mask-wearing on human neural activity are not well known. This paper investigates the effect of prolonged mask usage on the human brain through electroencephalography (EEG), which acquires neural activity and translates it into comprehensible electrical signals. The performances of 10 human subjects with and without masks were assessed on a randomly patterned alphabet game. Besides EEG, the physiological parameters of oxygen saturation, heart rate, blood pressure, and body temperature were recorded. Spectral and statistical analyses were performed on the recorded measures, along with linear discriminant analysis (LDA) on the extracted spectral features. The mean EEG spectral power in the alpha, beta, and gamma sub-bands of the subjects with masks was lower than that of the subjects without masks. The task performances and the oxygen saturation levels of the two groups differed significantly (p < 0.05), whereas the blood pressure, body temperature, and heart rate of both groups were similar. Based on the LDA analysis, the occipital and frontal lobes exhibited the greatest variability in channel measurements: the O1 and O2 channels in the occipital lobe showed significant variations within the alpha band due to visual focus, while the F3, AF3, and F7 channels were discriminative within the beta and gamma bands due to the cognitively stimulating tasks. All other channels were observed to be non-discriminatory.
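The spectral analysis described above, per-channel band power in the alpha, beta, and gamma sub-bands followed by LDA, can be outlined as follows. The sampling rate, band edges, channel count, and synthetic recordings are assumptions, not the study's recording parameters.

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

BANDS = {"alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(eeg, fs=128):
    """eeg: (n_channels, n_samples) -> concatenated alpha/beta/gamma power per channel."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2, axis=1)
    df = freqs[1] - freqs[0]
    feats = []
    for lo, hi in BANDS.values():
        band = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, band].sum(axis=1) * df)      # integrated band power
    return np.concatenate(feats)

# Placeholder recordings: 40 segments of 14-channel EEG, 10 s at 128 Hz
rng = np.random.default_rng(1)
X = np.stack([band_powers(rng.standard_normal((14, 1280))) for _ in range(40)])
y = np.repeat([0, 1], 20)                                # 0 = without mask, 1 = with mask
lda = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", lda.score(X, y))
```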
Affiliation(s)
- Abdul Baseer Buriro
- Department of Electrical Engineering, Sukkur IBA University, 65200 Sukkur, Pakistan
- Irfan Ahmed
- Department of Electrical Engineering, Sukkur IBA University, 65200 Sukkur, Pakistan; Department of Electrical and Electronics Engineering, City University, Hong Kong.
- Abdul Aziz Memon
- Department of Electrical Engineering, Sukkur IBA University, 65200 Sukkur, Pakistan
8. Mortier S, Turkeš R, De Winne J, Van Ransbeeck W, Botteldooren D, Devos P, Latré S, Leman M, Verdonck T. Classification of Targets and Distractors in an Audiovisual Attention Task Based on Electroencephalography. Sensors (Basel) 2023; 23:9588. PMID: 38067961; PMCID: PMC10708631; DOI: 10.3390/s23239588.
Abstract
Within the broader context of improving interactions between artificial intelligence and humans, the question has arisen whether auditory and rhythmic support could increase attention to visual stimuli that do not stand out clearly from an information stream. To this end, we designed an experiment inspired by pip-and-pop but more appropriate for eliciting attention and P3a event-related potentials (ERPs). The aim of this study was to distinguish between targets and distractors based on the subject's electroencephalography (EEG) data. We achieved this objective by employing different machine learning (ML) methods for both individual-subject (IS) and cross-subject (CS) models. Finally, we investigated which EEG channels and time points the model used to make its predictions, using saliency maps. We were able to successfully perform the classification task for both the IS and CS scenarios, reaching classification accuracies of up to 76%. In accordance with the literature, the model primarily used the parieto-occipital electrodes between 200 ms and 300 ms after the stimulus to make its predictions. The findings from this research contribute to the development of more effective P300-based brain-computer interfaces. Furthermore, they validate the EEG data collected in our experiment.
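The distinction between individual-subject (IS) and cross-subject (CS) models comes down to how the data are split for evaluation: CS folds must keep each subject's epochs together. The sketch below illustrates both settings with a generic classifier on placeholder features; the feature dimensions, subject counts, and model choice are assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(2)
X = rng.standard_normal((300, 32))             # placeholder epoch features (e.g., ERP amplitudes)
y = rng.integers(0, 2, 300)                    # target vs. distractor labels
subjects = np.repeat(np.arange(10), 30)        # 10 subjects, 30 epochs each

clf = LogisticRegression(max_iter=1000)

# Cross-subject (CS): folds never mix epochs from the same subject
cs_scores = cross_val_score(clf, X, y, groups=subjects, cv=GroupKFold(n_splits=5))
print("cross-subject accuracy:", cs_scores.mean())

# Individual-subject (IS): fit and evaluate within a single subject's epochs
mask = subjects == 0
is_scores = cross_val_score(clf, X[mask], y[mask], cv=5)
print("individual-subject accuracy (subject 0):", is_scores.mean())
```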
Affiliation(s)
- Steven Mortier
- IDLab—Department of Computer Science, University of Antwerp—imec, Sint-Pietersvliet 7, 2000 Antwerp, Belgium; (R.T.); (S.L.)
- Renata Turkeš
- IDLab—Department of Computer Science, University of Antwerp—imec, Sint-Pietersvliet 7, 2000 Antwerp, Belgium; (R.T.); (S.L.)
- Jorg De Winne
- WAVES Research Group, Department of Information Technology, Ghent University, 4 Technologiepark 126, Zwijnaarde, 9052 Ghent, Belgium; (J.D.W.); (W.V.R.); (D.B.); (P.D.)
- Department of Art, Music and Theater Studies, Institute for Psychoacoustics and Electronic Music (IPEM), Ghent University, 9000 Ghent, Belgium;
- Wannes Van Ransbeeck
- WAVES Research Group, Department of Information Technology, Ghent University, 4 Technologiepark 126, Zwijnaarde, 9052 Ghent, Belgium; (J.D.W.); (W.V.R.); (D.B.); (P.D.)
- Dick Botteldooren
- WAVES Research Group, Department of Information Technology, Ghent University, 4 Technologiepark 126, Zwijnaarde, 9052 Ghent, Belgium; (J.D.W.); (W.V.R.); (D.B.); (P.D.)
- Paul Devos
- WAVES Research Group, Department of Information Technology, Ghent University, 4 Technologiepark 126, Zwijnaarde, 9052 Ghent, Belgium; (J.D.W.); (W.V.R.); (D.B.); (P.D.)
- Steven Latré
- IDLab—Department of Computer Science, University of Antwerp—imec, Sint-Pietersvliet 7, 2000 Antwerp, Belgium; (R.T.); (S.L.)
- Marc Leman
- Department of Art, Music and Theater Studies, Institute for Psychoacoustics and Electronic Music (IPEM), Ghent University, 9000 Ghent, Belgium;
- Tim Verdonck
- Department of Mathematics, University of Antwerp—imec, Middelheimlaan 1, 2000 Antwerp, Belgium;
9. Peksa J, Mamchur D. State-of-the-Art on Brain-Computer Interface Technology. Sensors (Basel) 2023; 23:6001. PMID: 37447849; DOI: 10.3390/s23136001.
Abstract
This paper provides a comprehensive overview of the state-of-the-art in brain-computer interfaces (BCI). It begins by providing an introduction to BCIs, describing their main operation principles and most widely used platforms. The paper then examines the various components of a BCI system, such as hardware, software, and signal processing algorithms. Finally, it looks at current trends in research related to BCI use for medical, educational, and other purposes, as well as potential future applications of this technology. The paper concludes by highlighting some key challenges that still need to be addressed before widespread adoption can occur. By presenting an up-to-date assessment of the state-of-the-art in BCI technology, this paper will provide valuable insight into where this field is heading in terms of progress and innovation.
Affiliation(s)
- Janis Peksa
- Department of Information Technologies, Turiba University, Graudu Street 68, LV-1058 Riga, Latvia
- Institute of Information Technology, Riga Technical University, Kalku Street 1, LV-1658 Riga, Latvia
- Dmytro Mamchur
- Department of Information Technologies, Turiba University, Graudu Street 68, LV-1058 Riga, Latvia
- Computer Engineering and Electronics Department, Kremenchuk Mykhailo Ostrohradskyi National University, Pershotravneva 20, 39600 Kremenchuk, Ukraine
10. EEG-Based Empathic Safe Cobot. Machines 2022. DOI: 10.3390/machines10080603.
Abstract
An empathic collaborative robot (cobot) was realized through the transmission of fear from a human agent to a robot agent. Such empathy was induced through an electroencephalographic (EEG) sensor worn by the human agent, thus realizing an empathic safe brain-computer interface (BCI). The empathic safe cobot reacts to the fear and in turn transmits it to the human agent, forming a social circle of empathy and safety. A first randomized, controlled experiment involved two groups of 50 healthy subjects (100 subjects in total) to measure the EEG signal in the presence or absence of a frightening event. A second randomized, controlled experiment on two groups of 50 different healthy subjects (100 subjects in total) exposed the subjects to comfortable and uncomfortable movements of the cobot while their EEG signal was acquired. A spike in the subjects' EEG signal was observed in the presence of uncomfortable movement. Questionnaires distributed to the subjects confirmed the results of the EEG measurements. In the controlled laboratory setting, all experiments were found to be statistically significant. In the first experiment, the peak EEG signal measured just after the activating event was greater than the resting EEG signal (p < 10⁻³). In the second experiment, the peak EEG signal measured just after the uncomfortable movement of the cobot was greater than the EEG signal measured under conditions of comfortable movement (p < 10⁻³). In conclusion, within the isolated and constrained experimental environment, the results were satisfactory.
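A paired comparison of the kind reported above (post-event peak EEG amplitude versus a resting baseline) could be run as in the sketch below. The choice of the Wilcoxon signed-rank test and the synthetic amplitudes are assumptions; the article does not specify its exact test.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(3)
resting_peak = rng.normal(10.0, 1.5, size=50)               # baseline peak amplitude (arbitrary units)
event_peak = resting_peak + rng.normal(3.0, 1.0, size=50)   # elevated peak after the frightening event

stat, p = wilcoxon(event_peak, resting_peak, alternative="greater")
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.2e}")
```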