1. Wu Y, Tao C, Li Q. Fatigue Characterization of EEG Brain Networks Under Mixed Reality Stereo Vision. Brain Sci 2024;14:1126. [PMID: 39595889; PMCID: PMC11591834; DOI: 10.3390/brainsci14111126]
Abstract
Mixed Reality (MR) technology possesses profound and extensive potential across a multitude of domains, including, but not limited to, industry, healthcare, and education. However, prolonged use of MR devices to watch stereoscopic content may lead to visual fatigue. Since visual fatigue involves multiple brain regions, our study aims to explore the topological characteristics of brain networks derived from electroencephalogram (EEG) data. Because the Phase-Locked Value (PLV) is capable of effectively measuring the phase synchronization relationship between brain regions, it was calculated between all pairs of channels in both comfort and fatigue states. Subsequently, a sparse brain network was constructed based on PLV by applying an appropriate threshold. The node properties (betweenness centrality, clustering coefficient, node efficiency) and edge properties (characteristic path length) were calculated from the corresponding brain network within specific frequency bands for both comfort and fatigue states. In analyzing the PLV of brain connectivity in the comfort and fatigue states, a notable enhancement in brain connectivity is observed within the alpha, theta, and delta frequency bands during the fatigue state. Analysis of the node and edge properties of the brain networks shows that the mean values of these properties in the fatigue state were higher than those in the comfort state. To analyze the node and edge properties at a local level, the average difference in betweenness centrality, clustering coefficient, and nodal efficiency across the three EEG frequency bands was computed to identify significant brain regions. The main findings are as follows: Betweenness centrality differs primarily in the frontal and parietal regions, with minor involvement of the temporal and central regions. The clustering coefficient varies mainly in the frontal region, with slight differences seen in the temporal and occipital regions. Nodal efficiency varies primarily in the frontal, temporal, and central regions, with minor differences seen in the parietal and occipital regions. Edge property analysis indicates a higher occurrence of long-distance connections among brain regions during the fatigue state, which reflects a loss of synaptic transmission efficiency at the global level. Our study plays a crucial role in understanding the neural mechanisms underlying visual fatigue, potentially providing insights applicable to high-demand cognitive fields where prolonged use of MR devices leads to visual fatigue.
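As a concrete companion to the pipeline this abstract describes (PLV between all channel pairs, thresholding into a sparse network, then node and edge metrics), the following minimal Python sketch uses NumPy, SciPy, and NetworkX; the channel count, sparsity level, and random data are illustrative assumptions, not values taken from the paper.

```python
# Minimal illustration (not the authors' code): PLV between channel pairs,
# a density-thresholded sparse network, and the graph metrics named above.
import numpy as np
import networkx as nx
from scipy.signal import hilbert

def plv_matrix(band_data):
    """band_data: (n_channels, n_samples), already band-pass filtered."""
    phases = np.angle(hilbert(band_data, axis=1))      # instantaneous phase per channel
    n = band_data.shape[0]
    plv = np.ones((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dphi = phases[i] - phases[j]
            plv[i, j] = plv[j, i] = np.abs(np.mean(np.exp(1j * dphi)))
    return plv

def sparse_graph(plv, density=0.2):
    """Keep the strongest `density` fraction of connections (assumed sparsity)."""
    n = plv.shape[0]
    cut = np.quantile(plv[np.triu_indices(n, k=1)], 1 - density)
    adj = (plv >= cut).astype(int)
    np.fill_diagonal(adj, 0)
    return nx.from_numpy_array(adj)

def network_metrics(G):
    """Betweenness centrality, clustering coefficient, nodal efficiency, and
    characteristic path length (the latter only if the graph is connected)."""
    nodal_eff = {}
    for i in G.nodes:
        dists = nx.single_source_shortest_path_length(G, i)
        inv = [1.0 / d for node, d in dists.items() if node != i]
        nodal_eff[i] = sum(inv) / (G.number_of_nodes() - 1)
    metrics = {
        "betweenness": nx.betweenness_centrality(G),
        "clustering": nx.clustering(G),
        "nodal_efficiency": nodal_eff,
    }
    if nx.is_connected(G):
        metrics["char_path_length"] = nx.average_shortest_path_length(G)
    return metrics

rng = np.random.default_rng(0)
eeg = rng.standard_normal((32, 2000))                  # stand-in for one filtered epoch
metrics = network_metrics(sparse_graph(plv_matrix(eeg)))
print(metrics["nodal_efficiency"][0], metrics["betweenness"][0])
```

A proportional (density) threshold is used here simply because it guarantees the same number of edges in the comfort and fatigue networks; the paper's actual thresholding choice is not specified in the abstract.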
Affiliation(s)
- Yan Wu
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun 130022, China; (Y.W.); (C.T.)
- Jilin Provincial International Joint Research Center of Brain Informatics and Intelligence Science, Changchun 130022, China
- Laboratory of Brain Information and Neural Rehabilitation Engineering, Zhongshan Research Institute, Changchun University of Science and Technology, Zhongshan 528437, China
- Chunguang Tao
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun 130022, China; (Y.W.); (C.T.)
- Qi Li
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun 130022, China; (Y.W.); (C.T.)
- Jilin Provincial International Joint Research Center of Brain Informatics and Intelligence Science, Changchun 130022, China
- Laboratory of Brain Information and Neural Rehabilitation Engineering, Zhongshan Research Institute, Changchun University of Science and Technology, Zhongshan 528437, China
2. Gao C, Huang H, Zhan J, Li W, Li Y, Li J, Zhou J, Wang Y, Jiang Z, Chen W, Zhu Y, Zhuo Y, Wu K. Adaptive Changes in Neurovascular Properties With Binocular Accommodation Functions in Myopic Participants by 3D Visual Training: An EEG and fNIRS Study. IEEE Trans Neural Syst Rehabil Eng 2024;32:2749-2758. [PMID: 39074027; DOI: 10.1109/tnsre.2024.3434492]
Abstract
Although three-dimensional visual training (3DVT) has been used for myopia intervention, its neural mechanisms remain largely unknown. In this study, visual function was examined before and after 3DVT, while resting-state EEG-fNIRS signals were recorded from 38 myopic participants. Graph theoretical analysis was applied to compute neurovascular properties, including static brain networks (SBNs), dynamic brain networks (DBNs), and dynamic neurovascular coupling (DNC). Correlations between the changes in neurovascular properties and the changes in visual functions were calculated. After 3DVT, the local efficiency and node efficiency in the frontal lobes increased in the SBNs constructed from the EEG δ-band, while the global efficiency and node efficiency in the frontal-parietal lobes decreased in the variability of the DBNs constructed from the EEG δ-band. For the DNC constructed with the EEG α-band and oxyhemoglobin (HbO), the local efficiency decreased; for the EEG α-band and deoxyhemoglobin (HbR), the node efficiency in the frontal-occipital lobes decreased. For the SBNs constructed from HbO, the functional connectivity (FC) between the frontal-occipital lobes increased. The DNC between the FC of the frontal-parietal lobes from the EEG β-band and the FC of the frontal-occipital lobes from HbO increased, as did the DNC between the FC of the frontal-occipital lobes from the EEG β-band and the FC of the inter-frontal lobes from HbR. The neurovascular properties were significantly correlated with the amplitude of accommodation and accommodative facility. The results indicate the positive effects of 3DVT on myopic participants, including improved efficiency of brain networks, increased FC of SBNs and DNC, and enhanced binocular accommodation functions.
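To make the contrast between static networks and the variability of dynamic (sliding-window) networks more tangible, here is a generic sketch under simple assumptions (correlation-based connectivity, an arbitrary window length and density threshold, and simulated data); it is not the authors' EEG-fNIRS pipeline.

```python
# Illustrative sketch only: static vs. sliding-window functional connectivity,
# with global efficiency per window and its variability across windows.
import numpy as np
import networkx as nx

def fc_matrix(x):
    """Absolute Pearson correlation between rows of x (n_channels, n_samples)."""
    return np.abs(np.corrcoef(x))

def global_efficiency(fc, density=0.2):
    """Global efficiency of a density-thresholded binary graph."""
    n = fc.shape[0]
    cut = np.quantile(fc[np.triu_indices(n, k=1)], 1 - density)
    adj = (fc >= cut).astype(int)
    np.fill_diagonal(adj, 0)
    return nx.global_efficiency(nx.from_numpy_array(adj))

def dynamic_efficiency_variability(x, win=500, step=250, density=0.2):
    """Slide a window over the recording, compute global efficiency per window,
    and return its standard deviation as a simple variability measure
    (assumed definition, for illustration only)."""
    effs = [global_efficiency(fc_matrix(x[:, s:s + win]), density)
            for s in range(0, x.shape[1] - win + 1, step)]
    return np.std(effs)

rng = np.random.default_rng(1)
signal = rng.standard_normal((16, 3000))   # e.g. 16 channels of one filtered band
print("static efficiency:", global_efficiency(fc_matrix(signal)))
print("dynamic variability:", dynamic_efficiency_variability(signal))
```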
3. Zhang C, Su L, Li S, Fu Y. Differential Brain Activation for Four Emotions in VR-2D and VR-3D Modes. Brain Sci 2024;14:326. [PMID: 38671977; PMCID: PMC11048237; DOI: 10.3390/brainsci14040326]
Abstract
Similar to traditional imaging, virtual reality (VR) imagery encompasses nonstereoscopic (VR-2D) and stereoscopic (VR-3D) modes. Russell's emotional model has been extensively studied in the traditional 2D and VR-3D modes, but comparative research between the VR-2D and VR-3D modes remains limited. In this study, we investigate whether Russell's emotional model exhibits stronger brain activation states in VR-3D mode than in VR-2D mode. In an experiment covering four emotional categories (high arousal-high pleasure (HAHV), high arousal-low pleasure (HALV), low arousal-low pleasure (LALV), and low arousal-high pleasure (LAHV)), EEG signals were collected from 30 healthy undergraduate and graduate students while they watched videos in both VR modes. Power spectral density (PSD) computations revealed distinct brain activation patterns for the different emotional states across the two modes, with VR-3D videos inducing significantly higher brainwave energy, primarily in the frontal, temporal, and occipital regions. Differential entropy (DE) feature sets, selected via a dual ten-fold cross-validation Support Vector Machine (SVM) classifier, achieved satisfactory classification accuracy, which was particularly high in the VR-3D mode. The paper subsequently presents a deep learning-based EEG emotion recognition framework that exploits the frequency, spatial, and temporal information of EEG data to improve recognition accuracy. The contribution of each individual feature to the prediction probabilities is examined through machine-learning interpretability based on Shapley values. The study reveals notable differences in brain activation states for identical emotions between the two modes, with VR-3D mode showing more pronounced activation.
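For readers unfamiliar with the two feature families named above, a minimal sketch of Welch band power (PSD) and differential entropy (DE) per channel follows; the band boundaries, filter order, sampling rate, and simulated data are common illustrative choices rather than the paper's exact settings.

```python
# Illustrative sketch: Welch PSD band power and differential entropy (DE) per
# channel. For an approximately Gaussian band-limited signal,
# DE = 0.5 * ln(2 * pi * e * variance).
import numpy as np
from scipy.signal import welch, butter, filtfilt

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_psd_power(x, fs):
    """Mean Welch PSD within each band, per channel. x: (n_channels, n_samples)."""
    freqs, psd = welch(x, fs=fs, nperseg=fs * 2, axis=1)
    return {name: psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
            for name, (lo, hi) in BANDS.items()}

def band_de(x, fs):
    """Differential entropy per band and channel via band-pass filtering."""
    feats = {}
    for name, (lo, hi) in BANDS.items():
        b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
        xb = filtfilt(b, a, x, axis=1)
        feats[name] = 0.5 * np.log(2 * np.pi * np.e * np.var(xb, axis=1))
    return feats

rng = np.random.default_rng(2)
eeg = rng.standard_normal((30, 10 * 256))      # 30 channels, 10 s at 256 Hz
print(band_psd_power(eeg, fs=256)["alpha"].shape, band_de(eeg, fs=256)["gamma"][:3])
```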
Affiliation(s)
- Lei Su
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China; (C.Z.); (S.L.); (Y.F.)
4. Zhao X, Chen J, Chen T, Liu Y, Wang S, Zeng X, Yan J, Liu G. Micro-Expression Recognition Based on Nodal Efficiency in the EEG Functional Networks. IEEE Trans Neural Syst Rehabil Eng 2024;32:887-894. [PMID: 38190663; DOI: 10.1109/tnsre.2023.3347601]
Abstract
Micro-expression recognition based on images has made some progress, yet limitations persist. For instance, image-based recognition of micro-expressions is affected by factors such as ambient light, changes in head posture, and facial occlusion. The high temporal resolution of electroencephalography (EEG) can record brain activity associated with micro-expressions and identify them objectively from a neurophysiological standpoint. Accordingly, this study introduces a novel method for recognizing micro-expressions using node efficiency features of brain networks derived from EEG signals. We designed a real-time Supervision and Emotional Expression Suppression (SEES) experimental paradigm to collect video and EEG data reflecting micro- and macro-expression states from 70 participants experiencing positive emotions. By constructing functional brain networks based on graph theory, we analyzed the network efficiencies at both the macro and micro levels. The participants exhibited lower connection density, global efficiency, and nodal efficiency in the alpha, beta, and gamma networks during micro-expressions compared with macro-expressions. We then selected the optimal subset of nodal efficiency features using a random forest algorithm and applied them to various classifiers, including Support Vector Machine (SVM), Gradient-Boosted Decision Tree (GBDT), Logistic Regression (LR), Random Forest (RF), and eXtreme Gradient Boosting (XGBoost). These classifiers achieved promising accuracy in micro-expression recognition, with SVM exhibiting the highest accuracy of 92.6% when 15 channels were selected. This study provides a new neuroscientific indicator for recognizing micro-expressions based on EEG signals, thereby broadening the potential applications for micro-expression recognition.
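The feature-selection-plus-classification step described above can be illustrated with a short scikit-learn sketch: rank nodal-efficiency features with a random forest, keep the top subset, and compare several classifiers by cross-validation. The data, feature dimensions, and subset size are placeholders, and XGBoost is omitted because it lives in the separate xgboost package.

```python
# Illustrative sketch only (placeholder data, not the study's recordings):
# random-forest ranking of nodal-efficiency features, then classifier comparison.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.standard_normal((140, 30))     # placeholder: 140 trials x 30 nodal efficiencies
y = rng.integers(0, 2, 140)            # placeholder labels: micro vs. macro expression

# 1) rank features with a random forest and keep the top 15 ("channels")
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
top_k = np.argsort(rf.feature_importances_)[::-1][:15]

# 2) compare classifiers on the selected features with 5-fold cross-validation
classifiers = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
    "GBDT": GradientBoostingClassifier(),
    "LR": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "RF": RandomForestClassifier(n_estimators=300, random_state=0),
}
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X[:, top_k], y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.3f}")
```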
5. Xie J, Lan P, Wang S, Luo Y, Liu G. Brain Activation Differences of Six Basic Emotions Between 2D Screen and Virtual Reality Modalities. IEEE Trans Neural Syst Rehabil Eng 2023;31:700-709. [PMID: 37015689; DOI: 10.1109/tnsre.2022.3229389]
Abstract
To our knowledge, the six basic emotions proposed by Paul Ekman have been widely studied in the Screen-2D modality, whereas in the VR-3D modality only their positive and negative valence has been studied. In this study, we investigate whether the six basic emotions produce stronger brain activation states in the VR-3D modality than in the Screen-2D modality. We designed an emotion-inducing experiment with six basic emotions (happiness, surprise, sadness, fear, anger, and disgust) to record electroencephalogram (EEG) signals while participants watched VR-3D and Screen-2D videos. The power spectral density (PSD) was calculated to compare the brain activation differences between the VR-3D and Screen-2D modalities during the induction of the six basic emotions. Statistical analysis of the relative power differences between the VR-3D and Screen-2D modalities for each emotion revealed that happiness and surprise presented greater differences in the α and γ frequency bands, while sadness, fear, disgust, and anger all presented greater differences in the α and θ frequency bands, mainly in the frontal and occipital regions. In addition, all six emotions yielded satisfactory classification accuracy (above 85%) when the two modalities were classified from a subset of power features of the brain activation states for the same emotion. Overall, there are significant differences in the induction of the same discrete emotions in the VR-3D and Screen-2D modalities, with greater brain activation in the VR-3D modality. These findings provide a better understanding of the neural activity of discrete emotional tasks assessed in VR environments.
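A hedged sketch of the kind of band-power comparison underlying these results: relative band power per channel plus a paired test between the two modalities. The bands, sampling rate, trial lengths, and simulated data are assumptions for illustration only, not the study's protocol.

```python
# Illustrative sketch: relative band power and a paired t-test between two
# viewing modalities for one emotion; all data here are simulated.
import numpy as np
from scipy.signal import welch
from scipy.stats import ttest_rel

BANDS = {"theta": (4, 8), "alpha": (8, 13), "gamma": (30, 45)}

def relative_band_power(x, fs, band):
    """Band power divided by total 1-45 Hz power, per channel."""
    freqs, psd = welch(x, fs=fs, nperseg=fs * 2, axis=1)
    total = psd[:, (freqs >= 1) & (freqs < 45)].sum(axis=1)
    lo, hi = BANDS[band]
    return psd[:, (freqs >= lo) & (freqs < hi)].sum(axis=1) / total

rng = np.random.default_rng(4)
fs, n_ch, n_subj = 256, 30, 20
# one value per subject: mean relative alpha power over channels in each modality
vr3d = [relative_band_power(rng.standard_normal((n_ch, 20 * fs)), fs, "alpha").mean()
        for _ in range(n_subj)]
screen2d = [relative_band_power(rng.standard_normal((n_ch, 20 * fs)), fs, "alpha").mean()
            for _ in range(n_subj)]
t, p = ttest_rel(vr3d, screen2d)
print(f"paired t = {t:.2f}, p = {p:.3f}")
```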
6. Zhao X, Chen J, Chen T, Wang S, Liu Y, Zeng X, Liu G. Responses of functional brain networks in micro-expressions: An EEG study. Front Psychol 2022;13:996905. [DOI: 10.3389/fpsyg.2022.996905]
Abstract
Micro-expressions (MEs) can reflect an individual's subjective emotions and true mental state, and they are widely used in the fields of mental health, justice, law enforcement, intelligence, and security. However, one of the major challenges of working with MEs is that their neural mechanism is not entirely understood. To the best of our knowledge, the present study is the first to use electroencephalography (EEG) to investigate the reorganization of functional brain networks involved in MEs. We aimed to reveal the underlying neural mechanisms that can provide electrophysiological indicators for ME recognition. A real-time supervision and emotional expression suppression experimental paradigm was designed to collect video and EEG data of MEs and no expressions (NEs) from 70 participants expressing positive emotions. Based on graph theory, we analyzed the efficiency of the functional brain networks at the scalp level on both macro and micro scales. The results revealed that, in the presence of MEs compared with NEs, the participants exhibited higher global efficiency and nodal efficiency in the frontal, occipital, and temporal regions. Additionally, using the random forest algorithm to select a subset of functional connectivity features as input, the support vector machine classifier achieved a classification accuracy for MEs versus NEs of 0.81, with an area under the curve of 0.85. This finding demonstrates the possibility of using EEG to recognize MEs, with a wide range of application scenarios, such as persons wearing face masks or patients with expression disorders.
7. Daşdemir Y. Cognitive investigation on the effect of augmented reality-based reading on emotion classification performance: A new dataset. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103942]
8. Yu M, Xiao S, Tian F, Li Y. Frontal-occipital network alterations while viewing 2D & 3D movies: a source-level EEG and graph theory approach. BIOMED ENG-BIOMED TE 2022;67:161-172. [PMID: 35576610; DOI: 10.1515/bmt-2021-0300]
Abstract
Many researchers have measured the differences in electroencephalography (EEG) while viewing 2D and 3D movies to uncover the neural mechanisms underlying these distinct viewing experiences. Using whole-brain network analyses of scalp EEG, our previous study reported that the beta and gamma bands presented higher global efficiencies while viewing 3D movies. However, scalp EEG is influenced by volume conduction, which does not allow inference from a neuroanatomical perspective; thus, source reconstruction techniques are recommended. This paper is the first to measure the differences in the frontal-occipital networks in EEG source space during 2D and 3D movie viewing. EEG recordings from 40 subjects were performed during 2D and 3D movie viewing. We constructed frontal-occipital networks of the alpha, beta, and gamma bands in EEG source space and analyzed network efficiencies. We found that the beta band exhibited higher global efficiency during 3D movie viewing than during 2D movie viewing, whereas the difference in alpha global efficiency was not statistically significant. In addition, a support vector machine (SVM) classifier, taking functional connectivities as classification features, was built to identify whether the frontal-occipital networks contain patterns that can distinguish 2D from 3D movie viewing. Using the 6 most important functional connectivity features of the beta band, we obtained a best accuracy of 0.933. Our findings shed light on the neural mechanisms underlying the distinct experiences of viewing 2D and 3D movies.
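As an illustration of what "constructing a frontal-occipital network" can look like in practice, the sketch below extracts a frontal-occipital subnetwork from a full ROI-by-ROI connectivity matrix and compares its global efficiency between two conditions; the ROI labels, parcellation size, density threshold, and data are invented placeholders, not the source-reconstruction pipeline used in the paper.

```python
# Illustrative sketch: restrict a connectivity matrix to frontal and occipital
# ROIs and compute global efficiency per condition (2D vs. 3D viewing).
import numpy as np
import networkx as nx

ROI_LABELS = (["frontal_%d" % i for i in range(8)] +
              ["central_%d" % i for i in range(6)] +
              ["occipital_%d" % i for i in range(6)])
keep = [i for i, name in enumerate(ROI_LABELS)
        if name.startswith(("frontal", "occipital"))]       # frontal-occipital nodes

def frontal_occipital_efficiency(conn, density=0.2):
    """Global efficiency of the density-thresholded frontal-occipital subnetwork."""
    sub = conn[np.ix_(keep, keep)]
    cut = np.quantile(sub[np.triu_indices(len(keep), k=1)], 1 - density)
    adj = (sub >= cut).astype(int)
    np.fill_diagonal(adj, 0)
    return nx.global_efficiency(nx.from_numpy_array(adj))

rng = np.random.default_rng(6)
conn_2d = np.abs(np.corrcoef(rng.standard_normal((20, 1000))))   # placeholder beta-band FC
conn_3d = np.abs(np.corrcoef(rng.standard_normal((20, 1000))))
print("2D:", frontal_occipital_efficiency(conn_2d),
      "3D:", frontal_occipital_efficiency(conn_3d))
```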
Affiliation(s)
- Minchang Yu
- School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Shasha Xiao
- School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Feng Tian
- Shanghai Film Academy, Shanghai University, Shanghai, China
- Yingjie Li
- School of Life Sciences, College of International Education, Institute of Biomedical Engineering, Shanghai University, Shanghai, China
9. Where Is My Mind (Looking at)? A Study of the EEG–Visual Attention Relationship. INFORMATICS 2022. [DOI: 10.3390/informatics9010026]
Abstract
Visual attention estimation is an active field of research at the crossroads of several disciplines: computer vision, deep learning, and medicine. One of the most common approaches to estimating a saliency map that represents attention is based on the observed images. In this paper, we show that visual attention can also be retrieved from EEG recordings, with results comparable to traditional predictions from observed images, which is of great interest. Whereas image-based saliency estimation is participant-independent, estimation from EEG could take subject specificity into account. For this purpose, a set of signals was recorded, and different models were developed to study the relationship between visual attention and brain activity. The results are encouraging and comparable with those of other approaches that estimate attention from other modalities. Being able to predict a visual saliency map from EEG could help research studying the relationship between brain activity and visual attention. It could also help in various applications: vigilance assessment during driving, neuromarketing, and the diagnosis and treatment of visual attention-related diseases. For the sake of reproducibility, the code and dataset considered in this paper have been made publicly available to promote research in the field.
10. Yu M, Xiao S, Hua M, Wang H, Chen X, Tian F, Li Y. EEG-based emotion recognition in an immersive virtual reality environment: From local activity to brain network features. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103349]