1
Šola HM, Khawaja S, Qureshi FH. Neuroscientific Analysis of Logo Design: Implications for Luxury Brand Marketing. Behav Sci (Basel) 2025; 15:502. [PMID: 40282124 PMCID: PMC12024241 DOI: 10.3390/bs15040502]
Abstract
This study examines the influence of dynamic and verbal elements in logo design on consumer behaviour in the luxury retail sector using advanced neuroscience technology (Predict v.1.0) and traditional cognitive survey methods. AI-powered eye tracking (n = 255,000), EEG technology (n = 45,000), implicit testing (n = 9000), and memory testing (n = 7000) were used to predict human behaviour. Qualitative cognitive surveys (n = 297), saliency map analysis, and emotional response evaluation were employed to analyse three distinct logo designs. The results indicate that logos with prominent dynamic elements, particularly visually distinct icons, capture and maintain viewer attention better than static designs. A strong correlation was found between cognitive demand and engagement, suggesting that dynamic elements enhance emotional connections and brand recall. However, the effectiveness of dynamic features varied, with more pronounced elements yielding better results for industry associations and premium market alignment. By combining advanced neuroscience technology with traditional cognitive survey methods, this study makes significant contributions to the field and opens new avenues for research and application. The findings offer luxury brand managers valuable insights for optimising logo designs to strengthen emotional connection and brand perception, and provide academia with powerful tools for understanding and predicting human responses to visual stimuli.
Affiliation(s)
- Hedda Martina Šola
- Oxford Business College (SK Research), Macclesfield House, New Road, Oxford OX1 1BY, UK
- Institute for Neuromarketing & Intellectual Property, Jurja Ves III spur no 4, 10000 Zagreb, Croatia
- Sarwar Khawaja
- Oxford Business College (SK Research), Macclesfield House, New Road, Oxford OX1 1BY, UK
- Fayyaz Hussain Qureshi
- Oxford Business College (SK Research), Macclesfield House, New Road, Oxford OX1 1BY, UK
2
Dekleva BM, Collinger JL. Using transient, effector-specific neural responses to gate decoding for brain-computer interfaces. J Neural Eng 2025; 22:016036. [PMID: 39808922 DOI: 10.1088/1741-2552/adaa1f]
Abstract
Objective. Real-world implementation of brain-computer interfaces (BCIs) for continuous control of devices should ideally rely on fully asynchronous decoding approaches. That is, the decoding algorithm should continuously update its output by estimating the user's intended actions from real-time neural activity, without the need for any temporal alignment to an external cue. This kind of open-ended temporal flexibility is necessary to achieve naturalistic and intuitive control. However, the relation between cortical activity and behavior is not stationary: neural responses that appear related to a certain aspect of behavior (e.g. grasp force) in one context will exhibit a relationship to something else in another context (e.g. reach speed). This presents a challenge for generalizable decoding, since the applicability of a decoder for a given parameter changes over time. Approach. We developed a method to simplify the problem of continuous decoding that uses transient, end effector-specific neural responses to identify periods of relevant effector engagement. Specifically, we use transient responses in the population activity observed at the onset and offset of all hand-related actions to signal the applicability of hand-related feature decoders (e.g. digit movement or force). With this transient-based gating approach, specific feature decoding models can be simpler (owing to local linearities) and are less sensitive to cross-effector interference, such as that arising from combined reaching and grasping actions. Main results. The transient-based decoding approach enabled high-quality online decoding of grasp force and individual finger control in multiple behavioral paradigms. The benefits of the gated approach are most evident in tasks that require both hand and arm control, for which standard continuous decoding approaches exhibit high output variability. Significance. The approach proposed here addresses the challenge of decoder generalization across contexts. By limiting decoding to identified periods of effector engagement, this approach can support reliable BCI control in real-world applications. Clinical Trial ID: NCT01894802.
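To make the gating idea concrete, here is a minimal sketch: one classifier detects the hand-engagement state from population activity, and a simple linear force decoder is applied only while the hand is flagged as engaged. The synthetic data, model choices (logistic-regression gate, ridge force decoder), and threshold are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of transient-gated decoding (not the paper's code).
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

rng = np.random.default_rng(0)
T, n_units = 2000, 96                                # time bins x recorded units (hypothetical)
rates = rng.poisson(5.0, size=(T, n_units)).astype(float)
engaged = np.zeros(T, dtype=int)
engaged[400:900] = 1
engaged[1300:1700] = 1                               # two hand-engagement epochs
force = np.where(engaged == 1, rng.uniform(0.2, 1.0, T), 0.0)

# Give the synthetic data some structure: engagement and force modulate a few units.
rates[engaged == 1, :10] += 3.0
rates[:, 10:20] += 4.0 * force[:, None]

# 1) Gate model: predict hand engagement from population activity.
gate = LogisticRegression(max_iter=1000).fit(rates, engaged)

# 2) Feature decoder: fit grasp force only on engaged bins, so the model can
#    stay simple (locally linear) within that context.
mask = engaged == 1
force_decoder = Ridge(alpha=1.0).fit(rates[mask], force[mask])

# 3) Online-style use: decode force only when the gate says the hand is engaged.
p_engaged = gate.predict_proba(rates)[:, 1]
decoded = np.where(p_engaged > 0.5, force_decoder.predict(rates), 0.0)
print(decoded.shape)
```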
Affiliation(s)
- Brian M Dekleva
- Rehab Neural Engineering Labs, University of Pittsburgh, Pittsburgh, PA, United States of America
- Physical Medicine and Rehabilitation, University of Pittsburgh, Pittsburgh, PA, United States of America
- Center for the Neural Basis of Cognition, Pittsburgh, PA, United States of America
- Jennifer L Collinger
- Rehab Neural Engineering Labs, University of Pittsburgh, Pittsburgh, PA, United States of America
- Physical Medicine and Rehabilitation, University of Pittsburgh, Pittsburgh, PA, United States of America
- Bioengineering, University of Pittsburgh, Pittsburgh, PA, United States of America
- Center for the Neural Basis of Cognition, Pittsburgh, PA, United States of America
- Biomedical Engineering Department, Carnegie Mellon University, Pittsburgh, PA, United States of America
3
Hameed I, Khan DM, Ahmed SM, Aftab SS, Fazal H. Enhancing motor imagery EEG signal decoding through machine learning: A systematic review of recent progress. Comput Biol Med 2025; 185:109534. [PMID: 39672015 DOI: 10.1016/j.compbiomed.2024.109534]
Abstract
This systematic literature review explores the intersection of neuroscience and deep learning in the context of decoding motor imagery electroencephalogram (EEG) signals to enhance the quality of life of individuals with motor disabilities. EEG is currently the most widely used non-invasive method for measuring brain activity, owing to its high temporal resolution, user-friendliness, and safety. A brain-computer interface (BCI) framework built on these signals can provide a new communication channel for people suffering from motor disabilities or other neurological disorders. However, implementing EEG-based BCI systems for motor imagery recognition in real-world scenarios presents challenges, primarily due to the inherent variability among individuals and the low signal-to-noise ratio (SNR) of EEG signals. To assist researchers in navigating this complex problem, this comprehensive review summarizes the key findings of relevant studies published since 2017, focusing on the datasets, preprocessing methods, feature extraction techniques, and deep learning models employed. It aims to provide valuable insights and serve as a resource for researchers, practitioners, and enthusiasts interested in the combination of neuroscience and deep learning, and ultimately to contribute to advancements that bridge the gap between the human mind and machine interfaces.
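For readers new to the area, the sketch below illustrates the kind of baseline MI-EEG decoding pipeline that the reviewed deep learning methods are typically compared against: band-pass filtering to the sensorimotor band, common spatial patterns (CSP) for feature extraction, and a linear classifier. It assumes MNE-Python and scikit-learn are available and uses synthetic data; the band edges and array shapes are placeholders.

```python
# Minimal baseline MI-EEG pipeline sketch (band-pass -> CSP -> LDA), illustrative only.
import numpy as np
from scipy.signal import butter, filtfilt
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs = 250                                           # sampling rate (Hz), assumed
X = rng.standard_normal((100, 22, 2 * fs))         # epochs x channels x samples (synthetic)
y = rng.integers(0, 2, 100)                        # e.g. left- vs right-hand imagery labels

# Band-pass to 8-30 Hz, where sensorimotor rhythms carry MI information.
b, a = butter(4, [8, 30], btype="bandpass", fs=fs)
X_filt = filtfilt(b, a, X, axis=-1)

# CSP spatial filtering followed by linear discriminant analysis.
clf = Pipeline([
    ("csp", CSP(n_components=4, log=True)),
    ("lda", LinearDiscriminantAnalysis()),
])
scores = cross_val_score(clf, X_filt, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")    # ~chance on random data
```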
Affiliation(s)
- Ibtehaaj Hameed
- Department of Telecommunications Engineering, NED University of Engineering and Technology, Karachi, Pakistan
- Danish M Khan
- Department of Computing and Information Systems, School of Engineering and Technology, Sunway University, Petaling Jaya, Selangor, 47500, Malaysia
- Syed Muneeb Ahmed
- Department of Telecommunications Engineering, NED University of Engineering and Technology, Karachi, Pakistan
- Syed Sabeeh Aftab
- Department of Telecommunications Engineering, NED University of Engineering and Technology, Karachi, Pakistan
- Hammad Fazal
- Department of Telecommunications Engineering, NED University of Engineering and Technology, Karachi, Pakistan
4
Zhu L, Wang Y, Huang A, Tan X, Zhang J. A multi-branch, multi-scale, and multi-view CNN with lightweight temporal attention mechanism for EEG-based motor imagery decoding. Comput Methods Biomech Biomed Engin 2025:1-15. [PMID: 39760422 DOI: 10.1080/10255842.2024.2448576]
Abstract
Convolutional neural networks (CNNs) have been widely utilized for decoding motor imagery (MI) from electroencephalogram (EEG) signals. However, extracting discriminative spatial-temporal-spectral features from low signal-to-noise ratio EEG signals remains challenging. This paper proposes MBMSNet, a multi-branch, multi-scale, and multi-view CNN with a lightweight temporal attention mechanism for EEG-based MI decoding. Specifically, MBMSNet first extracts multi-view representations from raw EEG signals, followed by independent branches to capture spatial, spectral, temporal-spatial, and temporal-spectral features. Each branch includes a domain-specific convolutional layer, a variance layer, and a temporal attention layer. Finally, the features derived from each branch are concatenated with weights and classified through a fully connected layer. Experiments demonstrate that MBMSNet outperforms state-of-the-art models, achieving accuracies of 84.60% on BCI Competition IV 2a, 87.80% on 2b, and 74.58% on OpenBMI, showcasing its potential for robust BCI applications.
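The sketch below illustrates the general pattern this abstract describes (parallel branches over different views of the EEG, a lightweight temporal attention layer per branch, and weighted concatenation feeding a fully connected classifier) in PyTorch. It is a toy two-branch model with assumed layer sizes, not the authors' MBMSNet.

```python
# Toy multi-branch CNN with temporal attention (illustrative, not MBMSNet).
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    """Scores each time step and returns an attention-weighted summary."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x):            # x: (batch, time, dim)
        w = torch.softmax(self.score(x), dim=1)
        return (w * x).sum(dim=1)    # (batch, dim)

class TwoBranchMINet(nn.Module):
    def __init__(self, n_channels=22, n_classes=4):
        super().__init__()
        # Branch 1: temporal-spatial view (short temporal kernel).
        self.branch1 = nn.Sequential(
            nn.Conv2d(1, 16, (1, 25), padding=(0, 12)),
            nn.Conv2d(16, 16, (n_channels, 1)), nn.BatchNorm2d(16), nn.ELU(),
            nn.AvgPool2d((1, 8)),
        )
        # Branch 2: temporal-spectral view (longer temporal kernel).
        self.branch2 = nn.Sequential(
            nn.Conv2d(1, 16, (1, 75), padding=(0, 37)),
            nn.Conv2d(16, 16, (n_channels, 1)), nn.BatchNorm2d(16), nn.ELU(),
            nn.AvgPool2d((1, 8)),
        )
        self.att1, self.att2 = TemporalAttention(16), TemporalAttention(16)
        self.branch_weights = nn.Parameter(torch.ones(2))  # learned fusion weights
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):            # x: (batch, 1, channels, samples)
        f1 = self.att1(self.branch1(x).squeeze(2).transpose(1, 2))
        f2 = self.att2(self.branch2(x).squeeze(2).transpose(1, 2))
        w = torch.softmax(self.branch_weights, dim=0)
        return self.fc(torch.cat([w[0] * f1, w[1] * f2], dim=1))

logits = TwoBranchMINet()(torch.randn(8, 1, 22, 500))
print(logits.shape)                  # torch.Size([8, 4])
```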
Affiliation(s)
- Lei Zhu
- The School of Automation, Hangzhou Dianzi University, Hangzhou, China
- Yunsheng Wang
- The School of Automation, Hangzhou Dianzi University, Hangzhou, China
- Aiai Huang
- The School of Automation, Hangzhou Dianzi University, Hangzhou, China
- Xufei Tan
- The School of Medicine, Hangzhou City University, Hangzhou, China
- Jianhai Zhang
- The School of Computer Science, Hangzhou Dianzi University, Hangzhou, China
5
An S, Kim S, Chikontwe P, Park SH. Dual Attention Relation Network With Fine-Tuning for Few-Shot EEG Motor Imagery Classification. IEEE Trans Neural Netw Learn Syst 2024; 35:15479-15493. [PMID: 37379192 DOI: 10.1109/tnnls.2023.3287181]
Abstract
Recently, motor imagery (MI) electroencephalography (EEG) classification techniques using deep learning have shown improved performance over conventional techniques. However, improving the classification accuracy on unseen subjects is still challenging due to intersubject variability, scarcity of labeled unseen subject data, and low signal-to-noise ratio (SNR). In this context, we propose a novel two-way few-shot network able to efficiently learn how to learn representative features of unseen subject categories and classify them with limited MI EEG data. The pipeline includes an embedding module that learns feature representations from a set of signals, a temporal-attention module to emphasize important temporal features, an aggregation-attention module for key support signal discovery, and a relation module for final classification based on relation scores between a support set and a query signal. In addition to the unified learning of feature similarity and a few-shot classifier, our method can emphasize informative features in support data relevant to the query, which generalizes better on unseen subjects. Furthermore, we propose to fine-tune the model before testing by arbitrarily sampling a query signal from the provided support set to adapt to the distribution of the unseen subject. We evaluate our proposed method with three different embedding modules on cross-subject and cross-dataset classification tasks using brain-computer interface (BCI) competition IV 2a, 2b, and GIST datasets. Extensive experiments show that our model significantly improves over the baselines and outperforms existing few-shot approaches.
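The following PyTorch sketch shows the relation-network skeleton this kind of few-shot method builds on: embed the support and query trials, aggregate the support set into per-class prototypes, and score query-class pairs with a small relation module. The embedding network, layer sizes, and aggregation (simple averaging rather than the paper's attention modules) are illustrative assumptions.

```python
# Compact relation-network sketch for few-shot MI classification (illustrative only).
import torch
import torch.nn as nn

class Embedder(nn.Module):
    def __init__(self, n_channels=3, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 32, 25, stride=4), nn.ELU(),
            nn.Conv1d(32, dim, 11, stride=4), nn.ELU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )

    def forward(self, x):                 # x: (n, channels, samples)
        return self.net(x)                # (n, dim)

class RelationHead(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, proto, query):      # proto: (n_way, dim), query: (dim,)
        pairs = torch.cat([proto, query.expand_as(proto)], dim=1)
        return self.net(pairs).squeeze(-1)   # one relation score per class

embed, relate = Embedder(), RelationHead()
n_way, k_shot = 2, 5
support = torch.randn(n_way * k_shot, 3, 1000)   # k_shot trials per class (synthetic)
labels = torch.arange(n_way).repeat_interleave(k_shot)
query = torch.randn(1, 3, 1000)

z_support = embed(support)
prototypes = torch.stack([z_support[labels == c].mean(0) for c in range(n_way)])
scores = relate(prototypes, embed(query)[0])
print(scores.shape, scores.argmax().item())      # predicted class of the query
```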
6
Wimpff M, Gizzi L, Zerfowski J, Yang B. EEG motor imagery decoding: a framework for comparative analysis with channel attention mechanisms. J Neural Eng 2024; 21:036020. [PMID: 38718788 DOI: 10.1088/1741-2552/ad48b9]
Abstract
Objective. The objective of this study is to investigate the application of various channel attention mechanisms within the domain of brain-computer interfaces (BCIs) for motor imagery decoding. Channel attention mechanisms can be seen as a powerful evolution of the spatial filters traditionally used for motor imagery decoding. This study systematically compares such mechanisms by integrating them into a lightweight architecture framework to evaluate their impact. Approach. We carefully construct a straightforward and lightweight baseline architecture designed to seamlessly integrate different channel attention mechanisms. This contrasts with previous works, which typically investigate only one attention mechanism and build very complex, sometimes nested architectures. Our framework allows us to evaluate and compare the impact of different attention mechanisms under the same circumstances. The easy integration of different channel attention mechanisms, together with the low computational complexity, enables us to conduct a wide range of experiments on four datasets to thoroughly assess the effectiveness of the baseline model and the attention mechanisms. Results. Our experiments demonstrate the strength and generalizability of our architecture framework, as well as how channel attention mechanisms can improve performance while maintaining the small memory footprint and low computational complexity of our baseline architecture. Significance. Our architecture emphasizes simplicity, offering easy integration of channel attention mechanisms while maintaining a high degree of generalizability across datasets, making it a versatile and efficient solution for electroencephalogram motor imagery decoding within BCIs.
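As an illustration of what a channel attention mechanism looks like in this setting, the sketch below applies a squeeze-and-excitation style re-weighting to EEG electrode channels. It is a generic example with assumed sizes, not one of the specific mechanisms benchmarked in the paper.

```python
# Generic squeeze-and-excitation style channel attention for EEG (illustrative only).
import torch
import torch.nn as nn

class EEGChannelAttention(nn.Module):
    """Re-weights EEG channels based on their globally pooled activity."""
    def __init__(self, n_channels=22, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(n_channels, n_channels // reduction), nn.ReLU(),
            nn.Linear(n_channels // reduction, n_channels), nn.Sigmoid(),
        )

    def forward(self, x):                    # x: (batch, channels, samples)
        squeeze = x.pow(2).mean(dim=-1)      # per-channel power as the "squeeze"
        weights = self.fc(squeeze)           # (batch, channels) in [0, 1]
        return x * weights.unsqueeze(-1)     # excite: scale each channel

x = torch.randn(8, 22, 500)
print(EEGChannelAttention(22)(x).shape)      # torch.Size([8, 22, 500])
```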
Affiliation(s)
- Martin Wimpff
- Institute of Signal Processing and System Theory, University of Stuttgart, Stuttgart, Germany
- Leonardo Gizzi
- Fraunhofer Institute for Manufacturing Engineering and Automation IPA, Stuttgart, Germany
- Jan Zerfowski
- Clinical Neurotechnology Laboratory, Department of Psychiatry and Neurosciences, Charité Campus Mitte (CCM), Charité-Universitätsmedizin Berlin, Berlin, Germany
- Bin Yang
- Institute of Signal Processing and System Theory, University of Stuttgart, Stuttgart, Germany
7
Deng H, Li M, Li J, Guo M, Xu G. A robust multi-branch multi-attention-mechanism EEGNet for motor imagery BCI decoding. J Neurosci Methods 2024; 405:110108. [PMID: 38458260 DOI: 10.1016/j.jneumeth.2024.110108]
Abstract
BACKGROUND The motor-imagery-based brain-computer interface (MI-BCI) is a promising technology to assist communication, movement, and neurological rehabilitation for motor-impaired individuals. Electroencephalography (EEG) decoding techniques using deep learning (DL) possess noteworthy advantages due to automatic feature extraction and end-to-end learning. However, DL-based EEG decoding models tend to show large performance variations due to the intersubject variability of EEG, which results from inconsistencies in different subjects' optimal hyperparameters. NEW METHODS This study proposes a multi-branch multi-attention mechanism EEGNet model (MBMANet) for robust decoding. It applies the multi-branch EEGNet structure to achieve varied feature extraction, while the different attention mechanisms introduced in each branch attain diverse adaptive weight adjustments. This combination of multi-branch and multi-attention mechanisms allows multi-level feature fusion that provides robust decoding for different subjects. RESULTS The MBMANet model achieves a four-class accuracy of 83.18% and a kappa of 0.776 on the BCI Competition IV-2a dataset, outperforming eight other CNN-based decoding models. This consistently satisfactory performance across all nine subjects indicates that the proposed model is robust. CONCLUSIONS The combination of multi-branch and multi-attention mechanisms empowers DL-based models to adaptively learn different EEG features, which provides a feasible solution for dealing with data variability. It also gives the MBMANet model more accurate decoding of motion intentions and lower training costs, thus improving the MI-BCI's utility and robustness.
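The branches in MBMANet build on the widely used EEGNet design; the sketch below is a generic PyTorch re-implementation of one EEGNet-style branch (temporal convolution, depthwise spatial convolution, then separable convolution) with assumed hyperparameters, intended only to illustrate the structure being extended, not the authors' exact model.

```python
# Generic EEGNet-style branch (illustrative hyperparameters, not MBMANet itself).
import torch
import torch.nn as nn

class EEGNetBranch(nn.Module):
    def __init__(self, n_channels=22, n_classes=4, F1=8, D=2, F2=16):
        super().__init__()
        self.block1 = nn.Sequential(
            nn.Conv2d(1, F1, (1, 64), padding=(0, 32), bias=False),          # temporal filters
            nn.BatchNorm2d(F1),
            nn.Conv2d(F1, F1 * D, (n_channels, 1), groups=F1, bias=False),   # depthwise spatial filters
            nn.BatchNorm2d(F1 * D), nn.ELU(), nn.AvgPool2d((1, 4)), nn.Dropout(0.5),
        )
        self.block2 = nn.Sequential(
            nn.Conv2d(F1 * D, F1 * D, (1, 16), padding=(0, 8), groups=F1 * D, bias=False),
            nn.Conv2d(F1 * D, F2, 1, bias=False),                            # separable = depthwise + pointwise
            nn.BatchNorm2d(F2), nn.ELU(), nn.AvgPool2d((1, 8)), nn.Dropout(0.5),
        )
        self.classify = nn.LazyLinear(n_classes)   # infers the flattened feature size

    def forward(self, x):                          # x: (batch, 1, channels, samples)
        return self.classify(self.block2(self.block1(x)).flatten(1))

print(EEGNetBranch()(torch.randn(8, 1, 22, 500)).shape)   # torch.Size([8, 4])
```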
Affiliation(s)
- Haodong Deng
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300132, China; Hebei Key Laboratory of Bioelectromagnetics and Neuroengineering, Tianjin 300132, China; Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, Hebei University of Technology, Tianjin 300132, China; School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300132, China
- Mengfan Li
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300132, China; Hebei Key Laboratory of Bioelectromagnetics and Neuroengineering, Tianjin 300132, China; Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, Hebei University of Technology, Tianjin 300132, China; School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300132, China
- Jundi Li
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300132, China; Hebei Key Laboratory of Bioelectromagnetics and Neuroengineering, Tianjin 300132, China; Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, Hebei University of Technology, Tianjin 300132, China; School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300132, China
- Miaomiao Guo
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300132, China; Hebei Key Laboratory of Bioelectromagnetics and Neuroengineering, Tianjin 300132, China; Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, Hebei University of Technology, Tianjin 300132, China; School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300132, China
- Guizhi Xu
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300132, China; Hebei Key Laboratory of Bioelectromagnetics and Neuroengineering, Tianjin 300132, China; Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, Hebei University of Technology, Tianjin 300132, China; School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300132, China
8
Li W, Li H, Sun X, Kang H, An S, Wang G, Gao Z. Self-supervised contrastive learning for EEG-based cross-subject motor imagery recognition. J Neural Eng 2024; 21:026038. [PMID: 38565100 DOI: 10.1088/1741-2552/ad3986]
Abstract
Objective. The extensive application of electroencephalography (EEG) in brain-computer interfaces (BCIs) can be attributed to its non-invasive nature and capability to offer high-resolution data. The acquisition of EEG signals is a straightforward process, but the datasets associated with these signals frequently exhibit data scarcity and require substantial resources for proper labeling. Furthermore, there is a significant limitation in the generalization performance of EEG models due to the substantial inter-individual variability observed in EEG signals. Approach. To address these issues, we propose a novel self-supervised contrastive learning framework for decoding motor imagery (MI) signals in cross-subject scenarios. Specifically, we design an encoder combining a convolutional neural network and an attention mechanism. In the contrastive learning training stage, the network undergoes training with the pretext task of data augmentation to minimize the distance between pairs of homologous transformations while simultaneously maximizing the distance between pairs of heterologous transformations. This enhances the amount of data utilized for training and improves the network's ability to extract deep features from original signals without relying on the true labels of the data. Main results. To evaluate our framework's efficacy, we conduct extensive experiments on three public MI datasets: BCI IV IIa, BCI IV IIb, and HGD. The proposed method achieves cross-subject classification accuracies of 67.32%, 82.34%, and 81.13% on the three datasets, demonstrating superior performance compared to existing methods. Significance. Therefore, this method has great promise for improving the performance of cross-subject transfer learning in MI-based BCI systems.
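A brief sketch of the contrastive pretext task described above: two random augmentations of the same unlabeled trial form a positive pair, the other trials in the batch serve as negatives, and an NT-Xent-style loss pulls positives together. The encoder and the augmentations are placeholders, not the paper's design.

```python
# Self-supervised contrastive pre-training sketch for EEG (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

def augment(x):
    """Toy augmentation: additive noise plus random amplitude scaling."""
    scale = 1.0 + 0.1 * torch.randn(x.size(0), 1, 1)
    return scale * x + 0.05 * torch.randn_like(x)

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent loss over a batch of positive pairs (z1[i], z2[i])."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)          # (2N, dim)
    sim = z @ z.t() / temperature                        # cosine similarities
    n = z1.size(0)
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)                 # positive index as the "label"

encoder = nn.Sequential(                                  # placeholder EEG encoder
    nn.Conv1d(22, 32, 25, stride=4), nn.ELU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, 64),
)

x = torch.randn(16, 22, 500)                              # unlabeled MI trials (synthetic)
loss = nt_xent(encoder(augment(x)), encoder(augment(x)))
loss.backward()                                           # pre-train without labels
print(float(loss))
```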
Affiliation(s)
- Wenjie Li
- Tianjin International Engineering Institute, Tianjin University, Tianjin 300072, People's Republic of China
- Haoyu Li
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Xinlin Sun
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Huicong Kang
- Department of Neurology, Shanxi Bethune Hospital, Shanxi Academy of Medical Science, Tongji Shanxi Hospital, Third Hospital of Shanxi Medical University, Taiyuan 030000, People's Republic of China
- Department of Neurology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430000, People's Republic of China
- Shan An
- JD Health International Inc., Beijing 100176, People's Republic of China
- Guoxin Wang
- JD Health International Inc., Beijing 100176, People's Republic of China
- Zhongke Gao
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, People's Republic of China
9
Xu J, Li D, Zhou P, Li C, Wang Z, Tong S. A multi-band centroid contrastive reconstruction fusion network for motor imagery electroencephalogram signal decoding. Math Biosci Eng 2023; 20:20624-20647. [PMID: 38124568 DOI: 10.3934/mbe.2023912]
Abstract
Motor imagery (MI) brain-computer interfaces (BCIs) help users establish direct communication between their brain and external devices by decoding movement intention from human electroencephalogram (EEG) signals. However, cortical potentials are highly rhythmic with distinct sub-band features, and different experimental situations and subjects carry different categories of semantic information in specific sample target spaces. Feature fusion can lead to more discriminative features, but simply fusing features from different embedding spaces makes the model's global loss difficult to converge and ignores the complementarity of the features. Considering the similarity and category contribution of different sub-band features, we propose a multi-band centroid contrastive reconstruction fusion network (MB-CCRF). We obtain multi-band spatio-temporal features by frequency division, preserving the task-related rhythmic features of different EEG signals; we use a multi-stream, cross-layer-connected convolutional network to build a deep feature representation for each sub-band separately; and we propose a centroid contrastive reconstruction fusion module, which maps different sub-band and category features into the same shared embedding space by comparing them with category prototypes, reconstructing the feature semantic structure so that the global loss of the fused features converges more easily. Finally, we use a learning mechanism to model the similarity between channel features and use it as the weight of the fused sub-band features, thus enhancing the more discriminative features and suppressing the less useful ones. The experimental accuracy is 79.96% on the BCI Competition IV-2a dataset. Moreover, the classification performance of the sub-band features of different subjects is verified by comparison tests, the category propensity of different sub-band features is verified by confusion-matrix tests, and the class-wise distributions of each sub-band feature and the fused feature are shown by visual analysis, revealing the importance of different sub-band features for the EEG-based MI classification task.
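The sketch below illustrates two of these ingredients in plain form: frequency division of the EEG into rhythm-related sub-bands, and fusion of per-band features with learned weights so that more discriminative bands dominate. The per-band "branch" is reduced to log band power here, and the band edges and sizes are assumptions rather than the authors' configuration.

```python
# Sub-band decomposition and learned-weight band fusion (illustrative, not MB-CCRF).
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import butter, filtfilt

fs = 250
bands = [(4, 8), (8, 13), (13, 30)]                 # theta, mu/alpha, beta (assumed)
x = np.random.randn(32, 22, 2 * fs)                 # trials x channels x samples (synthetic)

band_feats = []
for lo, hi in bands:
    b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
    xb = filtfilt(b, a, x, axis=-1)
    band_feats.append(np.log(xb.var(axis=-1) + 1e-8))   # per-channel log band power
feats = torch.tensor(np.stack(band_feats, axis=1), dtype=torch.float32)  # (N, bands, ch)

class WeightedBandFusion(nn.Module):
    """Learns one weight per sub-band and classifies the fused feature."""
    def __init__(self, n_bands=3, n_channels=22, n_classes=4):
        super().__init__()
        self.band_logits = nn.Parameter(torch.zeros(n_bands))
        self.fc = nn.Linear(n_channels, n_classes)

    def forward(self, f):                            # f: (N, bands, channels)
        w = torch.softmax(self.band_logits, dim=0)   # discriminative bands get more weight
        return self.fc((w[None, :, None] * f).sum(dim=1))

print(WeightedBandFusion()(feats).shape)             # torch.Size([32, 4])
```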
Affiliation(s)
- Jiacan Xu
- The College of Engineering Training and Innovation, Shenyang Jianzhu University, Shenyang 110000, China
- Donglin Li
- The College of Electrical Engineering, Shenyang University of Technology, Shenyang 110000, China
- Peng Zhou
- The College of Engineering Training and Innovation, Shenyang Jianzhu University, Shenyang 110000, China
- Chunsheng Li
- The College of Electrical Engineering, Shenyang University of Technology, Shenyang 110000, China
- Zinan Wang
- The College of Engineering Training and Innovation, Shenyang Jianzhu University, Shenyang 110000, China
- Shenghao Tong
- The College of Engineering Training and Innovation, Shenyang Jianzhu University, Shenyang 110000, China
10
Bi J, Chu M. TDLNet: Transfer Data Learning Network for Cross-Subject Classification Based on Multiclass Upper Limb Motor Imagery EEG. IEEE Trans Neural Syst Rehabil Eng 2023; 31:3958-3967. [PMID: 37815969 DOI: 10.1109/tnsre.2023.3323509]
Abstract
The limited number of motor imagery brain-computer interface (MI-BCI) instruction sets for different movements of a single limb makes it difficult to meet practical application requirements. Therefore, designing a single-limb, multi-category motor imagery (MI) paradigm and effectively decoding it is one of the important research directions for the future development of MI-BCI. Furthermore, one of the major challenges in MI-BCI is the difficulty of classifying brain activity across different individuals. In this article, the transfer data learning network (TDLNet) is proposed to achieve cross-subject intention recognition for multiclass upper limb motor imagery. In TDLNet, the Transfer Data Module (TDM) is used to process cross-subject electroencephalogram (EEG) signals in groups and then fuse cross-subject channel features through two one-dimensional convolutions. The Residual Attention Mechanism Module (RAMM) assigns weights to each EEG signal channel and dynamically focuses on the EEG channels most relevant to a specific task. Additionally, a feature visualization algorithm based on occlusion signal frequency is proposed to qualitatively analyze the proposed TDLNet. The experimental results show that TDLNet achieves the best classification results on two datasets compared with CNN-based reference methods and a transfer learning method. In the 6-class scenario, TDLNet obtained an accuracy of 65% ± 0.05 on the UML6 dataset and 63% ± 0.06 on the GRAZ dataset. The visualization results demonstrate that the proposed framework can produce distinct classifier patterns for multiple categories of upper limb motor imagery through signals of different frequencies. The ULM6 dataset is available at https://dx.doi.org/10.21227/8qw6-f578.
11
Zhang R, Liu G, Wen Y, Zhou W. Self-attention-based convolutional neural network and time-frequency common spatial pattern for enhanced motor imagery classification. J Neurosci Methods 2023; 398:109953. [PMID: 37611877 DOI: 10.1016/j.jneumeth.2023.109953]
Abstract
BACKGROUND Motor imagery (MI) based brain-computer interfaces (BCIs) have promising potential in the field of neuro-rehabilitation. However, due to individual variations in the active brain regions during MI tasks, decoding MI EEG signals requires improved classification performance for practical application. NEW METHOD This study proposes a self-attention-based convolutional neural network (CNN) in conjunction with a time-frequency common spatial pattern (TFCSP) for enhanced MI classification. Because of the limited availability of training data, a data augmentation strategy is employed to expand the scale of the MI EEG datasets. The self-attention-based CNN is trained to automatically extract temporal and spatial information from the EEG signals, with the self-attention module selecting active channels by calculating EEG channel weights. TFCSP is further implemented to extract multiscale time-frequency-space features from the EEG data. Finally, the EEG features derived from TFCSP are concatenated with those from the self-attention-based CNN for MI classification. RESULTS The proposed method is evaluated on two publicly accessible datasets, BCI Competition IV IIa and BCI Competition III IIIa, yielding mean accuracies of 79.28% and 86.39%, respectively. CONCLUSIONS Compared with state-of-the-art methods, our approach achieves superior classification accuracy. Combining the self-attention-based CNN with TFCSP makes full use of the time-frequency-space information in the EEG and enhances classification performance.
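Since TFCSP repeats the common spatial pattern computation over a grid of time windows and frequency bands, the following worked sketch shows the core CSP step for one such cell: a generalized eigenvalue problem between the two class covariance matrices, with log-variance of the spatially filtered signals as features. It is a generic illustration, not the paper's implementation.

```python
# Worked CSP computation for one time window / frequency band (illustrative only).
import numpy as np
from scipy.linalg import eigh

def csp_filters(X1, X2, n_pairs=2):
    """X1, X2: (trials, channels, samples) per class. Returns (2*n_pairs, channels)."""
    cov = lambda X: np.mean([np.cov(t) for t in X], axis=0)
    C1, C2 = cov(X1), cov(X2)
    # Generalized eigendecomposition: C1 w = lambda (C1 + C2) w
    eigvals, eigvecs = eigh(C1, C1 + C2)
    order = np.argsort(eigvals)                      # ascending eigenvalues
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return eigvecs[:, picks].T

def csp_features(X, W):
    """Log-normalized variance of spatially filtered trials; X: (trials, channels, samples)."""
    var = np.var(np.einsum("fc,tcs->tfs", W, X), axis=-1)
    return np.log(var / var.sum(axis=1, keepdims=True))

rng = np.random.default_rng(0)
X1, X2 = rng.standard_normal((40, 22, 500)), rng.standard_normal((40, 22, 500))
W = csp_filters(X1, X2)
feats = csp_features(np.concatenate([X1, X2]), W)
print(W.shape, feats.shape)    # (4, 22) (80, 4)
# TFCSP would repeat this for each (time window, frequency band) cell and
# concatenate the resulting feature blocks before classification.
```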
Affiliation(s)
- Rui Zhang
- School of Microelectronics, Shandong University, Jinan 250100, China
- Guoyang Liu
- School of Microelectronics, Shandong University, Jinan 250100, China
- Yiming Wen
- School of Microelectronics, Shandong University, Jinan 250100, China
- Weidong Zhou
- School of Microelectronics, Shandong University, Jinan 250100, China
12
Zhang Y, Qiu S, He H. Multimodal motor imagery decoding method based on temporal spatial feature alignment and fusion. J Neural Eng 2023; 20. [PMID: 36854181 DOI: 10.1088/1741-2552/acbfdf]
Abstract
Objective. A motor imagery-based brain-computer interface (MI-BCI) translates spontaneous movement intention from the brain to outside devices. Multimodal MI-BCI, which uses multiple neural signals, contains rich common and complementary information and is promising for enhancing the decoding accuracy of MI-BCI. However, the heterogeneity of different modalities makes the multimodal decoding task difficult, and how to effectively utilize multimodal information remains to be further studied. Approach. In this study, a multimodal MI decoding neural network was proposed. Spatial feature alignment losses were designed to enhance the feature representations extracted from the heterogeneous data and guide the fusion of features from different modalities. An attention-based modality fusion module was built to align and fuse the features in the temporal dimension. To evaluate the proposed decoding method, a five-class MI electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) dataset was constructed. Main results and significance. The comparative experiments showed that the proposed decoding method achieved higher decoding accuracy than the compared methods on both the self-collected dataset and a public dataset. The ablation results verified the effectiveness of each part of the proposed method. Feature distribution visualization showed that the proposed losses enhance the feature representations of the EEG and fNIRS modalities. The proposed method based on EEG and fNIRS modalities has significant potential for improving the decoding performance of MI tasks.
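A condensed sketch of the two ideas highlighted in the abstract, using placeholder encoders: a cosine-similarity alignment loss that pulls the EEG and fNIRS representations of the same trial together, and an attention-based fusion that weights the two modalities before classification. Channel counts, dimensions, and the loss weighting are assumptions, not the authors' architecture.

```python
# Multimodal EEG + fNIRS alignment and attention fusion sketch (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityFusionNet(nn.Module):
    def __init__(self, dim=64, n_classes=5):
        super().__init__()
        self.eeg_enc = nn.Sequential(nn.Conv1d(22, dim, 25, stride=4), nn.ELU(),
                                     nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.fnirs_enc = nn.Sequential(nn.Conv1d(40, dim, 5, stride=2), nn.ELU(),
                                       nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.att = nn.Linear(2 * dim, 2)          # one attention score per modality
        self.cls = nn.Linear(dim, n_classes)

    def forward(self, eeg, fnirs):
        z_e, z_f = self.eeg_enc(eeg), self.fnirs_enc(fnirs)
        align_loss = 1 - F.cosine_similarity(z_e, z_f).mean()   # feature alignment
        w = torch.softmax(self.att(torch.cat([z_e, z_f], dim=1)), dim=1)
        fused = w[:, :1] * z_e + w[:, 1:] * z_f                 # attention-weighted fusion
        return self.cls(fused), align_loss

model = ModalityFusionNet()
logits, align = model(torch.randn(8, 22, 1000), torch.randn(8, 40, 100))
loss = F.cross_entropy(logits, torch.randint(0, 5, (8,))) + 0.1 * align
print(logits.shape, float(loss))
```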
Affiliation(s)
- Yukun Zhang
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, People's Republic of China
- Laboratory of Brain Atlas and Brain-Inspired Intelligence, State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, People's Republic of China
- Shuang Qiu
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, People's Republic of China
- Laboratory of Brain Atlas and Brain-Inspired Intelligence, State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, People's Republic of China
- Huiguang He
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, People's Republic of China
- Laboratory of Brain Atlas and Brain-Inspired Intelligence, State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, People's Republic of China
13
Sheng J, Xu J, Li H, Liu Z, Zhou H, You Y, Song T, Zuo G. A Multi-Scale Temporal Convolutional Network with Attention Mechanism for Force Level Classification during Motor Imagery of Unilateral Upper-Limb Movements. Entropy (Basel) 2023; 25:464. [PMID: 36981352 PMCID: PMC10048057 DOI: 10.3390/e25030464]
Abstract
In motor imagery (MI) brain-computer interface (BCI) research, some researchers have designed force-level MI paradigms under a static unilateral upper-limb state. It is difficult to apply these paradigms to the dynamic force interaction between robot and patient in a brain-controlled rehabilitation robot system, which needs to induce mental states reflecting the patient's demand for assistance. Therefore, based on the everyday movement of wiping a table, we designed a three-level-force MI paradigm under a dynamic unilateral upper-limb state. Based on analysis of the event-related desynchronization (ERD) features of the electroencephalography (EEG) signals generated by motor imagery of force changes, we propose a multi-scale temporal convolutional network with attention mechanism (MSTCN-AM) algorithm to recognize the ERD features of MI-EEG signals. To address the slight feature differences among single-trial MI-EEG signals at different force levels, the MSTCN module was designed to extract fine-grained features of different dimensions in the time-frequency domain. A spatial convolution module was then used to learn regional differences in spatial-domain features. Finally, the attention mechanism dynamically weights the time-frequency-space domain features to improve the algorithm's sensitivity. The results showed that the algorithm's accuracy was 86.4 ± 14.0% on the three-level-force MI-EEG data collected experimentally. Compared with the baseline algorithms (OVR-CSP+SVM (77.6 ± 14.5%), Deep ConvNet (75.3 ± 12.3%), Shallow ConvNet (77.6 ± 11.8%), EEGNet (82.3 ± 13.8%), and SCNN-BiLSTM (69.1 ± 16.8%)), our algorithm achieved higher classification accuracy, with significant differences, and better fitting performance.
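To show the multi-scale temporal convolution ingredient in isolation, the sketch below runs parallel dilated 1-D convolutions over time and concatenates their outputs, which later spatial convolution and attention stages could consume. It is a generic module with assumed sizes, not MSTCN-AM itself.

```python
# Multi-scale (dilated) temporal convolution block sketch (illustrative only).
import torch
import torch.nn as nn

class MultiScaleTemporalConv(nn.Module):
    """Parallel temporal convolutions with increasing dilation (coarser scales)."""
    def __init__(self, n_channels=22, n_filters=8, dilations=(1, 2, 4)):
        super().__init__()
        self.scales = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(n_channels, n_filters, kernel_size=15,
                          dilation=d, padding=7 * d),      # "same" length output
                nn.BatchNorm1d(n_filters), nn.ELU(),
            )
            for d in dilations
        ])

    def forward(self, x):                       # x: (batch, channels, samples)
        return torch.cat([s(x) for s in self.scales], dim=1)

out = MultiScaleTemporalConv()(torch.randn(8, 22, 500))
print(out.shape)                                # torch.Size([8, 24, 500])
```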
Affiliation(s)
- Junpeng Sheng
- Faculty of Information Science and Technology, Ningbo University, Ningbo 315211, China
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo 315300, China
- Jialin Xu
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo 315300, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Han Li
- Faculty of Information Science and Technology, Ningbo University, Ningbo 315211, China
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo 315300, China
- Zhen Liu
- Faculty of Information Science and Technology, Ningbo University, Ningbo 315211, China
- Huilin Zhou
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo 315300, China
- Yimeng You
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo 315300, China
- Tao Song
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo 315300, China
- Guokun Zuo
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo 315300, China
- University of Chinese Academy of Sciences, Beijing 100049, China
14
Phunruangsakao C, Achanccaray D, Izumi SI, Hayashibe M. Multibranch convolutional neural network with contrastive representation learning for decoding same limb motor imagery tasks. Front Hum Neurosci 2022; 16:1032724. [PMID: 36583011 PMCID: PMC9792600 DOI: 10.3389/fnhum.2022.1032724]
Abstract
Introduction. Emerging deep learning approaches to decoding motor imagery (MI) tasks have significantly boosted the performance of brain-computer interfaces. Although recent studies have produced satisfactory results in decoding MI tasks of different body parts, the classification of such tasks within the same limb remains challenging due to the activation of overlapping brain regions. A single deep learning model may be insufficient to effectively learn discriminative features among tasks. Methods. The present study proposes a framework to enhance the decoding of multiple hand-MI tasks from the same limb using a multi-branch convolutional neural network (CNN). The CNN framework utilizes feature extractors from established deep learning models, as well as contrastive representation learning, to derive meaningful feature representations for classification. Results. The experimental results suggest that the proposed method outperforms several state-of-the-art methods, obtaining classification accuracies of 62.98% with six MI classes and 76.15% with four MI classes on the Tohoku University MI-BCI dataset and BCI Competition IV dataset IIa, respectively. Discussion. Despite requiring heavy data augmentation and multiple optimization steps, resulting in a relatively long training time, this scheme is still suitable for online use. However, the trade-off between the number of base learners, training time, prediction time, and system performance should be carefully considered.
Affiliation(s)
- Chatrin Phunruangsakao
- Neuro-Robotics Laboratory, Graduate School of Biomedical Engineering, Tohoku University, Sendai, Japan
- David Achanccaray
- Presence Media Research Group, Hiroshi Ishiguro Laboratory, Advanced Telecommunications Research Institute International, Kyoto, Japan
- Shin-Ichi Izumi
- Department of Physical Medicine and Rehabilitation, Graduate School of Biomedical Engineering, Tohoku University, Sendai, Japan
- Mitsuhiro Hayashibe
- Neuro-Robotics Laboratory, Graduate School of Biomedical Engineering, Tohoku University, Sendai, Japan
- Department of Robotics, Graduate School of Engineering, Tohoku University, Sendai, Japan