1
Wu M, Ouyang R, Zhou C, Sun Z, Li F, Li P. A study on the combination of functional connection features and Riemannian manifold in EEG emotion recognition. Front Neurosci 2024; 17:1345770. [PMID: 38287990] [PMCID: PMC10823003] [DOI: 10.3389/fnins.2023.1345770]
Abstract
Introduction: Affective computing is central to making human-computer interfaces (HCI) more intelligent, and electroencephalogram (EEG)-based emotion recognition is one of its primary research directions. In the field of brain-computer interfaces, Riemannian manifold methods are highly robust and effective; however, they require the features to be symmetric positive definite (SPD), which limits their application. Methods: In the present work, we introduce the Laplace matrix to transform the functional connection features, i.e., phase locking value (PLV), Pearson correlation coefficient (PCC), spectral coherence (COH), and mutual information (MI), into semi-positive definite matrices, and apply the max operator to ensure the transformed features are positive definite. An SPD network is then employed to extract deep spatial information, and a fully connected layer is used to validate the effectiveness of the extracted features. In particular, a decision-layer fusion strategy is utilized to achieve more accurate and stable recognition results, the differences in classification performance across feature combinations are studied, and the optimal threshold applied to the functional connection features is also examined. Results: The public emotional dataset SEED is adopted to test the proposed method with a subject-dependent cross-validation strategy. The average accuracies for the four features indicate that PCC outperforms the other three. The proposed model achieves its best accuracy of 91.05% for the fusion of PLV, PCC, and COH, followed by the fusion of all four features with an accuracy of 90.16%. Discussion: The experimental results demonstrate that the optimal thresholds for the four functional connection features remain relatively stable within a fixed interval. In conclusion, the results demonstrate the effectiveness of the proposed method.
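As a rough illustration of the SPD construction this abstract describes, the sketch below computes a PLV connectivity matrix and maps it through the graph Laplacian, which is positive semi-definite for a symmetric non-negative connectivity matrix, then clamps the eigenvalues to make it strictly positive definite. The function names, the eigenvalue-clamping realization of the max operator, and the epsilon floor are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np
from scipy.signal import hilbert

def plv_matrix(eeg):
    """Phase locking value between all channel pairs.

    eeg: (channels, samples) array; phases come from the Hilbert transform.
    """
    phase = np.angle(hilbert(eeg, axis=1))
    n = eeg.shape[0]
    plv = np.ones((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            # |mean of unit phasors of the phase difference|, in [0, 1]
            plv[i, j] = plv[j, i] = abs(np.mean(np.exp(1j * (phase[i] - phase[j]))))
    return plv

def to_spd(conn, eps=1e-6):
    """Map a connectivity matrix to a strictly positive definite matrix.

    The graph Laplacian L = D - W is positive semi-definite for symmetric
    non-negative W; clamping its spectrum at eps (one way to realize a
    "max operator") yields a strictly SPD matrix.
    """
    lap = np.diag(conn.sum(axis=1)) - conn
    vals, vecs = np.linalg.eigh(lap)
    return (vecs * np.maximum(vals, eps)) @ vecs.T
```

The resulting SPD matrices are the kind of input a Riemannian/SPD network expects.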
Affiliation(s)
- Minchao Wu
- Anhui Province Key Laboratory of Multimodal Cognitive Computation, School of Computer Science and Technology, Anhui University, Hefei, China
- Key Laboratory of Flight Techniques and Flight Safety, Civil Aviation Flight University of China, Guanghan, China
- Rui Ouyang
- Anhui Province Key Laboratory of Multimodal Cognitive Computation, School of Computer Science and Technology, Anhui University, Hefei, China
- Chang Zhou
- Anhui Province Key Laboratory of Multimodal Cognitive Computation, School of Computer Science and Technology, Anhui University, Hefei, China
- Zitong Sun
- Anhui Province Key Laboratory of Multimodal Cognitive Computation, School of Computer Science and Technology, Anhui University, Hefei, China
- Fan Li
- Key Laboratory of Flight Techniques and Flight Safety, Civil Aviation Flight University of China, Guanghan, China
- Ping Li
- Anhui Province Key Laboratory of Multimodal Cognitive Computation, School of Computer Science and Technology, Anhui University, Hefei, China
2
Jafari M, Shoeibi A, Khodatars M, Bagherzadeh S, Shalbaf A, García DL, Gorriz JM, Acharya UR. Emotion recognition in EEG signals using deep learning methods: A review. Comput Biol Med 2023; 165:107450. [PMID: 37708717] [DOI: 10.1016/j.compbiomed.2023.107450]
Abstract
Emotions are a critical aspect of daily life and play a crucial role in human decision-making, planning, reasoning, and other mental states; as a result, they are considered a significant factor in human interactions. Human emotions can be identified through various sources, such as facial expressions, speech, behavior (gesture/position), or physiological signals. The use of physiological signals can enhance the objectivity and reliability of emotion detection. Compared with peripheral physiological signals, electroencephalogram (EEG) recordings are generated directly by the central nervous system and are closely related to human emotions. EEG signals have high temporal resolution, which facilitates the evaluation of brain function, making them a popular modality in emotion recognition studies. Emotion recognition using EEG signals presents several challenges, including signal variability due to electrode positioning, individual differences in signal morphology, and the lack of a universal standard for EEG signal processing. Moreover, identifying the appropriate features for emotion recognition from EEG data requires further research. Finally, there is a need to develop more robust artificial intelligence (AI) methods, including conventional machine learning (ML) and deep learning (DL), to handle the complex and diverse EEG signals associated with emotional states. This paper examines the application of DL techniques to emotion recognition from EEG signals and provides a detailed discussion of the relevant articles. It explores the significant challenges in emotion recognition using EEG signals, highlights the potential of DL techniques in addressing these challenges, and suggests the scope for future research, concluding with a summary of its findings.
Affiliation(s)
- Mahboobeh Jafari
- Data Science and Computational Intelligence Institute, University of Granada, Spain
- Afshin Shoeibi
- Data Science and Computational Intelligence Institute, University of Granada, Spain
- Marjane Khodatars
- Data Science and Computational Intelligence Institute, University of Granada, Spain
- Sara Bagherzadeh
- Department of Biomedical Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Ahmad Shalbaf
- Department of Biomedical Engineering and Medical Physics, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- David López García
- Data Science and Computational Intelligence Institute, University of Granada, Spain
- Juan M Gorriz
- Data Science and Computational Intelligence Institute, University of Granada, Spain; Department of Psychiatry, University of Cambridge, UK
- U Rajendra Acharya
- School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, Australia
3
Automatic Identification of Children with ADHD from EEG Brain Waves. Signals 2023. [DOI: 10.3390/signals4010010]
Abstract
EEG (electroencephalogram) signals can be used reliably to extract critical information regarding ADHD (attention deficit hyperactivity disorder), a childhood neurodevelopmental disorder. Early detection of ADHD is important to limit the progression of the disorder and reduce its long-term impact. This study aimed to develop a computer algorithm that automatically identifies children with ADHD from their characteristic brain waves. An EEG machine learning pipeline is presented here, including signal preprocessing and data preparation steps, with thorough explanations and rationale. A large public dataset of 120 children was selected, containing large variability and minimal measurement bias in data collection, along with reproducible, child-friendly visual attentional tasks. Unlike other studies, EEG linear features were extracted from only the first four EEG sub-bands to train a Gaussian SVM-based model. This eliminates signal content above 30 Hz, reducing the computational load of model training while keeping a mean accuracy of ~94%. We also performed rigorous validation (obtaining 93.2% and 94.2% accuracy for holdout and 10-fold cross-validation, respectively) to ensure that the developed model is minimally impacted by the bias and overfitting that commonly appear in ML pipelines. These performance metrics indicate the ability to automatically identify children with ADHD in a local clinical setting and provide a baseline for further clinical evaluation and timely therapeutic attempts.
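The sub-band restriction described above (keeping only content below 30 Hz) can be sketched as follows. The band edges, filter order, and the choice of variance and standard deviation as stand-ins for "linear features" are assumptions for illustration, not the study's exact pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt

# The first four classic EEG sub-bands; everything above 30 Hz is discarded.
BANDS = {"delta": (0.5, 4.0), "theta": (4.0, 8.0),
         "alpha": (8.0, 13.0), "beta": (13.0, 30.0)}

def band_features(eeg, fs=128):
    """Simple per-sub-band linear features for each channel.

    eeg: (channels, samples) array; returns {band: (channels, 2)} with
    signal variance and standard deviation per channel (illustrative
    choices for "linear features").
    """
    feats = {}
    for name, (lo, hi) in BANDS.items():
        # 4th-order Butterworth band-pass, zero-phase filtered
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        x = filtfilt(b, a, eeg, axis=1)
        feats[name] = np.column_stack([x.var(axis=1), x.std(axis=1)])
    return feats
```

Restricting features to these four bands is what keeps the feature vector small and the training load low.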
4
A Novel Approach for Sleep Arousal Disorder Detection Based on the Interaction of Physiological Signals and Metaheuristic Learning. Comput Intell Neurosci 2023; 2023:9379618. [PMID: 36688224] [PMCID: PMC9859692] [DOI: 10.1155/2023/9379618]
Abstract
The vast majority of sleep disturbances are caused by various types of sleep arousal. To diagnose sleep disorders and prevent health problems such as cardiovascular disease and cognitive impairment, sleep arousals must be accurately detected. Consequently, sleep specialists must spend considerable time and effort analyzing polysomnography (PSG) recordings to determine the level of arousal during sleep; an automated sleep arousal detection system based on PSG would considerably benefit clinicians. We quantify the EEG and ECG using Lyapunov exponents, fractals, and wavelet transforms to identify sleep stages and arousal disorders. In this paper, an efficient hybrid-learning method is introduced for the first time to detect and assess arousal incidents: a modified drone squadron optimization (mDSO) algorithm is used to optimize a support vector machine (SVM) with a radial basis function (RBF) kernel. The EEG-ECG signals are preprocessed samples from the SHHS sleep dataset and the PhysioBank challenge 2018. Compared with traditional methods for identifying sleep disorders, the proposed physiological-signal correlation approach performs considerably better. With the proposed model, the average error rate was less than 2% and 7% for the two-class and four-class problems, respectively. Additionally, the five sleep stages are classified correctly 92.3% of the time. In clinical trials of sleep disorders, the hybrid-learning model based on EEG-ECG signal correlation features is effective in detecting arousals.
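The SVM-RBF tuning step can be sketched with a generic stochastic search standing in for mDSO; any population-based metaheuristic plays the same role of maximizing cross-validated accuracy over the kernel hyperparameters. The search ranges and scoring below are illustrative assumptions, not the paper's actual algorithm:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def tune_rbf_svm(X, y, n_iter=20, seed=0):
    """Crude stochastic search over (C, gamma) for an RBF-kernel SVM.

    Stands in for the paper's modified drone squadron optimization (mDSO);
    returns the best hyperparameters and their cross-validated accuracy.
    """
    rng = np.random.default_rng(seed)
    best_params, best_score = None, -np.inf
    for _ in range(n_iter):
        # log-uniform sampling of the two RBF-SVM hyperparameters
        C = 10.0 ** rng.uniform(-2, 3)
        gamma = 10.0 ** rng.uniform(-4, 1)
        score = cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()
        if score > best_score:
            best_params, best_score = {"C": C, "gamma": gamma}, score
    return best_params, best_score
```

In the paper's setting, X would hold the Lyapunov/fractal/wavelet features and y the arousal labels.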
5
Ketola EC, Barankovich M, Schuckers S, Ray-Dowling A, Hou D, Imtiaz MH. Channel Reduction for an EEG-Based Authentication System While Performing Motor Movements. Sensors (Basel) 2022; 22:9156. [PMID: 36501858] [PMCID: PMC9740146] [DOI: 10.3390/s22239156]
Abstract
Commercial use of biometric authentication is becoming increasingly popular, which has sparked the development of EEG-based authentication. To stimulate the brain and capture characteristic brain signals, these systems generally require the user to perform specific activities such as deeply concentrating on an image, mental activity, visual counting, etc. This study investigates whether effective authentication is feasible for users tasked with a minimal daily activity such as lifting a tiny object. With this novel protocol, the minimum number of EEG electrodes (channels) with the highest ranked performance was identified, to improve user comfort and acceptance over traditional 32-64-electrode EEG systems while also reducing the load of real-time data processing. For this proof of concept, a public dataset was employed containing 32 channels of EEG data from 12 participants performing a motor task without intent for authentication. The data were filtered into five frequency bands, and 12 different features were extracted to train a random forest-based machine learning model. All channels were ranked according to Gini impurity. It was found that only 14 channels are required to perform authentication when the EEG data are filtered to the gamma sub-band, within 1% of the accuracy obtained with all 32 channels. This analysis will allow (a) the design of a custom headset with 14 electrodes clustered over the frontal and occipital lobes of the brain, (b) a reduction in data collection difficulty during authentication, (c) a smaller dataset that permits real-time authentication while maintaining reasonable performance, and (d) an API for ranking authentication performance across headsets and tasks.
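A Gini-impurity channel ranking of the kind described can be sketched with scikit-learn's random forest, whose feature_importances_ are impurity-based by default. The grouping of the feature matrix by channel is an assumption for illustration, not the authors' exact layout:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rank_channels(features, labels, n_channels, feats_per_channel):
    """Rank EEG channels by summed Gini importance, best first.

    features: (samples, n_channels * feats_per_channel) array whose
    columns are grouped channel by channel.
    """
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    rf.fit(features, labels)
    # Sum impurity-based importances of the features belonging to each channel
    per_channel = rf.feature_importances_.reshape(n_channels, feats_per_channel).sum(axis=1)
    return np.argsort(per_channel)[::-1]
```

Keeping only the top-ranked channels (14 in the study) is then a simple column slice of the feature matrix.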
6
EEG Emotion Classification Network Based on Attention Fusion of Multi-Channel Band Features. Sensors (Basel) 2022; 22:5252. [PMID: 35890933] [PMCID: PMC9318779] [DOI: 10.3390/s22145252]
Abstract
Understanding learners' emotions can help optimize instruction and further conduct effective learning interventions. Most existing studies on student emotion recognition are based on multiple manifestations of external behavior and do not fully use physiological signals. In this context, on the one hand, a learning-emotion EEG dataset (LE-EEG) is constructed, which captures physiological signals reflecting the emotions of boredom, neutrality, and engagement during learning; on the other hand, an EEG emotion classification network based on attention fusion (ECN-AF) is proposed. Specifically, after selecting key frequency bands and channels, multi-channel band features are first extracted using a multi-channel backbone network and then fused using attention units. To verify performance, the proposed model is tested on the open-access SEED dataset (N = 15) and the self-collected LE-EEG dataset (N = 45). The experimental results using five-fold cross-validation show the following: (i) on SEED, the proposed model achieves the highest accuracy of 96.45%, a 1.37% increase over the baseline models; and (ii) on LE-EEG, it achieves the highest accuracy of 95.87%, a 21.49% increase over the baselines.
7
Li R, Yuizono T, Li X. Affective computing of multi-type urban public spaces to analyze emotional quality using ensemble learning-based classification of multi-sensor data. PLoS One 2022; 17:e0269176. [PMID: 35657805] [PMCID: PMC9165821] [DOI: 10.1371/journal.pone.0269176]
Abstract
The quality of urban public spaces affects the emotional response of users; therefore, users' emotional data can serve as indices to evaluate the quality of a space. Emotional responses can be evaluated through affective computing to effectively measure public space quality and obtain evidence-based support for urban space renewal. We propose a feasible evaluation method for multi-type urban public spaces based on multiple physiological signals and ensemble learning. We built binary, ternary, and quinary classification models from participants' physiological signals and self-reported emotional responses through experiments in eight public spaces of five types, and verified the models by inputting data collected from two other public spaces. Three observations were made based on the results. First, the highest accuracies of the binary and ternary classification models were 92.59% and 91.07%, respectively; after external validation, the highest accuracies were 80.90% and 65.30%, which satisfied the preliminary requirements for evaluating the quality of actual urban spaces, while the quinary classification model could not. Second, the average accuracy of ensemble learning was 7.59% higher than that of single classifiers. Third, reducing the number of physiological-signal features and applying the synthetic minority oversampling technique to the imbalanced data improved the evaluation ability.
Affiliation(s)
- Ruixuan Li
- School of Art and Design, Dalian Polytechnic University, Dalian City, Liaoning Province, China
- Graduate School of Advanced Science and Technology, Japan Advanced Institute of Science and Technology, Nomi, Ishikawa, Japan
- Takaya Yuizono
- Graduate School of Advanced Science and Technology, Japan Advanced Institute of Science and Technology, Nomi, Ishikawa, Japan
- Xianghui Li
- School of Art and Design, Dalian Polytechnic University, Dalian City, Liaoning Province, China
- Graduate School of Advanced Science and Technology, Japan Advanced Institute of Science and Technology, Nomi, Ishikawa, Japan
8
Emotion Recognition Using a Reduced Set of EEG Channels Based on Holographic Feature Maps. Sensors (Basel) 2022; 22:3248. [PMID: 35590938] [PMCID: PMC9101362] [DOI: 10.3390/s22093248]
Abstract
An important part of constructing a Brain-Computer Interface (BCI) device is developing a model able to recognize emotions from electroencephalogram (EEG) signals. Research in this area is very challenging because the EEG signal is non-stationary and non-linear and contains a lot of noise due to artifacts caused by muscle activity and poor electrode contact. EEG signals are recorded with non-invasive wearable devices using a large number of electrodes, which increases the dimensionality, and thereby the computational complexity, of the EEG data, and also reduces subjects' comfort. This paper implements our holographic features, investigates electrode selection, and uses the most relevant channels to maximize model accuracy. The ReliefF and Neighborhood Component Analysis (NCA) methods were used to select the optimal electrodes, with verification performed on four publicly available datasets. Our holographic feature maps were constructed using computer-generated holography (CGH) based on the values of signal characteristics displayed in space. The resulting 2D maps are the input to a Convolutional Neural Network (CNN), which serves as the feature extraction method. This methodology uses a reduced set of electrodes, which differ between men and women, and obtains state-of-the-art results in a three-dimensional emotional space. The experimental results show that the channel selection methods improve emotion recognition rates significantly, with accuracies of 90.76% for valence, 92.92% for arousal, and 92.97% for dominance.
9
Guo H, Gao W. Metaverse-Powered Experiential Situational English-Teaching Design: An Emotion-Based Analysis Method. Front Psychol 2022; 13:859159. [PMID: 35401297] [PMCID: PMC8987594] [DOI: 10.3389/fpsyg.2022.859159]
Abstract
The metaverse builds a virtual world in cyberspace that both maps onto and is independent of the real world, drawing on increasingly mature digital technologies such as virtual reality (VR), augmented reality (AR), big data, and 5G; it is important for the future development of a wide variety of professions, including education. The metaverse represents the latest stage in the development of visual immersion technology: in essence, an online digital space parallel to the real world that is becoming a practical field for the innovation and development of human society. The most prominent advantage of the English-teaching metaverse is that it provides an immersive and interactive teaching field, simultaneously meeting the teaching and learning needs of teachers and students in both the physical and virtual worlds. This study constructs experiential situational English-teaching scenarios and proposes convolutional neural network (CNN)-recurrent neural network (RNN) fusion models to recognize students' emotions from electroencephalogram (EEG) signals across the time, frequency, and spatial domains. Analyzing EEG data collected from students with an OpenBCI EEG Electrode Cap Kit, the experiential English-teaching scenarios are designed in three types: sequential guidance, comprehensive exploration, and crowd-creation construction. Experimental analysis of the three kinds of learning activities shows that metaverse-powered experiential situational English teaching can improve students' sense of interactivity, immersion, and cognition, and that the accuracy and analysis time of the CNN-RNN fusion model are much better than those of the baselines. This study can provide a useful reference for the emotion recognition of students under COVID-19.
Affiliation(s)
- Hongyu Guo
- School of Foreign Languages, Zhejiang Gongshang University, Hangzhou, China
- Graduate School of Education, University of Perpetual Help System DALTA, Metro Manila, Philippines
- Wurong Gao
- School of Foreign Languages, Zhejiang Gongshang University, Hangzhou, China
10
Image-Based Learning Using Gradient Class Activation Maps for Enhanced Physiological Interpretability of Motor Imagery Skills. Appl Sci (Basel) 2022. [DOI: 10.3390/app12031695]
Abstract
Brain activity stimulated by the motor imagery (MI) paradigm is measured by electroencephalography (EEG), which has several advantages for implementation with the widely used Brain-Computer Interface (BCI) technology. However, the substantial inter- and intra-subject variability of the recorded data significantly influences individual performance. This study explores the ability to distinguish between MI tasks and the interpretability of the brain's ability to produce elicited mental responses with improved accuracy. We develop a Deep and Wide Convolutional Neural Network fed by a set of topoplots extracted from the multichannel EEG data. Further, we apply a visualization technique based on gradient-based class activation maps (namely, GradCam++) at different intervals along the MI paradigm timeline to account for intra-subject variability in neural responses over time. We also cluster the dynamic spatial representation of the extracted maps across the subject set to reach a deeper understanding of MI-BCI coordination skills. According to the results obtained on the evaluated GigaScience database of motor-evoked potentials, the developed approach enhances the physiological explanation of motor imagery in aspects such as neural synchronization between rhythms, brain lateralization, and the ability to predict MI onset responses and their evolution during training sessions.
11
Cai J, Xiao R, Cui W, Zhang S, Liu G. Application of Electroencephalography-Based Machine Learning in Emotion Recognition: A Review. Front Syst Neurosci 2021; 15:729707. [PMID: 34887732] [PMCID: PMC8649925] [DOI: 10.3389/fnsys.2021.729707]
Abstract
Emotion recognition has become increasingly prominent in the medical field and in human-computer interaction. When people's emotions change under external stimuli, various physiological signals of the human body fluctuate. Electroencephalography (EEG) is closely related to brain activity, making it possible to judge a subject's emotional changes through EEG signals. Meanwhile, machine learning algorithms, which excel at extracting data features from a statistical perspective and making judgments, have developed by leaps and bounds. Therefore, using machine learning to extract feature vectors related to emotional states from EEG signals and constructing a classifier that separates emotions into discrete states has broad development prospects. This paper introduces the acquisition, preprocessing, feature extraction, and classification of EEG signals in sequence, following the progress of EEG-based machine learning algorithms for emotion recognition, and may help beginners who will use such algorithms understand the development status of this field. The selected journals were all retrieved from the Web of Science platform, and the publication dates of most of the selected articles are concentrated in 2016-2021.
Affiliation(s)
- Jing Cai
- College of Instrumentation and Electrical Engineering, Jilin University, Changchun, China
- Ruolan Xiao
- College of Instrumentation and Electrical Engineering, Jilin University, Changchun, China
- Wenjie Cui
- College of Instrumentation and Electrical Engineering, Jilin University, Changchun, China
- Shang Zhang
- College of Instrumentation and Electrical Engineering, Jilin University, Changchun, China
- Guangda Liu
- College of Instrumentation and Electrical Engineering, Jilin University, Changchun, China
12
Liu H, Zhang Y, Li Y, Kong X. Review on Emotion Recognition Based on Electroencephalography. Front Comput Neurosci 2021; 15:758212. [PMID: 34658828] [PMCID: PMC8518715] [DOI: 10.3389/fncom.2021.758212]
Abstract
Emotions are closely related to human behavior, family, and society. Changes in emotion cause differences in electroencephalography (EEG) signals, which reflect different emotional states and are not easy to disguise. EEG-based emotion recognition has been widely used in human-computer interaction, medical diagnosis, the military, and other fields. In this paper, we describe the common steps of an EEG-based emotion recognition algorithm, from data acquisition, preprocessing, feature extraction, and feature selection to the classifier. We then review existing EEG-based emotion recognition methods and assess their classification performance. This paper will help researchers quickly understand the basic theory of emotion recognition and provide references for the future development of EEG; moreover, emotion is an important construct in safety psychology.
Affiliation(s)
- Haoran Liu
- The Boiler and Pressure Vessel Safety Inspection Institute of Henan Province, Zhengzhou, China
- Ying Zhang
- Patent Examination Cooperation (Henan) Center of the Patent Office, CNIPA, Zhengzhou, China
- Yujun Li
- The Boiler and Pressure Vessel Safety Inspection Institute of Henan Province, Zhengzhou, China
- Xiangyi Kong
- The Boiler and Pressure Vessel Safety Inspection Institute of Henan Province, Zhengzhou, China