1. Ling Y, Nie F, Yu W, Li X. Discriminative and Robust Autoencoders for Unsupervised Feature Selection. IEEE Transactions on Neural Networks and Learning Systems 2025;36:1622-1636. [PMID: 38090873] [DOI: 10.1109/tnnls.2023.3333737]
Abstract
Many recent research works on unsupervised feature selection (UFS) have focused on how to exploit autoencoders (AEs) to seek informative features. However, existing methods typically employ the squared error to estimate the data reconstruction, which amplifies the negative effect of outliers and can lead to performance degradation. Moreover, traditional AEs aim to extract latent features that capture intrinsic information of the data for accurate data recovery. Without incorporating explicit cluster structure-detecting objectives into the training criterion, AEs fail to capture the latent cluster structure of the data, which is essential for identifying discriminative features. Thus, the selected features lack strong discriminative power. To address these issues, we propose to jointly perform robust feature selection and k-means clustering in a unified framework. Concretely, we exploit an AE with an ℓ2,1-norm as a basic model to seek informative features. To improve robustness against outliers, we introduce an adaptive weight vector for the data reconstruction terms of the AE, which assigns smaller weights to the data with larger errors to automatically reduce the influence of the outliers, and larger weights to the data with smaller errors to strengthen the influence of clean data. To enhance the discriminative power of the selected features, we incorporate k-means clustering into the representation learning of the AE. This allows the AE to continually explore cluster structure information, which can be used to discover more discriminative features. Then, we also present an efficient approach to solve the objective of the corresponding problem. Extensive experiments on various benchmark datasets are provided, which clearly demonstrate that the proposed method outperforms state-of-the-art methods.
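As a rough illustration of the adaptive weighting idea described above (this is not the authors' exact update rule), the following Python sketch assigns smaller weights to samples with larger reconstruction errors before a weighted loss is minimized:

```python
# Minimal sketch: adaptive per-sample weights that shrink the influence of
# badly reconstructed points (illustrative only).
import numpy as np

def adaptive_sample_weights(X, X_hat, eps=1e-8):
    """Smaller weight for samples with larger reconstruction error."""
    err = np.linalg.norm(X - X_hat, axis=1)      # per-sample error
    w = 1.0 / (err + eps)                        # inverse-error weighting
    return w / w.sum()                           # normalize to sum to 1

def weighted_reconstruction_loss(X, X_hat, w):
    """Weighted squared reconstruction loss used to refit the model."""
    per_sample = np.sum((X - X_hat) ** 2, axis=1)
    return float(np.dot(w, per_sample))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
X[:5] += 10.0                                    # a few outliers
X_hat = X.mean(axis=0) * np.ones_like(X)         # stand-in reconstruction
w = adaptive_sample_weights(X, X_hat)
print(weighted_reconstruction_loss(X, X_hat, w)) # outliers contribute little
```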
2. Zhao S, Fei L, Wen J, Zhang B, Zhao P, Li S. Structure Suture Learning-Based Robust Multiview Palmprint Recognition. IEEE Transactions on Neural Networks and Learning Systems 2024;35:8401-8413. [PMID: 37015591] [DOI: 10.1109/tnnls.2022.3227473]
Abstract
Low-quality palmprint images degrade recognition performance when they are captured under open, unconstrained, and low-illumination conditions. Moreover, traditional single-view palmprint representation methods struggle to express the characteristics of each palm strongly when the palmprint characteristics are weak. To tackle these issues, in this article, we propose a structure suture learning-based robust multiview palmprint recognition method (SSL_RMPR), which comprehensively presents the salient palmprint features from multiple views. Unlike existing multiview palmprint representation methods, SSL_RMPR introduces a structure suture learning strategy to produce an elastic nearest neighbor graph (ENNG) on the reconstruction errors that simultaneously exploits the label information and the latent consensus structure of the multiview data, such that the discriminant palmprint representation can be adaptively enhanced. Meanwhile, a low-rank reconstruction term integrated with the projection matrix learning is proposed, in such a manner that the robustness of the projection matrix can be improved. In particular, since no extra structure capture term is imposed on the proposed model, the complexity of the model can be greatly reduced. Experimental results have proven the superiority of the proposed SSL_RMPR by achieving the best recognition performance on a number of real-world palmprint databases.
3. Li X. Positive-Incentive Noise. IEEE Transactions on Neural Networks and Learning Systems 2024;35:8708-8714. [PMID: 37015646] [DOI: 10.1109/tnnls.2022.3224577]
Abstract
Noise is conventionally viewed as a severe problem in diverse fields, e.g., engineering and learning systems. However, this brief aims to investigate whether the conventional proposition always holds. It begins with the definition of task entropy, which extends information entropy and measures the complexity of the task. After introducing the task entropy, noise can be classified into two kinds, positive-incentive noise (Pi-noise or π-noise) and pure noise, according to whether the noise can reduce the complexity of the task. Interestingly, as shown theoretically and empirically, even simple random noise can be π-noise that simplifies the task. π-noise offers new explanations for some models and provides a new principle for some fields, such as multitask learning, adversarial training, and so on. Moreover, it reminds us to rethink the investigation of noise.
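A toy illustration of the task-entropy view (the construction below is hypothetical and only meant to show the definition at work, not the paper's experiments): noise carrying no information about a binary task leaves its uncertainty unchanged, while noise correlated with the task reduces it.

```python
# Plug-in entropy estimates: "pure noise" vs. task-correlated ("pi") noise.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def conditional_entropy(y, z_bins):
    """H(Y | Z) with Z discretized into bins."""
    h = 0.0
    for b in np.unique(z_bins):
        mask = z_bins == b
        p_y = np.bincount(y[mask], minlength=2) / mask.sum()
        h += mask.mean() * entropy(p_y)
    return h

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=20000)
pure_noise = rng.normal(size=y.size)          # unrelated to the task
pi_noise = y + 0.8 * rng.normal(size=y.size)  # weakly carries the label

print("H(Y)             :", round(entropy(np.bincount(y) / y.size), 3))
print("H(Y | pure noise):", round(conditional_entropy(y, np.digitize(pure_noise, [0.0])), 3))
print("H(Y | pi-noise)  :", round(conditional_entropy(y, np.digitize(pi_noise, [0.5])), 3))
```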
4. Wang J, Xie F, Nie F, Li X. Generalized and Robust Least Squares Regression. IEEE Transactions on Neural Networks and Learning Systems 2024;35:7006-7020. [PMID: 36264726] [DOI: 10.1109/tnnls.2022.3213594]
Abstract
As a simple yet effective method, least squares regression (LSR) is extensively applied for data regression and classification. Combined with sparse representation, LSR can be extended to feature selection (FS) as well, in which l1 regularization is often applied in embedded FS algorithms. However, because the loss function is in the form of squared error, LSR and its variants are sensitive to noises, which significantly degrades the effectiveness and performance of classification and FS. To cope with the problem, we propose a generalized and robust LSR (GRLSR) for classification and FS, which is made up of arbitrary concave loss function and the l2,p -norm regularization term. Meanwhile, an iterative algorithm is applied to efficiently deal with the nonconvex minimization problem, in which an additional weight to suppress the effect of noises is added to each data point. The weights can be automatically assigned according to the error of the samples. When the error is large, the value of the corresponding weight is small. It is this mechanism that allows GRLSR to reduce the impact of noises and outliers. According to the different formulations of the concave loss function, four specific methods are proposed to clarify the essence of the framework. Comprehensive experiments on corrupted datasets have proven the advantage of the proposed method.
5. Zhang Y, Kang Y, Guo X, Li P, He H. The effect analysis of shape design of different charging piles based on human physiological characteristics using the MF-DFA. Sci Rep 2024;14:8345. [PMID: 38594451] [PMCID: PMC11004129] [DOI: 10.1038/s41598-024-59147-8]
Abstract
With the rapid development of new energy vehicles, users have an increasing demand for charging piles. A charging pile is generally regarded as a purely practical product that only needs to realize the charging function. However, as a product, the shape design of the charging pile directly affects the user experience and thus product sales. Therefore, in the face of increasingly fierce market competition, the shape of charging piles should be evaluated more objectively by combining traditional evaluation methods with human physiological cognitive characteristics. From the user's point of view, using the user's electroencephalogram (EEG) and the multifractal detrended fluctuation analysis (MF-DFA) method, this paper comprehensively analyzes the differences in emotional cognitive characteristics between two kinds of charging piles, namely a charging pile with a curved appearance design and a charging pile with a square appearance design. The results show that there are significant differences in human physiological cognitive characteristics between the two charging pile shapes, and that different shapes elicit different physiological cognitive responses from users. When designing charging pile product shapes, designers can therefore evaluate the shape design objectively according to the physiological cognitive differences of users, so as to optimize the charging pile product shape design.
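For readers unfamiliar with MF-DFA, the sketch below implements its core steps (profile, segment-wise detrending, fluctuation function, generalized Hurst exponents) on a synthetic stand-in signal; the scales, q values, and detrending order are illustrative assumptions, not the study's settings.

```python
# Minimal MF-DFA sketch (simplified, for illustration).
import numpy as np

def mfdfa(x, scales, q_list, order=1):
    """Return h(q), the generalized Hurst exponents of series x."""
    profile = np.cumsum(x - np.mean(x))
    h = []
    for q in q_list:
        log_F, log_s = [], []
        for s in scales:
            n_seg = len(profile) // s
            F2 = []
            for v in range(n_seg):
                seg = profile[v * s:(v + 1) * s]
                t = np.arange(s)
                trend = np.polyval(np.polyfit(t, seg, order), t)
                F2.append(np.mean((seg - trend) ** 2))
            F2 = np.array(F2)
            if q == 0:
                Fq = np.exp(0.5 * np.mean(np.log(F2)))
            else:
                Fq = np.mean(F2 ** (q / 2.0)) ** (1.0 / q)
            log_F.append(np.log(Fq))
            log_s.append(np.log(s))
        h.append(np.polyfit(log_s, log_F, 1)[0])   # slope of log F_q(s) vs log s
    return np.array(h)

rng = np.random.default_rng(0)
signal = rng.normal(size=4096)                      # stand-in for an EEG-like series
hq = mfdfa(signal, scales=[16, 32, 64, 128, 256], q_list=[-3, -1, 0, 1, 3])
print("h(q):", np.round(hq, 2), "spectrum width proxy:", round(hq.max() - hq.min(), 2))
```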
Affiliation(s)
- Yusheng Zhang
- Electric Power Research Institute of State Grid Shaanxi Electric Power Company, Xi'an, 710003, China
- Yaoyuan Kang
- Electric Power Research Institute of State Grid Shaanxi Electric Power Company, Xi'an, 710003, China
- Xin Guo
- State Grid Electric Auto Service Co., Ltd, Xi'an, 710003, China
- Pan Li
- Electric Power Research Institute of State Grid Shaanxi Electric Power Company, Xi'an, 710003, China
- Hanqing He
- Electric Power Research Institute of State Grid Shaanxi Electric Power Company, Xi'an, 710003, China
6. Wen J, Deng S, Fei L, Zhang Z, Zhang B, Zhang Z, Xu Y. Discriminative Regression With Adaptive Graph Diffusion. IEEE Transactions on Neural Networks and Learning Systems 2024;35:1797-1809. [PMID: 35767490] [DOI: 10.1109/tnnls.2022.3185408]
Abstract
In this article, we propose a new linear regression (LR)-based multiclass classification method, called discriminative regression with adaptive graph diffusion (DRAGD). Different from existing graph embedding-based LR methods, DRAGD introduces a new graph learning and embedding term, which explores the high-order structure information between four tuples, rather than conventional sample pairs to learn an intrinsic graph. Moreover, DRAGD provides a new way to simultaneously capture the local geometric structure and representation structure of data in one term. To enhance the discriminability of the transformation matrix, a retargeted learning approach is introduced. As a result of combining the above-mentioned techniques, DRAGD can flexibly explore more unsupervised information underlying the data and the label information to obtain the most discriminative transformation matrix for multiclass classification tasks. Experimental results on six well-known real-world databases and a synthetic database demonstrate that DRAGD is superior to the state-of-the-art LR methods.
7. Chen Z, Wu XJ, Xu T, Kittler J. Discriminative Dictionary Pair Learning With Scale-Constrained Structured Representation for Image Classification. IEEE Transactions on Neural Networks and Learning Systems 2023;34:10225-10239. [PMID: 37015383] [DOI: 10.1109/tnnls.2022.3165217]
Abstract
The dictionary pair learning (DPL) model aims to design a synthesis dictionary and an analysis dictionary to accomplish the goal of rapid sample encoding. In this article, we propose a novel structured representation learning algorithm based on the DPL for image classification. It is referred to as discriminative DPL with scale-constrained structured representation (DPL-SCSR). The proposed DPL-SCSR utilizes the binary label matrix of dictionary atoms to project the representation into the corresponding label space of the training samples. By imposing a non-negative constraint, the learned representation adaptively approximates a block-diagonal structure. This innovative transformation is also capable of controlling the scale of the block-diagonal representation by enforcing the sum of within-class coefficients of each sample to 1, which means that the dictionary atoms of each class compete to represent the samples from the same class. This implies that the requirement of similarity preservation is considered from the perspective of the constraint on the sum of coefficients. More importantly, the DPL-SCSR does not need to design a classifier in the representation space as the label matrix of the dictionary can also be used as an efficient linear classifier. Finally, the DPL-SCSR imposes the l2,p -norm on the analysis dictionary to make the process of feature extraction more interpretable. The DPL-SCSR seamlessly incorporates the scale-constrained structured representation learning, within-class similarity preservation of representation, and the linear classifier into one regularization term, which dramatically reduces the complexity of training and parameter tuning. The experimental results on several popular image classification datasets show that our DPL-SCSR can deliver superior performance compared with the state-of-the-art (SOTA) dictionary learning methods. The MATLAB code of this article is available at https://github.com/chenzhe207/DPL-SCSR.
8. Wang J, Xie F, Nie F, Li X. Robust Supervised and Semisupervised Least Squares Regression Using ℓ2,p-Norm Minimization. IEEE Transactions on Neural Networks and Learning Systems 2023;34:8389-8403. [PMID: 35196246] [DOI: 10.1109/tnnls.2022.3150102]
Abstract
Least squares regression (LSR) is widely applied in statistics theory due to its closed-form solution, which can be used in supervised, semisupervised, and multiclass learning. However, LSR begins to fail and its discriminative ability cannot be guaranteed when the original data have been corrupted by noise. In reality, noise is unavoidable and can greatly affect the error construction in LSR. To cope with this problem, a robust supervised LSR (RSLSR) is proposed to eliminate the effect of noise and outliers. The loss function adopts the ℓ2,p-norm instead of the squared loss. In addition, a probability weight is added to each sample to determine whether the sample is a normal point or not. Its physical meaning is clear: if the point is normal, the weight is 1; otherwise, the weight is 0. To effectively solve the concave problem, an iterative algorithm is introduced, in which additional weights are added to penalize normal samples with large errors. We also extend RSLSR to robust semisupervised LSR (RSSLSR) to fully utilize the limited labeled samples. Extensive classification experiments on corrupted data illustrate the robustness of the proposed methods.
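The weighting mechanism described here is closely related to iteratively reweighted least squares. The following generic sketch (not the exact RSLSR solver) shows how an ℓ2,p-type loss can be minimized by alternating between a row-weight update and a weighted least squares solve:

```python
# Generic IRLS sketch for a robust l2,p-style regression loss (illustrative).
import numpy as np

def robust_lsr_l2p(X, Y, p=1.0, reg=1e-3, n_iter=30, eps=1e-8):
    n, d = X.shape
    W = np.linalg.solve(X.T @ X + reg * np.eye(d), X.T @ Y)   # plain LSR init
    for _ in range(n_iter):
        E = X @ W - Y                                          # residual rows
        row_norm2 = np.sum(E ** 2, axis=1) + eps
        w = (p / 2.0) * row_norm2 ** ((p - 2.0) / 2.0)         # small weight for big errors
        Xw = X * w[:, None]
        W = np.linalg.solve(X.T @ Xw + reg * np.eye(d), Xw.T @ Y)
    return W

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
W_true = rng.normal(size=(10, 3))
Y = X @ W_true
Y[:10] += 20 * rng.normal(size=(10, 3))                        # corrupted rows
W_hat = robust_lsr_l2p(X, Y, p=1.0)
print("recovery error:", round(np.linalg.norm(W_hat - W_true), 3))
```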
9. Chang W, Nie F, Zhi Y, Wang R, Li X. Multitask Learning for Classification Problem via New Tight Relaxation of Rank Minimization. IEEE Transactions on Neural Networks and Learning Systems 2023;34:6055-6068. [PMID: 34914600] [DOI: 10.1109/tnnls.2021.3132918]
Abstract
Multitask learning (MTL) is a joint learning paradigm that fuses multiple related tasks together to achieve better performance than single-task learning methods. It has been observed by many researchers that different tasks with certain similarities share a low-dimensional common yet latent subspace. In order to obtain the low-rank structure shared across tasks, the trace norm has been used as a convex relaxation of the rank minimization problem. However, the trace norm is not a tight approximation of the rank function. To address this important issue, we propose two novel regularization-based models that approximate the rank minimization problem by minimizing the k minimal singular values. For our new models, if the minimal singular values are suppressed to zeros, the rank is also reduced. Compared with the standard trace norm, our new regularization-based models are tighter approximations, which helps our models better capture the low-dimensional subspace shared among multiple tasks. Besides, directly solving the exact rank minimization problem for our models is NP-hard. In this article, we propose two simple but effective strategies to optimize our models, which tactically solve the exact rank minimization problem by setting a large penalty parameter. Experimental results on synthetic and real-world benchmark datasets demonstrate that the proposed models are able to learn the low-rank structure shared across tasks and achieve better performance than other classical MTL methods.
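The sketch below only illustrates the regularizer being discussed: unlike the trace norm, a penalty on the k smallest singular values vanishes once the rank drops to the tolerated level. The paper's optimization strategy is not reproduced here.

```python
# Trace norm vs. a penalty on the k smallest singular values (illustration only).
import numpy as np

def trace_norm(W):
    return float(np.linalg.svd(W, compute_uv=False).sum())

def k_smallest_sv_penalty(W, k):
    s = np.linalg.svd(W, compute_uv=False)
    return float(np.sort(s)[:k].sum())     # only the k minimal singular values

rng = np.random.default_rng(0)
shared = rng.normal(size=(8, 2)) @ rng.normal(size=(2, 50))   # rank-2 shared structure
W = shared + 0.01 * rng.normal(size=(8, 50))                  # 8 tasks, 50 features
print("trace norm        :", round(trace_norm(W), 2))          # penalizes everything
print("sum of 6 smallest :", round(k_smallest_sv_penalty(W, 6), 2))  # near 0 if rank <= 2
```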
10. Wang C, Yang Z, Ye J, Yang X. Kernel-Free Quadratic Surface Regression for Multi-Class Classification. Entropy (Basel) 2023;25:1103. [PMID: 37510050] [PMCID: PMC10379108] [DOI: 10.3390/e25071103]
Abstract
For multi-class classification problems, a new kernel-free nonlinear classifier is presented, called hard quadratic surface least squares regression (HQSLSR). It combines the benefits of the least squares loss function and the quadratic kernel-free trick. The optimization problem of HQSLSR is convex and unconstrained, making it easy to solve. Further, to improve the generalization ability of HQSLSR, a softened version (SQSLSR) is proposed by introducing an ε-dragging technique, which can enlarge the between-class distance. The optimization problem of SQSLSR is solved by designing an alternating iteration algorithm. The convergence, interpretability and computational complexity of our methods are addressed in a theoretical analysis. The visualization results on five artificial datasets demonstrate that the obtained regression function in each category has geometric diversity and show the advantage of the ε-dragging technique. Furthermore, experimental results on benchmark datasets show that our methods perform comparably to some state-of-the-art classifiers.
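A minimal sketch of the kernel-free quadratic-surface idea (the paper's constraints and ε-dragging refinement are omitted): each sample is mapped to its explicit quadratic monomials and ordinary least squares is fitted on one-hot targets.

```python
# Kernel-free quadratic surface regression, bare-bones illustration.
import numpy as np

def quadratic_map(X):
    """[x, upper-triangular terms of x x^T] for every sample."""
    n, d = X.shape
    iu = np.triu_indices(d)
    quad = np.array([np.outer(x, x)[iu] for x in X])
    return np.hstack([X, quad])

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = (np.sum(X ** 2, axis=1) > 4).astype(int)              # quadratic decision surface
Y = np.eye(2)[y]                                            # one-hot targets
Phi = np.hstack([quadratic_map(X), np.ones((len(X), 1))])   # add bias term
W = np.linalg.lstsq(Phi, Y, rcond=None)[0]
pred = np.argmax(Phi @ W, axis=1)
print("training accuracy:", round((pred == y).mean(), 3))
```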
Affiliation(s)
- Changlin Wang
- College of Mathematics and Systems Science, Xinjiang University, Urumqi 830046, China
- Institute of Mathematics and Physics, Xinjiang University, Urumqi 830046, China
- Zhixia Yang
- College of Mathematics and Systems Science, Xinjiang University, Urumqi 830046, China
- Institute of Mathematics and Physics, Xinjiang University, Urumqi 830046, China
- Junyou Ye
- College of Mathematics and Systems Science, Xinjiang University, Urumqi 830046, China
- Institute of Mathematics and Physics, Xinjiang University, Urumqi 830046, China
- Xue Yang
- College of Mathematics and Systems Science, Xinjiang University, Urumqi 830046, China
- Institute of Mathematics and Physics, Xinjiang University, Urumqi 830046, China
11. Sha T, Zhang Y, Peng Y, Kong W. Semi-supervised regression with adaptive graph learning for EEG-based emotion recognition. Mathematical Biosciences and Engineering 2023;20:11379-11402. [PMID: 37322987] [DOI: 10.3934/mbe.2023505]
Abstract
Electroencephalogram (EEG) signals are widely used in the field of emotion recognition since it is resistant to camouflage and contains abundant physiological information. However, EEG signals are non-stationary and have low signal-noise-ratio, making it more difficult to decode in comparison with data modalities such as facial expression and text. In this paper, we propose a model termed semi-supervised regression with adaptive graph learning (SRAGL) for cross-session EEG emotion recognition, which has two merits. On one hand, the emotional label information of unlabeled samples is jointly estimated with the other model variables by a semi-supervised regression in SRAGL. On the other hand, SRAGL adaptively learns a graph to depict the connections among EEG data samples which further facilitates the emotional label estimation process. From the experimental results on the SEED-IV data set, we have the following insights. 1) SRAGL achieves superior performance compared to some state-of-the-art algorithms. To be specific, the average accuracies are 78.18%, 80.55%, and 81.90% in the three cross-session emotion recognition tasks. 2) As the iteration number increases, SRAGL converges quickly and optimizes the emotion metric of EEG samples gradually, leading to a reliable similarity matrix finally. 3) Based on the learned regression projection matrix, we obtain the contribution of each EEG feature, which enables us to automatically identify critical frequency bands and brain regions in emotion recognition.
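As background, the sketch below shows plain graph-based label propagation on a fixed kNN graph; SRAGL differs in that the graph itself is learned adaptively together with the regression, which is not shown here.

```python
# Fixed-graph label propagation sketch (a simplified stand-in for adaptive-graph
# semi-supervised label estimation).
import numpy as np

def knn_graph(X, k=10, sigma=1.0):
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.zeros_like(d2)
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]              # k nearest neighbours
    for i, nb in enumerate(idx):
        W[i, nb] = np.exp(-d2[i, nb] / (2 * sigma ** 2))
    W = np.maximum(W, W.T)                                 # symmetrize
    return W / W.sum(axis=1, keepdims=True)                # row-normalize

def propagate(W, Y_init, labeled, n_iter=50):
    F = Y_init.copy()
    for _ in range(n_iter):
        F = W @ F
        F[labeled] = Y_init[labeled]                       # clamp labeled samples
    return F.argmax(axis=1)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(4, 1, (50, 5))])
y = np.array([0] * 50 + [1] * 50)
labeled = np.array([0, 1, 2, 50, 51, 52])
Y_init = np.zeros((100, 2))
Y_init[labeled, y[labeled]] = 1
pred = propagate(knn_graph(X), Y_init, labeled)
print("accuracy:", round((pred == y).mean(), 3))
```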
Affiliation(s)
- Tianhui Sha
- School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, 310018, China
- Yikai Zhang
- School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, 310018, China
- Yong Peng
- School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, 310018, China
- Zhejiang Key Laboratory of Brain-Machine Collaborative Intelligence, Hangzhou, 310018, China
- Wanzeng Kong
- School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, 310018, China
- Zhejiang Key Laboratory of Brain-Machine Collaborative Intelligence, Hangzhou, 310018, China
12. Ying X, Liu H, Huang R. COVID-19 chest X-ray image classification in the presence of noisy labels. Displays 2023;77:102370. [PMID: 36644695] [PMCID: PMC9826538] [DOI: 10.1016/j.displa.2023.102370]
Abstract
The Corona Virus Disease 2019 (COVID-19) has been declared a worldwide pandemic, and a key method for diagnosing COVID-19 is chest X-ray imaging. The application of convolutional neural network with medical imaging helps to diagnose the disease accurately, where the label quality plays an important role in the classification problem of COVID-19 chest X-rays. However, most of the existing classification methods ignore the problem that the labels are hardly completely true and effective, and noisy labels lead to a significant degradation in the performance of image classification frameworks. In addition, due to the wide distribution of lesions and the large number of local features of COVID-19 chest X-ray images, existing label recovery algorithms have to face the bottleneck problem of the difficult reuse of noisy samples. Therefore, this paper introduces a general classification framework for COVID-19 chest X-ray images with noisy labels and proposes a noisy label recovery algorithm based on subset label iterative propagation and replacement (SLIPR). Specifically, the proposed algorithm first obtains random subsets of the samples multiple times. Then, it integrates several techniques such as principal component analysis, low-rank representation, neighborhood graph regularization, and k-nearest neighbor for feature extraction and image classification. Finally, multi-level weight distribution and replacement are performed on the labels to cleanse the noise. In addition, for the label-recovered dataset, high confidence samples are further selected as the training set to improve the stability and accuracy of the classification framework without affecting its inherent performance. In this paper, three typical datasets are chosen to conduct extensive experiments and comparisons of existing algorithms under different metrics. Experimental results on three publicly available COVID-19 chest X-ray image datasets show that the proposed algorithm can effectively recover noisy labels and improve the accuracy of the image classification framework by 18.9% on the Tawsifur dataset, 19.92% on the Skytells dataset, and 16.72% on the CXRs dataset. Compared to the state-of-the-art algorithms, the gain of classification accuracy of SLIPR on the three datasets can reach 8.67%-19.38%, and the proposed algorithm also has certain scalability while ensuring data integrity.
Affiliation(s)
- Xiaoqing Ying
- College of Information Science and Technology, Donghua University, Shanghai 201620, China
- Hao Liu
- College of Information Science and Technology, Donghua University, Shanghai 201620, China
- Engineering Research Center of Digitized Textile & Apparel Technology, Ministry of Education, Shanghai 201620, China
- Rong Huang
- College of Information Science and Technology, Donghua University, Shanghai 201620, China
13. Xia S, Zheng S, Wang G, Gao X, Wang B. Granular Ball Sampling for Noisy Label Classification or Imbalanced Classification. IEEE Transactions on Neural Networks and Learning Systems 2023;34:2144-2155. [PMID: 34460405] [DOI: 10.1109/tnnls.2021.3105984]
Abstract
This article presents a general sampling method, called granular-ball sampling (GBS), for classification problems by introducing the idea of granular computing. The GBS method uses some adaptively generated hyperballs to cover the data space, and the points on the hyperballs constitute the sampled data. GBS is the first sampling method that not only reduces the data size but also improves the data quality in noisy label classification. In addition, because the GBS method can be used to exactly describe the boundary, it can obtain almost the same classification accuracy as the results on the original datasets, and it can obtain an obviously higher classification accuracy than random sampling. Therefore, for the data reduction classification task, GBS is a general method that is not especially restricted by any specific classifier or dataset. Moreover, the GBS can be effectively used as an undersampling method for imbalanced classification. It has a time complexity that is close to O( N ), so it can accelerate most classifiers. These advantages make GBS powerful for improving the performance of classifiers. All codes have been released in the open source GBS library at http://www.cquptshuyinxia.com/GBS.html.
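A heavily simplified, illustrative version of granular-ball style sampling is sketched below (the published GBS procedure has additional rules; see the authors' released code for the actual implementation): data are recursively split by 2-means until each ball is sufficiently pure, and the ball centres with their majority labels form the reduced training set.

```python
# Simplified granular-ball style sampling sketch (illustrative only).
import numpy as np

def split_into_balls(X, y, purity=0.95, min_size=4, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    balls, stack = [], [(X, y)]
    while stack:
        Xb, yb = stack.pop()
        counts = np.bincount(yb)
        if counts.max() / len(yb) >= purity or len(yb) <= min_size:
            balls.append((Xb.mean(axis=0), counts.argmax()))   # centre + majority label
            continue
        c = Xb[rng.choice(len(Xb), 2, replace=False)]           # crude 2-means split
        for _ in range(10):
            assign = np.argmin(((Xb[:, None, :] - c[None]) ** 2).sum(-1), axis=1)
            for j in range(2):
                if np.any(assign == j):
                    c[j] = Xb[assign == j].mean(axis=0)
        if assign.min() == assign.max():                         # split failed, keep ball
            balls.append((Xb.mean(axis=0), counts.argmax()))
            continue
        stack += [(Xb[assign == 0], yb[assign == 0]),
                  (Xb[assign == 1], yb[assign == 1])]
    centres, labels = zip(*balls)
    return np.array(centres), np.array(labels)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(3, 1, (500, 2))])
y = np.array([0] * 500 + [1] * 500)
Xs, ys = split_into_balls(X, y)
print("original:", len(X), "sampled ball centres:", len(Xs))
```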
14. Liu Z, Lai Z, Ou W, Zhang K, Huo H. Discriminative sparse least square regression for semi-supervised learning. Inf Sci (N Y) 2023. [DOI: 10.1016/j.ins.2023.03.128]
15. Ruan W, Sun L. Robust latent discriminant adaptive graph preserving learning for image feature extraction. Knowl Based Syst 2023. [DOI: 10.1016/j.knosys.2023.110487]
16. Sha T, Peng Y. Orthogonal semi-supervised regression with adaptive label dragging for cross-session EEG emotion recognition. Journal of King Saud University - Computer and Information Sciences 2023. [DOI: 10.1016/j.jksuci.2023.03.014]
17. Regularized denoising latent subspace based linear regression for image classification. Pattern Anal Appl 2023. [DOI: 10.1007/s10044-023-01149-9]
18. Noise-related face image recognition based on double dictionary transform learning. Inf Sci (N Y) 2023. [DOI: 10.1016/j.ins.2023.02.041]
19. Hu L, Zhang W, Dai Z. Joint Sparse Locality-Aware Regression for Robust Discriminative Learning. IEEE Transactions on Cybernetics 2022;52:12245-12258. [PMID: 34166212] [DOI: 10.1109/tcyb.2021.3080128]
Abstract
With the dramatic increase of dimensions in the data representation, extracting latent low-dimensional features becomes of the utmost importance for efficient classification. Aiming at the problems of weakly discriminating marginal representation and difficulty in revealing the data manifold structure in most of the existing linear discriminant methods, we propose a more powerful discriminant feature extraction framework, namely, joint sparse locality-aware regression (JSLAR). In our model, we formulate a new strategy induced by the nonsquared L2 norm for enhancing the local intraclass compactness of the data manifold, which can achieve the joint learning of the locality-aware graph structure and the desirable projection matrix. Besides, we formulate a weighted retargeted regression to perform the marginal representation learning adaptively instead of using the general average interclass margin. To alleviate the disturbance of outliers and prevent overfitting, we measure the regression term and locality-aware term together with the regularization term by forcing the row sparsity with the joint L2,1 norms. Then, we derive an effective iterative algorithm for solving the proposed model. The experimental results over a range of benchmark databases demonstrate that the proposed JSLAR outperforms some state-of-the-art approaches.
20. Dornaika F, Moujahid A. Feature and instance selection through discriminant analysis criteria. Soft Comput 2022. [DOI: 10.1007/s00500-022-07513-x]
21. Yang Z, Wu X, Huang P, Zhang F, Wan M, Lai Z. Orthogonal Autoencoder Regression for Image Classification. Inf Sci (N Y) 2022. [DOI: 10.1016/j.ins.2022.10.068]
22. Wang F, Wang H, Zhou X, Fu R. Study on the Effect of Judgment Excitation Mode to Relieve Driving Fatigue Based on MF-DFA. Brain Sci 2022;12:1199. [PMID: 36138935] [PMCID: PMC9496687] [DOI: 10.3390/brainsci12091199]
Abstract
Driving fatigue refers to a phenomenon in which a driver’s physiological and psychological functions become unbalanced after a long period of continuous driving, and their driving skills decline objectively. The hidden dangers of driving fatigue to traffic safety should not be underestimated. In this work, we propose a judgment excitation mode (JEM), which adds secondary cognitive tasks to driving behavior through dual-channel human–computer interaction, so as to delay the occurrence of driving fatigue. We used multifractal detrended fluctuation analysis (MF-DFA) to study the dynamic properties of subjects’ EEG, and analyzed the effect of JEM on fatigue retardation by Hurst exponent value and multifractal spectrum width value. The results show that the multifractal properties of the two driving modes (normal driving mode and JEM) are significantly different. The JEM we propose can effectively delay the occurrence of driving fatigue, and has good prospects for future practical applications.
Affiliation(s)
- Fuwang Wang
- School of Mechanic Engineering, Northeast Electric Power University, Jilin City 132012, China
- Hao Wang
- School of Mechanic Engineering, Northeast Electric Power University, Jilin City 132012, China
- Xin Zhou
- School of Mechanic Engineering, Northeast Electric Power University, Jilin City 132012, China
- Rongrong Fu
- College of Electrical Engineering, Yanshan University, Qinhuangdao 066004, China
23. Jiang B, Xiang J, Wu X, Wang Y, Chen H, Cao W, Sheng W. Robust multi-view learning via adaptive regression. Inf Sci (N Y) 2022. [DOI: 10.1016/j.ins.2022.08.017]
24. Yang Y, Xue Z, Ma J, Chang X. Robust projection twin extreme learning machines with capped L1-norm distance metric. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.09.156]
25. Zheng W, Chen S, Fu Z, Zhu F, Yan H, Yang J. Feature Selection Boosted by Unselected Features. IEEE Transactions on Neural Networks and Learning Systems 2022;33:4562-4574. [PMID: 33646957] [DOI: 10.1109/tnnls.2021.3058172]
Abstract
Feature selection aims to select strongly relevant features and discard the rest. Recently, embedded feature selection methods, which incorporate feature weights learning into the training process of a classifier, have attracted much attention. However, traditional embedded methods merely focus on the combinatorial optimality of all selected features. They sometimes select the weakly relevant features with satisfactory combination abilities and leave out some strongly relevant features, thereby degrading the generalization performance. To address this issue, we propose a novel embedded framework for feature selection, termed feature selection boosted by unselected features (FSBUF). Specifically, we introduce an extra classifier for unselected features into the traditional embedded model and jointly learn the feature weights to maximize the classification loss of unselected features. As a result, the extra classifier recycles the unselected strongly relevant features to replace the weakly relevant features in the selected feature subset. Our final objective can be formulated as a minimax optimization problem, and we design an effective gradient-based algorithm to solve it. Furthermore, we theoretically prove that the proposed FSBUF is able to improve the generalization ability of traditional embedded feature selection methods. Extensive experiments on synthetic and real-world data sets exhibit the comprehensibility and superior performance of FSBUF.
26. Ran X, Shi J, Chen Y, Jiang K. Multimodal neuroimage data fusion based on multikernel learning in personalized medicine. Front Pharmacol 2022;13:947657. [PMID: 36059988] [PMCID: PMC9428611] [DOI: 10.3389/fphar.2022.947657]
Abstract
Neuroimaging has been widely used as a diagnostic technique for brain diseases. With the development of artificial intelligence, neuroimaging analysis using intelligent algorithms can capture more image feature patterns than artificial experience-based diagnosis. However, using only single neuroimaging techniques, e.g., magnetic resonance imaging, may omit some significant patterns that may have high relevance to the clinical target. Therefore, so far, combining different types of neuroimaging techniques that provide multimodal data for joint diagnosis has received extensive attention and research in the area of personalized medicine. In this study, based on the regularized label relaxation linear regression model, we propose a multikernel version for multimodal data fusion. The proposed method inherits the merits of the regularized label relaxation linear regression model and also has its own superiority. It can explore complementary patterns across different modal data and pay more attention to the modal data that have more significant patterns. In the experimental study, the proposed method is evaluated in the scenario of Alzheimer’s disease diagnosis. The promising performance indicates that the performance of multimodality fusion via multikernel learning is better than that of single modality. Moreover, the decreased square difference between training and testing performance indicates that overfitting is reduced and hence the generalization ability is improved.
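The fusion idea can be illustrated with a weighted sum of per-modality kernels feeding a kernel-based classifier; the fixed weights and the synthetic "MRI/PET-like" features below are assumptions made for the example, not the paper's learned quantities.

```python
# Multikernel fusion sketch: one kernel per modality, combined before a
# kernel ridge classifier (illustrative weights and data).
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
n = 200
y = np.repeat([0, 1], n // 2)
mod1 = rng.normal(y[:, None] * 1.5, 1.0, (n, 10))       # e.g., MRI-derived features (assumed)
mod2 = rng.normal(y[:, None] * 0.5, 1.0, (n, 6))        # e.g., PET-derived features (assumed)
beta = [0.7, 0.3]                                        # modality weights (assumed, not learned)

K = beta[0] * rbf_kernel(mod1, mod1) + beta[1] * rbf_kernel(mod2, mod2)
alpha = np.linalg.solve(K + 1e-2 * np.eye(n), 2 * y - 1)  # kernel ridge on +/-1 labels
pred = (K @ alpha > 0).astype(int)
print("training accuracy with fused kernel:", round((pred == y).mean(), 3))
```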
27. Wang C, Chen X, Yuan G, Nie F, Yang M. Semisupervised Feature Selection With Sparse Discriminative Least Squares Regression. IEEE Transactions on Cybernetics 2022;52:8413-8424. [PMID: 33872166] [DOI: 10.1109/tcyb.2021.3060804]
Abstract
In the era of big data, selecting informative features has become an urgent need. However, due to the huge cost of obtaining enough labeled data for supervised tasks, researchers have turned their attention to semisupervised learning, which exploits both labeled and unlabeled data. In this article, we propose a sparse discriminative semisupervised feature selection (SDSSFS) method. In this method, the ε-dragging technique for supervised tasks is extended to the semisupervised task and used to enlarge the distance between classes in order to obtain a discriminative solution. The flexible ℓ2,p norm is implicitly used as regularization in the new model. Therefore, we can obtain a sparser solution by setting a smaller p. An iterative method is proposed to simultaneously learn the regression coefficients and the ε-dragging matrix and to predict the unknown class labels. Experimental results on ten real-world datasets show the superiority of our proposed method.
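The ε-dragging mechanism mentioned here has a simple closed-form update, sketched below in a generic alternating scheme (this is an illustration of the technique, not the full SDSSFS solver with its ℓ2,p regularization and label estimation):

```python
# Generic epsilon-dragging least squares regression sketch.
import numpy as np

def epsilon_dragging_lsr(X, Y, n_iter=20, reg=1e-2):
    n, d = X.shape
    B = 2 * Y - 1                        # dragging directions: +1 for true class, -1 otherwise
    M = np.zeros_like(Y)                 # non-negative dragging magnitudes
    for _ in range(n_iter):
        T = Y + B * M                    # relaxed regression targets
        W = np.linalg.solve(X.T @ X + reg * np.eye(d), X.T @ T)
        R = X @ W - Y
        M = np.maximum(B * R, 0)         # closed-form update of the dragging matrix
    return W

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (60, 8)), rng.normal(1, 1, (60, 8))])
labels = np.array([0] * 60 + [1] * 60)
Y = np.eye(2)[labels]
W = epsilon_dragging_lsr(X, Y)
print("train accuracy:", round(((X @ W).argmax(1) == labels).mean(), 3))
```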
28. Zhang Q, Cheng Y, Zhao F, Wang G, Xia S. Optimal Scale Combination Selection Integrating Three-Way Decision With Hasse Diagram. IEEE Transactions on Neural Networks and Learning Systems 2022;33:3675-3689. [PMID: 33635795] [DOI: 10.1109/tnnls.2021.3054063]
Abstract
Multi-scale decision system (MDS) is an effective tool to describe hierarchical data in machine learning. Optimal scale combination (OSC) selection and attribute reduction are two key issues related to knowledge discovery in MDSs. However, searching for all OSCs may result in a combinatorial explosion, and the existing approaches typically incur excessive time consumption. In this study, searching for all OSCs is considered as an optimization problem with the scale space as the search space. Accordingly, a sequential three-way decision model of the scale space is established to reduce the search space by integrating three-way decision with the Hasse diagram. First, a novel scale combination is proposed to perform scale selection and attribute reduction simultaneously, and then an extended stepwise optimal scale selection (ESOSS) method is introduced to quickly search for a single local OSC on a subset of the scale space. Second, based on the obtained local OSCs, a sequential three-way decision model of the scale space is established to divide the search space into three pair-wise disjoint regions, namely the positive, negative, and boundary regions. The boundary region is regarded as a new search space, and it can be proved that a local OSC on the boundary region is also a global OSC. Therefore, all OSCs of a given MDS can be obtained by searching for the local OSCs on the boundary regions in a step-by-step manner. Finally, according to the properties of the Hasse diagram, a formula for calculating the maximal elements of a given boundary region is provided to alleviate space complexity. Accordingly, an efficient OSC selection algorithm is proposed to improve the efficiency of searching for all OSCs by reducing the search space. The experimental results demonstrate that the proposed method can significantly reduce computational time.
29. Nie F, Wang Z, Tian L, Wang R, Li X. Subspace Sparse Discriminative Feature Selection. IEEE Transactions on Cybernetics 2022;52:4221-4233. [PMID: 33055053] [DOI: 10.1109/tcyb.2020.3025205]
Abstract
In this article, we propose a novel feature selection approach via explicitly addressing the long-standing subspace sparsity issue. Leveraging l2,1 -norm regularization for feature selection is the major strategy in existing methods, which, however, confronts sparsity limitation and parameter-tuning trouble. To circumvent this problem, employing the l2,0 -norm constraint to improve the sparsity of the model has gained more attention recently whereas, optimizing the subspace sparsity constraint is still an unsolved problem, which only can acquire an approximate solution and without convergence proof. To address the above challenges, we innovatively propose a novel subspace sparsity discriminative feature selection (S2DFS) method which leverages a subspace sparsity constraint to avoid tuning parameters. In addition, the trace ratio formulated objective function extremely ensures the discriminability of selected features. Most important, an efficient iterative optimization algorithm is presented to explicitly solve the proposed problem with a closed-form solution and strict convergence proof. To the best of our knowledge, such an optimization algorithm of solving the subspace sparsity issue is first proposed in this article, and a general formulation of the optimization algorithm is provided for improving the extensibility and portability of our method. Extensive experiments conducted on several high-dimensional text and image datasets demonstrate that the proposed method outperforms related state-of-the-art methods in pattern classification and image retrieval tasks.
30. Regularized discriminative broad learning system for image classification. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.109306]
31. Guo Y, Sun H, Hao S. Adaptive dictionary and structure learning for unsupervised feature selection. Inf Process Manag 2022. [DOI: 10.1016/j.ipm.2022.102931]
32. Transfer Subspace Learning based on Double Relaxed Regression for Image Classification. Appl Intell 2022. [DOI: 10.1007/s10489-022-03213-z]
33. Li X, Wang Y, Ruiz R. A Survey on Sparse Learning Models for Feature Selection. IEEE Transactions on Cybernetics 2022;52:1642-1660. [PMID: 32386172] [DOI: 10.1109/tcyb.2020.2982445]
Abstract
Feature selection is important in both machine learning and pattern recognition. Successfully selecting informative features can significantly increase learning accuracy and improve result comprehensibility. Various methods have been proposed to identify informative features from high-dimensional data by removing redundant and irrelevant features to improve classification accuracy. In this article, we systematically survey existing sparse learning models for feature selection from the perspectives of individual sparse feature selection and group sparse feature selection, and analyze the differences and connections among various sparse learning models. Promising research directions and topics on sparse learning models are analyzed.
34. Huang P, Yang Z, Wang W, Zhang F. Denoising Low-Rank Discrimination based Least Squares Regression for image classification. Inf Sci (N Y) 2022. [DOI: 10.1016/j.ins.2021.12.031]
35. Deep multi-view feature learning for detecting COVID-19 based on chest X-ray images. Biomed Signal Process Control 2022;75:103595. [PMID: 35222680] [PMCID: PMC8864146] [DOI: 10.1016/j.bspc.2022.103595]
Abstract
Aim: COVID-19 is a pandemic infectious disease which has influenced the life and health of many communities since December 2019. The rapid worldwide spread of this highly contagious disease makes early detection with high accuracy important for breaking the chain of transmission. X-ray images of COVID-19 patients reveal specific abnormalities associated with this disease.
Methods: In this study, a multi-view feature learning method for detecting COVID-19 based on chest X-ray images is presented. This method provides a framework for exploiting multiple types of deep features, which is able to preserve both the correlative and the complementary information and to achieve accurate detection at the classification phase. Deep features are extracted using the pre-trained deep CNN models AlexNet, GoogleNet, ResNet50, SqueezeNet, and VGG19. The learned feature representations of the X-ray images are then classified using an ELM.
Results: The experiments show that our method achieves accuracy scores of 100%, 99.82%, and 99.82% in detecting the three classes of COVID-19, normal, and pneumonia, respectively. The sensitivities of the three classes are 100%, 100%, and 99.45%, respectively. The specificities of the three classes are 100%, 99.73%, and 100%, respectively. The precision values of the three classes are 100%, 99.45%, and 100%, respectively. The F-scores of the three classes are 100%, 99.73%, and 99.72%, respectively. The overall accuracy score of our method is 99.82%.
Conclusions: The results demonstrate the effectiveness of our method in detecting COVID-19 cases; it can therefore assist experts in early diagnosis based on X-ray images.
36. When Multi-view Classification Meets Ensemble Learning. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.02.052]
37. Ma J, Zhou S. Discriminative least squares regression for multiclass classification based on within-class scatter minimization. Appl Intell 2022. [DOI: 10.1007/s10489-021-02258-w]
38. Lu J, Lai Z, Wang H, Chen Y, Zhou J, Shen L. Generalized Embedding Regression: A Framework for Supervised Feature Extraction. IEEE Transactions on Neural Networks and Learning Systems 2022;33:185-199. [PMID: 33147149] [DOI: 10.1109/tnnls.2020.3027602]
Abstract
Sparse discriminative projection learning has attracted much attention due to its good performance in recognition tasks. In this article, a framework called generalized embedding regression (GER) is proposed, which can simultaneously perform low-dimensional embedding and sparse projection learning in a joint objective function with a generalized orthogonal constraint. Moreover, the label information is integrated into the model to preserve the global structure of data, and a rank constraint is imposed on the regression matrix to explore the underlying correlation structure of classes. Theoretical analysis shows that GER can obtain the same or approximate solution as some related methods with special settings. By utilizing this framework as a general platform, we design a novel supervised feature extraction approach called jointly sparse embedding regression (JSER). In JSER, we construct an intrinsic graph to characterize the intraclass similarity and a penalty graph to indicate the interclass separability. Then, the penalty graph Laplacian is used as the constraint matrix in the generalized orthogonal constraint to deal with interclass marginal points. Moreover, the L2,1 -norm is imposed on the regression terms for robustness to outliers and data's variations and the regularization term for jointly sparse projection learning, leading to interesting semantic interpretability. An effective iterative algorithm is elaborately designed to solve the optimization problem of JSER. Theoretically, we prove that the subproblem of JSER is essentially an unbalanced Procrustes problem and can be solved iteratively. The convergence of the designed algorithm is also proved. Experimental results on six well-known data sets indicate the competitive performance and latent properties of JSER.
39. Wang Y, Yang L. Joint learning adaptive metric and optimal classification hyperplane. Neural Netw 2022;148:111-120. [DOI: 10.1016/j.neunet.2022.01.002]
40.
Abstract
Background:
Therapeutic peptide prediction is critical for drug development and therapy. Researchers have been studying this essential task, developing several computational methods to identify different therapeutic peptide types.
Objective:
Most existing predictors are specific to certain peptide types. Currently, developing methods that predict the presence of multiple peptide types remains a challenging problem. Moreover, it is still challenging to combine different features for therapeutic peptide prediction.
Method:
In this paper, we propose a new ensemble method, TP-MV, for general therapeutic peptide recognition. TP-MV is developed using the stacking framework in conjunction with KNN, SVM, ET, RF, and XGB base classifiers. TP-MV then constructs a multi-view learning model as the meta-classifier to extract discriminative features for different peptides.
Results:
In the experiment, the proposed method outperforms the other existing methods on the benchmark datasets, indicating that the proposed method has the ability to predict multiple therapeutic peptides simultaneously.
Conclusion:
The TP-MV is a useful tool for predicting therapeutic peptides.
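A minimal stacking sketch in the spirit of TP-MV is shown below using scikit-learn; XGBoost is swapped for sklearn's GradientBoostingClassifier to keep the example self-contained, and the data are synthetic placeholders rather than peptide features.

```python
# Stacking ensemble sketch: KNN, SVM, ET, RF and gradient boosting as base
# learners, logistic regression as the meta-classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import (ExtraTreesClassifier, GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=40, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

base_learners = [
    ("knn", KNeighborsClassifier(n_neighbors=5)),
    ("svm", SVC(probability=True, random_state=0)),
    ("et", ExtraTreesClassifier(n_estimators=100, random_state=0)),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("gb", GradientBoostingClassifier(random_state=0)),   # stand-in for XGB
]
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(max_iter=1000),
                           stack_method="predict_proba", cv=5)
stack.fit(X_tr, y_tr)
print("held-out accuracy:", round(stack.score(X_te, y_te), 3))
```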
Affiliation(s)
- Ke Yan
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China
- Hongwu Lv
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China
- Yichen Guo
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China
- Jie Wen
- School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, Guangdong, China
- Bin Liu
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China
42. Gao M, Liu R, Mao J. Noise Robustness Low-Rank Learning Algorithm for Electroencephalogram Signal Classification. Front Neurosci 2021;15:797378. [PMID: 34899177] [PMCID: PMC8652211] [DOI: 10.3389/fnins.2021.797378]
Abstract
Electroencephalogram (EEG) is often used in clinical epilepsy treatment to monitor electrical signal changes in the brain of patients with epilepsy. With the development of signal processing and artificial intelligence technology, artificial intelligence classification method plays an important role in the automatic recognition of epilepsy EEG signals. However, traditional classifiers are easily affected by impurities and noise in epileptic EEG signals. To solve this problem, this paper develops a noise robustness low-rank learning (NRLRL) algorithm for EEG signal classification. NRLRL establishes a low-rank subspace to connect the original data space and label space. Making full use of supervision information, it considers the local information preservation of samples to ensure the low-rank representation of within-class compactness and between-classes dispersion. The asymmetric least squares support vector machine (aLS-SVM) is embedded into the objective function of NRLRL. The aLS-SVM finds the maximum quantile distance between the two classes of samples based on the pinball loss function, which further improves the noise robustness of the model. Several classification experiments with different noise intensity are designed on the Bonn data set, and the experiment results verify the effectiveness of the NRLRL algorithm.
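The asymmetric losses referred to here can be illustrated at the loss level (this sketch does not reproduce the aLS-SVM training procedure):

```python
# Pinball (quantile) loss and an asymmetric squared-loss variant, illustration only.
import numpy as np

def pinball_loss(residual, tau=0.5):
    """tau in (0, 1); tau = 0.5 recovers half the absolute loss."""
    return np.where(residual >= 0, tau * residual, (tau - 1) * residual)

def asymmetric_squared_loss(residual, tau=0.5):
    """Quantile-style weighting applied to the squared residual."""
    return np.where(residual >= 0, tau, 1 - tau) * residual ** 2

r = np.linspace(-2, 2, 5)
print("residuals           :", r)
print("pinball (tau=0.8)   :", np.round(pinball_loss(r, 0.8), 2))
print("asym. squared (0.8) :", np.round(asymmetric_squared_loss(r, 0.8), 2))
```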
Affiliation(s)
- Ming Gao
- College of Sports Science and Technology, Wuhan Sports University, Wuhan, China
- Runmin Liu
- College of Sports Engineering and Information Technology, Wuhan Sports University, Wuhan, China
- Jie Mao
- College of Sports Engineering and Information Technology, Wuhan Sports University, Wuhan, China
44. Wang Z, Nie F, Zhang C, Wang R, Li X. Joint nonlinear feature selection and continuous values regression network. Pattern Recognit Lett 2021. [DOI: 10.1016/j.patrec.2021.06.035]
45. Mokhtia M, Eftekhari M, Saberi-Movahed F. Dual-manifold regularized regression models for feature selection based on hesitant fuzzy correlation. Knowl Based Syst 2021. [DOI: 10.1016/j.knosys.2021.107308]
47. Wang S, Ge H, Yang J, Su S. Virtual samples based robust block-diagonal dictionary learning for face recognition. Intell Data Anal 2021. [DOI: 10.3233/ida-205466]
Abstract
It is an open question to learn an over-complete dictionary from a limited number of face samples, and the inherent attributes of the samples are underutilized. Besides, the recognition performance may be adversely affected by the noise (and outliers), and the strict binary label based linear classifier is not appropriate for face recognition. To solve above problems, we propose a virtual samples based robust block-diagonal dictionary learning for face recognition. In the proposed model, the original samples and virtual samples are combined to solve the small sample size problem, and both the structure constraint and the low rank constraint are exploited to preserve the intrinsic attributes of the samples. In addition, the fidelity term can effectively reduce negative effects of noise (and outliers), and the ε-dragging is utilized to promote the performance of the linear classifier. Finally, extensive experiments are conducted in comparison with many state-of-the-art methods on benchmark face datasets, and experimental results demonstrate the efficacy of the proposed method.
Affiliation(s)
- Shuangxi Wang
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, Jiangsu, China
- Key Laboratory of Advanced Process Control for Light Industry (Jiangnan University), Ministry of Education, Wuxi, Jiangsu, China
- Hongwei Ge
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, Jiangsu, China
- Key Laboratory of Advanced Process Control for Light Industry (Jiangnan University), Ministry of Education, Wuxi, Jiangsu, China
- Jinlong Yang
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, Jiangsu, China
- Key Laboratory of Advanced Process Control for Light Industry (Jiangnan University), Ministry of Education, Wuxi, Jiangsu, China
- Shuzhi Su
- School of Computer Science and Engineering, Anhui University of Science & Technology, Huainan, Anhui, China
48. Bhadra T, Bandyopadhyay S. Supervised feature selection using integration of densest subgraph finding with floating forward–backward search. Inf Sci (N Y) 2021. [DOI: 10.1016/j.ins.2021.02.034]
49. Zhang X, Fan M, Wang D, Zhou P, Tao D. Top-k Feature Selection Framework Using Robust 0-1 Integer Programming. IEEE Transactions on Neural Networks and Learning Systems 2021;32:3005-3019. [PMID: 32735538] [DOI: 10.1109/tnnls.2020.3009209]
Abstract
Feature selection (FS), which identifies the relevant features in a data set to facilitate subsequent data analysis, is a fundamental problem in machine learning and has been widely studied in recent years. Most FS methods rank the features in order of their scores based on a specific criterion and then select the k top-ranked features, where k is the number of desired features. However, these features are usually not the top- k features and may present a suboptimal choice. To address this issue, we propose a novel FS framework in this article to select the exact top- k features in the unsupervised, semisupervised, and supervised scenarios. The new framework utilizes the l0,2 -norm as the matrix sparsity constraint rather than its relaxations, such as the l1,2 -norm. Since the l0,2 -norm constrained problem is difficult to solve, we transform the discrete l0,2 -norm-based constraint into an equivalent 0-1 integer constraint and replace the 0-1 integer constraint with two continuous constraints. The obtained top- k FS framework with two continuous constraints is theoretically equivalent to the l0,2 -norm constrained problem and can be optimized by the alternating direction method of multipliers (ADMM). Unsupervised and semisupervised FS methods are developed based on the proposed framework, and extensive experiments on real-world data sets are conducted to demonstrate the effectiveness of the proposed FS framework.
50. Song P, Zheng W, Yu Y, Ou S. Speech Emotion Recognition Based on Robust Discriminative Sparse Regression. IEEE Trans Cogn Dev Syst 2021. [DOI: 10.1109/tcds.2020.2990928]