1. Wang S, Nie F, Wang Z, Wang R, Li X. Data Subdivision Based Dual-Weighted Robust Principal Component Analysis. IEEE Transactions on Image Processing 2025; 34:1271-1284. [PMID: 40031536] [DOI: 10.1109/tip.2025.3536197]
Abstract
Principal Component Analysis (PCA) is one of the most important unsupervised dimensionality reduction algorithms, but its use of the squared l2-norm makes it very sensitive to outliers. Improved versions based on the l1-norm alleviate this problem but have other shortcomings, such as optimization difficulty or lack of rotational invariance. Besides, existing methods only vaguely divide samples into normal samples and outliers to improve robustness; they ignore the fact that normal samples can be further divided into positive samples and hard samples, which should contribute differently to the model because positive samples are more conducive to learning the projection matrix. In this paper, we propose a novel Data Subdivision Based Dual-Weighted Robust Principal Component Analysis, namely DRPCA, which first designs a mark vector to distinguish normal samples from outliers and directly removes outliers according to mark weights. Moreover, we further divide normal samples into positive samples and hard samples using self-constrained weights and place them in relative positions, so that the weight of positive samples is larger than that of hard samples, which makes the projection matrix more accurate. Additionally, the optimal mean is employed to obtain a more accurate data center. To solve the resulting problem, we carefully design an effective iterative algorithm and analyze its convergence. Experiments on real-world and RGB large-scale datasets demonstrate the superiority of our method in dimensionality reduction and anomaly detection.
2. Chen Y, Zhao YP, Wang S, Chen J, Zhang Z. Partial Tubal Nuclear Norm-Regularized Multiview Subspace Learning. IEEE Transactions on Cybernetics 2024; 54:3777-3790. [PMID: 37058384] [DOI: 10.1109/tcyb.2023.3263175]
Abstract
In this article, a unified multiview subspace learning model, called partial tubal nuclear norm-regularized multiview subspace learning (PTN2MSL), is proposed for unsupervised multiview subspace clustering (MVSC), semisupervised MVSC, and multiview dimension reduction. Unlike most existing methods, which treat these three related tasks independently, PTN2MSL integrates projection learning and low-rank tensor representation so that they promote each other and their underlying correlations are mined. Moreover, instead of minimizing the tensor nuclear norm, which treats all singular values equally and neglects their differences, PTN2MSL develops the partial tubal nuclear norm (PTNN) as a better alternative that minimizes the partial sum of tubal singular values. Applied to the three multiview subspace learning tasks above, PTN2MSL shows that these tasks organically benefit from each other, and it achieves better performance than state-of-the-art methods.
3. Abhadiomhen SE, Shen XJ, Song H, Tian S. Image edge preservation via low-rank residuals for robust subspace learning. Multimedia Tools and Applications 2023; 83:53715-53741. [DOI: 10.1007/s11042-023-17423-1]
4. Kong Z, Chang D, Fu Z, Wang J, Wang Y, Zhao Y. Projection-preserving block-diagonal low-rank representation for subspace clustering. Neurocomputing 2023. [DOI: 10.1016/j.neucom.2023.01.051]
5. Wang J, Xie F, Nie F, Li X. Unsupervised Adaptive Embedding for Dimensionality Reduction. IEEE Transactions on Neural Networks and Learning Systems 2022; 33:6844-6855. [PMID: 34101602] [DOI: 10.1109/tnnls.2021.3083695]
Abstract
High-dimensional data are highly correlated and redundant, making them difficult to explore and analyze. A large number of unsupervised dimensionality reduction (DR) methods have been proposed, in which constructing a neighborhood graph is usually the first step. However, two problems remain: 1) the construction of the graph is usually separated from the selection of the projection direction, and 2) the original data are inevitably noisy. In this article, we propose an unsupervised adaptive embedding (UAE) method, a linear graph-embedding approach to DR that addresses these challenges. First, an adaptive neighbor-allocation method is proposed to construct the affinity graph. Second, the construction of the affinity graph and the calculation of the projection matrix are integrated. The method considers both the local relationships between samples and the global characteristics of high-dimensional data, and a cleaned data matrix is introduced to remove noise in the subspace. The relationship between our method and locality preserving projections (LPP) is also explored. Finally, an alternating iterative optimization algorithm is derived to solve our model, and its convergence and computational complexity are analyzed. Comprehensive experiments on synthetic and benchmark datasets illustrate the superiority of our method.
6. Li Y, Zhou J, Tian J, Zheng X, Tang YY. Weighted Error Entropy-Based Information Theoretic Learning for Robust Subspace Representation. IEEE Transactions on Neural Networks and Learning Systems 2022; 33:4228-4242. [PMID: 33606640] [DOI: 10.1109/tnnls.2021.3056188]
Abstract
In most existing representation learning frameworks, the noise contaminating the data points is often assumed to be independent and identically distributed (i.i.d.), with a Gaussian distribution often imposed. This assumption, though it greatly simplifies the resulting representation problems, may not hold in many practical scenarios. For example, the noise in face representation is usually attributable to local variation, random occlusion, and unconstrained illumination; it is essentially structural and hence does not satisfy the i.i.d. property or Gaussianity. In this article, we devise a generic noise model, referred to as the independent and piecewise identically distributed (i.p.i.d.) model, for robust representation learning, in which the statistical behavior of the underlying noise is characterized using a union of distributions. We demonstrate that the proposed i.p.i.d. model better describes the complex noise encountered in practical scenarios and accommodates the traditional i.i.d. model as a special case. Assisted by the proposed noise model, we then develop a new information-theoretic learning framework for robust subspace representation through a novel minimum weighted error entropy criterion. Thanks to the superior modeling capability of the i.p.i.d. model, our learning method achieves superior robustness against various types of noise. When applying our scheme to subspace clustering and image recognition problems, we observe significant performance gains over existing approaches.
7. Yue J, Fang L, He M. Spectral-Spatial Latent Reconstruction for Open-Set Hyperspectral Image Classification. IEEE Transactions on Image Processing 2022; 31:5227-5241. [PMID: 35914047] [DOI: 10.1109/tip.2022.3193747]
Abstract
Deep learning-based methods have produced significant gains for hyperspectral image (HSI) classification in recent years, leading to high-impact academic achievements and industrial applications. Despite this success, deep learning-based HSI classifiers still lack robustness when handling unknown objects in an open-set environment (OSE). Open-set classification must deal with unknown classes that are not included in the training set, whereas in a closed-set environment (CSE) unknown classes do not appear in the test set. Existing open-set classifiers rely almost entirely on the supervision provided by the known classes in the training set, which specializes the learned representations to known classes and makes it easy to misclassify unknown classes as known ones. To improve the robustness of HSI classification in the OSE while maintaining accuracy on the known classes, a spectral-spatial latent reconstruction framework is proposed that simultaneously performs spectral feature reconstruction, spatial feature reconstruction, and pixel-wise classification. By reconstructing the spectral and spatial features of the HSI, the learned feature representation is enhanced so as to retain the spectral-spatial information useful for rejecting unknown classes and distinguishing known ones. The proposed method uses latent representations for spectral-spatial reconstruction and achieves robust unknown detection without compromising the accuracy of the known classes. Experimental results show that the proposed method outperforms existing state-of-the-art methods in the OSE.
8. Liu Z, Jin W, Mu Y. Learning robust graph for clustering. Int J Intell Syst 2022. [DOI: 10.1002/int.22901]
Affiliation(s)
- Zheng Liu
- College of Control Science and Engineering, Research Center for Analytical Instrumentation, Institute of Cyber-Systems and Control, State Key Laboratory of Industrial Control Technology, Zhejiang University, Hangzhou, China
- Wei Jin
- College of Control Science and Engineering, Research Center for Analytical Instrumentation, Institute of Cyber-Systems and Control, State Key Laboratory of Industrial Control Technology, Zhejiang University, Hangzhou, China
- College of Control Science and Engineering, Huzhou Institute of Zhejiang University, Huzhou, China
- Ying Mu
- College of Control Science and Engineering, Research Center for Analytical Instrumentation, Institute of Cyber-Systems and Control, State Key Laboratory of Industrial Control Technology, Zhejiang University, Hangzhou, China
9. Han S, Wang N, Guo Y, Tang F, Xu L, Ju Y, Shi L. Application of Sparse Representation in Bioinformatics. Front Genet 2021; 12:810875. [PMID: 34976030] [PMCID: PMC8715914] [DOI: 10.3389/fgene.2021.810875]
Abstract
Inspired by L1-norm minimization methods such as basis pursuit, compressed sensing, and Lasso feature selection, sparse representation has emerged in recent years as a novel and powerful data processing method. Researchers have not only extended the sparse representation of signals to the representation of images, but also applied the sparsity of vectors to that of matrices. Moreover, sparse representation has been applied to pattern recognition with good results. Because of its multiple advantages, such as insensitivity to noise, strong robustness, low sensitivity to the selected features, and freedom from overfitting, the application of sparse representation in bioinformatics deserves further study. This article reviews the development of sparse representation and explains its applications in bioinformatics, namely the use of low-rank representation matrices to identify and study cancer molecules and of low-rank sparse representations to analyze and process gene expression profiles, together with an introduction to the related cancers and gene expression profile databases.
Affiliation(s)
- Shuguang Han
- Yangtze Delta Region Institute (Quzhou), University of Electronic Science and Technology of China, Quzhou, China
- Ning Wang
- Beidahuang Industry Group General Hospital, Harbin, China
- Yuxin Guo
- Yangtze Delta Region Institute (Quzhou), University of Electronic Science and Technology of China, Quzhou, China
- School of Mathematics and Statistics, Hainan Normal University, Haikou, China
- Furong Tang
- Yangtze Delta Region Institute (Quzhou), University of Electronic Science and Technology of China, Quzhou, China
- School of Electronic and Communication Engineering, Shenzhen Polytechnic, Shenzhen, China
- Lei Xu
- School of Electronic and Communication Engineering, Shenzhen Polytechnic, Shenzhen, China
- Ying Ju
- School of Informatics, Xiamen University, Xiamen, China
- Lei Shi
- Department of Spine Surgery, Changzheng Hospital, Naval Medical University, Shanghai, China
- *Correspondence: Ying Ju, ; Lei Shi,
11. Local discriminant preservation projection embedded ensemble learning based dimensionality reduction of speech data of Parkinson's disease. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2020.102165]
12. Wang P, Xiong S. Alternating Primal-Dual Algorithm for Minimizing Multiple-Summed Separable Problems with Application to Image Restoration. Int J Pattern Recogn 2020. [DOI: 10.1142/s0218001421540185]
Abstract
In order to discover the differences among dual strategies, we propose an alternating primal-dual algorithm (APDA), which can be considered a general framework for minimizing problems that are multiple-summed, separable, and convex but not necessarily smooth. First, the original multiple-summed problem is split into two subproblems. Second, one subproblem is solved in the primal space and the other in the dual space. Finally, the alternating direction method iterates between the primal and dual parts. Furthermore, the classical alternating direction method of multipliers (ADMM) is extended to solve the primal subproblem, which is also multiple-summed; the extended ADMM can therefore be seen as a parallel method for the original problem. Thanks to the flexibility of APDA, different dual strategies for image restoration are analyzed. Numerical experiments show that the proposed method outperforms some existing algorithms in terms of both speed and accuracy.
Affiliation(s)
- Peng Wang
- School of Computer Science and Technology, Wuhan University of Technology, Wuhan 430070, P. R. China
- School of Mathematics and Computational Science, Wuyi University, Jiangmen 529020, P. R. China
- Shengwu Xiong
- School of Computer Science and Technology, Wuhan University of Technology, Wuhan 430070, P. R. China
13. Xiao X, Chen Y, Gong YJ, Zhou Y. Low-Rank Preserving t-Linear Projection for Robust Image Feature Extraction. IEEE Transactions on Image Processing 2020; 30:108-120. [PMID: 33090953] [DOI: 10.1109/tip.2020.3031813]
Abstract
As the cornerstone of joint dimension reduction and feature extraction, extensive linear projection algorithms have been proposed to fit various requirements. When applied to image data, however, existing methods suffer from representation deficiency because the multi-way structure of the data is (partially) neglected. To solve this problem, we propose a novel Low-Rank Preserving t-Linear Projection (LRP-tP) model that preserves the intrinsic structure of the image data using t-product-based operations. The proposed model advances in four aspects: 1) LRP-tP learns the t-linear projection directly from the tensorial dataset so as to exploit the correlations within the multi-way data structure; 2) to cope with widespread data errors, e.g., noise and corruptions, the robustness of LRP-tP is enhanced via self-representation learning; 3) LRP-tP is endowed with good discriminative ability by integrating the empirical classification error into the learning procedure; 4) an adaptive graph accounting for the similarity and locality of the data is jointly learned to precisely portray the data affinity. We devise an efficient algorithm to solve the proposed LRP-tP model using the alternating direction method of multipliers. Extensive experiments on image feature extraction demonstrate the superiority of LRP-tP over state-of-the-art methods.
16. Liu Z, Wang J, Liu G, Zhang L. Discriminative low-rank preserving projection for dimensionality reduction. Appl Soft Comput 2019. [DOI: 10.1016/j.asoc.2019.105768]
17. Liu Z, Ou W, Lu W, Wang L. Discriminative feature extraction based on sparse and low-rank representation. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.06.073]
18. Ren Z, Sun Q, Wu B, Zhang X, Yan W. Learning Latent Low-Rank and Sparse Embedding for Robust Image Feature Extraction. IEEE Transactions on Image Processing 2019; 29:2094-2107. [PMID: 31502975] [DOI: 10.1109/tip.2019.2938859]
Abstract
To defy the curse of dimensionality, inputs are projected from the original high-dimensional space into a target low-dimensional space for feature extraction. However, due to the existence of noise and outliers, feature extraction for corrupted data remains a challenging problem. Recently, a robust method called low-rank embedding (LRE) was proposed. Despite its success in experimental studies, LRE has several disadvantages: 1) the learned projection cannot quantitatively interpret the importance of features; 2) LRE does not perform data reconstruction, so the features may not hold the main energy of the original "clean" data; 3) LRE explicitly transforms error into the target space; 4) LRE is an unsupervised method and is only suitable for unsupervised scenarios. To address these problems, in this paper we propose a novel method to exploit latent discriminative features. In particular, we first utilize an orthogonal matrix to hold the main energy of the original data. Next, we introduce an l2,1-norm term to encourage the features to be more compact, discriminative, and interpretable. Then, we enforce a column-wise l2,1-norm constraint on an error component to resist noise. Finally, we integrate a classification loss term into the objective function to fit supervised scenarios. Our method performs better than several state-of-the-art methods in terms of effectiveness and robustness, as demonstrated on six publicly available datasets.
19. Chen WJ, Li CN, Shao YH, Zhang J, Deng NY. 2DRLPP: Robust two-dimensional locality preserving projection with regularization. Knowl Based Syst 2019. [DOI: 10.1016/j.knosys.2019.01.022]