1. Zhou J, Zhang Q, Zeng S, Zhang B. Fuzzy Graph Subspace Convolutional Network. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:5641-5655. PMID: 36197860. DOI: 10.1109/tnnls.2022.3208557.
Abstract
Graph convolutional networks (GCNs) are a popular approach to learning feature embeddings of graph-structured data, and they have been shown to be both effective and efficient for inductive node classification. However, with the massive amounts of non-graph-organized data in today's application scenarios, it is critical to exploit the relationships behind a given group of data, which makes better use of GCNs and broadens their field of application. In this article, we propose the fuzzy graph subspace convolutional network (FGSCN) to provide a brand-new paradigm for feature embedding and node classification with graph convolution (GC) given an arbitrary collection of data. The FGSCN performs GC on the fuzzy subspace (F-space), which simultaneously learns from the underlying subspace information in the low-dimensional space and the neighborliness information in the high-dimensional space. In particular, we construct the fuzzy homogenous graph G_F on the F-space by fusing the homogenous graph of neighborliness G_N and the homogenous graph of subspace G_S (defined by the affinity matrix of the low-rank representation). It is proven that GC on the F-space propagates both local and global information through fuzzy set theory. We evaluated FGSCN on 15 datasets covering different tasks (e.g., feature embedding and visual recognition). The experimental results show that the proposed FGSCN is significantly superior to current state-of-the-art methods.
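A minimal numpy sketch of the ingredients this abstract combines: a neighborliness graph, a self-representation-based subspace graph, an elementwise fuzzy union of the two, and one symmetric-normalized graph-convolution step. The fusion rule (elementwise maximum, a common t-conorm) and the ridge-regularized self-representation are our assumptions standing in for the paper's G_F construction and low-rank representation; all function names are ours, not the authors'.

```python
import numpy as np

def knn_affinity(X, k=3):
    # Neighborliness graph G_N: symmetric k-nearest-neighbor adjacency.
    n = X.shape[0]
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    A = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(d[i])[1:k + 1]   # skip the point itself
        A[i, idx] = 1.0
    return np.maximum(A, A.T)

def subspace_affinity(X, lam=0.1):
    # Subspace graph G_S from a ridge-regularized self-representation
    # Z = (XX^T + lam*I)^{-1} XX^T (a simple stand-in for LRR).
    n = X.shape[0]
    G = X @ X.T
    Z = np.linalg.solve(G + lam * np.eye(n), G)
    np.fill_diagonal(Z, 0.0)
    W = np.abs(Z) + np.abs(Z.T)
    return W / (W.max() + 1e-12)

def fuzzy_fuse(A_n, A_s):
    # Hypothetical fuzzy union of the two graphs (elementwise maximum);
    # the paper's exact fusion rule may differ.
    return np.maximum(A_n, A_s)

def gc_layer(A, X):
    # One graph-convolution propagation: D^{-1/2} (A + I) D^{-1/2} X.
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ X
```

Stacking such layers (with learned weight matrices and nonlinearities) would give a GCN operating on the fused graph.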
2. Zhou N, Choi KS, Chen B, Du Y, Liu J, Xu Y. Correntropy-Based Low-Rank Matrix Factorization With Constraint Graph Learning for Image Clustering. IEEE Transactions on Neural Networks and Learning Systems 2023; 34:10433-10446. PMID: 35507622. DOI: 10.1109/tnnls.2022.3166931.
Abstract
This article proposes a novel low-rank matrix factorization model for semisupervised image clustering. To alleviate the negative effect of outliers, the maximum correntropy criterion (MCC) is incorporated as the metric used to build the model. To exploit label information for better clustering results, a constraint graph learning framework is proposed that adaptively learns the local structure of the data while taking the labels into account. Furthermore, an iterative algorithm based on the Fenchel conjugate (FC) and block coordinate update (BCU) is proposed to solve the model. The convergence properties of the algorithm are analyzed, showing that it exhibits both objective sequential convergence and iterate sequential convergence. Experiments are conducted on six real-world image datasets, and the proposed algorithm is compared with eight state-of-the-art methods. The results show that the proposed method achieves better performance in most situations in terms of clustering accuracy and mutual information.
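To illustrate the correntropy idea in isolation: under a half-quadratic view, MCC turns into elementwise weights w = exp(-e^2 / (2*sigma^2)) that shrink toward zero on gross outliers, followed by weighted least-squares updates. The sketch below is our own toy factorization under that scheme, not the authors' full model (it omits the constraint graph learning and the FC/BCU solver).

```python
import numpy as np

def mcc_lowrank_factorize(X, r=2, sigma=1.0, iters=30, seed=0):
    """Half-quadratic sketch of correntropy-weighted factorization X ~ U V^T.
    Entries with large residuals get weight exp(-e^2/(2 sigma^2)) ~ 0,
    so a single gross outlier barely influences the fit."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((n, r))
    for _ in range(iters):
        E = X - U @ V.T
        W = np.exp(-E**2 / (2 * sigma**2))       # correntropy-induced weights
        for i in range(m):                        # weighted LS update of U rows
            D = V.T * W[i]                        # r x n, columns scaled by w_ij
            U[i] = np.linalg.solve(D @ V + 1e-8 * np.eye(r), D @ X[i])
        for j in range(n):                        # weighted LS update of V rows
            D = U.T * W[:, j]
            V[j] = np.linalg.solve(D @ U + 1e-8 * np.eye(r), D @ X[:, j])
    return U, V
```

With a squared loss instead of the weights, the same alternating scheme would be ordinary ALS, which the outlier would skew badly.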
3. Xu Z, Tian S, Abhadiomhen SE, Shen XJ. Robust multiview spectral clustering via cooperative manifold and low rank representation induced. Multimedia Tools and Applications 2023; 82:24445-24464. DOI: 10.1007/s11042-023-14557-0.
4. Qin Y, Feng G, Ren Y, Zhang X. Consistency-Induced Multiview Subspace Clustering. IEEE Transactions on Cybernetics 2023; 53:832-844. PMID: 35476568. DOI: 10.1109/tcyb.2022.3165550.
Abstract
Multiview clustering has received great attention, and numerous subspace clustering algorithms for multiview data have been presented. However, most of these algorithms neither handle high-dimensional data effectively nor exploit consistency in the number of connected components of the similarity matrices across different views. In this article, we propose a novel consistency-induced multiview subspace clustering (CiMSC) method to tackle these issues, which is mainly composed of structural consistency (SC) and sample assignment consistency (SAC). Specifically, SC learns a similarity matrix for each single view in which the number of connected components equals the number of clusters in the dataset. SAC minimizes the discrepancy in the number of connected components of the similarity matrices across views, based on the assumption that different views should produce the same number of connected components. CiMSC formulates the cluster indicator matrices for the different views and the shared similarity matrices simultaneously in one optimization framework. Since each column of a similarity matrix can be used as a new representation of the corresponding data point, CiMSC can learn an effective subspace representation for high-dimensional data, which is encoded into the latent representation by reconstruction in a nonlinear manner. We employ an alternating optimization scheme to solve the optimization problem. Experiments validate the advantage of CiMSC over 12 state-of-the-art multiview clustering approaches; for example, its accuracy is 98.06% on the BBCSport dataset.
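The connected-component count that both SC and SAC reason about is a spectral quantity: it equals the multiplicity of the zero eigenvalue of the graph Laplacian of the similarity matrix (the standard fact such methods enforce via a rank constraint on the Laplacian). A small check of that fact, in our own notation:

```python
import numpy as np

def n_connected_components(S, tol=1e-8):
    """Number of connected components of a similarity graph = multiplicity
    of the zero eigenvalue of its symmetric graph Laplacian L = D - W."""
    W = (np.abs(S) + np.abs(S.T)) / 2.0          # symmetrize
    L = np.diag(W.sum(1)) - W
    eig = np.linalg.eigvalsh(L)
    return int(np.sum(eig < tol))

# Two disjoint edges -> a graph with 2 connected components.
S = np.zeros((4, 4))
S[0, 1] = S[1, 0] = 1.0
S[2, 3] = S[3, 2] = 1.0
```

Forcing this count to equal the number of clusters, per view, is what makes the learned similarity matrix directly cluster-revealing without spectral post-processing.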
5. Guo L, Zhang X, Zhang R, Wang Q, Xue X, Liu Z. Robust graph representation clustering based on adaptive data correction. Applied Intelligence 2022. DOI: 10.1007/s10489-022-04268-8.
6. Guo J, Sun Y, Gao J, Hu Y, Yin B. Multi-Attribute Subspace Clustering via Auto-Weighted Tensor Nuclear Norm Minimization. IEEE Transactions on Image Processing 2022; 31:7191-7205. PMID: 36355733. DOI: 10.1109/tip.2022.3220949.
Abstract
Self-expressiveness-based subspace clustering methods have received wide attention for unsupervised learning tasks. However, most existing subspace clustering methods consider the data features as a whole and focus on only a single self-representation. These approaches ignore the intrinsic multi-attribute information embedded in the original data features and result in a one-attribute self-representation. This paper proposes a novel multi-attribute subspace clustering (MASC) model that understands data from multiple attributes. MASC simultaneously learns multiple subspace representations, one per attribute, by exploiting the intrinsic multi-attribute features drawn from the original data. In order to better capture the high-order correlation among the multi-attribute representations, we stack them as a low-rank tensor and propose the auto-weighted tensor nuclear norm (AWTNN) as a superior low-rank tensor approximation. In particular, the non-convex AWTNN fully accounts for the differences between singular values through implicit, adaptive weight splitting during the AWTNN optimization procedure. We further develop an efficient algorithm to optimize the non-convex, multi-block MASC model and establish convergence guarantees. A more comprehensive subspace representation can be obtained by aggregating the multi-attribute representations, and it can be used to construct a clustering-friendly affinity matrix. Extensive experiments on eight real-world databases reveal that the proposed MASC outperforms other subspace clustering methods.
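For orientation, the (unweighted) tensor nuclear norm underlying AWTNN is defined through the t-SVD: take an FFT along the third mode and sum the matrix nuclear norms of the frontal slices. The sketch below shows that baseline quantity only; AWTNN additionally learns adaptive per-singular-value weights, which we do not reproduce here.

```python
import numpy as np

def tensor_nuclear_norm(T):
    """t-SVD tensor nuclear norm of an n1 x n2 x n3 tensor:
    FFT along mode 3, then average the nuclear norms of the frontal
    slices (the 1/n3 normalization is a common convention)."""
    n3 = T.shape[2]
    Tf = np.fft.fft(T, axis=2)
    total = 0.0
    for k in range(n3):
        s = np.linalg.svd(Tf[:, :, k], compute_uv=False)
        total += s.sum()
    return total / n3
```

With n3 = 1 the FFT is the identity, so the definition collapses to the ordinary matrix nuclear norm, which makes a convenient sanity check.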
7
|
Kun Q, Abhadiomhen SE, Liu Z. Multiview subspace clustering via low‐rank correlation analysis. IET COMPUTER VISION 2022. [DOI: 10.1049/cvi2.12155] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/08/2022] [Accepted: 10/11/2022] [Indexed: 12/04/2024]
Abstract
In order to explore multi-view data, existing low-rank-based multi-view subspace clustering methods seek a common low-rank structure across different views. However, in real-world scenarios each view often holds complex structures resulting from noise or outliers, causing unreliable and imprecise graphs that previous methods cannot effectively ameliorate. This study proposes a new method based on low-rank correlation analysis to overcome these limitations. Firstly, a canonical correlation analysis strategy is introduced to jointly find the low-rank structures in the different views. To facilitate a robust solution, a dual regularisation term is further introduced to find low-rank structures that better maximise the correlation within their respective views. A unifying clustering structure is then integrated into the model to adaptively characterise the connections between the different views, achieving more effective noise suppression. Furthermore, we avoid the uncertainty of spectral post-processing of the unifying clustering structure by imposing a rank constraint on its Laplacian matrix, so that the clustering results are obtained explicitly, further enhancing computational efficiency. Experimental results from several clustering and classification experiments on the 3Sources, Caltech101-20, 100leaves, WebKB, and Hdigit datasets reveal the proposed method's superiority over compared state-of-the-art methods in Accuracy, Normalised Mutual Information, and F-score.
Affiliation(s)
- Qu Kun: Jingjiang College, Jiangsu University, Zhenjiang, Jiangsu, China
- Stanley Ebhohimhen Abhadiomhen: School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang, Jiangsu, China; Department of Computer Science, University of Nigeria, Nsukka, Nigeria
- Zhifeng Liu: School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang, Jiangsu, China
8. Chen Y, Xiao X, Hua Z, Zhou Y. Adaptive Transition Probability Matrix Learning for Multiview Spectral Clustering. IEEE Transactions on Neural Networks and Learning Systems 2022; 33:4712-4726. PMID: 33651701. DOI: 10.1109/tnnls.2021.3059874.
Abstract
Multiview clustering, an important unsupervised method, has been attracting a great deal of attention. However, most multiview clustering methods exploit the self-representation property to capture the relationships among the data, resulting in a high computational cost for calculating the self-representation coefficients. In addition, they usually employ different regularizers to learn a representation tensor or matrix, from which a transition probability matrix is constructed in a separate step, such as the method proposed by Wu et al. Thus, an optimal transition probability matrix cannot be guaranteed. To solve these issues, we propose a unified model for multiview spectral clustering that directly learns an adaptive transition probability matrix (MCA2M), rather than an individual representation matrix for each view. Different from the method of Wu et al., MCA2M uses a one-step strategy to learn the transition probability matrix directly under the robust principal component analysis framework. Unlike existing methods that use the absolute symmetrization operation to guarantee the nonnegativity and symmetry of the affinity matrix, the transition probability matrix learned by MCA2M is nonnegative and symmetric without any postprocessing. An alternating optimization algorithm is designed based on the efficient alternating direction method of multipliers. Extensive experiments on several real-world databases demonstrate that the proposed method outperforms the state-of-the-art methods.
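For context, the object being learned here is a transition probability matrix: nonnegative, with each row summing to one. The snippet below only shows the standard two-step construction P = D^{-1} W from an affinity matrix, i.e., the separate-step baseline the abstract argues against; note that this row normalization generally breaks symmetry, which is one reason learning P directly (as MCA2M does) is attractive.

```python
import numpy as np

def transition_matrix(W):
    """Random-walk transition matrix P = D^{-1} W from a nonnegative
    affinity W: rows of P are probability distributions over neighbors.
    Assumes every node has at least one nonzero affinity (d > 0)."""
    d = W.sum(axis=1)
    return W / d[:, None]
```

A symmetric W does not in general give a symmetric P (rows are rescaled individually), so symmetry must either be restored by postprocessing or, as in the paper, built into the learning problem itself.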
9. Li Y, Zhou J, Tian J, Zheng X, Tang YY. Weighted Error Entropy-Based Information Theoretic Learning for Robust Subspace Representation. IEEE Transactions on Neural Networks and Learning Systems 2022; 33:4228-4242. PMID: 33606640. DOI: 10.1109/tnnls.2021.3056188.
Abstract
In most existing representation learning frameworks, the noise contaminating the data points is assumed to be independent and identically distributed (i.i.d.), and a Gaussian distribution is often imposed. This assumption, though it greatly simplifies the resulting representation problems, may not hold in many practical scenarios. For example, the noise in face representation is usually attributable to local variation, random occlusion, and unconstrained illumination; it is essentially structural and hence does not satisfy the i.i.d. property or Gaussianity. In this article, we devise a generic noise model, referred to as the independent and piecewise identically distributed (i.p.i.d.) model, for robust representation learning, in which the statistical behavior of the underlying noise is characterized using a union of distributions. We demonstrate that the proposed i.p.i.d. model can better describe the complex noise encountered in practical scenarios and accommodates the traditional i.i.d. model as a special case. Assisted by the proposed noise model, we then develop a new information-theoretic learning framework for robust subspace representation through a novel minimum weighted error entropy criterion. Thanks to the superior modeling capability of the i.p.i.d. model, our method achieves superior robustness against various types of noise. When applying our scheme to subspace clustering and image recognition problems, we observe significant performance gains over existing approaches.
10. Jiang G, Wang H, Peng J, Chen D, Fu X. Learning interpretable shared space via rank constraint for multi-view clustering. Applied Intelligence 2022. DOI: 10.1007/s10489-022-03778-9.
11. Guo J, Sun Y, Gao J, Hu Y, Yin B. Rank Consistency Induced Multiview Subspace Clustering via Low-Rank Matrix Factorization. IEEE Transactions on Neural Networks and Learning Systems 2022; 33:3157-3170. PMID: 33882005. DOI: 10.1109/tnnls.2021.3071797.
Abstract
Multiview subspace clustering has been demonstrated to achieve excellent performance in practice by exploiting multiview complementary information. One of the strategies used in most existing methods is to learn a shared self-expressiveness coefficient matrix for all the view data. Different from such a strategy, this article proposes a rank consistency induced multiview subspace clustering model to pursue a consistent low-rank structure among view-specific self-expressiveness coefficient matrices. To facilitate a practical model, we parameterize the low-rank structure on all self-expressiveness coefficient matrices through the tri-factorization along with orthogonal constraints. This specification ensures that self-expressiveness coefficient matrices of different views have the same rank to effectively promote structural consistency across multiviews. Such a model can learn a consistent subspace structure and fully exploit the complementary information from the view-specific self-expressiveness coefficient matrices, simultaneously. The proposed model is formulated as a nonconvex optimization problem. An efficient optimization algorithm with guaranteed convergence under mild conditions is proposed. Extensive experiments on several benchmark databases demonstrate the advantage of the proposed model over the state-of-the-art multiview clustering approaches.
12. Xu Y, Chen S, Li J, Han Z, Yang J. Autoencoder-Based Latent Block-Diagonal Representation for Subspace Clustering. IEEE Transactions on Cybernetics 2022; 52:5408-5418. PMID: 33206621. DOI: 10.1109/tcyb.2020.3031666.
Abstract
Block-diagonal representation (BDR) is an effective subspace clustering method. Existing BDR methods usually obtain a self-expression coefficient matrix from the original features with a shallow linear model. However, the underlying structure of real-world data is often nonlinear, so those methods cannot faithfully reflect the intrinsic relationships among samples. To address this problem, we propose a novel latent BDR (LBDR) model that performs subspace clustering on a nonlinear structure by jointly learning an autoencoder and a BDR matrix. The autoencoder, which consists of a nonlinear encoder and a linear decoder, plays an important role in learning features from the nonlinear samples. Meanwhile, the learned features are used as a new dictionary for a linear model with block-diagonal regularization, which ensures good performance for spectral clustering. Moreover, we theoretically prove that the learned features lie in the linear space, thus ensuring the effectiveness of the linear self-expression model. Extensive experiments on various real-world datasets verify the superiority of our LBDR over state-of-the-art subspace clustering approaches.
13.
14. Wei L, Zhang F, Chen Z, Zhou R, Zhu C. Subspace clustering via adaptive least square regression with smooth affinities. Knowledge-Based Systems 2022. DOI: 10.1016/j.knosys.2021.107950.
15. Mi Y, Ren Z, Xu Z, Li H, Sun Q, Chen H, Dai J. Multi-view clustering with dual tensors. Neural Computing and Applications 2022. DOI: 10.1007/s00521-022-06927-w.
16. Zhong G, Shu T, Huang G, Yan X. Multi-view spectral clustering by simultaneous consensus graph learning and discretization. Knowledge-Based Systems 2022. DOI: 10.1016/j.knosys.2021.107632.
17. Wang T, Wu J, Zhang Z, Zhou W, Chen G, Liu S. Multi-scale graph attention subspace clustering network. Neurocomputing 2021. DOI: 10.1016/j.neucom.2021.06.058.
18.
19. Lv J, Kang Z, Lu X, Xu Z. Pseudo-Supervised Deep Subspace Clustering. IEEE Transactions on Image Processing 2021; 30:5252-5263. PMID: 34033539. DOI: 10.1109/tip.2021.3079800.
Abstract
Auto-Encoder (AE)-based deep subspace clustering (DSC) methods have achieved impressive performance due to the powerful representations extracted by deep neural networks while prioritizing categorical separability. However, the self-reconstruction loss of an AE ignores rich, useful relational information and might lead to indiscriminative representations, which inevitably degrades clustering performance. It is also challenging to learn high-level similarity without feeding in semantic labels. Another unsolved problem facing DSC is the huge memory cost of the n×n similarity matrix, which is incurred by the self-expression layer between the encoder and decoder. To tackle these problems, we use pairwise similarities to weight the reconstruction loss to capture local structure information, while the similarities are learned by the self-expression layer. Pseudo-graphs and pseudo-labels, which allow benefiting from the uncertain knowledge acquired during network training, are further employed to supervise similarity learning. Joint learning and iterative training facilitate obtaining an overall optimal solution. Extensive experiments on benchmark datasets demonstrate the superiority of our approach. By combining it with the k-nearest neighbors algorithm, we further show that our method can address the large-scale and out-of-sample problems. The source code is available at https://github.com/sckangz/SelfsupervisedSC.
20. Xu J, Yu M, Shao L, Zuo W, Meng D, Zhang L, Zhang D. Scaled Simplex Representation for Subspace Clustering. IEEE Transactions on Cybernetics 2021; 51:1493-1505. PMID: 31634148. DOI: 10.1109/tcyb.2019.2943691.
Abstract
The self-expressive property of data points, that is, that each data point can be linearly represented by the other data points in the same subspace, has proven effective in leading subspace clustering (SC) methods. Most self-expressive methods construct a feasible affinity matrix from a coefficient matrix obtained by solving an optimization problem. However, the negative entries in the coefficient matrix are forced to be positive when constructing the affinity matrix via exponentiation, absolute symmetrization, or squaring operations, which damages the inherent correlations among the data. Besides, the affine constraint used in these methods is not flexible enough for practical applications. To overcome these problems, in this article we introduce a scaled simplex representation (SSR) for the SC problem. Specifically, a non-negativity constraint is used to make the coefficient matrix physically meaningful, and each coefficient vector is constrained to sum to a scalar to make it more discriminative. The proposed SSR-based SC (SSRSC) model is reformulated as a linear equality-constrained problem, which is solved efficiently under the alternating direction method of multipliers framework. Experiments on benchmark datasets demonstrate that the proposed SSRSC algorithm is very efficient and outperforms state-of-the-art SC methods in accuracy. The code can be found at https://github.com/csjunxu/SSRSC.
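The constraint set here, {c : c >= 0, sum(c) = s}, is a scaled simplex, and ADMM subproblems over it typically reduce to a Euclidean projection. The standard sorting-based projection is short enough to show in full (this is the generic projection operator, not the authors' complete SSRSC solver):

```python
import numpy as np

def project_scaled_simplex(v, s=1.0):
    """Euclidean projection of v onto {c : c >= 0, sum(c) = s}
    via the classic O(n log n) sorting algorithm: find the threshold
    theta so that max(v - theta, 0) sums to s."""
    u = np.sort(v)[::-1]                  # sort descending
    css = np.cumsum(u) - s
    k = np.arange(len(v)) + 1
    rho = np.nonzero(u - css / k > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)
```

With s = 1 this is the ordinary probability simplex; SSR's scale s is just a rescaling of the same projection.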
21. Xiao X, Chen Y, Gong YJ, Zhou Y. Low-Rank Preserving t-Linear Projection for Robust Image Feature Extraction. IEEE Transactions on Image Processing 2020; 30:108-120. PMID: 33090953. DOI: 10.1109/tip.2020.3031813.
Abstract
As the cornerstone of joint dimension reduction and feature extraction, numerous linear projection algorithms have been proposed to fit various requirements. When applied to image data, however, existing methods suffer from representation deficiency, since the multi-way structure of the data is (partially) neglected. To solve this problem, we propose a novel Low-Rank Preserving t-Linear Projection (LRP-tP) model that preserves the intrinsic structure of the image data using t-product-based operations. The proposed model advances in four aspects: 1) LRP-tP learns the t-linear projection directly from the tensorial dataset so as to exploit the correlations in the multi-way data structure; 2) to cope with widespread data errors, e.g., noise and corruption, the robustness of LRP-tP is enhanced via self-representation learning; 3) LRP-tP is endowed with good discriminative ability by integrating the empirical classification error into the learning procedure; 4) an adaptive graph accounting for the similarity and locality of the data is jointly learned to precisely portray the data affinity. We devise an efficient algorithm to solve the proposed LRP-tP model using the alternating direction method of multipliers. Extensive experiments on image feature extraction demonstrate the superiority of LRP-tP over the state of the art.
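The t-product that "t-linear" refers to multiplies two third-order tensors by circular convolution along the third mode, which is computed slice-wise in the Fourier domain. A minimal implementation of that operation alone (not the LRP-tP model):

```python
import numpy as np

def t_product(A, B):
    """t-product of A (n1 x n2 x n3) and B (n2 x n4 x n3):
    FFT along mode 3, per-slice matrix products, inverse FFT.
    Equivalent to circular convolution of the tube fibers."""
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.einsum('ijk,jlk->ilk', Af, Bf)   # slice-wise matmul
    return np.real(np.fft.ifft(Cf, axis=2))
```

When n3 = 1 the t-product is exactly the ordinary matrix product, which is why t-linear algebra generalizes the familiar linear projection machinery.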
22. Wang L, Huang J, Yin M, Cai R, Hao Z. Block diagonal representation learning for robust subspace clustering. Information Sciences 2020. DOI: 10.1016/j.ins.2020.03.103.
23.
24. Zhong G, Pun CM. Subspace clustering by simultaneously feature selection and similarity learning. Knowledge-Based Systems 2020. DOI: 10.1016/j.knosys.2020.105512.
25. Boundary Matching and Interior Connectivity-Based Cluster Validity Analysis. Applied Sciences 2020. DOI: 10.3390/app10041337.
Abstract
The evaluation of clustering results plays an important role in clustering analysis. However, the existing validity indices are limited in practice to a specific clustering algorithm, clustering parameter, or assumption. In this paper, we propose a novel validity index to solve these problems, based on two complementary measures: boundary-point matching and interior-point connectivity. First, when any clustering algorithm is performed on a dataset, we extract all boundary points of the dataset and of its partitioned clusters using a nonparametric metric, and the boundary-point matching measure is computed. Second, the interior-point connectivity of both the dataset and all partitioned clusters is measured. The proposed validity index can evaluate clustering results produced on the same dataset by different clustering algorithms, which the existing validity indices cannot do. Experimental results demonstrate that the proposed validity index can evaluate clustering results obtained with an arbitrary clustering algorithm and find the optimal clustering parameters.
26. Du H, Ma L, Li G, Wang S. Low-rank graph preserving discriminative dictionary learning for image recognition. Knowledge-Based Systems 2020. DOI: 10.1016/j.knosys.2019.06.031.
27.
28. Structural constraint deep matrix factorization for sequential data clustering. International Journal of Intelligent Robotics and Applications 2019. DOI: 10.1007/s41315-019-00106-2.
29.
30. Xie K, Liu W, Lai Y, Li W. Discriminative Low-Rank Subspace Learning with Nonconvex Penalty. International Journal of Pattern Recognition and Artificial Intelligence 2019. DOI: 10.1142/s0218001419510066.
Abstract
Subspace learning has been widely utilized to extract discriminative features for classification tasks such as face recognition, even when facial images are occluded or corrupted. However, the performance of most existing methods degrades significantly when the data are contaminated with severe noise, especially when the magnitude of the gross corruption can be arbitrarily large. To address this, this paper proposes a novel discriminative subspace learning method based on the well-known low-rank representation (LRR). Specifically, a discriminant low-rank representation and the projecting subspace are learned simultaneously, in a supervised way. To avoid deviating from the original solution through relaxation, we adopt the Schatten [Formula: see text]-norm and [Formula: see text]-norm instead of the nuclear norm and [Formula: see text]-norm, respectively. Experimental results on two well-known databases, PIE and ORL, demonstrate that the proposed method achieves better classification scores than state-of-the-art approaches.
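As background for the Schatten-norm substitution mentioned above: the Schatten p-norm of a matrix is the l_p norm of its singular values, so p = 1 recovers the nuclear norm and p = 2 the Frobenius norm, while p < 1 (nonconvex) tracks the rank more closely than the nuclear relaxation. A direct computation:

```python
import numpy as np

def schatten_norm(M, p):
    """Schatten p-norm: l_p norm of the singular values of M.
    p = 1 -> nuclear norm; p = 2 -> Frobenius norm; 0 < p < 1 gives the
    nonconvex penalties used as tighter surrogates for rank."""
    s = np.linalg.svd(M, compute_uv=False)
    return (s**p).sum() ** (1.0 / p)
```

The p = 2 case equals the Frobenius norm because the squared singular values sum to the squared entries of M, which makes an easy correctness check.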
Affiliation(s)
- Kan Xie, Wei Liu, Yue Lai, Weijun Li: School of Automation, Guangdong University of Technology, Guangzhou 510006, P. R. China; Guangdong Key Laboratory of IoT Information Technology, Guangzhou 510006, P. R. China
31.
32. Jin T, Ji R, Gao Y, Sun X, Zhao X, Tao D. Correntropy-Induced Robust Low-Rank Hypergraph. IEEE Transactions on Image Processing 2018; 28:2755-2769. PMID: 30596578. DOI: 10.1109/tip.2018.2889960.
Abstract
Hypergraph learning has been widely exploited in various image processing applications due to its advantages in modeling high-order information. Its efficacy highly depends on building an informative hypergraph structure that accurately and robustly formulates the underlying data correlation. However, existing hypergraph learning methods are sensitive to non-Gaussian noise, which hurts their performance. In this paper, we present a noise-resistant hypergraph learning model that provides superior robustness against various non-Gaussian noises. In particular, our model adopts low-rank representation to construct a hypergraph, which captures the global linear structure of the data while preserving the grouping effect of highly correlated data. We further introduce a correntropy-induced local metric to measure the reconstruction errors, which is particularly robust to non-Gaussian noise. Finally, a Frobenius-norm regularizer is combined with the low-rank regularizer, enabling our model to regularize the singular values of the coefficient matrix. In this way, the non-zero coefficients are selected to generate the hyperedge set and the hyperedge weights. We have evaluated the proposed hypergraph model on image clustering and semi-supervised image classification tasks. Quantitatively, our scheme significantly improves on the performance of state-of-the-art hypergraph models on several benchmark datasets.
33. Chen H, Sun Y, Gao J, Hu Y, Yin B. Solving Partial Least Squares Regression via Manifold Optimization Approaches. IEEE Transactions on Neural Networks and Learning Systems 2018; 30:588-600. PMID: 29994619. DOI: 10.1109/tnnls.2018.2844866.
Abstract
Partial least squares regression (PLSR) has been a popular technique for exploring the linear relationship between two data sets. However, existing approaches optimize the PLSR model in Euclidean space and compute the factors one by one in a successive fashion in order to keep the PLSR factors mutually orthogonal; a suboptimal solution is therefore often generated. To overcome this shortcoming, this paper takes the statistically inspired modification of PLSR (SIMPLSR) as a representative of PLSR, proposes a novel approach that transforms SIMPLSR into optimization problems on Riemannian manifolds, and develops the corresponding optimization algorithms. These algorithms calculate all the PLSR factors simultaneously, avoiding suboptimal solutions. Moreover, we propose a sparse SIMPLSR on Riemannian manifolds, which is simple and intuitive. A number of experiments on classification problems demonstrate that the proposed models and algorithms achieve lower classification error rates than other linear regression methods in Euclidean space. The experimental code is publicly available at https://github.com/Haoran2014.
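To make the "successive, one-factor-at-a-time" baseline concrete, here is a bare-bones SIMPLS-style extraction loop: each factor comes from the dominant singular vector of the cross-covariance, after which the cross-covariance is deflated against the X-loadings. This sketch is our own simplification of the classical Euclidean procedure the paper improves upon, not the authors' Riemannian method.

```python
import numpy as np

def simpls_weights(X, Y, n_comp):
    """Successive SIMPLS-style weight extraction (Euclidean, deflation-based).
    Returns the p x n_comp matrix of X-weights."""
    X = X - X.mean(0)                      # center (copies, caller unaffected)
    Y = Y - Y.mean(0)
    S = X.T @ Y                            # cross-covariance
    W, V = [], []
    for _ in range(n_comp):
        w = np.linalg.svd(S, full_matrices=False)[0][:, 0]  # dominant direction
        t = X @ w
        t /= np.linalg.norm(t)
        p = X.T @ t                        # X-loading for this factor
        v = p.copy()
        for vo in V:                       # orthogonalize against prior loadings
            v -= (vo @ v) * vo
        v /= np.linalg.norm(v)
        S -= np.outer(v, v @ S)            # deflate the cross-covariance
        W.append(w)
        V.append(v)
    return np.array(W).T
```

The deflation step is exactly what couples the factors sequentially; the paper's manifold formulation replaces this loop with one joint optimization over all factors.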