1. Wang Z, Hu D, Liu Z, Gao C, Wang Z. Iteratively Capped Reweighting Norm Minimization with Global Convergence Guarantee for Low-Rank Matrix Learning. IEEE Trans Pattern Anal Mach Intell 2024; PP:1923-1940. [PMID: 40030450] [DOI: 10.1109/tpami.2024.3512458]
Abstract
In recent years, many studies have shown that low-rank matrix learning (LRML) has become a popular approach in machine learning and computer vision, with important applications such as image inpainting, subspace clustering, and recommendation systems. The latest LRML methods resort to surrogate functions as convex or nonconvex relaxations of the rank function. However, most of these methods ignore the differences between rank components and can only yield suboptimal solutions. To alleviate this problem, in this paper we propose a novel nonconvex regularizer called capped reweighting norm minimization (CRNM), which not only accounts for the different contributions of different rank components but also adaptively truncates sequential singular values. With it, a general LRML model is obtained. Meanwhile, under mild conditions, the global optimum of the CRNM-regularized least squares subproblem can be obtained in closed form. By analyzing the theoretical properties of CRNM, we develop a computationally efficient optimization method with a convergence guarantee for the general LRML model. More importantly, using the Kurdyka-Łojasiewicz (KŁ) inequality, we establish its local and global convergence properties. Finally, we show that the proposed nonconvex regularizer and optimization approach are suitable for different low-rank tasks, such as matrix completion and subspace clustering. Extensive experimental results demonstrate that the constructed models and methods provide significant advantages over several state-of-the-art low-rank matrix learning models and methods.
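The CRNM proximal step itself is not reproduced here; as a reference point, the convex baseline that such nonconvex surrogates refine, singular value thresholding (SVT) for the nuclear norm, can be sketched in a few lines of NumPy. This is a hedged illustration of the closed-form low-rank proximal idea, not the authors' method: SVT shrinks every singular value equally, which is exactly the limitation CRNM addresses by weighting and capping rank components differently.

```python
import numpy as np

def svt(Y, tau):
    """Singular value thresholding: the closed-form proximal operator of
    tau * (nuclear norm). Soft-thresholds all singular values by the same
    amount, unlike reweighted/capped schemes such as CRNM."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)   # uniform shrinkage of the spectrum
    return U @ np.diag(s_shrunk) @ Vt

# A rank-2 matrix plus small noise: thresholding removes the noise spectrum.
rng = np.random.default_rng(0)
L = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))
X = L + 0.01 * rng.standard_normal((20, 15))
X_hat = svt(X, tau=0.5)
print(np.linalg.matrix_rank(X_hat, tol=1e-6))  # rank collapses toward 2
```

Because the two leading singular values of X are far above tau while the noise singular values are far below it, the shrunken matrix is (numerically) rank 2.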
2. Liu M, Palade V, Zheng Z. Learning the consensus and complementary information for large-scale multi-view clustering. Neural Netw 2024; 172:106103. [PMID: 38219678] [DOI: 10.1016/j.neunet.2024.106103]
Abstract
Multi-view data clustering has attracted much interest from researchers, and large-scale multi-view clustering has many important applications and significant research value. In this article, we make full use of the consensus and complementary information and exploit a bipartite graph to depict the duality relationship between original points and anchor points. Specifically, representative anchor points are selected for each view to construct the corresponding anchor representation matrices, and the anchor points of all views are used to construct a common representation matrix. Using anchor points also reduces the computational complexity. Next, the bipartite graph is built by fusing these representation matrices, and a Laplacian rank constraint is enforced on it so that the bipartite graph has k connected components, directly yielding accurate clustering labels; the bipartite graph is specifically designed for large-scale datasets. In addition, the anchor points are updated by dictionary learning. Experimental results on four benchmark image processing datasets demonstrate the superior performance of the proposed large-scale multi-view clustering algorithm over other state-of-the-art multi-view clustering algorithms.
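The anchor idea in the abstract above can be illustrated with a minimal NumPy sketch. The random anchor selection and the Gaussian-style kernel here are simplifying assumptions (the paper selects representative anchors and updates them by dictionary learning); the point is only that each of the n samples is described by its affinities to m anchors, so the graph is n x m instead of n x n.

```python
import numpy as np

def anchor_bipartite(X, m, k=3, seed=0):
    """Build an n x m bipartite affinity matrix between samples and m
    anchors, keeping only the k nearest anchors per sample."""
    rng = np.random.default_rng(seed)
    anchors = X[rng.choice(len(X), size=m, replace=False)]   # random anchors (illustrative)
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)  # squared distances to anchors
    Z = np.zeros((len(X), m))
    for i, row in enumerate(d2):
        nn = np.argsort(row)[:k]                  # k nearest anchors for sample i
        w = np.exp(-row[nn] / (row[nn].mean() + 1e-12))
        Z[i, nn] = w / w.sum()                    # row-normalised affinities
    return Z

# Two well-separated blobs: the bipartite representation is 60 x 8, not 60 x 60.
X = np.vstack([np.random.default_rng(1).normal(0, 0.1, (30, 2)),
               np.random.default_rng(2).normal(5, 0.1, (30, 2))])
Z = anchor_bipartite(X, m=8)
print(Z.shape)
```

Spectral analysis of the small bipartite graph (or of Z Z^T) then replaces eigendecomposition of the full n x n graph, which is what makes the approach scale.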
Affiliation(s)
- Maoshan Liu: School of Computer Science and Technology, Zhejiang Normal University, Jinhua 321004, China.
- Vasile Palade: Centre for Computational Science and Mathematical Modelling, Coventry University, Coventry CV1 2TL, UK.
- Zhonglong Zheng: School of Computer Science and Technology, Zhejiang Normal University, Jinhua 321004, China.
3. Abhadiomhen SE, Ezeora NJ, Ganaa ED, Nzeh RC, Adeyemo I, Uzo IU, Oguike O. Spectral type subspace clustering methods: multi-perspective analysis. Multimed Tools Appl 2023; 83:47455-47475. [DOI: 10.1007/s11042-023-16846-0]
4. Yang B, Zhang X, Nie F, Chen B, Wang F, Nan Z, Zheng N. ECCA: Efficient Correntropy-Based Clustering Algorithm With Orthogonal Concept Factorization. IEEE Trans Neural Netw Learn Syst 2023; 34:7377-7390. [PMID: 35100124] [DOI: 10.1109/tnnls.2022.3142806]
Abstract
One of the hottest topics in unsupervised learning is how to efficiently and effectively cluster large amounts of unlabeled data. To address this issue, we propose an orthogonal concept factorization (OCF) model that increases clustering effectiveness by restricting the degrees of freedom of the matrix factorization. In addition, for the OCF model, a fast optimization algorithm involving only a few low-dimensional matrix operations is given to improve clustering efficiency, as opposed to the traditional CF optimization algorithm, which involves dense matrix multiplications. To further improve clustering efficiency while suppressing the influence of the noises and outliers present in real-world data, an efficient correntropy-based clustering algorithm (ECCA) is proposed in this article. Instead of performing OCF directly on the original data, an anchor graph is constructed and OCF is performed on it, which not only further improves clustering efficiency but also inherits the high performance of spectral clustering. In particular, the anchor graph makes ECCA less sensitive to changes in data dimensionality, so it maintains high efficiency at higher dimensions. Meanwhile, to handle the various complex noises and outliers in real-world data, correntropy is introduced into ECCA to measure the similarity between the matrix before and after decomposition, which greatly improves clustering effectiveness and robustness. Subsequently, a novel and efficient half-quadratic optimization algorithm is proposed to quickly optimize the ECCA model. Finally, extensive experiments on different real-world and noisy datasets show that ECCA achieves promising effectiveness and robustness while being tens to thousands of times more efficient than other state-of-the-art baselines.
5. Liu J, Li D, Zhao H, Gao L. Robust Discriminant Subspace Clustering With Adaptive Local Structure Embedding. IEEE Trans Neural Netw Learn Syst 2023; 34:2466-2479. [PMID: 34487499] [DOI: 10.1109/tnnls.2021.3106702]
Abstract
Unsupervised dimension reduction and clustering are frequently used as two separate steps to conduct clustering tasks in a subspace. However, two-step clustering methods may not reflect the cluster structure in the subspace. In addition, existing subspace clustering methods do not consider the relationship between the low-dimensional representation and the local structure of the input space. To address these issues, we propose a robust discriminant subspace (RDS) clustering model with adaptive local structure embedding. Specifically, unlike existing methods that couple dimension reduction and clustering via a regularizer, thereby introducing extra parameters, RDS integrates them into a unified matrix factorization (MF) model through theoretical proof. Furthermore, a similarity graph is constructed to learn the local structure, with a constraint imposed on the graph to guarantee that it has the same connected components as the low-dimensional representation. In this spirit, the similarity graph serves as a tradeoff that adaptively balances the learning process between the low-dimensional space and the original space. Finally, RDS adopts the l2,1-norm to measure the residual error, which enhances robustness to noise. Using the property of the l2,1-norm, RDS can be optimized efficiently without introducing more penalty terms. Experimental results on real-world benchmark datasets show that RDS provides more interpretable clustering results and outperforms other state-of-the-art alternatives.
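The robustness mechanism named in the abstract is the l2,1-norm on the residual: the sum of row-wise l2 norms, so an outlier row contributes linearly rather than quadratically. A tiny NumPy sketch of just this norm (not of the full RDS model):

```python
import numpy as np

def l21_norm(E):
    """l2,1-norm of a residual matrix: sum over rows of each row's l2 norm.
    A corrupted sample (row) adds ||row||_2, not ||row||_2^2, limiting its pull."""
    return np.sqrt((E ** 2).sum(axis=1)).sum()

E = np.array([[3.0, 4.0],    # row norm 5
              [0.0, 0.0],    # clean row contributes nothing
              [5.0, 12.0]])  # row norm 13
print(l21_norm(E))  # 18.0
```

By contrast, the squared Frobenius norm of the same residual would be 25 + 169 = 194, so the outlier row dominates; the l2,1-norm keeps its influence bounded.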
6. Chen Y, Wang Z, Bai X. Fuzzy Sparse Subspace Clustering for Infrared Image Segmentation. IEEE Trans Image Process 2023; 32:2132-2146. [PMID: 37018095] [DOI: 10.1109/tip.2023.3263102]
Abstract
Infrared image segmentation is a challenging task due to interference from complex backgrounds and the appearance inhomogeneity of foreground objects. A critical defect of fuzzy clustering for infrared image segmentation is that it treats image pixels or fragments in isolation. In this paper, we propose to adopt self-representation from sparse subspace clustering in fuzzy clustering, aiming to introduce global correlation information into fuzzy clustering. Meanwhile, to apply sparse subspace clustering to the non-linear samples of an infrared image, we leverage membership from fuzzy clustering to improve conventional sparse subspace clustering. The contributions of this paper are fourfold. First, by introducing self-representation coefficients modeled in sparse subspace clustering based on high-dimensional features, fuzzy clustering can use global information to resist complex backgrounds as well as the intensity inhomogeneity of objects, improving clustering accuracy. Second, fuzzy membership is tactfully exploited in the sparse subspace clustering framework, surmounting the bottleneck of conventional sparse subspace clustering methods, which can barely be applied to nonlinear samples. Third, as we integrate fuzzy clustering and subspace clustering in a unified framework, features from two different aspects are employed, contributing to precise clustering results. Finally, we further incorporate neighbor information into clustering, effectively solving the uneven intensity problem in infrared image segmentation. Experiments examine the feasibility of the proposed methods on various infrared images. Segmentation results demonstrate the effectiveness and efficiency of the proposed methods and their superiority over other fuzzy clustering and sparse subspace clustering methods.
7. Chen H, Liu X. Reweighted multi-view clustering with tissue-like P system. PLoS One 2023; 18:e0269878. [PMID: 36763648] [PMCID: PMC9917278] [DOI: 10.1371/journal.pone.0269878]
Abstract
Multi-view clustering has received substantial research attention because of its ability to discover heterogeneous information in data. The weight distribution across the views of the data has always been a difficult problem in multi-view clustering. To solve this problem and improve computational efficiency at the same time, this paper proposes the reweighted multi-view clustering with tissue-like P system (RMVCP) algorithm. RMVCP performs a two-step operation on the data. First, a similarity matrix is constructed for each view by the self-representation method, and the views are fused to obtain a unified similarity matrix together with an updated similarity matrix for each view. Subsequently, the updated similarity matrices obtained in the first step are taken as input, and a view fusion operation is carried out to obtain the final similarity matrix. At the same time, Constrained Laplacian Rank (CLR) is applied to the final matrix, so the clustering result is obtained directly without additional clustering steps. In addition, to improve the computational efficiency of RMVCP, the algorithm is embedded in the framework of a tissue-like P system, exploiting its computational parallelism. Finally, experiments verify that the RMVCP algorithm is more effective than existing state-of-the-art algorithms.
Affiliation(s)
- Huijian Chen: Business School, Shandong Normal University, Jinan, China
- Xiyu Liu: Business School, Shandong Normal University, Jinan, China
8. Qu H, Zheng Y, Li L, Guo F. An Unsupervised Feature Extraction Approach Based on Self-Expression. Big Data 2023; 11:18-34. [PMID: 35537483] [DOI: 10.1089/big.2021.0420]
Abstract
Feature extraction algorithms lack good interpretability in projection learning. To solve this problem, an unsupervised feature extraction algorithm based on self-expression, block diagonal projection (BDP), is proposed. Specifically, when the original data are projected into a low-dimensional subspace by a feature extraction algorithm, the data may become more compact, but the new features may not be as interpretable as the original sample features. Therefore, by imposing an L2,1-norm constraint on the projection matrix, the projection matrix becomes row-sparse. On the one hand, discriminative features can be selected, making the projection matrix more interpretable; on the other hand, irrelevant or redundant features can be suppressed. The proposed model integrates feature extraction and selection into one framework. In addition, since self-expression can well excavate the correlations between samples or sample features, these correlations can better guide the unsupervised feature extraction task. At the same time, a block diagonal representation regularization term is introduced to directly pursue a block diagonal representation, improving the accuracy of pattern recognition tasks such as clustering and classification. Finally, the effectiveness of BDP in linear dimensionality reduction and classification is demonstrated on various reference datasets. The experimental results show that the algorithm is superior to previous feature extraction counterparts.
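The self-expression property the abstract relies on, each sample coded as a combination of the others, with coefficients that concentrate within a subspace, can be illustrated by the closed-form ridge variant of self-representation. This is an illustrative stand-in, not BDP itself (BDP adds an L2,1 projection constraint and a block diagonal regularizer):

```python
import numpy as np

def self_expression(X, lam=0.1):
    """Ridge self-representation: C = argmin ||X - XC||_F^2 + lam ||C||_F^2,
    with samples as columns of X. Closed form: C = (X^T X + lam I)^{-1} X^T X."""
    n = X.shape[1]
    G = X.T @ X
    return np.linalg.solve(G + lam * np.eye(n), G)

# Two orthogonal 1-D subspaces, 5 samples each: the coefficient matrix C
# is (numerically) block diagonal, revealing the subspace membership.
rng = np.random.default_rng(0)
X = np.hstack([np.outer([1.0, 0.0, 0.0], rng.uniform(1, 2, 5)),
               np.outer([0.0, 1.0, 0.0], rng.uniform(1, 2, 5))])
C = self_expression(X)
within = np.abs(C[:5, :5]).mean()   # coefficients inside a subspace
across = np.abs(C[:5, 5:]).mean()   # coefficients across subspaces
print(within > across)  # True: affinities concentrate inside each subspace
```

It is exactly this block structure that block-diagonal regularizers pursue explicitly when the subspaces are not so cleanly separated.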
Affiliation(s)
- Hongchun Qu: College of Computer Science and College of Automation, Chongqing University of Posts and Telecommunications, Chongqing, China
- Yangqi Zheng: College of Automation, Chongqing University of Posts and Telecommunications, Chongqing, China
- Lin Li: College of Computer Science, Chongqing University of Posts and Telecommunications, Chongqing, China
- Fei Guo: College of Automation, Chongqing University of Posts and Telecommunications, Chongqing, China
9. Liu M, Wang Y, Palade V, Ji Z. Multi-View Subspace Clustering Network with Block Diagonal and Diverse Representation. Inf Sci (N Y) 2023. [DOI: 10.1016/j.ins.2022.12.104]
10. Zhong G, Pun CM. Simultaneous Laplacian embedding and subspace clustering for incomplete multi-view data. Knowl Based Syst 2023. [DOI: 10.1016/j.knosys.2022.110244]
11. Zhou Z, Ding C, Li J, Mohammadi E, Liu G, Yang Y, Wu QMJ. Sequential Order-Aware Coding-Based Robust Subspace Clustering for Human Action Recognition in Untrimmed Videos. IEEE Trans Image Process 2022; 32:13-28. [PMID: 36459602] [DOI: 10.1109/tip.2022.3224877]
Abstract
Human action recognition (HAR) is one of the most important tasks in video analysis. Since video clips distributed on networks are usually untrimmed, a given untrimmed video must be accurately segmented into a set of action segments for HAR. As an unsupervised temporal segmentation technique, subspace clustering learns codes from each video to construct an affinity graph, and then cuts the affinity graph to cluster the video into a set of action segments. However, most existing subspace clustering schemes ignore both the sequential information of frames in code learning and the negative effects of noise when cutting the affinity graph, which leads to inferior performance. To address these issues, we propose a sequential order-aware coding-based robust subspace clustering (SOAC-RSC) scheme for HAR. By feeding the motion features of video frames into multi-layer neural networks, two expressive code matrices are learned in a sequential order-aware manner from unconstrained and constrained videos, respectively, to construct the corresponding affinity graphs. Then, accounting for noise effects, a simple yet robust cutting algorithm is proposed to cut the constructed affinity graphs and accurately obtain the action segments for HAR. Extensive experiments demonstrate that the proposed SOAC-RSC scheme achieves state-of-the-art performance on the Keck Gesture and Weizmann datasets and competitive performance on 6 other public datasets, such as UCF101 and URADL, compared with recent related approaches.
12. Liu X, Du J, Ye ZS. A Covariate-regulated Sparse Subspace Learning Model and Its Application to Process Monitoring and Fault Isolation. Technometrics 2022. [DOI: 10.1080/00401706.2022.2156614]
Affiliation(s)
- Xingchen Liu: Department of Industrial Systems Engineering & Management, National University of Singapore, Singapore
- Juan Du: Smart Manufacturing Thrust, The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China; Department of Mechanical and Aerospace Engineering, The Hong Kong University of Science and Technology, Hong Kong SAR, China; Guangzhou HKUST Fok Ying Tung Research Institute, Guangzhou, China
- Zhi-Sheng Ye: Department of Industrial Systems Engineering & Management, National University of Singapore, Singapore
13. Zhao J, Wang X, Zou Q, Kang F, Peng J, Wang F. On improvability of hash clustering data from different sources by bipartite graph. Pattern Anal Appl 2022. [DOI: 10.1007/s10044-022-01125-9]
14. Gao H, Lv C, Zhang T, Zhao H, Jiang L, Zhou J, Liu Y, Huang Y, Han C. A Structure Constraint Matrix Factorization Framework for Human Behavior Segmentation. IEEE Trans Cybern 2022; 52:12978-12988. [PMID: 34403350] [DOI: 10.1109/tcyb.2021.3095357]
Abstract
This article presents a structure constraint matrix factorization framework for segmenting different behaviors in human behavior sequential data. The framework is based on the structural information of behavior continuity and the high similarity between neighboring frames. Because of the high similarity and high dimensionality of human behavior data, high-precision segmentation of human behavior is hard to achieve from both applied and academic perspectives. Under the behavior continuity hypothesis, effective constraint regularization terms are first constructed; subsequently, a clustering framework based on constrained non-negative matrix factorization is established; finally, the segmentation result is obtained using spectral clustering and a graph segmentation algorithm. For illustration, the proposed framework is applied to the Weiz, Keck, mo_86, and mo_86_9 datasets. Empirical experiments on several public human behavior datasets demonstrate that the structure constraint matrix factorization framework can automatically segment human behavior sequences. Compared with classical algorithms, the proposed framework ensures consistent segmentation of sequential points within behavior actions and provides better accuracy.
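The unconstrained core of the framework above is non-negative matrix factorization, which can be sketched with plain multiplicative updates. This is an illustration of the base decomposition only, under the assumption of a random non-negative input; the paper's contribution is the continuity constraints layered on top:

```python
import numpy as np

def nmf(V, r, iters=200, seed=0):
    """Plain multiplicative-update NMF: V ≈ W @ H with W, H >= 0.
    Lee-Seung style updates; small epsilon guards against division by zero."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.uniform(size=(m, r))
    H = rng.uniform(size=(r, n))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)   # update codes
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)   # update basis
    return W, H

V = np.abs(np.random.default_rng(1).standard_normal((20, 30)))  # non-negative data
W, H = nmf(V, r=5)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(err)  # relative reconstruction error of the rank-5 factorization
```

In the segmentation setting, the columns of H act as cluster codes for frames; the structure constraints then push codes of neighboring frames toward each other before spectral clustering is applied.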
15. Game theory based Bi-domanial deep subspace clustering. Inf Sci (N Y) 2022. [DOI: 10.1016/j.ins.2022.10.067]
16. Guo J, Sun Y, Gao J, Hu Y, Yin B. Multi-Attribute Subspace Clustering via Auto-Weighted Tensor Nuclear Norm Minimization. IEEE Trans Image Process 2022; 31:7191-7205. [PMID: 36355733] [DOI: 10.1109/tip.2022.3220949]
Abstract
Self-expressiveness-based subspace clustering methods have received wide attention for unsupervised learning tasks. However, most existing subspace clustering methods consider the data features as a whole and focus on only a single self-representation. These approaches ignore the intrinsic multi-attribute information embedded in the original data features and result in a one-attribute self-representation. This paper proposes a novel multi-attribute subspace clustering (MASC) model that understands data from multiple attributes. MASC simultaneously learns multiple subspace representations corresponding to each specific attribute by exploiting the intrinsic multi-attribute features drawn from the original data. To better capture the high-order correlation among the multi-attribute representations, we represent them as a low-rank tensor and propose the auto-weighted tensor nuclear norm (AWTNN) as a superior low-rank tensor approximation. In particular, the non-convex AWTNN fully considers the differences between singular values through implicit and adaptive weight splitting during the AWTNN optimization procedure. We further develop an efficient algorithm to optimize the non-convex, multi-block MASC model and establish convergence guarantees. A more comprehensive subspace representation can be obtained by aggregating these multi-attribute representations, which can be used to construct a clustering-friendly affinity matrix. Extensive experiments on eight real-world databases reveal that the proposed MASC outperforms other subspace clustering methods.
17. Maggu J, Majumdar A. Kernelized transformed subspace clustering with geometric weights for non-linear manifolds. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.11.077]
18. Latent block diagonal representation for subspace clustering. Pattern Anal Appl 2022. [DOI: 10.1007/s10044-022-01101-3]
19. Kang Z, Lin Z, Zhu X, Xu W. Structured Graph Learning for Scalable Subspace Clustering: From Single View to Multiview. IEEE Trans Cybern 2022; 52:8976-8986. [PMID: 33729977] [DOI: 10.1109/tcyb.2021.3061660]
Abstract
Graph-based subspace clustering methods have exhibited promising performance. However, they still suffer from several drawbacks: they incur expensive time overhead, they fail to explore explicit clusters, and they cannot generalize to unseen data points. In this work, we propose a scalable graph learning framework that seeks to address these three challenges simultaneously. Specifically, it is based on the ideas of anchor points and bipartite graphs. Rather than building an n×n graph, where n is the number of samples, we construct a bipartite graph to depict the relationship between samples and anchor points. Meanwhile, a connectivity constraint is employed to ensure that the connected components directly indicate clusters. We further establish the connection between our method and K-means clustering. Moreover, a model to process multiview data is also proposed, which scales linearly with n. Extensive experiments demonstrate the efficiency and effectiveness of our approach with respect to many state-of-the-art clustering methods.
20. Wei L, Ji F, Liu H, Zhou R, Zhu C, Zhang X. Subspace Clustering via Structured Sparse Relation Representation. IEEE Trans Neural Netw Learn Syst 2022; 33:4610-4623. [PMID: 33667169] [DOI: 10.1109/tnnls.2021.3059511]
Abstract
Due to the corruptions and noise in real-world datasets, the affinity graphs constructed by classical spectral clustering-based subspace clustering algorithms may not faithfully reveal the intrinsic subspace structures of the data. In this article, we reconsider the data reconstruction problem in spectral clustering-based algorithms and propose the idea of "relation reconstruction." We point out that a data sample can be represented by the neighborhood relation computed between its neighbors and itself; this relation indicates the true membership of the original data sample to the subspaces of the dataset. We also claim that a sample's neighborhood relation can be reconstructed from the neighborhood relations of other samples, which suggests a very different way to define affinity graphs. Based on these propositions, a sparse relation representation (SRR) method is proposed for solving subspace clustering problems. Moreover, by introducing the local structure information of the original data into SRR, an extension of SRR, namely structured sparse relation representation (SSRR), is presented. We give an optimization algorithm for solving the SRR and SSRR problems and analyze its computational burden and convergence. Finally, extensive experiments conducted on different types of databases show the superiority of SRR and SSRR.
21. Li Y, Zhou J, Tian J, Zheng X, Tang YY. Weighted Error Entropy-Based Information Theoretic Learning for Robust Subspace Representation. IEEE Trans Neural Netw Learn Syst 2022; 33:4228-4242. [PMID: 33606640] [DOI: 10.1109/tnnls.2021.3056188]
Abstract
In most existing representation learning frameworks, the noise contaminating the data points is often assumed to be independent and identically distributed (i.i.d.), where a Gaussian distribution is often imposed. This assumption, though it greatly simplifies the resulting representation problems, may not hold in many practical scenarios. For example, the noise in face representation is usually attributable to local variation, random occlusion, and unconstrained illumination; it is essentially structural and hence does not satisfy the i.i.d. property or Gaussianity. In this article, we devise a generic noise model, referred to as the independent and piecewise identically distributed (i.p.i.d.) model, for robust representation learning, where the statistical behavior of the underlying noise is characterized using a union of distributions. We demonstrate that the proposed i.p.i.d. model better describes the complex noise encountered in practical scenarios and accommodates the traditional i.i.d. model as a special case. Assisted by the proposed noise model, we then develop a new information-theoretic learning framework for robust subspace representation through a novel minimum weighted error entropy criterion. Thanks to the superior modeling capability of the i.p.i.d. model, the proposed learning method achieves superior robustness against various types of noise. When applying our scheme to subspace clustering and image recognition problems, we observe significant performance gains over existing approaches.
22. Jia Y, Liu H, Hou J, Kwong S, Zhang Q. Semisupervised Affinity Matrix Learning via Dual-Channel Information Recovery. IEEE Trans Cybern 2022; 52:7919-7930. [PMID: 33417578] [DOI: 10.1109/tcyb.2020.3041493]
Abstract
This article explores the problem of semisupervised affinity matrix learning, that is, learning an affinity matrix of data samples under the supervision of a small number of pairwise constraints (PCs). Observing that both the matrix encoding the PCs, called the pairwise constraint matrix (PCM), and the empirically constructed affinity matrix (EAM) express the similarity between samples, we assume that both are generated from a latent affinity matrix (LAM) that can depict the ideal pairwise relations between samples. Specifically, the PCM can be thought of as a partial observation of the LAM, while the EAM is a full observation corrupted by noise and outliers. To this end, we innovatively cast semisupervised affinity matrix learning as the recovery of the LAM guided by the PCM and EAM, technically formulated as a convex optimization problem. We also provide an efficient algorithm for solving the resulting model numerically. Extensive experiments on benchmark datasets demonstrate the significant superiority of our method over state-of-the-art ones when used for constrained clustering and dimensionality reduction. The code is publicly available at https://github.com/jyh-learning/LAM.
23. Bai L, Shao YH, Wang Z, Chen WJ, Deng NY. Multiple Flat Projections for Cross-Manifold Clustering. IEEE Transactions on Cybernetics 2022; 52:7704-7718. [PMID: 33523821] [DOI: 10.1109/tcyb.2021.3050487] [Citation(s) in RCA: 2]
Abstract
Cross-manifold clustering is an extremely challenging learning problem. Since the low-density hypothesis is not satisfied in cross-manifold problems, many traditional clustering methods fail to discover cross-manifold structures. In this article, we propose multiple flat projections clustering (MFPC) for cross-manifold clustering. In MFPC, the given samples are projected into multiple localized flats to discover the global structures of the implicit manifolds, so that intersecting clusters are distinguished in the various projection flats. A series of nonconvex matrix optimization problems in MFPC is solved by a proposed recursive algorithm. Furthermore, a nonlinear version of MFPC is derived via kernel tricks to deal with more complex cross-manifold learning situations. Synthetic tests show that MFPC handles cross-manifold structures well. Moreover, experimental results on benchmark datasets and object-tracking videos show the excellent performance of MFPC compared with state-of-the-art manifold clustering methods.
24. Chen H, Wang W, Luo S. Coupled block diagonal regularization for multi-view subspace clustering. Data Mining and Knowledge Discovery 2022. [DOI: 10.1007/s10618-022-00852-1]
25. Cai Y, Huang JZ, Yin J. A new method to build the adaptive k-nearest neighbors similarity graph matrix for spectral clustering. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.04.030] [Citation(s) in RCA: 1]
26. Liang H, Guan HT, Abhadiomhen SE, Yan L. Robust Spectral Clustering via Low-Rank Sample Representation. Applied Computational Intelligence and Soft Computing 2022; 2022:1-11. [DOI: 10.1155/2022/7540956] [Citation(s) in RCA: 2]
Abstract
Traditional clustering methods neglect data quality and perform clustering directly on the original data. Their performance can therefore easily deteriorate, since real-world data usually contain noisy samples in high-dimensional space. To resolve this problem, a new method is proposed that builds on the approach of low-rank representation. The proposed approach first learns a low-rank coefficient matrix from the data by exploiting the data's self-expressiveness property. Then, a regularization term is introduced to ensure that the representation coefficients of two samples that are similar in the original high-dimensional space remain close, so that the samples' neighborhood structure is maintained in the low-dimensional space. As a result, the proposed method obtains a clustering structure directly through the low-rank coefficient matrix to guarantee optimal clustering performance. A wide range of experiments shows that the proposed method is superior to the compared state-of-the-art methods.
Affiliation(s)
- Hao Liang: Graduate School of Jiangsu University, Zhenjiang 212013, Jiangsu, China
- Hai-Tang Guan: Haian Experimental High School, Nantong, Jiangsu, China
- Stanley Ebhohimhen Abhadiomhen: School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang 212013, Jiangsu, China; Department of Computer Science, University of Nigeria, Nsukka, Nigeria
- Li Yan: School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang 212013, Jiangsu, China
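The low-rank self-expressive pipeline described in entry 26 can be illustrated with a minimal sketch: approximately solve min_C ||C||_* + (lam/2)||X - XC||_F^2 by proximal gradient with singular value thresholding, then symmetrize |C| into an affinity for spectral clustering. This is a generic LRR-style toy, not the authors' exact model; the weight `lam`, the iteration budget, and the toy data are arbitrary choices for illustration.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def low_rank_self_representation(X, lam=10.0, n_iter=200):
    """Approximately solve min_C ||C||_* + lam/2 * ||X - X C||_F^2
    by proximal gradient (illustrative, not the paper's solver)."""
    n = X.shape[1]
    C = np.zeros((n, n))
    L = lam * np.linalg.norm(X.T @ X, 2)  # Lipschitz constant of the smooth term
    for _ in range(n_iter):
        grad = lam * (X.T @ (X @ C - X))
        C = svt(C - grad / L, 1.0 / L)
    return C

def affinity_from_coefficients(C):
    """Symmetric non-negative affinity usable for spectral clustering."""
    A = np.abs(C)
    return (A + A.T) / 2.0

# Toy data: two independent one-dimensional subspaces in R^6, 8 points each.
rng = np.random.default_rng(0)
u1, u2 = rng.normal(size=(6, 1)), rng.normal(size=(6, 1))
X = np.concatenate([u1 @ rng.normal(size=(1, 8)), u2 @ rng.normal(size=(1, 8))], axis=1)
X /= np.linalg.norm(X, axis=0, keepdims=True)
A = affinity_from_coefficients(low_rank_self_representation(X))
```

For independent subspaces, the learned coefficient matrix tends toward a block-diagonal pattern, so within-subspace affinities dominate cross-subspace ones.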
27. Wei L, Zhang F, Chen Z, Zhou R, Zhu C. Subspace clustering via adaptive least square regression with smooth affinities. Knowledge-Based Systems 2022. [DOI: 10.1016/j.knosys.2021.107950] [Citation(s) in RCA: 1]
28. Xu R, Wu J, Yue X, Li Y. Online Structural Change-point Detection of High-dimensional Streaming Data via Dynamic Sparse Subspace Learning. Technometrics 2022. [DOI: 10.1080/00401706.2022.2046171]
Affiliation(s)
- Ruiyu Xu: Department of Industrial Engineering and Management, Peking University
- Jianguo Wu: Department of Industrial Engineering and Management, Peking University
- Xiaowei Yue: Department of Industrial & Systems Engineering, Virginia Tech
- Yongxiang Li: Department of Industrial Engineering and Management, Shanghai Jiao Tong University
29. Weighted sparse simplex representation: a unified framework for subspace clustering, constrained clustering, and active learning. Data Mining and Knowledge Discovery 2022. [DOI: 10.1007/s10618-022-00820-9]
Abstract
Spectral-based subspace clustering methods have proved successful in many challenging applications such as gene sequencing, image recognition, and motion segmentation. In this work, we first propose a novel spectral-based subspace clustering algorithm that seeks to represent each point as a sparse convex combination of a few nearby points. We then extend the algorithm to a constrained clustering and active learning framework. Our motivation for developing such a framework stems from the fact that typically either a small amount of labelled data are available in advance, or it is possible to label some points at a cost. The latter scenario is typically encountered in the process of validating a cluster assignment. Extensive experiments on simulated and real datasets show that the proposed approach is effective and competitive with state-of-the-art methods.
30. Chen H, Tai X, Wang W. Multi-view subspace clustering with inter-cluster consistency and intra-cluster diversity among views. Applied Intelligence 2022. [DOI: 10.1007/s10489-021-02895-1] [Citation(s) in RCA: 2]
31. Abhadiomhen SE, Wang Z, Shen X. Coupled low rank representation and subspace clustering. Applied Intelligence 2022; 52:530-546. [DOI: 10.1007/s10489-021-02409-z] [Citation(s) in RCA: 8]
32. Self-Supervised Convolutional Subspace Clustering Network with the Block Diagonal Regularizer. Neural Processing Letters 2021. [DOI: 10.1007/s11063-021-10563-1]
33. Subspace Clustering with Block Diagonal Sparse Representation. Neural Processing Letters 2021. [DOI: 10.1007/s11063-021-10597-5]
34. Xu D, Bai M, Long T, Gao J. LSTM-assisted evolutionary self-expressive subspace clustering. International Journal of Machine Learning and Cybernetics 2021. [DOI: 10.1007/s13042-021-01363-z]
35. Wu F, Yuan P, Shi G, Li X, Dong W, Wu J. Robust subspace clustering network with dual-domain regularization. Pattern Recognition Letters 2021. [DOI: 10.1016/j.patrec.2021.06.009]
36. Abhadiomhen SE, Wang Z, Shen X, Fan J. Multiview Common Subspace Clustering via Coupled Low Rank Representation. ACM Transactions on Intelligent Systems and Technology 2021; 12:1-25. [DOI: 10.1145/3465056] [Citation(s) in RCA: 8]
Abstract
Multi-view subspace clustering (MVSC) finds a shared structure in latent low-dimensional subspaces of multi-view data to enhance clustering performance. Nonetheless, we observe that most existing MVSC methods neglect the diversity in multi-view data by considering only the common knowledge to find a shared structure, either directly or by merging different similarity matrices learned for each view. In the presence of noise, this predefined shared structure becomes a biased representation of the different views. Thus, in this article, we propose an MVSC method based on coupled low-rank representation to address the above limitation. Our method first obtains a low-rank representation for each view, constrained to be a linear combination of the view-specific representation and the shared representation, while simultaneously encouraging the sparsity of the view-specific one. Then, it uses the k-block diagonal regularizer to learn a manifold recovery matrix for each view through the respective low-rank matrices to recover more manifold structures from them. In this way, the proposed method can find an ideal similarity matrix by approximating the clustering projection matrices obtained from the recovery structures. Hence, this similarity matrix denotes our clustering structure with exactly k connected components, obtained by applying a rank constraint on the similarity matrix's relaxed Laplacian matrix to avoid spectral post-processing of the low-dimensional embedding matrix. The core of our idea is that we introduce dynamic approximation into the low-rank representation to allow the clustering structure and the shared representation to guide each other in learning cleaner low-rank matrices that lead to a better clustering structure. Therefore, our approach is notably different from existing methods, in which the local manifold structure of data is captured in advance. Extensive experiments on six benchmark datasets show that our method outperforms 10 similar state-of-the-art methods in six evaluation metrics.
Affiliation(s)
- Stanley Ebhohimhen Abhadiomhen: School of Computer Science and Communication Engineering, Jiangsu University, China; Department of Computer Science, University of Nigeria, Nsukka, Nigeria
- Zhiyang Wang: School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang, Jiangsu, China
- Xiangjun Shen: School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang, Jiangsu, China
37. Jia Y, Wu W, Wang R, Hou J, Kwong S. Joint Optimization for Pairwise Constraint Propagation. IEEE Transactions on Neural Networks and Learning Systems 2021; 32:3168-3180. [PMID: 32745010] [DOI: 10.1109/tnnls.2020.3009953] [Citation(s) in RCA: 1]
Abstract
Constrained spectral clustering (SC) based on pairwise constraint propagation (PCP) has attracted much attention due to its good performance. All existing methods can generally be cast as the following two steps: a small number of pairwise constraints are first propagated to the whole data under the guidance of a predefined affinity matrix, and the affinity matrix is then refined in accordance with the resulting propagation and finally adopted for SC. Such a stepwise manner, however, overlooks the fact that the two steps depend on each other, i.e., they form a "chicken-and-egg" problem, leading to suboptimal performance. To this end, we propose a joint PCP model for constrained SC that simultaneously learns a propagation matrix and an affinity matrix. In particular, it is formulated as a bounded symmetric graph regularized low-rank matrix completion problem. We also show that the affinity matrix optimized by our model exhibits an ideal appearance under some conditions. Extensive experimental results in terms of constrained SC, semisupervised classification, and propagation behavior validate the superior performance of our model compared with state-of-the-art methods.
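The completion part of the formulation in entry 37 (low-rank recovery of a partially observed affinity matrix kept symmetric and bounded) can be sketched with a stripped-down soft-impute-style loop. This ignores the paper's graph regularizer entirely; the block-structured toy matrix, the threshold `tau`, and the iteration count are arbitrary illustration choices.

```python
import numpy as np

def svt(M, tau):
    """Singular value soft-thresholding."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def bounded_symmetric_completion(M, mask, tau=0.05, n_iter=100):
    """Fill the unobserved entries of a partially observed affinity matrix with a
    low-rank estimate kept symmetric and inside [0, 1] (soft-impute-style sketch;
    the paper's model additionally carries a graph regularization term)."""
    Z = np.zeros_like(M)
    for _ in range(n_iter):
        filled = np.where(mask, M, Z)            # keep observed entries, impute the rest
        Z = svt(filled, tau)                     # low-rank shrinkage step
        Z = np.clip((Z + Z.T) / 2.0, 0.0, 1.0)   # enforce symmetry and the [0, 1] bounds
    return Z

# Ideal two-cluster affinity (rank 2), with roughly 70% of entries observed.
rng = np.random.default_rng(1)
M = np.zeros((12, 12))
M[:6, :6] = 1.0
M[6:, 6:] = 1.0
mask = rng.random((12, 12)) < 0.7
mask = mask | mask.T  # observe symmetric positions
Z = bounded_symmetric_completion(M, mask)
```

Because the ideal affinity is exactly low-rank, the iterates fill the missing entries from the observed block pattern while the projection keeps the estimate a valid affinity.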
38. Liu M, Wang Y, Sun J, Ji Z. Adaptive low-rank kernel block diagonal representation subspace clustering. Applied Intelligence 2021. [DOI: 10.1007/s10489-021-02396-1] [Citation(s) in RCA: 7]
39. Zhong G, Pun CM. RPCA-induced self-representation for subspace clustering. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.01.077]
40. Xu J, Yu M, Shao L, Zuo W, Meng D, Zhang L, Zhang D. Scaled Simplex Representation for Subspace Clustering. IEEE Transactions on Cybernetics 2021; 51:1493-1505. [PMID: 31634148] [DOI: 10.1109/tcyb.2019.2943691] [Citation(s) in RCA: 14]
Abstract
The self-expressive property of data points, that is, that each data point can be linearly represented by the other data points in the same subspace, has proven effective in leading subspace clustering (SC) methods. Most self-expressive methods construct a feasible affinity matrix from a coefficient matrix obtained by solving an optimization problem. However, the negative entries in the coefficient matrix are forced to be positive when constructing the affinity matrix via exponentiation, absolute symmetrization, or squaring operations, which damages the inherent correlations among the data. Besides, the affine constraint used in these methods is not flexible enough for practical applications. To overcome these problems, in this article, we introduce a scaled simplex representation (SSR) for the SC problem. Specifically, a non-negative constraint is used to make the coefficient matrix physically meaningful, and each coefficient vector is constrained to sum to a scalar to make it more discriminative. The proposed SSR-based SC (SSRSC) model is reformulated as a linear equality-constrained problem, which is solved efficiently under the alternating direction method of multipliers framework. Experiments on benchmark datasets demonstrate that the proposed SSRSC algorithm is very efficient and outperforms state-of-the-art SC methods in accuracy. The code can be found at https://github.com/csjunxu/SSRSC.
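The scaled simplex constraint of entry 40 (each coefficient vector non-negative and summing to a scalar s) can be illustrated with a per-column projected-gradient toy. The paper instead reformulates the model and solves it with ADMM; here `s`, the step size, and the iteration count are arbitrary sketch choices.

```python
import numpy as np

def project_scaled_simplex(v, s=1.0):
    """Euclidean projection of v onto {c : c >= 0, sum(c) = s} (sort-based rule)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - s
    rho = np.nonzero(u - css / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def scaled_simplex_coding(X, s=1.0, n_iter=100):
    """Code each column of X over the remaining columns subject to c >= 0 and
    sum(c) = s, via projected gradient (illustrative, not the SSRSC ADMM solver)."""
    d, n = X.shape
    C = np.zeros((n, n))
    for j in range(n):
        idx = [i for i in range(n) if i != j]  # exclude self-representation
        D = X[:, idx]
        step = 1.0 / (np.linalg.norm(D, 2) ** 2 + 1e-12)  # 1 / Lipschitz constant
        c = np.full(n - 1, s / (n - 1))                   # feasible starting point
        for _ in range(n_iter):
            grad = D.T @ (D @ c - X[:, j])
            c = project_scaled_simplex(c - step * grad, s)
        C[idx, j] = c
    return C

rng = np.random.default_rng(2)
X = rng.normal(size=(5, 12))
C = scaled_simplex_coding(X, s=1.0)
```

Because the simplex projection is exact, every column of the resulting coefficient matrix is non-negative and sums to s without any post-processing of signs.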
41. Ren Z, Lei H, Sun Q, Yang C. Simultaneous learning coefficient matrix and affinity graph for multiple kernel clustering. Information Sciences 2021. [DOI: 10.1016/j.ins.2020.08.056] [Citation(s) in RCA: 13]
42. Li X, Ren Z, Lei H, Huang Y, Sun Q. Multiple kernel clustering with pure graph learning scheme. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.10.052] [Citation(s) in RCA: 3]
43. Peng X, Feng J, Zhou JT, Lei Y, Yan S. Deep Subspace Clustering. IEEE Transactions on Neural Networks and Learning Systems 2020; 31:5509-5521. [PMID: 32078567] [DOI: 10.1109/tnnls.2020.2968848] [Citation(s) in RCA: 38]
Abstract
In this article, we propose a deep extension of sparse subspace clustering, termed deep subspace clustering with L1-norm (DSC-L1). Regularized by the unit sphere distribution assumption for the learned deep features, DSC-L1 can infer a new data affinity matrix by simultaneously satisfying the sparsity principle of SSC and the nonlinearity given by neural networks. One of the appealing advantages brought by DSC-L1 is that when original real-world data do not meet the class-specific linear subspace distribution assumption, DSC-L1 can employ neural networks to make the assumption valid with its nonlinear transformations. Moreover, we prove that our neural network could sufficiently approximate the minimizer under mild conditions. To the best of our knowledge, this could be one of the first deep-learning-based subspace clustering methods. Extensive experiments are conducted on four real-world data sets to show that the proposed method is significantly superior to 17 existing methods for subspace clustering on handcrafted features and raw data.
44. Peng X, Zhu H, Feng J, Shen C, Zhang H, Zhou JT. Deep Clustering With Sample-Assignment Invariance Prior. IEEE Transactions on Neural Networks and Learning Systems 2020; 31:4857-4868. [PMID: 31902782] [DOI: 10.1109/tnnls.2019.2958324] [Citation(s) in RCA: 33]
Abstract
Most popular clustering methods map raw image data into a projection space in which the clustering assignment is obtained with the vanilla k-means approach. In this article, we discovered a novel prior, namely, there exists a common invariance when assigning an image sample to clusters using different metrics. In short, different distance metrics will lead to similar soft clustering assignments on the manifold. Based on such a novel prior, we propose a novel clustering method by minimizing the discrepancy between pairwise sample assignments for each data point. To the best of our knowledge, this could be the first work to reveal the sample-assignment invariance prior based on the idea of treating labels as ideal representations. Furthermore, the proposed method is one of the first end-to-end clustering approaches, which jointly learns clustering assignment and representation. Extensive experimental results show that the proposed method is remarkably superior to 16 state-of-the-art clustering methods on five image data sets in terms of four evaluation metrics.
45. Semi-Supervised Multi-view clustering based on orthonormality-constrained nonnegative matrix factorization. Information Sciences 2020. [DOI: 10.1016/j.ins.2020.05.073] [Citation(s) in RCA: 19]
46. Xiao X, Wei L. Robust Subspace Clustering via Latent Smooth Representation Clustering. Neural Processing Letters 2020. [DOI: 10.1007/s11063-020-10306-8]
47. Simultaneously learning feature-wise weights and local structures for multi-view subspace clustering. Knowledge-Based Systems 2020. [DOI: 10.1016/j.knosys.2020.106280] [Citation(s) in RCA: 6]
48.

49.

50. Zhou T, Zhang C, Peng X, Bhaskar H, Yang J. Dual Shared-Specific Multiview Subspace Clustering. IEEE Transactions on Cybernetics 2020; 50:3517-3530. [PMID: 31226094] [DOI: 10.1109/tcyb.2019.2918495] [Citation(s) in RCA: 48]
Abstract
Multiview subspace clustering has received significant attention as the availability of diverse multidomain and multiview real-world data has rapidly increased in recent years. Boosting the performance of multiview clustering algorithms is challenged by two major factors. First, since original features from multiview data are highly redundant, reconstruction based on these attributes inevitably results in inferior performance. Second, since each view of such multiview data may contain knowledge unique to it, it remains a challenge to exploit complementary information across multiple views while simultaneously investigating the uniqueness of each view. In this paper, we present a novel dual shared-specific multiview subspace clustering (DSS-MSC) approach that simultaneously learns the correlations between shared information across multiple views and utilizes view-specific information to depict the specific property of each independent view. Further, we formulate a dual learning framework to carry shared-specific information into the dimensionality reduction and self-representation processes, which strengthens the ability of our approach to exploit shared information while preserving view-specific properties effectively. Experimental results on several benchmark datasets demonstrate the effectiveness of the proposed approach against other state-of-the-art techniques.