1. Lin JQ, Chen MS, Zhu XR, Wang CD, Zhang H. Dual Information Enhanced Multiview Attributed Graph Clustering. IEEE Transactions on Neural Networks and Learning Systems 2025; 36:6466-6477. PMID: 38814767. DOI: 10.1109/tnnls.2024.3401449.
Abstract
Multiview attributed graph clustering is an important approach for partitioning multiview data based on the attribute characteristics and adjacency matrices from different views. Several attempts have been made to use graph neural networks (GNNs), which have achieved promising clustering performance. Despite this, few of them pay attention to the inherent view-specific information embedded in multiple views. Meanwhile, they are incapable of recovering the latent high-level representation from the low-level ones, greatly limiting downstream clustering performance. To fill these gaps, a novel dual information enhanced multiview attributed graph clustering (DIAGC) method is proposed in this article. Specifically, the proposed method introduces a specific information reconstruction (SIR) module to disentangle the exploration of consensus and specific information from multiple views, which enables the graph convolutional network (GCN) to capture more essential low-level representations. In addition, a contrastive learning (CL) module maximizes the agreement between the latent high-level representation and the low-level ones and, with the help of a self-supervised clustering (SC) module, encourages the high-level representation to satisfy the desired clustering structure. Extensive experiments on several real-world benchmarks demonstrate the effectiveness of the proposed DIAGC method compared with state-of-the-art baselines.
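As a rough illustration of the kind of agreement objective described above (maximizing agreement between a high-level representation and the low-level ones), the sketch below implements a generic InfoNCE-style contrastive loss in NumPy. It is not the authors' DIAGC code; the cosine normalization and the temperature value are assumptions.

```python
import numpy as np

def contrastive_agreement_loss(high, low, temperature=0.5):
    """InfoNCE-style loss that pulls each sample's high-level embedding
    toward its own low-level embedding and pushes it away from the others.
    `high`, `low`: (n_samples, dim) arrays; the temperature is assumed."""
    # L2-normalize both representations so similarities are cosine values.
    high = high / (np.linalg.norm(high, axis=1, keepdims=True) + 1e-12)
    low = low / (np.linalg.norm(low, axis=1, keepdims=True) + 1e-12)
    sim = high @ low.T / temperature              # (n, n) similarity matrix
    sim = sim - sim.max(axis=1, keepdims=True)    # numerical stability
    # Row-wise softmax cross-entropy with the diagonal as the positive pair.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```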
2. Tang J, Lai Y, Liu X. Multiview Spectral Clustering Based on Consensus Neighbor Strategy. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:18661-18673. PMID: 37819821. DOI: 10.1109/tnnls.2023.3319823.
Abstract
Multiview spectral clustering, renowned for its spatial learning capability, has garnered significant attention in the data mining field. However, existing methods assume that the optimal consensus adjacency matrix is confined to the space spanned by each view's adjacency matrix. This constraint restricts the feasible domain of the algorithm and hinders the exploration of the optimal consensus adjacency matrix. To address this limitation, we propose a novel and convex strategy, termed the consensus neighbor strategy, for learning the optimal consensus adjacency matrix. This approach constructs the optimal consensus adjacency matrix by capturing the consensus local structure of each sample across all views, thereby expanding the search space and facilitating the discovery of the optimal consensus adjacency matrix. Furthermore, we introduce the concept of a correlation measuring matrix to prevent trivial solutions. We develop an efficient iterative algorithm to solve the resulting optimization problem, benefiting from the convex nature of our model, which ensures convergence to a global optimum. Experimental results on 16 multiview datasets demonstrate that our proposed algorithm surpasses state-of-the-art methods in terms of its robust consensus representation learning capability. The code of this article is available at https://github.com/PhdJiayiTang/Consensus-Neighbor-Strategy.git.
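The consensus neighbor idea (connecting a pair of samples only when their local neighborhoods agree across views) can be sketched as below. The k-NN construction and the voting threshold are assumptions, not the paper's exact convex formulation.

```python
import numpy as np

def consensus_neighbor_adjacency(views, k=10, agreement=1.0):
    """Build a consensus adjacency from per-view k-NN graphs.

    views: list of (n_samples, d_v) feature matrices, one per view.
    A pair (i, j) is connected when it appears in the k-NN graph of at
    least `agreement` * n_views of the views (the threshold is assumed)."""
    n = views[0].shape[0]
    votes = np.zeros((n, n))
    for X in views:
        # Pairwise squared Euclidean distances within this view.
        sq = (X ** 2).sum(axis=1)
        dist = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
        np.fill_diagonal(dist, np.inf)
        knn = np.argsort(dist, axis=1)[:, :k]      # k nearest neighbors per sample
        A = np.zeros((n, n))
        rows = np.repeat(np.arange(n), k)
        A[rows, knn.ravel()] = 1.0
        votes += np.maximum(A, A.T)                # symmetrize the per-view graph
    consensus = (votes >= agreement * len(views)).astype(float)
    np.fill_diagonal(consensus, 0.0)
    return consensus
```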
3. Dong Z, Jin J, Xiao Y, Wang S, Zhu X, Liu X, Zhu E. Iterative Deep Structural Graph Contrast Clustering for Multiview Raw Data. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:18272-18284. PMID: 37738196. DOI: 10.1109/tnnls.2023.3313692.
Abstract
Multiview clustering has attracted increasing attention as a way to automatically divide instances into groups without manual annotation. Traditional shallow methods discover the internal structure of the data, while deep multiview clustering (DMVC) uses neural networks to learn clustering-friendly data embeddings. Although both achieve impressive performance in practical applications, we find that the former heavily relies on the quality of raw features, while the latter ignores the structural information of the data. To address these issues, we propose a novel method termed iterative deep structural graph contrast clustering (IDSGCC) for multiview raw data, consisting of topology learning (TL), representation learning (RL), and graph structure contrastive learning modules. The TL module aims to obtain a structured global graph with constrained structural information and then guides the RL module to preserve this structural information. In the RL module, a graph convolutional network (GCN) takes the global structural graph and raw features as inputs to aggregate samples of the same cluster and push samples of different clusters apart. Unlike previous methods that perform contrastive learning at the sample representation level, in the graph contrastive learning module we conduct contrastive learning at the graph structure level by imposing a regularization term on the similarity matrix. The credible neighbors of each sample are constructed as positive pairs through the credible graph, and the remaining samples form negative pairs. The three modules promote each other and finally yield clustering-friendly embeddings. We also set up an iterative update mechanism that refines the topology to obtain a more credible structure, through which impressive clustering results are obtained. Comparative experiments on eight multiview datasets show that our model outperforms state-of-the-art traditional and deep clustering competitors.
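A minimal sketch of contrast at the graph-structure level, with credible neighbors as positives and the remaining samples as negatives, is given below. The softmax scoring over rows of the similarity matrix is an assumption, not the authors' exact regularization term.

```python
import numpy as np

def graph_structure_contrast(S, credible, temperature=1.0):
    """Contrast at the graph-structure level: for each anchor i, treat its
    credible neighbors as positives and all remaining samples as negatives,
    scoring them with the rows of the similarity matrix S.

    S: (n, n) learned similarity matrix; credible: (n, n) 0/1 mask of
    credible-neighbor pairs. The softmax form and temperature are assumed."""
    n = S.shape[0]
    logits = S / temperature
    mask = ~np.eye(n, dtype=bool)                  # exclude self-similarity
    logits = np.where(mask, logits, -np.inf)
    log_prob = logits - np.logaddexp.reduce(logits, axis=1, keepdims=True)
    pos = credible.astype(bool) & mask
    # Average negative log-probability assigned to credible (positive) pairs.
    return -np.sum(np.where(pos, log_prob, 0.0)) / max(pos.sum(), 1)
```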
4. Wang J, Tang C, Wan Z, Zhang W, Sun K, Zomaya AY. Efficient and Effective One-Step Multiview Clustering. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:12224-12235. PMID: 37028351. DOI: 10.1109/tnnls.2023.3253246.
Abstract
Multiview clustering algorithms have recently attracted intensive attention and achieved superior performance in various fields. Despite the great success of multiview clustering methods in realistic applications, we observe that most of them are difficult to apply to large-scale datasets because of their cubic complexity. Moreover, they usually use a two-stage scheme to obtain the discrete clustering labels, which inevitably leads to a suboptimal solution. In light of this, an efficient and effective one-step multiview clustering (E2OMVC) method is proposed to obtain clustering indicators directly with a small time burden. Specifically, based on the anchor graphs, a smaller similarity graph for each view is constructed, from which low-dimensional latent features are generated to form the latent partition representation. By introducing a label discretization mechanism, the binary indicator matrix can be obtained directly from the unified partition representation, which is formed by fusing all latent partition representations from different views. In addition, by coupling the fusion of all latent information and the clustering task into a joint framework, the two processes help each other and yield a better clustering result. Extensive experimental results demonstrate that the proposed method achieves comparable or better performance than state-of-the-art methods. The demo code of this work is publicly available at https://github.com/WangJun2023/EEOMVC.
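The anchor-graph construction that underlies such one-step methods can be sketched as follows. The Gaussian-kernel weighting, the per-sample sparsity k, and the normalization are assumptions and do not reproduce the E2OMVC objective.

```python
import numpy as np

def anchor_similarity_graph(X, anchors, k=5):
    """Similarity S = Z diag(1/colsum) Z^T induced by an anchor graph Z,
    a standard construction in anchor-based clustering; the kernel, the
    k-sparsification, and the normalization below are assumptions.

    X: (n, d) samples; anchors: (m, d) anchor points with m << n."""
    sq_x = (X ** 2).sum(axis=1)[:, None]
    sq_a = (anchors ** 2).sum(axis=1)[None, :]
    dist = sq_x + sq_a - 2.0 * X @ anchors.T       # (n, m) squared distances
    sigma = np.median(dist)
    W = np.exp(-dist / (sigma + 1e-12))
    # Keep only the k closest anchors per sample, then row-normalize.
    keep = np.argsort(dist, axis=1)[:, :k]
    Z = np.zeros_like(W)
    rows = np.repeat(np.arange(X.shape[0]), k)
    Z[rows, keep.ravel()] = W[rows, keep.ravel()]
    Z /= Z.sum(axis=1, keepdims=True) + 1e-12
    # n x n similarity induced by the m anchors (in practice it is often
    # used implicitly to avoid materializing the full n x n matrix).
    return Z @ np.diag(1.0 / (Z.sum(axis=0) + 1e-12)) @ Z.T
```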
5. Lan W, Yang T, Chen Q, Zhang S, Dong Y, Zhou H, Pan Y. Multiview Subspace Clustering via Low-Rank Symmetric Affinity Graph. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:11382-11395. PMID: 37015132. DOI: 10.1109/tnnls.2023.3260258.
Abstract
Multiview subspace clustering (MVSC) has been used to explore the internal structure of multiview datasets by revealing unique information from different views. Most existing methods ignore the consistent information and angular information of different views. In this article, we propose a novel MVSC method via a low-rank symmetric affinity graph (LSGMC) to tackle these problems. Specifically, considering the consistent information, we pursue a consistent low-rank structure across views by decomposing the coefficient matrix into three factors. A symmetry constraint is then utilized to guarantee weight consistency for each pair of data samples. In addition, considering the angular information, we utilize a fusion mechanism to capture the inherent structure of the data. Furthermore, to alleviate the effect of noise and highly redundant data, the Schatten p-norm is employed to obtain a low-rank coefficient matrix. Finally, an adaptive information reduction strategy is designed to generate a high-quality similarity matrix for spectral clustering. Experimental results on 11 datasets demonstrate the superiority of LSGMC in clustering performance compared with ten state-of-the-art multiview clustering methods.
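For reference, the Schatten p-norm used as the low-rank regularizer above reduces to a simple function of the singular values; a minimal NumPy sketch:

```python
import numpy as np

def schatten_p_norm(M, p=0.5):
    """Schatten p-norm of a matrix: (sum_i sigma_i^p)^(1/p), where sigma_i
    are the singular values. For 0 < p < 1 its p-th power is the usual
    nonconvex low-rank surrogate; p = 1 recovers the nuclear norm."""
    sigma = np.linalg.svd(M, compute_uv=False)
    return (sigma ** p).sum() ** (1.0 / p)
```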
6. Yang X, Che H, Leung MF, Wen S. Self-paced regularized adaptive multi-view unsupervised feature selection. Neural Networks 2024; 175:106295. PMID: 38614023. DOI: 10.1016/j.neunet.2024.106295.
Abstract
Multi-view unsupervised feature selection (MUFS) is an efficient approach for dimensionality reduction of heterogeneous data. However, existing MUFS approaches mostly assign all samples the same weight, so the diversity of samples is not exploited efficiently. Additionally, due to the presence of various regularizations, the resulting MUFS problems are often non-convex, making it difficult to find the optimal solutions. To address these issues, a novel MUFS method named Self-paced Regularized Adaptive Multi-view Unsupervised Feature Selection (SPAMUFS) is proposed. Specifically, the proposed approach first trains the MUFS model with simple samples and gradually learns complex samples by using a self-paced regularizer. The ℓ2,p-norm (0 < p ≤ 1) is incorporated as a regularizer in the model.
Affiliation(s)
- Xuanhao Yang: College of Electronic and Information Engineering, Southwest University, Chongqing, 400715, China.
- Hangjun Che: College of Electronic and Information Engineering, Southwest University, Chongqing, 400715, China; Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Chongqing, 400715, China.
- Man-Fai Leung: School of Computing and Information Science, Faculty of Science and Engineering, Anglia Ruskin University, Cambridge, UK.
- Shiping Wen: Faculty of Engineering and Information Technology, Australian Artificial Intelligence Institute, University of Technology Sydney, Sydney, NSW 2007, Australia.
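The self-paced regularizer mentioned in entry 6 admits a simple "hard" form, in which only samples whose current loss falls below an age parameter contribute to the update. The sketch below shows that classic rule only as an illustration; the helper functions in the usage comments are hypothetical, and the paper's regularizer may differ.

```python
import numpy as np

def self_paced_weights(sample_losses, age):
    """Hard self-paced weighting: samples whose current loss is below the
    age parameter `age` (often written lambda) count as 'easy' and get
    weight 1; the rest are deferred until the age parameter has grown."""
    return (np.asarray(sample_losses) < age).astype(float)

# Typical alternating scheme (sketch; helpers below are hypothetical):
# losses = per_sample_loss(model, X_views)
# w = self_paced_weights(losses, age)
# model = update_model(model, X_views, w)   # weighted MUFS update
# age *= growth_rate                        # admit harder samples next round
```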
7. Chen Y, Zhao YP, Wang S, Chen J, Zhang Z. Partial Tubal Nuclear Norm-Regularized Multiview Subspace Learning. IEEE Transactions on Cybernetics 2024; 54:3777-3790. PMID: 37058384. DOI: 10.1109/tcyb.2023.3263175.
Abstract
In this article, a unified multiview subspace learning model, called partial tubal nuclear norm-regularized multiview subspace learning (PTN2MSL), is proposed for unsupervised multiview subspace clustering (MVSC), semisupervised MVSC, and multiview dimension reduction. Unlike most existing methods, which treat the above three related tasks independently, PTN2MSL integrates projection learning and low-rank tensor representation so that they promote each other and their underlying correlations are mined. Moreover, instead of minimizing the tensor nuclear norm, which treats all singular values equally and neglects their differences, PTN2MSL develops the partial tubal nuclear norm (PTNN) as a better alternative by minimizing the partial sum of tubal singular values. The PTN2MSL method was applied to the above three multiview subspace learning tasks; the results demonstrate that these tasks organically benefit from each other and that PTN2MSL achieves better performance than state-of-the-art methods.
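A sketch of the partial tubal nuclear norm under the t-SVD framework (sum of tubal singular values with the r largest skipped) is given below. The 1/n3 scaling follows one common t-product convention and may differ from the paper's exact definition.

```python
import numpy as np

def partial_tubal_nuclear_norm(T, r=1):
    """Partial tubal nuclear norm of a 3-way tensor T (n1 x n2 x n3):
    FFT along the third mode, take the singular values of each frontal
    slice in the Fourier domain, and sum all of them except the r largest
    per slice. Skipping the leading values is what makes the penalty
    'partial'; the 1/n3 scaling is one common convention."""
    n1, n2, n3 = T.shape
    T_hat = np.fft.fft(T, axis=2)
    total = 0.0
    for k in range(n3):
        sigma = np.linalg.svd(T_hat[:, :, k], compute_uv=False)  # descending
        total += sigma[r:].sum()        # drop the r largest singular values
    return total / n3
```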
8. Guo J, Sun Y, Gao J, Hu Y, Yin B. Multi-Attribute Subspace Clustering via Auto-Weighted Tensor Nuclear Norm Minimization. IEEE Transactions on Image Processing 2022; 31:7191-7205. PMID: 36355733. DOI: 10.1109/tip.2022.3220949.
Abstract
Self-expressiveness-based subspace clustering methods have received wide attention for unsupervised learning tasks. However, most existing subspace clustering methods consider data features as a whole and focus only on a single self-representation. These approaches ignore the intrinsic multi-attribute information embedded in the original data features and result in a one-attribute self-representation. This paper proposes a novel multi-attribute subspace clustering (MASC) model that understands data from multiple attributes. MASC simultaneously learns multiple subspace representations, one for each specific attribute, by exploiting the intrinsic multi-attribute features drawn from the original data. To better capture the high-order correlation among multi-attribute representations, we represent them as a tensor with a low-rank structure and propose the auto-weighted tensor nuclear norm (AWTNN) as a superior low-rank tensor approximation. In particular, the non-convex AWTNN fully considers the differences between singular values through implicit and adaptive weight splitting during the AWTNN optimization procedure. We further develop an efficient algorithm to optimize the non-convex and multi-block MASC model and establish convergence guarantees. A more comprehensive subspace representation can be obtained by aggregating these multi-attribute representations and used to construct a clustering-friendly affinity matrix. Extensive experiments on eight real-world databases reveal that the proposed MASC exhibits superior performance over other subspace clustering methods.
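As an illustration of weighting singular values rather than treating them equally, the sketch below evaluates a weighted nuclear norm with weights inversely proportional to the singular values. The explicit weighting rule is an assumption, since AWTNN derives its weights implicitly during optimization.

```python
import numpy as np

def weighted_nuclear_norm(M, c=1.0, eps=1e-6):
    """Weighted nuclear norm with weights w_i = c / (sigma_i + eps), so
    large (informative) singular values are penalized less than small
    (noise-dominated) ones; each nonzero term then contributes roughly c,
    making the sum an approximate rank surrogate. The rule is illustrative
    only and is not the paper's implicit AWTNN weighting."""
    sigma = np.linalg.svd(M, compute_uv=False)
    weights = c / (sigma + eps)
    return float((weights * sigma).sum())
```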
9. Wang S, Chen Y, Yi S, Chao G. Frobenius norm-regularized robust graph learning for multi-view subspace clustering. Applied Intelligence 2022. DOI: 10.1007/s10489-022-03816-6.
10. Chen Y, Wang S, Peng C, Hua Z, Zhou Y. Generalized Nonconvex Low-Rank Tensor Approximation for Multi-View Subspace Clustering. IEEE Transactions on Image Processing 2021; 30:4022-4035. PMID: 33784622. DOI: 10.1109/tip.2021.3068646.
Abstract
The low-rank tensor representation (LRTR) has become an emerging research direction for boosting multi-view clustering performance, because LRTR utilizes not only the pairwise relations between data points but also the relations among multiple views. However, there is one significant challenge: LRTR uses the tensor nuclear norm as a convex approximation of the tensor rank, which provides a biased estimation of the rank function. To address this limitation, we propose the generalized nonconvex low-rank tensor approximation (GNLTA) for multi-view subspace clustering. Instead of the pairwise correlation, GNLTA adopts the low-rank tensor approximation to capture the high-order correlation among multiple views and proposes the generalized nonconvex low-rank tensor norm to properly account for the physical meanings of different singular values. We develop a unified solver for the GNLTA model and prove that, under mild conditions, any accumulation point is a stationary point of GNLTA. Extensive experiments on seven commonly used benchmark databases demonstrate that the proposed GNLTA achieves better clustering performance than state-of-the-art methods.
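A sketch of replacing the sum of singular values with a generalized nonconvex (concave) penalty in the Fourier domain is shown below. The log penalty is only one possible member of the family of nonconvex functions the paper considers.

```python
import numpy as np

def nonconvex_tensor_penalty(T, gamma=1.0):
    """Generalized nonconvex low-rank surrogate for a 3-way tensor: apply a
    concave function g (here g(x) = log(1 + x / gamma)) to the frontal-slice
    singular values in the Fourier domain instead of summing them directly
    as the tensor nuclear norm does. The log penalty is an assumed choice."""
    T_hat = np.fft.fft(T, axis=2)
    total = 0.0
    for k in range(T.shape[2]):
        sigma = np.linalg.svd(T_hat[:, :, k], compute_uv=False)
        # Concave in sigma: large singular values are shrunk less aggressively.
        total += np.log1p(sigma / gamma).sum()
    return total / T.shape[2]
```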