1. Wan X, Liu J, Gan X, Liu X, Wang S, Wen Y, Wan T, Zhu E. One-Step Multi-View Clustering With Diverse Representation. IEEE Transactions on Neural Networks and Learning Systems 2025;36:5774-5786. [PMID: 38557633 DOI: 10.1109/tnnls.2024.3378194]
Abstract
Multi-view clustering has attracted broad attention due to its capacity to exploit consistent and complementary information across views. Although tremendous progress has been made recently, most existing methods suffer from high computational complexity, preventing them from being applied to large-scale tasks. Multi-view clustering via matrix factorization is a representative approach to this issue. However, most such methods map the data matrices into a space of fixed dimension, limiting the model's expressiveness. Moreover, a range of methods rely on a two-step process, i.e., multimodal learning followed by k-means, which inevitably yields a suboptimal clustering result. In light of this, we propose a one-step multi-view clustering with diverse representation (OMVCDR) method, which incorporates multi-view learning and k-means into a unified framework. Specifically, we first project the original data matrices into latent spaces of various dimensions to attain comprehensive information and auto-weight them in a self-supervised manner. Then, we directly use the information matrices under diverse dimensions to obtain consensus discrete clustering labels. Unifying representation learning and clustering boosts the quality of the final results. Furthermore, we develop an efficient optimization algorithm with proven convergence to solve the resulting problem. Comprehensive experiments on various datasets demonstrate the promising clustering performance of the proposed method. The code is publicly available at https://github.com/wanxinhang/OMVCDR.
2. Kong Z, Zhou R, Luo X, Zhao S, Ragin AB, Leow AD, He L. TGNet: tensor-based graph convolutional networks for multimodal brain network analysis. BioData Min 2024;17:55. [PMID: 39639334 PMCID: PMC11622555 DOI: 10.1186/s13040-024-00409-6]
Abstract
Multimodal brain network analysis enables a comprehensive understanding of neurological disorders by integrating information from multiple neuroimaging modalities. However, existing methods often struggle to effectively model the complex structures of multimodal brain networks. In this paper, we propose a novel tensor-based graph convolutional network (TGNet) framework that combines tensor decomposition with multi-layer GCNs to capture both the homogeneity and the intricate graph structures of multimodal brain networks. We evaluate TGNet on four datasets (HIV, Bipolar Disorder (BP), Parkinson's Disease (PPMI), and Alzheimer's Disease (ADNI)), demonstrating that it significantly outperforms existing methods on disease classification tasks, particularly in scenarios with limited sample sizes. The robustness and effectiveness of TGNet highlight its potential for advancing multimodal brain network analysis. The code is available at https://github.com/rongzhou7/TGNet.
Affiliation(s)
- Zhaoming Kong: School of Software Engineering, South China University of Technology, 382 Waihuan Dong Road, Guangzhou, 510006, China
- Rong Zhou: Department of Computer Science and Engineering, Lehigh University, 113 Research Drive, Bethlehem, PA 18015, USA
- Xinwei Luo: Department of Computer Science and Engineering, Lehigh University, 113 Research Drive, Bethlehem, PA 18015, USA
- Songlin Zhao: Department of Computer Science and Engineering, Lehigh University, 113 Research Drive, Bethlehem, PA 18015, USA
- Ann B Ragin: Department of Radiology, Northwestern University, 737 N. Michigan Avenue, Chicago, IL 60611, USA
- Alex D Leow: Department of Psychiatry, University of Illinois Chicago, 1601 W. Taylor Street, Chicago, IL 60612, USA
- Lifang He: Department of Computer Science and Engineering, Lehigh University, 113 Research Drive, Bethlehem, PA 18015, USA
3. Tang J, Lai Y, Liu X. Multiview Spectral Clustering Based on Consensus Neighbor Strategy. IEEE Transactions on Neural Networks and Learning Systems 2024;35:18661-18673. [PMID: 37819821 DOI: 10.1109/tnnls.2023.3319823]
Abstract
Multiview spectral clustering, renowned for its spatial learning capability, has garnered significant attention in the data mining field. However, existing methods assume that the optimal consensus adjacency matrix is confined to the space spanned by each view's adjacency matrix. This constraint restricts the feasible domain of the algorithm and hinders the exploration of the optimal consensus adjacency matrix. To address this limitation, we propose a novel and convex strategy, termed the consensus neighbor strategy, for learning the optimal consensus adjacency matrix. This approach constructs the optimal consensus adjacency matrix by capturing the consensus local structure of each sample across all views, thereby expanding the search space and facilitating its discovery. Furthermore, we introduce the concept of a correlation measuring matrix to prevent trivial solutions. We develop an efficient iterative algorithm to solve the resulting optimization problem, benefiting from the convex nature of our model, which ensures convergence to a global optimum. Experimental results on 16 multiview datasets demonstrate that our proposed algorithm surpasses state-of-the-art methods in terms of its robust consensus representation learning capability. The code of this article is uploaded to https://github.com/PhdJiayiTang/Consensus-Neighbor-Strategy.git.
4. Du T, Zheng W, Xu X. Composite attention mechanism network for deep contrastive multi-view clustering. Neural Netw 2024;176:106361. [PMID: 38723307 DOI: 10.1016/j.neunet.2024.106361]
Abstract
Contrastive learning-based deep multi-view clustering methods have become a mainstream solution for unlabeled multi-view data. These methods usually combine an autoencoder, contrastive learning, and/or MLP projectors to generate more representative latent representations for the final clustering stage. However, existing deep contrastive multi-view clustering methods ignore two key points: (i) latent representations projected from one or more MLP layers, or new representations obtained directly from an autoencoder, fail to mine the inherent relationships within and across views; (ii) most existing frameworks employ only a single or dual contrastive learning module, i.e., view- and/or category-oriented, which may result in a lack of communication between latent representations and clustering assignments. This paper proposes a new composite attention framework for contrastive multi-view clustering to address these two challenges. Our method learns latent representations using a composite attention structure, i.e., a Hierarchical Transformer for each view and Shared Attention across all views, rather than a simple MLP. As a result, the learned representations simultaneously preserve important features within each view and balance the contributions across views. In addition, we add a new communication loss to our dual contrastive framework: common semantics are brought into the clustering assignments by pushing them closer to the fused latent representations. Our method therefore provides higher-quality clustering assignments for unlabeled multi-view data. Extensive experiments on several real-world datasets demonstrate that the proposed method achieves superior performance over many state-of-the-art clustering algorithms, with a significant average accuracy improvement of 10% on the Caltech datasets and their subsets.
Affiliation(s)
- Tingting Du: School of Computer Science, Guangdong University of Science and Technology, Dongguan, 523083, China
- Wei Zheng: School of Computer Science, Wuhan University, Wuhan, 430072, China
- Xingang Xu: School of Computer Science, Guangdong University of Science and Technology, Dongguan, 523083, China
5. Zhang D, Huang H, Zhao Q, Zhou G. Generalized latent multi-view clustering with tensorized bipartite graph. Neural Netw 2024;175:106282. [PMID: 38599137 DOI: 10.1016/j.neunet.2024.106282]
Abstract
Tensor-based multi-view spectral clustering algorithms use tensors to model the structure of multi-dimensional data, taking advantage of the complementary information and high-order correlations embedded in the graph and thus achieving impressive clustering performance. However, these algorithms use linear models to obtain consensus, which prevents the learned consensus from adequately representing the nonlinear structure of complex data. To address this issue, we propose a method called Generalized Latent Multi-View Clustering with Tensorized Bipartite Graph (GLMC-TBG). Specifically, we introduce neural networks to learn highly nonlinear mappings that encode the nonlinear structures in graphs into latent representations. In addition, multiple views share the same latent consensus through nonlinear interactions, so that a more comprehensive common representation across views can be achieved. An Augmented Lagrangian Multiplier with Alternating Direction Minimization (ALM-ADM) framework is designed to optimize the model. Experiments on seven real-world datasets verify that the proposed algorithm is superior to state-of-the-art algorithms.
Affiliation(s)
- Dongping Zhang: School of Automation, Guangdong University of Technology, Guangzhou 510006, China; Guangdong Key Laboratory of IoT Information Technology, Guangdong University of Technology, Guangzhou 510006, China
- Haonan Huang: School of Automation, Guangdong University of Technology, Guangzhou 510006, China; Key Laboratory of Intelligent Information Processing and System Integration of IoT, Ministry of Education, Guangzhou 510006, China; Guangdong-HongKong-Macao Joint Laboratory for Smart Discrete Manufacturing, Guangzhou 510006, China
- Qibin Zhao: School of Automation, Guangdong University of Technology, Guangzhou 510006, China; Center for Advanced Intelligence Project (AIP), RIKEN, Tokyo 103-0027, Japan
- Guoxu Zhou: School of Automation, Guangdong University of Technology, Guangzhou 510006, China; Key Laboratory of Intelligent Detection and The Internet of Things in Manufacturing, Ministry of Education, Guangzhou 510006, China
6. Chen Y, Zhao YP, Wang S, Chen J, Zhang Z. Partial Tubal Nuclear Norm-Regularized Multiview Subspace Learning. IEEE Transactions on Cybernetics 2024;54:3777-3790. [PMID: 37058384 DOI: 10.1109/tcyb.2023.3263175]
Abstract
In this article, a unified multiview subspace learning model, called partial tubal nuclear norm-regularized multiview subspace learning (PTN2MSL), is proposed for unsupervised multiview subspace clustering (MVSC), semisupervised MVSC, and multiview dimension reduction. Unlike most existing methods, which treat these three related tasks independently, PTN2MSL integrates projection learning and low-rank tensor representation so that they promote each other and their underlying correlations are mined. Moreover, instead of minimizing the tensor nuclear norm, which treats all singular values equally and neglects their differences, PTN2MSL develops the partial tubal nuclear norm (PTNN) as a better alternative by minimizing the partial sum of tubal singular values. Applied to the three multiview subspace learning tasks above, PTN2MSL demonstrates that these tasks organically benefit from each other, and it achieves better performance than state-of-the-art methods.
7. Peng C, Kang K, Chen Y, Kang Z, Chen C, Cheng Q. Fine-Grained Essential Tensor Learning for Robust Multi-View Spectral Clustering. IEEE Transactions on Image Processing 2024;33:3145-3160. [PMID: 38656843 PMCID: PMC11810504 DOI: 10.1109/tip.2024.3388969]
Abstract
Multi-view subspace clustering (MVSC) has drawn significant attention in recent studies. In this paper, we propose a novel approach to MVSC. First, the new method preserves high-order neighbor information of the data, capturing essential and complicated underlying relationships that are not straightforwardly preserved by first-order neighbors. Second, we design log-based nonconvex approximations to both tensor rank and tensor sparsity, which are effective and more accurate than convex approximations. For the associated shrinkage problems, we provide closed-form solutions whose convergence is guaranteed by theoretical analysis. Moreover, the new approximations exhibit interesting shrinkage-effect properties, which are likewise established theoretically. Extensive experimental results confirm the effectiveness of the proposed method.
8. Liao Q, Liu Q, Razak FA. Hypergraph regularized nonnegative triple decomposition for multiway data analysis. Sci Rep 2024;14:9098. [PMID: 38643209 PMCID: PMC11032410 DOI: 10.1038/s41598-024-59300-3]
Abstract
Tucker decomposition is widely used for image representation, data reconstruction, and machine learning tasks, but the cost of updating the Tucker core is high. The bilevel form of triple decomposition (TriD) overcomes this issue by decomposing the Tucker core into three low-dimensional third-order factor tensors and plays an important role in dimension reduction for data representation. However, TriD is incapable of precisely encoding similarity relationships for tensor data with a complex manifold structure. To address this shortcoming, we take advantage of hypergraph learning and propose a novel hypergraph-regularized nonnegative triple decomposition for multiway data analysis, which employs a hypergraph to model the complex relationships among the raw data. Furthermore, we develop a multiplicative update algorithm to solve our optimization problem and theoretically prove its convergence. Finally, we perform extensive numerical tests on six real-world datasets, and the results show that our proposed algorithm outperforms several state-of-the-art methods.
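Multiplicative update schemes of the kind this abstract mentions are easiest to see in the classic two-factor nonnegative matrix factorization setting. The sketch below shows the standard Lee-Seung updates as a minimal illustration of the family; it is not the paper's hypergraph-regularized triple decomposition, and the function name and parameters are illustrative.

```python
import numpy as np

def nmf_multiplicative(X, r, iters=200, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for X ~ W @ H with W, H >= 0.
    Each update is a ratio of nonnegative terms, so nonnegativity of the
    factors is preserved automatically, which is why this scheme is popular
    for nonnegative factorizations."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H

# Toy run on a random nonnegative matrix.
X = np.abs(np.random.default_rng(1).standard_normal((30, 20)))
W, H = nmf_multiplicative(X, r=5)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print(W.shape, H.shape, round(err, 3))
```

Regularizers such as the paper's hypergraph term add extra factors to the numerator and denominator of these ratios, but the ratio structure (and hence the convergence argument) stays the same.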
Affiliation(s)
- Qingshui Liao: Department of Mathematical Sciences, Faculty of Science & Technology, Universiti Kebangsaan Malaysia, 43600, Bangi, Selangor, Malaysia; School of Mathematical Sciences, Guizhou Normal University, Guiyang, 550025, People's Republic of China
- Qilong Liu: School of Mathematical Sciences, Guizhou Normal University, Guiyang, 550025, People's Republic of China
- Fatimah Abdul Razak: Department of Mathematical Sciences, Faculty of Science & Technology, Universiti Kebangsaan Malaysia, 43600, Bangi, Selangor, Malaysia
9. Senthilnath J, Nagaraj G, Sumanth Simha C, Kulkarni S, Thapa M, Indiramma M, Benediktsson JA. DRBM-ClustNet: A Deep Restricted Boltzmann-Kohonen Architecture for Data Clustering. IEEE Transactions on Neural Networks and Learning Systems 2024;35:2560-2574. [PMID: 35857728 DOI: 10.1109/tnnls.2022.3190439]
Abstract
A Bayesian deep restricted Boltzmann-Kohonen architecture for data clustering, termed deep restricted Boltzmann machine (DRBM)-ClustNet, is proposed. This core clustering engine consists of a DRBM that processes unlabeled data by creating new features that are uncorrelated with each other and have large variance. Next, the number of clusters is predicted using the Bayesian information criterion (BIC), followed by a Kohonen network (KN)-based clustering layer. The processing of unlabeled data is done in three stages for efficient clustering of nonlinearly separable datasets. In the first stage, the DRBM performs nonlinear feature extraction, capturing highly complex data representations by projecting feature vectors of d dimensions into n dimensions. Most clustering algorithms require the number of clusters to be decided a priori; hence, to automate this, the second stage uses BIC. In the third stage, the number of clusters derived from BIC forms the input for the KN, which clusters the feature-extracted data obtained from the DRBM. This method overcomes the general disadvantages of clustering algorithms, such as the prior specification of the number of clusters, convergence to local optima, and poor clustering accuracy on nonlinear datasets. In this research, we use two synthetic datasets, 15 benchmark datasets from the UCI Machine Learning repository, and four image datasets to analyze the DRBM-ClustNet. The proposed framework is evaluated based on clustering accuracy and ranked against other state-of-the-art clustering methods. The obtained results demonstrate that the DRBM-ClustNet outperforms state-of-the-art clustering algorithms.
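The three-stage pipeline described in this abstract (nonlinear feature extraction, BIC-based model selection, then clustering) can be sketched with off-the-shelf stand-ins. This is only an illustration of the pipeline shape, not the authors' implementation: a single scikit-learn `BernoulliRBM` stands in for the deep RBM stack, Gaussian-mixture BIC for the model-selection stage, and k-means in place of the Kohonen network.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.mixture import GaussianMixture
from sklearn.neural_network import BernoulliRBM
from sklearn.preprocessing import minmax_scale

# Stage 1: RBM feature extraction (stand-in for the deep RBM stack).
X = minmax_scale(load_digits().data)  # RBM expects inputs in [0, 1]
rbm = BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20,
                   random_state=0)
H = rbm.fit_transform(X)

# Stage 2: choose the number of clusters by minimizing BIC.
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(H).bic(H)
        for k in range(2, 16)}
k_best = min(bics, key=bics.get)

# Stage 3: cluster the extracted features (k-means replacing the Kohonen net).
labels = KMeans(n_clusters=k_best, n_init=10, random_state=0).fit_predict(H)
print(k_best, labels.shape)
```

The point of the staged design is that the clustering layer never sees the raw pixels, only the decorrelated RBM features, and the cluster count is chosen by the data rather than a priori.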
10. Zhang C, Geng Y, Han Z, Liu Y, Fu H, Hu Q. Autoencoder in Autoencoder Networks. IEEE Transactions on Neural Networks and Learning Systems 2024;35:2263-2275. [PMID: 35839199 DOI: 10.1109/tnnls.2022.3189239]
Abstract
Modeling complex correlations on multiview data is still challenging, especially for high-dimensional features with possible noise. To address this issue, we propose a novel unsupervised multiview representation learning (UMRL) algorithm, termed autoencoder in autoencoder networks (AE2-Nets). The proposed framework effectively encodes information from high-dimensional heterogeneous data into a compact and informative representation with a bidirectional encoding strategy. Specifically, AE2-Nets conduct encoding in two directions: the inner-AE networks extract view-specific intrinsic information (forward encoding), while the outer-AE networks integrate this view-specific intrinsic information from different views into a latent representation (backward encoding). For the nested architecture, we further provide a probabilistic explanation and extension based on a hierarchical variational autoencoder. The forward-backward strategy flexibly handles high-dimensional (noisy) features within each view and encodes complementarity across multiple views in a unified framework. Extensive results on benchmark datasets validate its advantages over state-of-the-art algorithms.
11. Zhou N, Choi KS, Chen B, Du Y, Liu J, Xu Y. Correntropy-Based Low-Rank Matrix Factorization With Constraint Graph Learning for Image Clustering. IEEE Transactions on Neural Networks and Learning Systems 2023;34:10433-10446. [PMID: 35507622 DOI: 10.1109/tnnls.2022.3166931]
Abstract
This article proposes a novel low-rank matrix factorization model for semisupervised image clustering. In order to alleviate the negative effect of outliers, the maximum correntropy criterion (MCC) is incorporated as a metric to build the model. To utilize the label information to improve the clustering results, a constraint graph learning framework is proposed to adaptively learn the local structure of the data by considering the label information. Furthermore, an iterative algorithm based on Fenchel conjugate (FC) and block coordinate update (BCU) is proposed to solve the model. The convergence properties of the proposed algorithm are analyzed, which shows that the algorithm exhibits both objective sequential convergence and iterate sequential convergence. Experiments are conducted on six real-world image datasets, and the proposed algorithm is compared with eight state-of-the-art methods. The results show that the proposed method can achieve better performance in most situations in terms of clustering accuracy and mutual information.
12. Joint contrastive triple-learning for deep multi-view clustering. Inf Process Manag 2023. [DOI: 10.1016/j.ipm.2023.103284]
13. Xie D, Gao Q, Yang M. Enhanced tensor low-rank representation learning for multi-view clustering. Neural Netw 2023;161:93-104. [PMID: 36738492 DOI: 10.1016/j.neunet.2023.01.037]
Abstract
Multi-view subspace clustering (MSC), which assumes that multi-view data are generated from a latent subspace, has attracted considerable attention in multi-view clustering. To recover the underlying subspace structure, a successful recent approach is subspace clustering based on the tensor nuclear norm (TNN). However, existing TNN-based methods usually fail to exploit the intrinsic cluster structure and high-order correlations well, which limits their clustering performance. To address this problem, this paper proposes a novel tensor low-rank representation (TLRR) learning method for multi-view clustering. First, we construct a third-order tensor by organizing the features from all views, and then use the t-product in the tensor space to obtain the self-representation tensor of the tensorial data. Second, we use the ℓ1,2 norm to constrain the self-representation tensor so that it captures the class-specificity distribution, which is important for depicting the intrinsic cluster structure. Simultaneously, we rotate the self-representation tensor and constrain the rotated tensor with the tensor singular value decomposition-based weighted TNN, a tighter tensor rank approximation. For the challenging mathematical optimization problem, we present an effective optimization algorithm with a theoretical convergence guarantee and relatively low computational complexity; convergence of the generated sequence to a Karush-Kuhn-Tucker (KKT) critical point is validated mathematically in detail. We perform extensive experiments on four datasets and demonstrate that TLRR outperforms state-of-the-art multi-view subspace clustering methods.
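The t-product this abstract builds on has a compact FFT-based implementation: transform along the third mode, multiply the frontal slices in the Fourier domain, and transform back. A minimal sketch (illustrative only, not the authors' code; the function name is ours):

```python
import numpy as np

def t_product(A, B):
    """t-product of third-order tensors A (m x k x d) and B (k x n x d):
    FFT along the third mode, per-slice matrix products in the Fourier
    domain, then inverse FFT."""
    assert A.shape[1] == B.shape[0] and A.shape[2] == B.shape[2]
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.einsum('ikn,kjn->ijn', Af, Bf)  # matmul on each frontal slice
    return np.real(np.fft.ifft(Cf, axis=2))

# Sanity check: the identity tensor (identity matrix in the first frontal
# slice, zeros elsewhere) leaves any tensor unchanged under the t-product.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4, 3))
I = np.zeros((4, 4, 3))
I[:, :, 0] = np.eye(4)
print(np.allclose(t_product(A, I), A))  # True
```

The t-SVD used for the weighted TNN is obtained the same way: an ordinary SVD of each frontal slice in the Fourier domain, with the tubal singular values read off the diagonal slices.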
Affiliation(s)
- Deyan Xie: School of Science and Information Science, Qingdao Agricultural University, Qingdao, China
- Quanxue Gao: School of Telecommunications Engineering, Xidian University, Xi'an, China
- Ming Yang: Department of Mathematics, University of Evansville, Evansville, IN 47722, United States of America
14. Pan B, Li C, Che H. Nonconvex low-rank tensor approximation with graph and consistent regularizations for multi-view subspace learning. Neural Netw 2023;161:638-658. [PMID: 36827961 DOI: 10.1016/j.neunet.2023.02.016]
Abstract
Multi-view clustering is widely used to improve clustering performance. Recently, subspace clustering via tensor learning based on Markov chains has become an important branch of multi-view clustering. Tensor learning commonly applies a tensor low-rank approximation to represent the relationships between data samples. However, most current tensor learning methods have the following shortcomings: the information of the local graph is not taken into account, the relationships between different views are not modeled, and existing tensor low-rank representations rely on a biased tensor rank function for estimation. Therefore, a nonconvex low-rank tensor approximation with graph and consistent regularizations (NLRTGC) model is proposed for multi-view subspace learning. NLRTGC retains the local manifold information through graph regularization and adopts a consistent regularization between views to keep the diagonal block structure of the representation matrices. Furthermore, a nonnegative nonconvex low-rank tensor kernel function replaces the classical tensor nuclear norm via the tensor singular value decomposition (t-SVD), reducing the deviation from the true rank. An alternating direction method of multipliers (ADMM), under which the objective function is monotonically non-increasing, is then proposed to solve NLRTGC. Finally, the effectiveness and superiority of NLRTGC are shown through abundant comparative experiments against various state-of-the-art algorithms on noisy and real-world datasets.
Affiliation(s)
- Baicheng Pan: Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, College of Electronic and Information Engineering, Southwest University, Chongqing 400715, China
- Chuandong Li: Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, College of Electronic and Information Engineering, Southwest University, Chongqing 400715, China
- Hangjun Che: Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, College of Electronic and Information Engineering, Southwest University, Chongqing 400715, China
15. Sun X, Zhang X, Xu C, Xiao M, Tang Y. Tensorial Multiview Representation for Saliency Detection via Nonconvex Approach. IEEE Transactions on Cybernetics 2023;53:1816-1829. [PMID: 35025754 DOI: 10.1109/tcyb.2021.3139037]
Abstract
In the study of salient object detection, multiview features play an important role in identifying various underlying salient objects. In current patch-based methods, all the different features are handled by stacking them directly into a high-dimensional vector that represents the related image patches. These approaches ignore the correlations inherent in the original spatial structure, which may lead to the loss of underlying characterizations such as view interaction. In this article, departing from currently available approaches, a tensorial feature representation framework is developed for salient object detection in order to better explore the complementary information of multiview features. Under the tensor framework, a tensor low-rank constraint is applied to the background to capture its intrinsic structure, a tensor group sparsity regularization is imposed on the salient part, and a tensorial sliced Laplacian regularization is introduced to enlarge the gap between the subspaces of the background and the salient object. Moreover, a nonconvex tensor log-determinant function, instead of the tensor nuclear norm, is adopted to approximate the tensor rank, effectively suppressing the confusing information arising from complex backgrounds. Further, we deduce the closed-form solution of this nonconvex minimization problem and establish a feasible algorithm whose convergence is mathematically proven. Experiments on five well-known public datasets demonstrate that our method outperforms the latest unsupervised handcrafted-feature-based methods in the literature. Furthermore, our model is flexible with various deep features and is competitive with state-of-the-art approaches.
16. Guo J, Sun Y, Gao J, Hu Y, Yin B. Logarithmic Schatten-p Norm Minimization for Tensorial Multi-View Subspace Clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence 2023;45:3396-3410. [PMID: 35648873 DOI: 10.1109/tpami.2022.3179556]
Abstract
The low-rank tensor can characterize the inner structure and explore the high-order correlations among multi-view representations, and has been widely used in multi-view clustering. Existing approaches adopt the tensor nuclear norm (TNN) as a convex approximation of the non-convex tensor rank function. However, TNN treats different singular values equally and over-penalizes the main rank components, leading to sub-optimal tensor representations. In this paper, we devise a better surrogate of the tensor rank, namely the tensor logarithmic Schatten-p norm, whose non-convex and non-linear penalty function fully accounts for the physical differences between singular values. Further, a multi-view subspace clustering model based on tensor logarithmic Schatten-p norm minimization is proposed. In particular, the proposed norm not only protects the larger singular values, which encode useful structural information, but also suppresses the smaller ones, which encode redundant information. The learned tensor representation with a compact low-rank structure thus explores the complementary information well and accurately characterizes the high-order correlations among the views. The alternating direction method of multipliers (ADMM) is used to solve the resulting non-convex multi-block model, in which the challenging norm-minimization subproblem is handled carefully. Importantly, the convergence of the algorithm is mathematically established by showing that the generated sequence is Cauchy and converges to a Karush-Kuhn-Tucker (KKT) point. Experimental results on nine benchmark databases reveal the superiority of the proposed model.
Collapse
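The logarithmic Schatten-p idea is simple to state on a single matrix: apply a log-shaped penalty to the singular values instead of summing them, so dominant components are shrunk far less than noise-level ones. Below is a minimal numpy sketch of that penalty (applied to a matrix rather than the paper's t-SVD tensor spectrum); the function name and the defaults p=0.5, gamma=1.0 are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def log_schatten_p(M, p=0.5, gamma=1.0):
    # Nonconvex surrogate of rank(M): sum_i log(1 + sigma_i^p / gamma).
    # Unlike the nuclear norm (sum_i sigma_i), large singular values
    # contribute only logarithmically, so the main rank components are
    # penalized far less than small, noise-level singular values.
    sigma = np.linalg.svd(M, compute_uv=False)
    return float(np.sum(np.log1p(sigma ** p / gamma)))

# On an exactly rank-5 matrix the surrogate stays far below the
# nuclear norm, because each large singular value adds only ~log(sigma).
rng = np.random.default_rng(0)
L = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 50))  # rank 5
nuclear = float(np.linalg.svd(L, compute_uv=False).sum())
log_sp = log_schatten_p(L)
```

The same per-singular-value penalty, applied to the frontal-slice spectra produced by the t-SVD, is what the tensor version of the norm evaluates.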
|
17
|
Liu BY, Huang L, Wang CD, Lai JH, Yu PS. Multiview Clustering via Proximity Learning in Latent Representation Space. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2023; 34:973-986. [PMID: 34432638 DOI: 10.1109/tnnls.2021.3104846] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Most existing multiview clustering methods operate in the original feature space. However, feature redundancy and noise in the original feature space limit their clustering performance. To address this problem, some multiview clustering methods learn the latent data representation linearly, but performance may decline when the relation between the latent representation and the original data is nonlinear. Other methods, which learn the latent representation nonlinearly, usually conduct latent representation learning and clustering separately, so the latent representation may not be well adapted to clustering. Furthermore, none of them model the intercluster relation and intracluster correlation of data points, which limits the quality of the learned latent representation and therefore the clustering performance. To solve these problems, this article proposes a novel multiview clustering method via proximity learning in latent representation space, named multiview latent proximity learning (MLPL). On the one hand, MLPL learns the latent data representation in a nonlinear manner, taking the intercluster relation and intracluster correlation into consideration simultaneously. On the other hand, by conducting latent representation learning and consensus proximity learning simultaneously, MLPL learns a consensus proximity matrix with k connected components that outputs the clustering result directly. Extensive experiments are conducted on seven real-world datasets to demonstrate the effectiveness and superiority of the MLPL method compared with state-of-the-art multiview clustering methods.
Collapse
|
18
|
Ma Z, Yu J, Wang L, Chen H, Zhao Y, He X, Wang Y, Song Y. Multi-view clustering based on view-attention driven. INT J MACH LEARN CYB 2023. [DOI: 10.1007/s13042-023-01787-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/31/2023]
|
19
|
Multiview nonnegative matrix factorization with dual HSIC constraints for clustering. INT J MACH LEARN CYB 2022. [DOI: 10.1007/s13042-022-01742-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
|
20
|
Ji X, Yang L, Yao S, Zhao P, Li X. Fast and General Incomplete Multi-view Adaptive Clustering. Cognit Comput 2022. [DOI: 10.1007/s12559-022-10079-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/03/2022]
|
21
|
He G, Wang H, Liu S, Zhang B. CSMVC: A Multiview Method for Multivariate Time-Series Clustering. IEEE TRANSACTIONS ON CYBERNETICS 2022; 52:13425-13437. [PMID: 34469322 DOI: 10.1109/tcyb.2021.3083592] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Multivariate time-series (MTS) clustering is a fundamental technique in data mining with a wide range of real-world applications. To date, though some approaches have been developed, they suffer from various drawbacks, such as high computational cost or loss of information. Most existing approaches are single-view methods that do not consider the benefits of mutually supportive multiple views. Moreover, due to its data structure, MTS data cannot be handled well by most multiview clustering methods. Toward this end, we propose a consistent and specific non-negative matrix factorization-based multiview clustering (CSMVC) method for MTS clustering. The proposed method constructs a multilayer graph to represent the original MTS data and generates multiple views with a subspace technique. The obtained multiview data are processed through a novel non-negative matrix factorization (NMF) method, which can explore the view-consistent and view-specific information simultaneously. Furthermore, an alternating optimization scheme is proposed to solve the corresponding optimization problem. We conduct extensive experiments on 13 benchmark datasets, and the results demonstrate the superiority of our proposed method over other state-of-the-art algorithms under a wide range of evaluation metrics.
Collapse
|
22
|
Guo J, Sun Y, Gao J, Hu Y, Yin B. Multi-Attribute Subspace Clustering via Auto-Weighted Tensor Nuclear Norm Minimization. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2022; 31:7191-7205. [PMID: 36355733 DOI: 10.1109/tip.2022.3220949] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Self-expressiveness-based subspace clustering methods have received wide attention for unsupervised learning tasks. However, most existing subspace clustering methods consider the data features as a whole and focus on only a single self-representation. These approaches ignore the intrinsic multi-attribute information embedded in the original data features, resulting in a one-attribute self-representation. This paper proposes a novel multi-attribute subspace clustering (MASC) model that understands data from multiple attributes. MASC simultaneously learns multiple subspace representations, one per attribute, by exploiting the intrinsic multi-attribute features drawn from the original data. To better capture the high-order correlation among the multi-attribute representations, we represent them as a tensor with low-rank structure and propose the auto-weighted tensor nuclear norm (AWTNN) as a superior low-rank tensor approximation. In particular, the non-convex AWTNN fully considers the difference between singular values through implicit, adaptive weight splitting during the AWTNN optimization procedure. We further develop an efficient algorithm to optimize the non-convex, multi-block MASC model and establish convergence guarantees. A more comprehensive subspace representation can be obtained by aggregating these multi-attribute representations, and it can be used to construct a clustering-friendly affinity matrix. Extensive experiments on eight real-world databases reveal that the proposed MASC exhibits superior performance over other subspace clustering methods.
Collapse
|
23
|
Zhang Y, Huang Q, Zhang B, He S, Dan T, Peng H, Cai H. Deep Multiview Clustering via Iteratively Self-Supervised Universal and Specific Space Learning. IEEE TRANSACTIONS ON CYBERNETICS 2022; 52:11734-11746. [PMID: 34191743 DOI: 10.1109/tcyb.2021.3086153] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Multiview clustering seeks to partition objects via leveraging cross-view relations to provide a comprehensive description of the same objects. Most existing methods assume that different views are linear transformable or merely sampling from a common latent space. Such rigid assumptions betray reality, thus leading to unsatisfactory performance. To tackle the issue, we propose to learn both common and specific sampling spaces for each view to fully exploit their collaborative representations. The common space corresponds to the universal self-representation basis for all views, while the specific spaces are the view-specific basis accordingly. An iterative self-supervision scheme is conducted to strengthen the learned affinity matrix. The clustering is modeled by a convex optimization. We first solve its linear formulation by the popular scheme. Then, we employ the deep autoencoder structure to exploit its deep nonlinear formulation. The extensive experimental results on six real-world datasets demonstrate that the proposed model achieves uniform superiority over the benchmark methods.
Collapse
|
24
|
Kang Z, Lin Z, Zhu X, Xu W. Structured Graph Learning for Scalable Subspace Clustering: From Single View to Multiview. IEEE TRANSACTIONS ON CYBERNETICS 2022; 52:8976-8986. [PMID: 33729977 DOI: 10.1109/tcyb.2021.3061660] [Citation(s) in RCA: 47] [Impact Index Per Article: 15.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Graph-based subspace clustering methods have exhibited promising performance. However, they still suffer from several drawbacks: they incur expensive time overhead, they fail to explore explicit clusters, and they cannot generalize to unseen data points. In this work, we propose a scalable graph learning framework that seeks to address these three challenges simultaneously. Specifically, it is based on the ideas of anchor points and the bipartite graph. Rather than building an n×n graph, where n is the number of samples, we construct a bipartite graph to depict the relationship between samples and anchor points. Meanwhile, a connectivity constraint is employed to ensure that the connected components indicate clusters directly. We further establish the connection between our method and k-means clustering. Moreover, a model to process multiview data is also proposed, which scales linearly with respect to n. Extensive experiments demonstrate the efficiency and effectiveness of our approach with respect to many state-of-the-art clustering methods.
Collapse
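The anchor/bipartite-graph idea above can be sketched in a few lines of numpy. This is a minimal illustration of the general scheme, not the authors' exact structured-graph model: the similarity kernel, bandwidth, and anchor selection below are illustrative assumptions.

```python
import numpy as np

def anchor_bipartite_embedding(X, anchors, k):
    # B[i, j] holds the similarity between sample i and anchor j.
    # The top-k left singular vectors of the degree-normalized B give
    # an O(n*m) spectral embedding (m anchors) instead of the O(n^2)
    # full sample-sample graph.
    dist2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
    B = np.exp(-dist2 / (dist2.mean() + 1e-12))       # Gaussian kernel
    d_row = B.sum(axis=1, keepdims=True)              # sample degrees
    d_col = B.sum(axis=0, keepdims=True)              # anchor degrees
    B_norm = B / np.sqrt(d_row) / np.sqrt(d_col)      # D1^-1/2 B D2^-1/2
    U, _, _ = np.linalg.svd(B_norm, full_matrices=False)
    return U[:, :k]                                   # feed to k-means

# Two well-separated blobs; a handful of samples serve as anchors.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(8, 0.5, (20, 2))])
anchors = np.vstack([X[:3], X[20:23]])
E = anchor_bipartite_embedding(X, anchors, 2)
```

Running k-means on the rows of `E` then yields the final partition; the paper's connectivity constraint goes further by making the connected components of the learned graph the clusters directly.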
|
25
|
Peng C, Zhang J, Chen Y, Xing X, Chen C, Kang Z, Guo L, Cheng Q. Preserving bilateral view structural information for subspace clustering. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.109915] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
|
26
|
Xia W, Zhang X, Gao Q, Shu X, Han J, Gao X. Multiview Subspace Clustering by an Enhanced Tensor Nuclear Norm. IEEE TRANSACTIONS ON CYBERNETICS 2022; 52:8962-8975. [PMID: 33635814 DOI: 10.1109/tcyb.2021.3052352] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Despite promising preliminary results, tensor-singular value decomposition (t-SVD)-based multiview subspace clustering is incapable of dealing with real problems, such as noise and illumination changes. The major reason is that the tensor nuclear norm minimization (TNNM) used in t-SVD regularizes each singular value equally, which does not make sense in matrix completion and coefficient matrix learning; the singular values represent different perspectives and should be treated differently. To exploit the significant difference between singular values, we study the weighted tensor Schatten p-norm based on t-SVD and develop an efficient algorithm to solve the weighted tensor Schatten p-norm minimization (WTSNM) problem. We then apply WTSNM to learn the coefficient matrix in multiview subspace clustering, presenting a novel multiview clustering method that integrates coefficient matrix learning and spectral clustering into a unified framework. The learned coefficient matrix exploits both the cluster structure and the high-order information embedded in the multiple views. Extensive experiments indicate the efficiency of our method on six metrics.
Collapse
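The singular-value-wise treatment that weighted Schatten-type penalties rely on is easiest to see in their proximal step, weighted singular-value thresholding. The sketch below operates on one matrix slice rather than the full t-SVD tensor, and the weights are illustrative: larger weights shrink (or remove) the corresponding singular values more.

```python
import numpy as np

def weighted_svt(M, w):
    # Weighted singular-value thresholding: each singular value
    # sigma_i is shrunk by its own weight w_i, so informative (large)
    # singular values can be penalized less than noisy (small) ones.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - np.asarray(w, dtype=float), 0.0)
    return U @ np.diag(s_shrunk) @ Vt

# A small weight barely touches the dominant singular value (5 -> 4.5),
# while a large weight removes the minor one (1 -> 0).
out = weighted_svt(np.diag([5.0, 1.0]), [0.5, 2.0])
```

With equal weights this reduces to ordinary singular-value thresholding; assigning smaller weights to the leading singular values is what lets the informative components survive the low-rank regularization.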
|
27
|
Chen Y, Xiao X, Hua Z, Zhou Y. Adaptive Transition Probability Matrix Learning for Multiview Spectral Clustering. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2022; 33:4712-4726. [PMID: 33651701 DOI: 10.1109/tnnls.2021.3059874] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Multiview clustering, as an important unsupervised method, has been gathering a great deal of attention. However, most multiview clustering methods exploit the self-representation property to capture the relationship among data, resulting in high computation cost when calculating the self-representation coefficients. In addition, they usually employ different regularizers to learn the representation tensor or matrix, from which a transition probability matrix is constructed in a separate step, as in the approach of Wu et al. Thus, an optimal transition probability matrix cannot be guaranteed. To solve these issues, we propose a unified model for multiview spectral clustering by directly learning an adaptive transition probability matrix (MCA2M), rather than an individual representation matrix for each view. Different from the method of Wu et al., MCA2M utilizes a one-step strategy to directly learn the transition probability matrix under the robust principal component analysis framework. Unlike existing methods that use the absolute symmetrization operation to guarantee the nonnegativity and symmetry of the affinity matrix, the transition probability matrix learned by MCA2M is nonnegative and symmetric without any postprocessing. An alternating optimization algorithm is designed based on the efficient alternating direction method of multipliers. Extensive experiments on several real-world databases demonstrate that the proposed method outperforms the state-of-the-art methods.
Collapse
|
28
|
Guo J, Sun Y, Gao J, Hu Y, Yin B. Rank Consistency Induced Multiview Subspace Clustering via Low-Rank Matrix Factorization. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2022; 33:3157-3170. [PMID: 33882005 DOI: 10.1109/tnnls.2021.3071797] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Multiview subspace clustering has been demonstrated to achieve excellent performance in practice by exploiting multiview complementary information. One of the strategies used in most existing methods is to learn a shared self-expressiveness coefficient matrix for all the view data. Different from such a strategy, this article proposes a rank consistency induced multiview subspace clustering model to pursue a consistent low-rank structure among view-specific self-expressiveness coefficient matrices. To facilitate a practical model, we parameterize the low-rank structure on all self-expressiveness coefficient matrices through the tri-factorization along with orthogonal constraints. This specification ensures that self-expressiveness coefficient matrices of different views have the same rank to effectively promote structural consistency across multiviews. Such a model can learn a consistent subspace structure and fully exploit the complementary information from the view-specific self-expressiveness coefficient matrices, simultaneously. The proposed model is formulated as a nonconvex optimization problem. An efficient optimization algorithm with guaranteed convergence under mild conditions is proposed. Extensive experiments on several benchmark databases demonstrate the advantage of the proposed model over the state-of-the-art multiview clustering approaches.
Collapse
|
29
|
Fusing Local and Global Information for One-Step Multi-View Subspace Clustering. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12105094] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
Multi-view subspace clustering has drawn significant attention in the pattern recognition and machine learning research community. However, most of the existing multi-view subspace clustering methods are still limited in two aspects. (1) The subspace representation yielded by the self-expression reconstruction model ignores the local structure information of the data. (2) The construction of subspace representation and clustering are used as two individual procedures, which ignores their interactions. To address these problems, we propose a novel multi-view subspace clustering method fusing local and global information for one-step multi-view clustering. Our contribution lies in three aspects. First, we merge the graph learning into the self-expression model to explore the local structure information for constructing the specific subspace representations of different views. Second, we consider the multi-view information fusion by integrating these specific subspace representations into one common subspace representation. Third, we combine the subspace representation learning, multi-view information fusion, and clustering into a joint optimization model to realize the one-step clustering. We also develop an effective optimization algorithm to solve the proposed method. Comprehensive experimental results on nine popular multi-view data sets confirm the effectiveness and superiority of the proposed method by comparing it with many state-of-the-art multi-view clustering methods.
Collapse
|
30
|
Zhao N, Bu J. Robust multi-view subspace clustering based on consensus representation and orthogonal diversity. Neural Netw 2022; 150:102-111. [DOI: 10.1016/j.neunet.2022.03.009] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2021] [Revised: 02/21/2022] [Accepted: 03/04/2022] [Indexed: 10/18/2022]
|
31
|
Mi Y, Ren Z, Xu Z, Li H, Sun Q, Chen H, Dai J. Multi-view clustering with dual tensors. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-06927-w] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
32
|
Yang J, Ma J, Win KT, Gao J, Yang Z. Low-rank and sparse representation based learning for cancer survivability prediction. Inf Sci (N Y) 2022. [DOI: 10.1016/j.ins.2021.10.013] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/22/2023]
|
33
|
A Robust Tensor-Based Submodule Clustering for Imaging Data Using l12 Regularization and Simultaneous Noise Recovery via Sparse and Low Rank Decomposition Approach. J Imaging 2021; 7:jimaging7120279. [PMID: 34940746 PMCID: PMC8708766 DOI: 10.3390/jimaging7120279] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2021] [Revised: 12/09/2021] [Accepted: 12/11/2021] [Indexed: 11/25/2022] Open
Abstract
The massive generation of data, including images and videos, has made data management, analysis, and information extraction difficult in recent years. To gather relevant information, this large amount of data needs to be grouped. Real-life data may be corrupted by noise during collection or transmission, and the majority of it is unlabeled, calling for robust unsupervised clustering techniques. Traditional clustering techniques, which vectorize the images, are unable to preserve the geometrical structure of the images. Hence, a robust tensor-based submodule clustering method based on l12 regularization with improved clustering capability is formulated. The l12-induced tensor nuclear norm (TNN), integrated into the proposed method, offers better low-rankness while retaining the self-expressiveness property of submodules. Unlike existing methods, the proposed method employs a simultaneous noise-removal technique: it twists the lateral image slices of the input data tensor into frontal slices and eliminates the noise content in each image using the principles of sparse and low-rank decomposition. Experiments are carried out on three datasets with varying amounts of sparse, Gaussian, and salt-and-pepper noise. The experimental results demonstrate the superior performance of the proposed method over the existing state-of-the-art methods.
Collapse
|
34
|
Wang CD, Chen MS, Huang L, Lai JH, Yu PS. Smoothness Regularized Multiview Subspace Clustering With Kernel Learning. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2021; 32:5047-5060. [PMID: 33027007 DOI: 10.1109/tnnls.2020.3026686] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Multiview subspace clustering has attracted an increasing amount of attention in recent years. However, most existing multiview subspace clustering methods either assume linear relations between multiview data points when learning the affinity representation by means of self-expression, or fail to preserve the locality property of the original feature space in the learned affinity representation. To address these issues, in this article we propose a new multiview subspace clustering method termed smoothness regularized multiview subspace clustering with kernel learning (SMSCK). To capture the nonlinear relations between multiview data points, the proposed model maps the concatenated multiview observations into a high-dimensional kernel space, in which linear relations reflect the nonlinear relations between multiview data points in the original space. In addition, to explicitly preserve the locality property of the original feature space in the learned affinity representation, a smoothness regularization is deployed in the subspace learning in the kernel space. Theoretical analysis ensures that the optimal solution of the proposed model satisfies the grouping effect. The unique optimal solution of the proposed model can be obtained by an optimization strategy, and a theoretical convergence analysis is also conducted. Extensive experiments are conducted on both image and document datasets, and comparison with state-of-the-art methods demonstrates the effectiveness of our method.
Collapse
|
35
|
Wang H, Han G, Zhang B, Tao G, Cai H. Multi-View Learning a Decomposable Affinity Matrix via Tensor Self-Representation on Grassmann Manifold. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2021; 30:8396-8409. [PMID: 34587010 DOI: 10.1109/tip.2021.3114995] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Multi-view clustering aims to partition objects into potential categories by utilizing cross-view information. One of the core issues is to sufficiently leverage different views to learn a latent subspace, within which the clustering task is performed. Recently, it has been shown that representing the multi-view data by a tensor and then learning a latent self-expressive tensor is effective. However, early works mainly focus on learning an essential tensor representation from multi-view data, and the resulting affinity matrix is treated as a byproduct or is computed by a simple average in Euclidean space, thereby destroying the intrinsic clustering structure. To that end, we propose a novel multi-view clustering method that directly learns a well-structured affinity matrix, driven by the clustering task, on the Grassmann manifold. Specifically, we first employ a tensor learning model to unify multiple feature spaces into a latent low-rank tensor space. Then each individual view is merged on the Grassmann manifold to obtain both an integrative subspace and a consensus affinity matrix, driven by the clustering task. The two parts are modeled by a unified objective function and optimized jointly to mine a decomposable affinity matrix. Extensive experiments on eight real-world datasets show that our method achieves superior performance over other popular methods.
Collapse
|
36
|
Hierarchical high-order co-clustering algorithm by maximizing modularity. INT J MACH LEARN CYB 2021. [DOI: 10.1007/s13042-021-01375-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
37
|
Liang N, Yang Z, Li Z, Xie S, Sun W. Semi-supervised multi-view learning by using label propagation based non-negative matrix factorization. Knowl Based Syst 2021. [DOI: 10.1016/j.knosys.2021.107244] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
|
38
|
Xie M, Ye Z, Pan G, Liu X. Incomplete multi-view subspace clustering with adaptive instance-sample mapping and deep feature fusion. APPL INTELL 2021. [DOI: 10.1007/s10489-020-02138-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
39
|
Wang H, Han G, Li J, Zhang B, Chen J, Hu Y, Han C, Cai H. Learning task-driving affinity matrix for accurate multi-view clustering through tensor subspace learning. Inf Sci (N Y) 2021. [DOI: 10.1016/j.ins.2021.02.054] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
|
42
|
Yang Z, Liang N, Yan W, Li Z, Xie S. Uniform Distribution Non-Negative Matrix Factorization for Multiview Clustering. IEEE TRANSACTIONS ON CYBERNETICS 2021; 51:3249-3262. [PMID: 32386175 DOI: 10.1109/tcyb.2020.2984552] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Multiview data processing has attracted sustained attention as it can provide more information for clustering. To integrate this information, one often utilizes the non-negative matrix factorization (NMF) scheme, which can reduce the data from different views into a subspace of the same dimension. Motivated by the observation that clustering performance is affected by the distribution of the data in the learned subspace, a tri-factorization-based NMF model with an embedding matrix is proposed in this article. This model tends to generate decompositions with uniform distribution, such that the learned representations are more discriminative. As a result, the obtained consensus matrix can better represent the multiview data in the subspace, leading to higher clustering performance. Also, a new lemma is proposed that provides the formulas for the partial derivative of the trace function with respect to an inner matrix, together with its theoretical proof. Based on this lemma, a gradient-based algorithm is developed to solve the proposed model, and its convergence and computational complexity are analyzed. Experiments on six real-world datasets are performed to show the advantages of the proposed algorithm in comparison with existing baseline methods.
Collapse
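For readers unfamiliar with the NMF scheme these multiview models build on, here is the baseline Lee-Seung multiplicative-update factorization. This is only a sketch of the standard single-view building block, not the paper's tri-factorization model with an embedding matrix; the iteration count and initialization are illustrative choices.

```python
import numpy as np

def nmf(X, r, iters=500, seed=0):
    # Standard multiplicative updates for X ~ W @ H with W, H >= 0.
    # Each update rescales the factors elementwise, which keeps them
    # nonnegative and monotonically decreases ||X - W @ H||_F.
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, r)) + 0.1
    H = rng.random((r, m)) + 0.1
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-12)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# On exactly factorizable nonnegative data of matching rank, the
# relative reconstruction error should become small.
rng = np.random.default_rng(1)
X = rng.random((30, 4)) @ rng.random((4, 20))  # nonnegative, rank 4
W, H = nmf(X, 4)
rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

Multiview NMF variants run one such factorization per view and couple them, e.g. through a shared consensus matrix, which is exactly the representation whose distribution the paper's embedding matrix aims to make uniform.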
|
43
|
Chen Y, Wang S, Peng C, Hua Z, Zhou Y. Generalized Nonconvex Low-Rank Tensor Approximation for Multi-View Subspace Clustering. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2021; 30:4022-4035. [PMID: 33784622 DOI: 10.1109/tip.2021.3068646] [Citation(s) in RCA: 38] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
The low-rank tensor representation (LRTR) has become an emerging research direction to boost the multi-view clustering performance. This is because LRTR utilizes not only the pairwise relation between data points, but also the view relation of multiple views. However, there is one significant challenge: LRTR uses the tensor nuclear norm as the convex approximation but provides a biased estimation of the tensor rank function. To address this limitation, we propose the generalized nonconvex low-rank tensor approximation (GNLTA) for multi-view subspace clustering. Instead of the pairwise correlation, GNLTA adopts the low-rank tensor approximation to capture the high-order correlation among multiple views and proposes the generalized nonconvex low-rank tensor norm to well consider the physical meanings of different singular values. We develop a unified solver to solve the GNLTA model and prove that under mild conditions, any accumulation point is a stationary point of GNLTA. Extensive experiments on seven commonly used benchmark databases have demonstrated that the proposed GNLTA achieves better clustering performance over state-of-the-art methods.
Collapse
|
44
|
Zhang X, Ren Z, Sun H, Bai K, Feng X, Liu Z. Multiple kernel low-rank representation-based robust multi-view subspace clustering. Inf Sci (N Y) 2021. [DOI: 10.1016/j.ins.2020.10.059] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
|
45
|
Li X, Zhou K, Li C, Zhang X, Liu Y, Wang Y. Multi-view clustering via neighbor domain correlation learning. Neural Comput Appl 2021. [DOI: 10.1007/s00521-020-05185-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
|
46
|
Zhou H, Yin H, Li Y, Chai Y. Multiview clustering via exclusive non-negative subspace learning and constraint propagation. Inf Sci (N Y) 2021. [DOI: 10.1016/j.ins.2020.11.037] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
|
47
|
Xiao X, Chen Y, Gong YJ, Zhou Y. Prior Knowledge Regularized Multiview Self-Representation and its Applications. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2021; 32:1325-1338. [PMID: 32310792 DOI: 10.1109/tnnls.2020.2984625] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
To learn the self-representation matrices/tensor that encodes the intrinsic structure of the data, existing multiview self-representation models consider only the multiview features and, thus, impose equal membership preference across samples. However, this is inappropriate in real scenarios since the prior knowledge, e.g., explicit labels, semantic similarities, and weak-domain cues, can provide useful insights into the underlying relationship of samples. Based on this observation, this article proposes a prior knowledge regularized multiview self-representation (P-MVSR) model, in which the prior knowledge, multiview features, and high-order cross-view correlation are jointly considered to obtain an accurate self-representation tensor. The general concept of "prior knowledge" is defined as the complement of multiview features, and the core of P-MVSR is to take advantage of the membership preference, which is derived from the prior knowledge, to purify and refine the discovered membership of the data. Moreover, P-MVSR adopts the same optimization procedure to handle different prior knowledge and, thus, provides a unified framework for weakly supervised clustering and semisupervised classification. Extensive experiments on real-world databases demonstrate the effectiveness of the proposed P-MVSR model.
|
48
|
Zhao W, Xu C, Guan Z, Liu Y. Multiview Concept Learning Via Deep Matrix Factorization. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2021; 32:814-825. [PMID: 32275617 DOI: 10.1109/tnnls.2020.2979532] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Multiview representation learning (MVRL) leverages information from multiple views to obtain a common representation summarizing the consistency and complementarity in multiview data. Most previous matrix factorization-based MVRL methods are shallow models that neglect complex hierarchical information. The recently proposed deep multiview factorization models cannot explicitly capture consistency and complementarity in multiview data. We present the deep multiview concept learning (DMCL) method, which hierarchically factorizes the multiview data, aiming to explicitly model consistent and complementary information and to capture semantic structures at the highest abstraction level. We explore two variants of the DMCL framework, DMCL-L and DMCL-N, with linear and nonlinear transformations between adjacent layers, respectively. We propose two block coordinate descent-based optimization methods for DMCL-L and DMCL-N. We verify the effectiveness of DMCL on three real-world data sets for both clustering and classification tasks.
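As a rough illustration of the hierarchical factorization idea described in this abstract, the sketch below alternately solves least-squares subproblems for a two-layer decomposition X ≈ W1·W2·H. It is a generic, single-view sketch under simplifying assumptions (the layer sizes in `dims` are arbitrary, and DMCL's consistency/complementarity terms and constraints are omitted), not the authors' algorithm.

```python
import numpy as np

def deep_mf(X, dims=(20, 5), n_iter=200, seed=0):
    """Two-layer alternating least-squares factorization X ~= W1 @ W2 @ H.

    Illustrative sketch only: single view, no nonnegativity, and none of
    DMCL's consistency/complementarity regularizers.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    d1, d2 = dims
    W1 = rng.standard_normal((m, d1))
    W2 = rng.standard_normal((d1, d2))
    H = rng.standard_normal((d2, n))
    for _ in range(n_iter):
        B = W2 @ H                                   # fix W2, H; solve for W1
        W1 = X @ B.T @ np.linalg.pinv(B @ B.T)
        W2 = (np.linalg.pinv(W1.T @ W1) @ W1.T       # fix W1, H; solve for W2
              @ X @ H.T @ np.linalg.pinv(H @ H.T))
        A = W1 @ W2                                  # fix W1, W2; solve for H
        H = np.linalg.pinv(A.T @ A) @ A.T @ X
    return W1, W2, H
```

A full DMCL-style model would add per-view weighting and structural constraints on top of this skeleton.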
|
49
|
Xiao X, Chen Y, Gong YJ, Zhou Y. Low-Rank Preserving t-Linear Projection for Robust Image Feature Extraction. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2020; 30:108-120. [PMID: 33090953 DOI: 10.1109/tip.2020.3031813] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
As the cornerstone of joint dimension reduction and feature extraction, numerous linear projection algorithms have been proposed to meet various requirements. When applied to image data, however, existing methods suffer from representation deficiency because the multi-way structure of the data is (partially) neglected. To solve this problem, we propose a novel Low-Rank Preserving t-Linear Projection (LRP-tP) model that preserves the intrinsic structure of the image data using t-product-based operations. The proposed model advances in four aspects: 1) LRP-tP learns the t-linear projection directly from the tensorial dataset so as to exploit the correlation among the multi-way data structure simultaneously; 2) to cope with widespread data errors, e.g., noise and corruptions, the robustness of LRP-tP is enhanced via self-representation learning; 3) LRP-tP is endowed with good discriminative ability by integrating the empirical classification error into the learning procedure; and 4) an adaptive graph considering the similarity and locality of the data is jointly learned to precisely portray the data affinity. We devise an efficient algorithm to solve the proposed LRP-tP model using the alternating direction method of multipliers. Extensive experiments on image feature extraction have demonstrated the superiority of LRP-tP compared to state-of-the-art methods.
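The t-product operations this abstract relies on have a standard FFT-based form: transform both tensors along the third mode, multiply matching frontal slices, and transform back. The helper below sketches that generic operation (the shapes and real-valued output are illustrative assumptions); it is not the full LRP-tP model.

```python
import numpy as np

def t_product(A, B):
    """t-product C = A * B of third-order tensors.

    A: (n1, n2, n3), B: (n2, n4, n3) -> C: (n1, n4, n3).
    Computed in the Fourier domain: FFT along mode 3, slice-wise
    matrix products, then inverse FFT.
    """
    n3 = A.shape[2]
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.empty((A.shape[0], B.shape[1], n3), dtype=complex)
    for k in range(n3):
        Cf[:, :, k] = Af[:, :, k] @ Bf[:, :, k]
    return np.real(np.fft.ifft(Cf, axis=2))
```

A quick sanity check: the identity tensor for this product has an identity matrix as its first frontal slice and zeros elsewhere, and t-multiplying by it leaves a tensor unchanged.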
|
50
|
Xie D, Gao Q, Deng S, Yang X, Gao X. Multiple graphs learning with a new weighted tensor nuclear norm. Neural Netw 2020; 133:57-68. [PMID: 33125918 DOI: 10.1016/j.neunet.2020.10.010] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2020] [Revised: 09/04/2020] [Accepted: 10/16/2020] [Indexed: 11/17/2022]
Abstract
As an effective convex relaxation of the rank minimization model, tensor nuclear norm minimization-based multi-view clustering methods have attracted increasing interest in recent years. However, most existing clustering methods regularize each singular value equally, restricting their capability and flexibility in tackling many practical problems where the singular values should be treated differently. To address this problem, we propose a novel weighted tensor nuclear norm minimization (WTNNM)-based method for multi-view spectral clustering. Specifically, we first calculate a set of transition probability matrices from different views and construct a third-order tensor whose lateral slices are composed of these probability matrices. Second, we learn a latent high-order transition probability matrix using our proposed weighted tensor nuclear norm, which directly incorporates prior knowledge about the singular values. Finally, clustering is performed on the learned transition probability matrix, which characterizes both the complementary information and the high-order information embedded in the multi-view data. An efficient optimization algorithm is designed to obtain the optimal solution. Extensive experiments on five benchmarks demonstrate that our method outperforms state-of-the-art methods.
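The key proximal step behind weighted tensor nuclear norm models of this kind can be sketched as weighted singular value thresholding applied to each frontal slice in the Fourier domain. The snippet below is a generic sketch (the weight vector is a placeholder, and the paper's solver embeds such a step inside a larger iterative loop), not the exact WTNNM algorithm.

```python
import numpy as np

def weighted_svt(M, weights):
    """Shrink the i-th singular value of M by weights[i] and rebuild.

    Weighted-nuclear-norm models typically assign smaller weights to the
    leading singular values so that dominant structure is preserved.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - np.asarray(weights, dtype=float), 0.0)
    return (U * s_shrunk) @ Vt

def weighted_tensor_svt(T, weights):
    """Apply weighted SVT to every frontal slice of T in the Fourier
    domain, mirroring the t-SVD-based tensor nuclear norm construction."""
    Tf = np.fft.fft(T, axis=2)
    for k in range(T.shape[2]):
        Tf[:, :, k] = weighted_svt(Tf[:, :, k], weights)
    return np.real(np.fft.ifft(Tf, axis=2))
```

With all weights equal this reduces to the ordinary (unweighted) tensor nuclear norm proximal operator; nonuniform weights implement the per-singular-value treatment the abstract motivates.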
Affiliation(s)
- Deyan Xie: Qingdao Agricultural University, Qingdao, China; State Key Laboratory of Integrated Services Networks, Xidian University, Xi'an, China.
- Quanxue Gao: State Key Laboratory of Integrated Services Networks, Xidian University, Xi'an, China.
- Siyang Deng: State Key Laboratory of Integrated Services Networks, Xidian University, Xi'an, China.
- Xiaojun Yang: Guangdong University of Technology, Guangzhou, China.
- Xinbo Gao: State Key Laboratory of Integrated Services Networks, Xidian University, Xi'an, China.
|