1. Wan X, Xiao B, Liu X, Liu J, Liang W, Zhu E. Fast Continual Multi-View Clustering With Incomplete Views. IEEE Transactions on Image Processing 2024; 33:2995-3008. [PMID: 38640047] [DOI: 10.1109/tip.2024.3388974]
Abstract
Multi-view clustering (MVC) has attracted broad attention due to its capacity to exploit consistent and complementary information across views. This paper focuses on a challenging issue in MVC called the incomplete continual data problem (ICDP): most existing algorithms assume that all views are available in advance and overlook scenarios where data observations of views accumulate over time. Due to privacy considerations or memory limitations, previous views cannot be stored in these situations. Some works have proposed ways to handle continual data, but all of them fail to address incomplete views. The ICDP is difficult to solve since incomplete information combined with continually arriving data makes it harder to extract consistent and complementary knowledge among views. We propose Fast Continual Multi-View Clustering with Incomplete Views (FCMVC-IV) to address this issue. The method maintains a scalable consensus coefficient matrix and updates its knowledge with each incoming incomplete view rather than storing and recomputing all the data matrices. Because the given views are incomplete, a newly collected view may contain samples that have not appeared before; two indicator matrices and a rotation matrix are therefore developed to match matrices with different dimensions. In addition, we design a three-step iterative algorithm that solves the resulting problem with linear complexity and proven convergence. Comprehensive experiments conducted on various datasets demonstrate the superiority of FCMVC-IV over competing approaches. The code is publicly available at https://github.com/wanxinhang/FCMVC-IV.
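The rotation-matrix idea in this abstract, aligning embeddings across arriving views using only their overlapping samples, can be illustrated with a small sketch. This is not FCMVC-IV's actual update rule; it is a generic orthogonal-Procrustes alignment, and the function name and index arguments are hypothetical stand-ins for the paper's indicator matrices:

```python
import numpy as np

def align_new_view(U_cons, U_new, cons_idx, new_idx):
    """Rotate a new view's k-dim embedding so it is comparable with the
    consensus embedding, using only the samples both have observed.

    U_cons: (n, k) consensus embedding of samples seen so far.
    U_new:  (m, k) embedding from the newly arrived incomplete view.
    cons_idx / new_idx: aligned index arrays marking the overlap
    (playing the role of the abstract's indicator matrices).
    """
    # Orthogonal Procrustes: argmin_R ||U_cons[cons_idx] - U_new[new_idx] @ R||_F
    # over orthogonal R has a closed form via SVD of the cross-covariance.
    M = U_new[new_idx].T @ U_cons[cons_idx]
    W, _, Vt = np.linalg.svd(M)
    R = W @ Vt
    # Rotated embedding; samples absent from the consensus can now extend it.
    return U_new @ R
```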
2. Lin Y, Chen S. Convex Subspace Clustering by Adaptive Block Diagonal Representation. IEEE Transactions on Neural Networks and Learning Systems 2023; 34:10065-10078. [PMID: 35439144] [DOI: 10.1109/tnnls.2022.3164540]
Abstract
Subspace clustering is a class of extensively studied clustering methods, of which the spectral-type approaches are an important subclass. Their key first step is to learn a representation coefficient matrix with block diagonal structure. To realize this step, many methods have been proposed by imposing different structure priors on the coefficient matrix. These impositions can be roughly divided into two categories: indirect and direct. The former introduces priors such as sparsity and low-rankness to indirectly or implicitly learn the block diagonal structure; however, the desired block diagonality cannot be guaranteed for noisy data. The latter directly or explicitly imposes a block diagonal structure prior, such as block diagonal representation (BDR), which ensures block diagonality even when the data are noisy, but at the expense of losing the convexity that the former's objective possesses. To compensate for their respective shortcomings, in this article we follow the direct line and propose adaptive BDR (ABDR), which explicitly pursues block diagonality without sacrificing the convexity of the indirect approaches. Specifically, inspired by convex biclustering, ABDR coercively fuses both columns and rows of the coefficient matrix via a specially designed convex regularizer, thus naturally enjoying the merits of both categories and adaptively obtaining the number of blocks. Finally, experimental results on synthetic and real benchmarks demonstrate the superiority of ABDR over state-of-the-art methods.
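The "coercive fusion" described here is a convex-biclustering-style penalty on the coefficient matrix. A minimal sketch of evaluating such a regularizer follows; the fusion weights and the all-pairs scheme are generic choices for illustration, not ABDR's exact design:

```python
import numpy as np

def fusion_penalty(C, w_col, w_row):
    """Convex biclustering-style regularizer: weighted sums of pairwise
    l2 distances between columns and between rows of the (n, n)
    self-representation matrix C. Minimizing it fuses similar
    columns/rows, pushing C toward block-diagonal form while the
    objective stays convex. w_col[i, j], w_row[i, j] are nonnegative
    fusion weights (a generic choice, e.g., from a k-NN affinity)."""
    n = C.shape[0]
    val = 0.0
    for i in range(n):           # O(n^2) loop; fine for a sketch
        for j in range(i + 1, n):
            val += w_col[i, j] * np.linalg.norm(C[:, i] - C[:, j])
            val += w_row[i, j] * np.linalg.norm(C[i, :] - C[j, :])
    return val
```

Because each term is a norm of a linear map of C, the whole penalty is convex, which is the property ABDR trades on relative to non-convex direct methods.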
3. Ren J, Zhang Z, Fan J, Zhang H, Xu M, Wang M. Robust and fast low-rank deep convolutional feature recovery: toward information retention and accelerated convergence. Knowledge and Information Systems 2022. [DOI: 10.1007/s10115-022-01795-1]
4. Kang Z, Lin Z, Zhu X, Xu W. Structured Graph Learning for Scalable Subspace Clustering: From Single View to Multiview. IEEE Transactions on Cybernetics 2022; 52:8976-8986. [PMID: 33729977] [DOI: 10.1109/tcyb.2021.3061660]
Abstract
Graph-based subspace clustering methods have exhibited promising performance. However, they still suffer from several drawbacks: expensive time overhead, failure to uncover explicit clusters, and inability to generalize to unseen data points. In this work, we propose a scalable graph learning framework that seeks to address these three challenges simultaneously. Specifically, it is based on the ideas of anchor points and bipartite graphs. Rather than building an n×n graph, where n is the number of samples, we construct a bipartite graph to depict the relationship between samples and anchor points. Meanwhile, a connectivity constraint is employed to ensure that the connected components directly indicate clusters. We further establish the connection between our method and k-means clustering. Moreover, a model to process multiview data is also proposed, which scales linearly with n. Extensive experiments demonstrate the efficiency and effectiveness of our approach relative to many state-of-the-art clustering methods.
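The anchor/bipartite-graph recipe outlined in the abstract is easy to sketch. The version below is a generic anchor-graph spectral embedding, assuming numpy and scikit-learn; it omits the paper's connectivity constraint, and `m` (anchors) and `k_nn` are free parameters:

```python
import numpy as np
from sklearn.cluster import KMeans

def anchor_bipartite_clustering(X, m=100, k_nn=5, n_clusters=10, seed=0):
    """Scalable clustering from an n-by-m sample/anchor bipartite graph
    (generic anchor-graph recipe, not the paper's exact solver)."""
    anchors = KMeans(n_clusters=m, n_init=4, random_state=seed).fit(X).cluster_centers_
    # Gaussian affinities to the k_nn nearest anchors only (sparse bipartite B).
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)   # (n, m)
    B = np.exp(-d2 / d2.mean())
    far = np.argsort(d2, axis=1)[:, k_nn:]                      # drop far anchors
    np.put_along_axis(B, far, 0.0, axis=1)
    B /= B.sum(axis=1, keepdims=True)                           # row-normalize
    # Spectral embedding of the bipartite graph via SVD of the degree-scaled B:
    # only an (n, m) factorization is needed, hence linear scaling in n.
    d_anchor = B.sum(axis=0)
    U, _, _ = np.linalg.svd(B / np.sqrt(d_anchor + 1e-12), full_matrices=False)
    emb = U[:, :n_clusters]
    return KMeans(n_clusters=n_clusters, n_init=4, random_state=seed).fit_predict(emb)
```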
5. Wang Y, Gao C, Zhou J. Geometrical structure preservation joint with self-expression maintenance for adaptive graph learning. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.06.045]
6. Zeng D, Wu Z, Ding C, Ren Z, Yang Q, Xie S. Labeled-Robust Regression: Simultaneous Data Recovery and Classification. IEEE Transactions on Cybernetics 2022; 52:5026-5039. [PMID: 33151887] [DOI: 10.1109/tcyb.2020.3026101]
Abstract
Rank minimization is widely used to extract low-dimensional subspaces. As a convex relaxation of rank minimization, nuclear norm minimization has been attracting widespread attention. However, standard nuclear norm minimization usually results in overcompression of data in all subspaces and eliminates the discriminative information between different categories of data. To overcome these drawbacks, in this article we introduce label information into the nuclear norm minimization problem and propose labeled-robust principal component analysis (L-RPCA) to realize nuclear norm minimization on multisubspace data. Compared with standard nuclear norm minimization, our method can effectively utilize the discriminant information in multisubspace rank minimization and avoid excessive elimination of local information and multisubspace characteristics of the data. Then, an effective labeled-robust regression (L-RR) method is proposed to simultaneously recover the data and labels of the observed data. Experiments on real datasets show that our proposed methods are superior to other state-of-the-art methods.
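L-RPCA's label-aware model is not reproduced here, but the unlabeled core it extends, robust PCA solved by inexact ALM with singular value thresholding, is standard. A minimal sketch, with the usual default λ = 1/√max(m, n) and a fixed penalty μ:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def rpca(X, lam=None, mu=1.0, n_iter=200):
    """Standard RPCA baseline (inexact ALM): X ~ L + S with L low-rank and
    S sparse. L-RPCA additionally injects label information; this sketch
    shows only the unlabeled nuclear-norm core."""
    m, n = X.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    L = np.zeros_like(X); S = np.zeros_like(X); Y = np.zeros_like(X)
    for _ in range(n_iter):
        L = svt(X - S + Y / mu, 1.0 / mu)              # low-rank step
        T = X - L + Y / mu                             # sparse step (soft threshold)
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        Y += mu * (X - L - S)                          # dual ascent on the residual
    return L, S
```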
7. Unsupervised robust discriminative subspace representation based on discriminative approximate isometric embedding. Neural Networks 2022; 155:287-307. [DOI: 10.1016/j.neunet.2022.06.003]
8. Peng Z, Liu H, Jia Y, Hou J. Adaptive Attribute and Structure Subspace Clustering Network. IEEE Transactions on Image Processing 2022; 31:3430-3439. [PMID: 35511850] [DOI: 10.1109/tip.2022.3171421]
Abstract
Deep self-expressiveness-based subspace clustering methods have demonstrated effectiveness. However, existing works consider only attribute information when conducting self-expressiveness, which limits clustering performance. In this paper, we propose a novel adaptive attribute and structure subspace clustering network (AASSC-Net) that simultaneously considers attribute and structure information in an adaptive graph fusion manner. Specifically, we first exploit an auto-encoder to represent input data samples with latent features for the construction of an attribute matrix. We also construct a mixed signed and symmetric structure matrix to capture the local geometric structure underlying the data samples. Then, we perform self-expressiveness on the constructed attribute and structure matrices to learn their affinity graphs separately. Finally, we design a novel attention-based fusion module to adaptively leverage these two affinity graphs in constructing a more discriminative affinity graph. Extensive experimental results on commonly used benchmark datasets demonstrate that AASSC-Net significantly outperforms state-of-the-art methods. In addition, we conduct comprehensive ablation studies to discuss the effectiveness of the designed modules. The code is publicly available at https://github.com/ZhihaoPENG-CityU/AASSC-Net.
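AASSC-Net learns its coefficients with a trained network; as a shallow, closed-form stand-in for the self-expressiveness step it performs on the attribute (and structure) matrices, assuming a ridge penalty in place of the learned module:

```python
import numpy as np

def self_expressive_affinity(Z, lam=0.1):
    """Solve min_C ||Z - Z C||_F^2 + lam ||C||_F^2 in closed form and
    symmetrize into an affinity graph. Z: (d, n) latent features with
    columns as samples; a simplification of the learned coefficient layer."""
    n = Z.shape[1]
    G = Z.T @ Z
    C = np.linalg.solve(G + lam * np.eye(n), G)   # (Z^T Z + lam I)^-1 Z^T Z
    np.fill_diagonal(C, 0.0)                      # discourage trivial self-loops
    return (np.abs(C) + np.abs(C.T)) / 2.0        # symmetric affinity for spectral clustering
```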
9. Gao W, Li X, Dai S, Yin X, Abhadiomhen SE. Recursive Sample Scaling Low-Rank Representation. Journal of Mathematics 2021; 2021:1-14. [DOI: 10.1155/2021/2999001]
Abstract
The low-rank representation (LRR) method has recently gained enormous popularity due to its robustness in solving the subspace segmentation problem, particularly for corrupted data. In this paper, the recursive sample scaling low-rank representation (RSS-LRR) method is proposed. The advantage of RSS-LRR over traditional LRR is that a cosine scaling factor is introduced, which imposes a penalty on each sample to better suppress the influence of noise and outliers. Specifically, the cosine scaling factor is a similarity measure learned to capture each sample's relationship with the principal components of the low-rank representation in the feature space. In other words, the smaller the angle between an individual data sample and the low-rank representation's principal components, the more likely it is that the data sample is clean. The proposed method can thus effectively obtain a good low-rank representation influenced mainly by clean data. Several experiments are performed with varying levels of corruption on ORL, CMU PIE, COIL20, COIL100, and LFW to evaluate RSS-LRR's effectiveness against state-of-the-art low-rank methods. The experimental results show that RSS-LRR consistently performs better than the compared methods in image clustering and classification tasks.
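The cosine scaling factor has a direct geometric reading: the angle between each sample and the principal subspace. A minimal sketch, using the top-r principal subspace from an SVD rather than RSS-LRR's recursively learned measure:

```python
import numpy as np

def cosine_scaling(X, r):
    """Cosine of the angle between each sample and the top-r principal
    subspace of X (columns are samples). Samples lying nearly inside the
    subspace (cosine near 1) are treated as clean; outliers score low.
    A sketch of the idea only, not the recursive RSS-LRR update."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    Ur = U[:, :r]                          # principal components
    proj = Ur @ (Ur.T @ X)                 # projection onto the subspace
    return np.linalg.norm(proj, axis=0) / (np.linalg.norm(X, axis=0) + 1e-12)
```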
Affiliation(s)
- Wenyun Gao: Nanjing LES Information Technology Co., LTD, Nanjing, China; College of Computer and Information, Hohai University, Nanjing 211100, China
- Xiaoyun Li: Nanjing LES Information Technology Co., LTD, Nanjing, China
- Sheng Dai: Nanjing LES Information Technology Co., LTD, Nanjing, China
- Xinghui Yin: College of Computer and Information, Hohai University, Nanjing 211100, China
10. Yang Z, Xu Q, Cao X, Huang Q. Task-Feature Collaborative Learning with Application to Personalized Attribute Prediction. IEEE Transactions on Pattern Analysis and Machine Intelligence 2021; 43:4094-4110. [PMID: 32356738] [DOI: 10.1109/tpami.2020.2991344]
Abstract
As an effective learning paradigm for coping with insufficient training samples, multi-task learning (MTL) encourages knowledge sharing across multiple related tasks so as to improve overall performance. In MTL, a major challenge is negative transfer: sharing knowledge with dissimilar and hard tasks often worsens performance. Although substantial research has addressed negative transfer, most existing methods model the transfer relationship only as task correlations, leaving transfer across features and tasks unconsidered. Different from existing methods, our goal is to alleviate negative transfer collaboratively across features and tasks. To this end, we propose a novel multi-task learning method called task-feature collaborative learning (TFCL). Specifically, we first propose a base model with a heterogeneous block-diagonal structure regularizer to leverage the collaborative grouping of features and tasks and suppress inter-group knowledge sharing. We then propose an optimization method for the model. Extensive theoretical analysis shows that our method (a) enjoys the global convergence property and (b) provides a block-diagonal structure recovery guarantee. As a practical extension, we extend the base model by allowing overlapping features and differentiating the hard tasks. We further apply it to the personalized attribute prediction problem with fine-grained modeling of user behaviors. Finally, experimental results on both simulated and real-world datasets demonstrate the effectiveness of our method.
11. Zhang C, Li H. Low-rank constrained weighted discriminative regression for multi-view feature learning. CAAI Transactions on Intelligence Technology 2021. [DOI: 10.1049/cit2.12018]
Affiliation(s)
- Chao Zhang: Department of Control and Systems Engineering, Nanjing University, Nanjing 210093, China
- Huaxiong Li: Department of Control and Systems Engineering, Nanjing University, Nanjing 210093, China
12. Sharma K, Rameshan R. Image Set Classification Using a Distance-Based Kernel Over Affine Grassmann Manifold. IEEE Transactions on Neural Networks and Learning Systems 2021; 32:1082-1095. [PMID: 32275625] [DOI: 10.1109/tnnls.2020.2980059]
Abstract
Modeling image sets or videos as linear subspaces is quite popular for classification problems in machine learning; however, affine subspace modeling has not been explored much. In this article, we address the image set classification problem by modeling image sets as affine subspaces. Affine subspaces are linear subspaces shifted from the origin by an offset. The collection of all affine subspaces of R^n of the same dimension is known as the affine Grassmann manifold (AGM), or affine Grassmannian, which is a smooth and noncompact manifold. The non-Euclidean geometry of the AGM and the nonunique representation of an affine subspace in the AGM make the classification task difficult. In this article, we propose a novel affine subspace-based kernel that maps points in the AGM to a finite-dimensional Hilbert space. For this, we embed the AGM in a higher-dimensional Grassmann manifold (GM) by embedding the offset vector in the Stiefel coordinates. The projection distance between two points in the AGM is the measure of similarity obtained by the kernel function. The resulting kernel Gram matrix is further diagonalized to generate low-dimensional features in Euclidean space corresponding to the points in the AGM. A distance-preserving constraint along with a sparsity constraint is used for minimum-residual-error classification while respecting the locally Euclidean structure of the AGM. Experiments over four datasets for gait, object, hand, and body gesture recognition show promising results compared with state-of-the-art techniques.
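One standard way to realize the embedding the abstract describes, folding the offset into Stiefel coordinates of a larger Grassmannian and comparing points with a projection kernel, is sketched below. The exact normalization used in the paper may differ:

```python
import numpy as np

def embed_affine(A, b):
    """Embed the affine subspace span(A) + b of R^n as a linear subspace
    of R^(n+1): stack the offset as an extra column with a unit last
    coordinate, then orthonormalize (Stiefel coordinates). Assumes A is
    (n, k) with full column rank."""
    n, k = A.shape
    M = np.zeros((n + 1, k + 1))
    M[:n, :k] = A
    M[:n, k] = b
    M[n, k] = 1.0
    Q, _ = np.linalg.qr(M)       # reduced QR gives an orthonormal basis
    return Q[:, : k + 1]

def projection_kernel(A1, b1, A2, b2):
    """Projection-distance kernel ||Y1^T Y2||_F^2 between the embedded
    affine subspaces; a standard positive-definite Grassmann kernel."""
    Y1, Y2 = embed_affine(A1, b1), embed_affine(A2, b2)
    return np.linalg.norm(Y1.T @ Y2, "fro") ** 2
```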
13. Hu Z, Nie F, Wang R, Li X. Low Rank Regularization: A review. Neural Networks 2020; 136:218-232. [PMID: 33246711] [DOI: 10.1016/j.neunet.2020.09.021]
Abstract
Low Rank Regularization (LRR), in essence, involves introducing a low-rank or approximately low-rank assumption on the target we aim to learn, and has achieved great success in many data analysis tasks. Over the last decade, much progress has been made in both theory and applications; nevertheless, these two lines of work rarely intersect. To build a bridge between practical applications and theoretical studies, in this paper we provide a comprehensive survey of LRR. Specifically, we first review recent advances in two issues that all LRR models face: (1) rank-norm relaxation, which seeks a relaxation to replace the rank minimization problem; and (2) model optimization, which seeks an efficient optimization algorithm to solve the relaxed LRR models. For the first issue, we provide a detailed summary of various relaxation functions and conclude that non-convex relaxations can alleviate the punishment bias problem suffered by convex relaxations. For the second issue, we summarize the representative optimization algorithms used in previous studies and analyze their advantages and disadvantages. As the main goal of this paper is to promote the application of non-convex relaxations, we conduct extensive experiments comparing different relaxation functions. The experimental results demonstrate that non-convex relaxations generally provide a large advantage over convex ones, an encouraging result for further improving existing LRR models.
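The survey's central contrast, convex nuclear relaxation versus non-convex relaxations that penalize large singular values less, shows up directly in the proximal step. A sketch with one common non-convex choice (a log-penalty-style reweighting; this is an illustrative pick, not the survey's only option):

```python
import numpy as np

def shrink_singular_values(M, tau, nonconvex=False, gamma=1.0):
    """Proximal-style singular value shrinkage. Convex (nuclear norm):
    subtract tau from every singular value, which over-penalizes the
    large, informative ones (the 'punishment bias'). Non-convex
    (log-penalty reweighting, one common choice): shrink a value less
    the larger it is, reducing that bias."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    if nonconvex:
        s_new = np.maximum(s - tau * gamma / (s + gamma), 0.0)
    else:
        s_new = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_new) @ Vt
```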
Affiliation(s)
- Zhanxuan Hu: School of Computer Science, Northwestern Polytechnical University, Xi'an 710072, Shaanxi, PR China; Center for OPTical IMagery Analysis and Learning (OPTIMAL), Northwestern Polytechnical University, Xi'an 710072, Shaanxi, PR China
- Feiping Nie: School of Computer Science, Northwestern Polytechnical University, Xi'an 710072, Shaanxi, PR China; Center for OPTical IMagery Analysis and Learning (OPTIMAL), Northwestern Polytechnical University, Xi'an 710072, Shaanxi, PR China
- Rong Wang: School of Cybersecurity, Northwestern Polytechnical University, Xi'an 710072, Shaanxi, PR China; Center for OPTical IMagery Analysis and Learning (OPTIMAL), Northwestern Polytechnical University, Xi'an 710072, Shaanxi, PR China
- Xuelong Li: School of Computer Science, Northwestern Polytechnical University, Xi'an 710072, Shaanxi, PR China; Center for OPTical IMagery Analysis and Learning (OPTIMAL), Northwestern Polytechnical University, Xi'an 710072, Shaanxi, PR China
14. Sun Y, Zhang Z, Jiang W, Zhang Z, Zhang L, Yan S, Wang M. Discriminative Local Sparse Representation by Robust Adaptive Dictionary Pair Learning. IEEE Transactions on Neural Networks and Learning Systems 2020; 31:4303-4317. [PMID: 31944998] [DOI: 10.1109/tnnls.2019.2954545]
Abstract
In this article, we propose a structured robust adaptive dictionary pair learning (RA-DPL) framework for discriminative sparse representation (SR) learning. To achieve powerful representation ability over the available samples, RA-DPL seamlessly integrates robust projective dictionary pair learning, locality-adaptive sparse representations, and discriminative coding coefficient learning into a unified framework. Specifically, RA-DPL improves existing projective DPL in four respects. First, it applies a sparse l2,1-norm-based metric to encode the reconstruction error and deliver robust projective dictionary pairs, as the l2,1-norm has the potential to minimize the error. Second, it imposes the robust l2,1-norm directly on the analysis dictionary to ensure sparse coding coefficients rather than using the costly l0/l1-norm; as such, the robustness of the data representation and the efficiency of the learning process are jointly considered. Third, RA-DPL conceives a structured reconstruction weight learning paradigm that adaptively preserves the local structures of the coding coefficients within each class, encouraging locality-preserving representations. Fourth, it improves the discriminating ability of both the coding coefficients and the dictionary by incorporating a discriminating function that ensures high intraclass compactness and interclass separation in the code space. Extensive experiments show that RA-DPL obtains superior performance over other state-of-the-art methods.
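The l2,1-norm and its proximal operator, which drive the robustness and row-sparsity claims above, are compact enough to sketch:

```python
import numpy as np

def l21_norm(M):
    """l2,1 norm: sum of l2 norms of the rows; favors row-sparse M."""
    return np.linalg.norm(M, axis=1).sum()

def prox_l21(M, tau):
    """Proximal operator of tau * ||.||_2,1: each row shrinks toward zero
    as a group and vanishes entirely when its norm falls below tau,
    the mechanism behind robust, row-sparse codes."""
    norms = np.linalg.norm(M, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return M * scale
```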
15. Mutual-manifold regularized robust fast latent LRR for subspace recovery and learning. Neural Computing and Applications 2020. [DOI: 10.1007/s00521-019-04688-7]
16. Peng Y, Zhang L, Kong W, Qin F, Zhang J. Joint low-rank representation and spectral regression for robust subspace learning. Knowledge-Based Systems 2020. [DOI: 10.1016/j.knosys.2020.105723]
17. Liang Y, Ren Z, Wu Z, Zeng D, Li J. Scalable spectral ensemble clustering via building representative co-association matrix. Neurocomputing 2020; 390:158-167. [DOI: 10.1016/j.neucom.2020.01.055]
18. Hu W, Li S, Zheng W, Lu Y, Yu G. Robust sequential subspace clustering via ℓ1-norm temporal graph. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.12.019]