1.
Zhang D, Huang H, Zhao Q, Zhou G. Generalized latent multi-view clustering with tensorized bipartite graph. Neural Netw 2024;175:106282. PMID: 38599137. DOI: 10.1016/j.neunet.2024.106282.
Abstract
Tensor-based multi-view spectral clustering algorithms use tensors to model the structure of multi-dimensional data to take advantage of the complementary information and high-order correlations embedded in the graph, thus achieving impressive clustering performance. However, these algorithms use linear models to obtain consensus, which prevents the learned consensus from adequately representing the nonlinear structure of complex data. In order to address this issue, we propose a method called Generalized Latent Multi-View Clustering with Tensorized Bipartite Graph (GLMC-TBG). Specifically, in this paper we introduce neural networks to learn highly nonlinear mappings that encode nonlinear structures in graphs into latent representations. In addition, multiple views share the same latent consensus through nonlinear interactions. In this way, a more comprehensive common representation from multiple views can be achieved. An Augmented Lagrangian Multiplier with Alternating Direction Minimization (ALM-ADM) framework is designed to optimize the model. Experiments on seven real-world data sets verify that the proposed algorithm is superior to state-of-the-art algorithms.
Affiliation(s)
- Dongping Zhang
- School of Automation, Guangdong University of Technology, Guangzhou 510006, China; Guangdong Key Laboratory of IoT Information Technology, Guangdong University of Technology, Guangzhou 510006, China.
- Haonan Huang
- School of Automation, Guangdong University of Technology, Guangzhou 510006, China; Key Laboratory of Intelligent Information Processing and System Integration of IoT, Ministry of Education, Guangzhou 510006, China; Guangdong-HongKong-Macao Joint Laboratory for Smart Discrete Manufacturing, Guangzhou 510006, China.
- Qibin Zhao
- School of Automation, Guangdong University of Technology, Guangzhou 510006, China; Center for Advanced Intelligence Project (AIP), RIKEN, Tokyo 103-0027, Japan.
- Guoxu Zhou
- School of Automation, Guangdong University of Technology, Guangzhou 510006, China; Key Laboratory of Intelligent Detection and The Internet of Things in Manufacturing, Ministry of Education, Guangzhou 510006, China.
2.
Zeng J, Zhou G, Qiu Y, Li C, Zhao Q. Bayesian tensor network structure search and its application to tensor completion. Neural Netw 2024;175:106290. PMID: 38626616. DOI: 10.1016/j.neunet.2024.106290.
Abstract
Tensor network (TN) has demonstrated remarkable efficacy in the compact representation of high-order data. In contrast to TN methods with pre-determined structures, the recently introduced tensor network structure search (TNSS) methods automatically learn a compact TN structure from the data, gaining increasing attention. Nonetheless, TNSS requires time-consuming manual adjustments of the penalty parameters that control the model complexity to achieve better performance, especially in the presence of missing or noisy data. To provide an effective solution to this problem, in this paper, we propose a parameter-tuning-free TNSS algorithm based on Bayesian modeling, aiming to conduct TNSS in a fully data-driven manner. Specifically, the uncertainty in the data corruption is well incorporated in the prior setting of the probabilistic model. For TN structure determination, we reframe it as a rank learning problem of the fully-connected tensor network (FCTN), integrating the generalized inverse Gaussian (GIG) distribution for low-rank promotion. To eliminate the need for hyperparameter tuning, we adopt a fully Bayesian approach and propose an efficient Markov chain Monte Carlo (MCMC) algorithm for posterior distribution sampling. Compared with the previous TNSS method, experimental results demonstrate that the proposed algorithm can effectively and efficiently find the latent TN structures of the data under various missing-data and noise conditions and achieves the best recovery results. Furthermore, our method exhibits superior performance in tensor completion with real-world data compared to other state-of-the-art tensor-decomposition-based completion methods.
Affiliation(s)
- Junhua Zeng
- School of Automation, Guangdong University of Technology, Guangzhou, 510006, China; Center for Advanced Intelligence Project (AIP), RIKEN, Tokyo, 103-0027, Japan; Key Laboratory of Intelligent Information Processing and System Integration of IoT, Ministry of Education, Guangzhou, 510006, China.
- Guoxu Zhou
- School of Automation, Guangdong University of Technology, Guangzhou, 510006, China; Key Laboratory of Intelligent Detection and the Internet of Things in Manufacturing, Ministry of Education, Guangzhou, 510006, China.
- Yuning Qiu
- School of Automation, Guangdong University of Technology, Guangzhou, 510006, China; Center for Advanced Intelligence Project (AIP), RIKEN, Tokyo, 103-0027, Japan.
- Chao Li
- Center for Advanced Intelligence Project (AIP), RIKEN, Tokyo, 103-0027, Japan.
- Qibin Zhao
- School of Automation, Guangdong University of Technology, Guangzhou, 510006, China; Center for Advanced Intelligence Project (AIP), RIKEN, Tokyo, 103-0027, Japan.
3.
Huang H, Zhou G, Zhao Q, He L, Xie S. Comprehensive Multiview Representation Learning via Deep Autoencoder-Like Nonnegative Matrix Factorization. IEEE Transactions on Neural Networks and Learning Systems 2024;35:5953-5967. PMID: 37672378. DOI: 10.1109/tnnls.2023.3304626.
Abstract
Learning a comprehensive representation from multiview data is crucial in many real-world applications. Multiview representation learning (MRL) based on nonnegative matrix factorization (NMF) has been widely adopted, projecting data from a high-dimensional space into a lower-dimensional one with great interpretability. However, most prior NMF-based MRL techniques are shallow models that ignore hierarchical information. Although deep matrix factorization (DMF)-based methods have been proposed recently, most of them only focus on the consistency of multiple views and have cumbersome clustering steps. To address the above issues, in this article, we propose a novel model termed deep autoencoder-like NMF for MRL (DANMF-MRL), which obtains the representation matrix through the deep encoding stage and decodes it back to the original data. In this way, through a DANMF-based framework, we can simultaneously consider the multiview consistency and complementarity, allowing for a more comprehensive representation. We further propose a one-step DANMF-MRL, which learns the latent representation and final clustering label matrix in a unified framework. In this approach, the two steps can negotiate with each other to fully exploit the latent clustering structure, avoid previous tedious clustering steps, and achieve optimal clustering performance. Furthermore, two efficient iterative optimization algorithms are developed to solve the proposed models, both with theoretical convergence analysis. Extensive experiments on five benchmark datasets demonstrate the superiority of our approaches against other state-of-the-art MRL methods.
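As background for the NMF machinery such deep models build on, the following is a minimal sketch of plain NMF with the classical Lee-Seung multiplicative updates. It is illustrative only, not the DANMF-MRL algorithm; all names are our own.

```python
import numpy as np

def nmf(X, k, n_iter=200, eps=1e-9, seed=0):
    """Plain NMF via Lee-Seung multiplicative updates: X ~= W @ H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(n_iter):
        # multiplicative updates preserve nonnegativity by construction
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# toy nonnegative data of exact rank 3: the relative error should become small
rng = np.random.default_rng(1)
X = rng.random((20, 3)) @ rng.random((3, 15))
W, H = nmf(X, k=3)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

Deep/autoencoder-like variants stack several such factorizations, but the nonnegativity-preserving update pattern is the same.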
4.
Liao Q, Liu Q, Razak FA. Hypergraph regularized nonnegative triple decomposition for multiway data analysis. Sci Rep 2024;14:9098. PMID: 38643209. PMCID: PMC11032410. DOI: 10.1038/s41598-024-59300-3.
Abstract
Tucker decomposition is widely used for image representation, data reconstruction, and machine learning tasks, but the calculation cost of updating the Tucker core is high. The bilevel form of triple decomposition (TriD) overcomes this issue by decomposing the Tucker core into three low-dimensional third-order factor tensors and plays an important role in the dimension reduction of data representation. TriD, however, is incapable of precisely encoding similarity relationships for tensor data with a complex manifold structure. To address this shortcoming, we take advantage of hypergraph learning and propose a novel hypergraph-regularized nonnegative triple decomposition for multiway data analysis that employs the hypergraph to model the complex relationships among the raw data. Furthermore, we develop a multiplicative update algorithm to solve our optimization problem and theoretically prove its convergence. Finally, we perform extensive numerical tests on six real-world datasets, and the results show that our proposed algorithm outperforms some state-of-the-art methods.
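For context, the Tucker decomposition whose core-update cost motivates TriD can be sketched with a truncated higher-order SVD (HOSVD). This is a generic illustration under our own naming, not the authors' method.

```python
import numpy as np

def unfold(X, mode):
    """Mode-k unfolding: mode k indexes the rows, remaining modes the columns."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def hosvd(X, ranks):
    """Truncated HOSVD: X ~= core x_1 U1 x_2 U2 x_3 U3 (Tucker format)."""
    U = [np.linalg.svd(unfold(X, m), full_matrices=False)[0][:, :r]
         for m, r in enumerate(ranks)]
    core = X
    for m, Um in enumerate(U):
        # contract mode m of the running core with Um^T, keep mode order
        core = np.moveaxis(np.tensordot(Um.T, core, axes=(1, m)), 0, m)
    return core, U

# a tensor of exact multilinear rank (2, 3, 4): reconstruction is then exact
rng = np.random.default_rng(0)
G = rng.normal(size=(2, 3, 4))
A, B, C = rng.normal(size=(8, 2)), rng.normal(size=(9, 3)), rng.normal(size=(10, 4))
X = np.einsum('abc,ia,jb,kc->ijk', G, A, B, C)
core, U = hosvd(X, (2, 3, 4))
Xhat = np.einsum('abc,ia,jb,kc->ijk', core, U[0], U[1], U[2])
```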
Affiliation(s)
- Qingshui Liao
- Department of Mathematical Sciences, Faculty of Science & Technology, Universiti Kebangsaan Malaysia, 43600, Bangi, Selangor, Malaysia.
- School of Mathematical Sciences, Guizhou Normal University, Guiyang, 550025, People's Republic of China.
- Qilong Liu
- School of Mathematical Sciences, Guizhou Normal University, Guiyang, 550025, People's Republic of China.
- Fatimah Abdul Razak
- Department of Mathematical Sciences, Faculty of Science & Technology, Universiti Kebangsaan Malaysia, 43600, Bangi, Selangor, Malaysia.
5.
Zeng J, Qiu Y, Ma Y, Wang A, Zhao Q. A Novel Tensor Ring Sparsity Measurement for Image Completion. Entropy (Basel, Switzerland) 2024;26:105. PMID: 38392360. PMCID: PMC10887661. DOI: 10.3390/e26020105.
Abstract
As a promising data analysis technique, sparse modeling has gained widespread traction in the field of image processing, particularly for image recovery. The matrix rank, serving as a measure of data sparsity, quantifies the sparsity within the Kronecker basis representation of a given piece of data in the matrix format. Nevertheless, in practical scenarios, much of the data are intrinsically multi-dimensional, and thus, using a matrix format for data representation will inevitably yield sub-optimal outcomes. Tensor decomposition (TD), as a high-order generalization of matrix decomposition, has been widely used to analyze multi-dimensional data. As a direct generalization of the matrix rank, low-rank tensor modeling has been developed for multi-dimensional data analysis and achieved great success. Despite its efficacy, the connection between TD rank and the sparsity of the tensor data is not direct. In this work, we introduce a novel tensor ring sparsity measurement (TRSM) for measuring the sparsity of the tensor. This metric relies on the tensor ring (TR) Kronecker basis representation of the tensor, providing a unified interpretation akin to matrix sparsity measurements, wherein the Kronecker basis serves as the foundational representation component. Moreover, TRSM can be efficiently computed by the product of the ranks of the mode-2 unfolded TR-cores. To enhance the practical performance of TRSM, the folded-concave penalty of the minimax concave penalty is introduced as a nonconvex relaxation. Lastly, we extend the TRSM to the tensor completion problem and use the alternating direction method of multipliers (ADMM) scheme to solve it. Experiments on image and video data completion demonstrate the effectiveness of the proposed method.
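Taking the abstract's description at face value, the measurement can be sketched as the product of the ranks of the mode-2 unfoldings of the TR-cores. The core shapes and helper below are our own assumptions, not the authors' implementation.

```python
import numpy as np

def trsm(cores):
    """Sketch of the TR sparsity measurement: product of the ranks of the
    mode-2 unfolded TR-cores (as described in the abstract). Each core has
    shape (r_prev, n, r_next); its mode-2 unfolding is n x (r_prev * r_next)."""
    ranks = [np.linalg.matrix_rank(np.moveaxis(G, 1, 0).reshape(G.shape[1], -1))
             for G in cores]
    return int(np.prod(ranks)), ranks

# toy TR cores with ring ranks (2, 3, 2)
rng = np.random.default_rng(0)
cores = [rng.normal(size=(2, 5, 3)),
         rng.normal(size=(3, 6, 2)),
         rng.normal(size=(2, 7, 2))]
value, ranks = trsm(cores)
```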
Affiliation(s)
- Junhua Zeng
- School of Automation, Guangdong University of Technology, Guangzhou 510006, China
- RIKEN Center for Advanced Intelligence Project (AIP), Tokyo 103-0027, Japan
- Yuning Qiu
- School of Automation, Guangdong University of Technology, Guangzhou 510006, China
- RIKEN Center for Advanced Intelligence Project (AIP), Tokyo 103-0027, Japan
- Yumeng Ma
- School of Automation, Guangdong University of Technology, Guangzhou 510006, China
- Andong Wang
- RIKEN Center for Advanced Intelligence Project (AIP), Tokyo 103-0027, Japan
- Qibin Zhao
- School of Automation, Guangdong University of Technology, Guangzhou 510006, China
- RIKEN Center for Advanced Intelligence Project (AIP), Tokyo 103-0027, Japan
6.
Yuan Y, Luo X, Shang M, Wang Z. A Kalman-Filter-Incorporated Latent Factor Analysis Model for Temporally Dynamic Sparse Data. IEEE Transactions on Cybernetics 2023;53:5788-5801. PMID: 35877802. DOI: 10.1109/tcyb.2022.3185117.
Abstract
With the rapid development of services computing in the past decade, Quality-of-Service (QoS)-aware selection of Web services has become a hot yet thorny issue. Conducting warming-up tests on a large set of candidate services for QoS evaluation is time-consuming and expensive, making it vital to implement accurate QoS-estimators. Existing QoS-estimators barely consider the temporal patterns hidden in QoS data. However, such data are naturally time dependent. To address this critical issue, this study presents a Kalman-filter-incorporated latent factor analysis (KLFA)-based QoS-estimator for accurate representation of temporally dynamic QoS data. Its main idea is to make the user latent features (LFs) time-dependent while keeping the service ones time-consistent. A novel iterative training scheme is designed, where the user LFs are learned through a Kalman filter for precisely modeling the temporal patterns, and the service ones are alternatively trained via an alternating least squares algorithm for precisely representing the historical QoS data. Empirical studies on large-scale and real Web service QoS datasets demonstrate that the proposed KLFA model significantly outperforms state-of-the-art QoS-estimators in estimation accuracy for dynamic QoS data.
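The Kalman recursion at the heart of such a scheme is the standard predict/update cycle. Below is a minimal scalar sketch (a random-walk state observed in noise), purely illustrative and not the KLFA model itself.

```python
import numpy as np

def kalman_1d(ys, q=1e-4, r=0.1, x0=0.0, p0=1.0):
    """Standard scalar Kalman filter with a random-walk state model."""
    x, p, out = x0, p0, []
    for y in ys:
        p = p + q                # predict: covariance grows by process noise q
        k = p / (p + r)          # Kalman gain from predicted covariance and noise r
        x = x + k * (y - x)      # update with the measurement innovation
        p = (1 - k) * p          # posterior covariance shrinks after the update
        out.append(x)
    return np.array(out)

# noisy measurements of a constant signal: estimates should settle near 1.0
rng = np.random.default_rng(0)
ys = 1.0 + 0.3 * rng.normal(size=200)
est = kalman_1d(ys)
```

In the KLFA setting the scalar state is replaced by a user latent-feature vector, with the service factors acting as the (alternately re-estimated) measurement matrix.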
7.
Yu Y, Zhou G, Zheng N, Qiu Y, Xie S, Zhao Q. Graph-Regularized Non-Negative Tensor-Ring Decomposition for Multiway Representation Learning. IEEE Transactions on Cybernetics 2023;53:3114-3127. PMID: 35468067. DOI: 10.1109/tcyb.2022.3157133.
Abstract
Tensor-ring (TR) decomposition is a powerful tool for exploiting the low-rank property of multiway data and has demonstrated great potential in a variety of important applications. In this article, non-negative TR (NTR) decomposition and graph-regularized NTR (GNTR) decomposition are proposed. The former equips TR decomposition with the ability to learn the parts-based representation by imposing non-negativity on the core tensors, and the latter additionally introduces a graph regularization to the NTR model to capture manifold geometry information from tensor data. Both of the proposed models extend TR decomposition and can serve as powerful representation learning tools for non-negative multiway data. The optimization algorithms based on an accelerated proximal gradient are derived for NTR and GNTR. We also empirically justify that the proposed methods can provide more interpretable and physically meaningful representations. For example, they are able to extract parts-based components with meaningful color and line patterns from objects. Extensive experimental results demonstrate that the proposed methods have better performance than state-of-the-art tensor-based methods in clustering and classification tasks.
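A TR decomposition represents a tensor by a ring of third-order cores. The sketch below reconstructs the full tensor from given cores; this is generic TR algebra under our own naming, not the NTR/GNTR algorithms.

```python
import numpy as np

def tr_to_full(cores):
    """Contract a chain of TR-cores (each of shape r_k x n_k x r_{k+1})
    and close the ring by tracing the first and last rank indices."""
    full = cores[0]
    for G in cores[1:]:
        full = np.tensordot(full, G, axes=(-1, 0))  # chain the rank indices
    return np.trace(full, axis1=0, axis2=-1)        # trace closes the ring

# toy cores with ring ranks (2, 3, 4); the full tensor has shape (4, 5, 6)
rng = np.random.default_rng(0)
cores = [rng.normal(size=(2, 4, 3)),
         rng.normal(size=(3, 5, 4)),
         rng.normal(size=(4, 6, 2))]
X = tr_to_full(cores)
# direct contraction of the same ring, for comparison
Y = np.einsum('aib,bjc,cka->ijk', *cores)
```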
8.
Qiu Y, Zhou G, Zeng J, Zhao Q, Xie S. Imbalanced low-rank tensor completion via latent matrix factorization. Neural Netw 2022;155:369-382. PMID: 36115163. DOI: 10.1016/j.neunet.2022.08.023.
Abstract
Tensor completion has been widely used in computer vision and machine learning. Most existing tensor completion methods empirically assume that the intrinsic tensor is simultaneously low-rank over all modes. However, tensor data recorded from real-world applications may conflict with this assumption, e.g., face images taken from different subjects often lie in a union of low-rank subspaces, which may result in a quite high rank or even full rank structure in the sample mode. To this end, in this paper, we propose an imbalanced low-rank tensor completion method, which can flexibly estimate the low-rank incomplete tensor by decomposing it into a mixture of multiple latent tensor ring (TR) rank components. Specifically, each latent component is approximated using low-rank matrix factorization based on the TR unfolding matrix. In addition, an effective proximal alternating minimization algorithm is developed and theoretically proved to maintain the global convergence property, that is, the whole sequence of iterates is convergent and converges to a critical point. Extensive experiments on both synthetic and real-world tensor data demonstrate that the proposed method achieves more favorable completion results with less computational cost when compared to the state-of-the-art tensor completion methods.
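The mode-wise rank imbalance described above can be made concrete with tensor unfoldings. The toy tensor below, of multilinear rank (2, 3, 4), is our own illustration: its three unfoldings have different ranks, which is the kind of imbalance a mode-wise treatment exploits.

```python
import numpy as np

def unfold(X, mode):
    """Mode-k unfolding: mode k indexes the rows, remaining modes the columns."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

# build a tensor whose mode-wise unfolding ranks differ: (2, 3, 4)
rng = np.random.default_rng(0)
G = rng.normal(size=(2, 3, 4))
X = np.einsum('abc,ia,jb,kc->ijk', G,
              rng.normal(size=(16, 2)),   # mode 0 (e.g., the sample mode)
              rng.normal(size=(9, 3)),
              rng.normal(size=(10, 4)))
ranks = [np.linalg.matrix_rank(unfold(X, m)) for m in range(3)]
```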
Affiliation(s)
- Yuning Qiu
- School of Automation, Guangdong University of Technology, Guangzhou, 510006, China; Guangdong-Hong Kong-Macao Joint Laboratory for Smart Discrete Manufacturing and the School of Automation, Guangzhou 510006, China.
- Guoxu Zhou
- School of Automation, Guangdong University of Technology, Guangzhou, 510006, China; Key Laboratory of Intelligent Detection and The Internet of Things in Manufacturing, Ministry of Education, Guangzhou 510006, China.
- Junhua Zeng
- School of Automation, Guangdong University of Technology, Guangzhou, 510006, China; Guangdong-Hong Kong-Macao Joint Laboratory for Smart Discrete Manufacturing and the School of Automation, Guangzhou 510006, China.
- Qibin Zhao
- Tensor Learning Team, RIKEN Center for Advanced Intelligence Project (AIP), Japan; School of Automation, Guangdong University of Technology, Guangzhou, 510006, China.
- Shengli Xie
- School of Automation, Guangdong University of Technology, Guangzhou, 510006, China; Guangdong-Hong Kong-Macao Joint Laboratory for Smart Discrete Manufacturing and the School of Automation, Guangzhou 510006, China.
9.
Yu Y, Zhou G, Huang H, Xie S, Zhao Q. A semi-supervised label-driven auto-weighted strategy for multi-view data classification. Knowl Based Syst 2022. DOI: 10.1016/j.knosys.2022.109694.
10.
Qiu Y, Zhou G, Zhao Q, Xie S. Noisy Tensor Completion via Low-Rank Tensor Ring. IEEE Transactions on Neural Networks and Learning Systems 2022;PP:1127-1141. PMID: 35714084. DOI: 10.1109/tnnls.2022.3181378.
Abstract
Tensor completion is a fundamental tool for incomplete data analysis, where the goal is to predict missing entries from partial observations. However, existing methods often make the explicit or implicit assumption that the observed entries are noise-free to provide a theoretical guarantee of exact recovery of missing entries, which is quite restrictive in practice. To remedy this drawback, this article proposes a novel noisy tensor completion model, which addresses the inability of existing works to handle the degeneration of high-order and noisy observations. Specifically, the tensor ring nuclear norm (TRNN) and least-squares estimator are adopted to regularize the underlying tensor and the observed entries, respectively. In addition, a nonasymptotic upper bound on the estimation error is provided to depict the statistical performance of the proposed estimator. Two efficient algorithms are developed to solve the optimization problem with convergence guarantee, one of which is specially tailored to handle large-scale tensors by replacing the minimization of the TRNN of the original tensor equivalently with that of a much smaller one in a heterogeneous tensor decomposition framework. Experimental results on both synthetic and real-world data demonstrate the effectiveness and efficiency of the proposed model in recovering noisy incomplete tensor data compared with state-of-the-art tensor completion models.
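Nuclear-norm-regularized subproblems of this kind are commonly handled with singular-value thresholding, the proximal operator of the nuclear norm. The following is a minimal matrix-case sketch, not the paper's TRNN algorithm.

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: shrink each singular value by tau and
    clip at zero -- the proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# on a diagonal matrix the singular values are just the diagonal entries,
# so thresholding at tau = 1 shrinks 5 -> 4, 2 -> 1, and zeroes out 0.5
M = np.diag([5.0, 2.0, 0.5])
S = svt(M, 1.0)
```

Tensor-ring variants apply the same shrinkage to suitable unfoldings of the tensor inside an alternating scheme.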
11.
Incremental Nonnegative Tucker Decomposition with Block-Coordinate Descent and Recursive Approaches. Symmetry (Basel) 2022. DOI: 10.3390/sym14010113.
Abstract
Nonnegative Tucker decomposition (NTD) is a robust method used for nonnegative multilinear feature extraction from nonnegative multi-way arrays. The standard version of NTD assumes that all of the observed data are accessible for batch processing. However, the data in many real-world applications are not static or are represented by a large number of multi-way samples that cannot be processed in one batch. To tackle this problem, a dynamic approach to NTD can be explored. In this study, we extend the standard model of NTD to an incremental or online version, assuming volatility of observed multi-way data along one mode. We propose two computational approaches for updating the factors in the incremental model: one is based on the recursive update model, and the other uses the concept of the block Kaczmarz method, which belongs to the coordinate descent methods. The experimental results performed on various datasets and streaming data demonstrate the high efficiency of both algorithmic approaches relative to the baseline NTD methods.
12.
Huang H, Ma Z, Zhang G. Dimensionality reduction of tensors based on manifold-regularized tucker decomposition and its iterative solution. Int J Mach Learn Cyb 2021. DOI: 10.1007/s13042-021-01422-5.
13.
Huang H, Ma Z, Zhang G, Wu H. Dimensionality reduction based on multi-local linear regression and global subspace projection distance minimum. Pattern Anal Appl 2021. DOI: 10.1007/s10044-021-01022-7.
14.
Huang Z, Qiu Y, Sun W. Recognition of motor imagery EEG patterns based on common feature analysis. Brain-Computer Interfaces 2020. DOI: 10.1080/2326263x.2020.1783170.
Affiliation(s)
- Zhenhao Huang
- School of Automation, Guangdong University of Technology, Guangzhou, China
- Guangdong-Hong Kong-Macao Joint Laboratory for Smart Manufacturing, Guangzhou, China
- Yichun Qiu
- School of Automation, Guangdong University of Technology, Guangzhou, China
- Key Laboratory of Intelligent Detection and the Internet of Things in Manufacturing, Ministry of Education, Guangzhou, China
- Weijun Sun
- School of Automation, Guangdong University of Technology, Guangzhou, China
- Guangdong Key Laboratory of IoT Information Technology, Guangzhou, China