1
Qiu Y, Zhou G, Li C, Mandic D, Zhao Q. Tensor ring rank determination using odd-dimensional unfolding. Neural Netw 2025; 183:106947. PMID: 39637827. DOI: 10.1016/j.neunet.2024.106947.
Abstract
While tensor ring (TR) decomposition methods have been extensively studied, the determination of TR-ranks remains a challenging problem, with existing methods being typically sensitive to the determination of the starting rank (i.e., the first rank to be optimized). Moreover, current methods often fail to adaptively determine TR-ranks in the presence of noisy and incomplete data, and exhibit computational inefficiencies when handling high-dimensional data. To address these issues, we propose an odd-dimensional unfolding method for the effective determination of TR-ranks. This is achieved by leveraging the symmetry of the TR model and the bound rank relationship in TR decomposition. In addition, we employ the singular value thresholding algorithm to facilitate the adaptive determination of TR-ranks and use randomized sketching techniques to enhance the efficiency and scalability of the method. Extensive experimental results in rank identification, data denoising, and completion demonstrate the potential of our method for a broad range of applications.
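For intuition, the rank-counting step can be illustrated with a minimal numpy sketch: unfold the tensor along a subset of modes and count the singular values that survive a relative threshold, a crude stand-in for singular value thresholding. The function name, the unfolding choice, and the threshold rule are illustrative assumptions and not the authors' algorithm, which additionally exploits TR symmetry and randomized sketching.

```python
import numpy as np

def rank_from_unfolding(tensor, modes, tau=0.05):
    """Toy rank estimate: unfold `tensor` along `modes` and count the
    singular values above a relative threshold tau (illustrative only)."""
    rest = [m for m in range(tensor.ndim) if m not in modes]
    rows = int(np.prod([tensor.shape[m] for m in modes]))
    mat = np.transpose(tensor, modes + rest).reshape(rows, -1)
    s = np.linalg.svd(mat, compute_uv=False)
    return int((s > tau * s[0]).sum())

# Example: a 4th-order tensor with a planted rank-3 structure along modes (0, 1).
rng = np.random.default_rng(0)
T = (rng.standard_normal((8, 8, 3)) @ rng.standard_normal((3, 64))).reshape(8, 8, 8, 8)
print(rank_from_unfolding(T, modes=[0, 1]))   # prints 3 (the planted rank)
```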
Affiliation(s)
- Yichun Qiu
- School of Automation, Guangdong University of Technology, Guangzhou 510006, China; Center for Advanced Intelligence Project (AIP), RIKEN, Tokyo 103-0027, Japan.
- Guoxu Zhou
- School of Automation, Guangdong University of Technology, Guangzhou 510006, China.
- Chao Li
- Center for Advanced Intelligence Project (AIP), RIKEN, Tokyo 103-0027, Japan; Faculty of Health Data Science, Juntendo University, Tokyo 113-8421, Japan.
- Danilo Mandic
- Department of Electrical and Electronic Engineering, Imperial College London, London SW7 2BT, United Kingdom.
- Qibin Zhao
- School of Automation, Guangdong University of Technology, Guangzhou 510006, China; Center for Advanced Intelligence Project (AIP), RIKEN, Tokyo 103-0027, Japan.
2
Qiu Y, Zhou G, Wang A, Zhao Q, Xie S. Balanced Unfolding Induced Tensor Nuclear Norms for High-Order Tensor Completion. IEEE Trans Neural Netw Learn Syst 2025; 36:4724-4737. PMID: 38656849. DOI: 10.1109/tnnls.2024.3373384.
Abstract
The recently proposed tensor tubal rank has achieved extraordinary success in real-world tensor data completion. However, existing works usually fix the transform orientation along the third mode and may fail to take multidimensional low-tubal-rank structure into account. To alleviate these bottlenecks, we introduce two unfolding-induced tensor nuclear norms (TNNs) for the tensor completion (TC) problem, which naturally extend the tensor tubal rank to high-order data. Specifically, we show how multidimensional low-tubal-rank structure can be captured by a novel balanced unfolding strategy, upon which two TNNs, namely the overlapped TNN (OTNN) and the latent TNN (LTNN), are developed. We also establish the direct relationship between the tubal rank of the unfolded tensor and existing tensor network (TN) ranks, e.g., the CANDECOMP/PARAFAC (CP) rank, Tucker rank, and tensor ring (TR) rank, to demonstrate its efficiency and practicality. Two efficient TC models are then proposed with theoretical guarantees obtained by analyzing a unified nonasymptotic upper bound. To solve the optimization problems, we develop two alternating direction method of multipliers (ADMM)-based algorithms. Experiments on synthetic and real-world tensors, including facial images, light field images, and video sequences, demonstrate the superior performance of the proposed models.
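As background, the quantity being generalized here is the standard third-order tensor nuclear norm under the t-SVD framework: FFT along the third mode, then average the nuclear norms of the frontal slices. The sketch below shows only this classical definition; the paper's balanced unfolding of higher-order tensors is not reproduced, and the function name is an illustrative assumption.

```python
import numpy as np

def tubal_nuclear_norm(X):
    """Standard third-order TNN under the t-SVD framework: FFT along mode 3,
    then average the nuclear norms of the frontal slices in the Fourier domain."""
    Xf = np.fft.fft(X, axis=2)
    n3 = X.shape[2]
    return sum(np.linalg.svd(Xf[:, :, k], compute_uv=False).sum()
               for k in range(n3)) / n3
```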
3
Zheng Y, Zhou G, Huang H, Luo X, Huang Z, Zhao Q. Unifying complete and incomplete multi-view clustering through an information-theoretic generative model. Neural Netw 2025; 182:106901. PMID: 39608146. DOI: 10.1016/j.neunet.2024.106901.
Abstract
Recently, Incomplete Multi-View Clustering (IMVC) has become a rapidly growing research topic, driven by the prevalent issue of incomplete data in real-world applications. Although many approaches have been proposed to address this challenge, most do not provide a clear explanation of the learning process for recovery. Moreover, most of them consider only the inter-view relationships, without taking into account the relationships between samples. The influence of irrelevant information is also usually ignored, which prevents them from achieving optimal performance. To tackle these issues, we aim at unifying compLete and incOmplete multi-view clusterinG through an Information-theoretiC generative model (LOGIC). Specifically, we define three principles based on information theory: comprehensiveness, consensus, and compressibility. We first explain that the essence of learning to recover missing views is to maximize the mutual information between the common representation and the data from each view. Second, we leverage the consensus principle to maximize the mutual information between view distributions to uncover the associations between different samples. Finally, guided by the principle of compressibility, we remove as much task-irrelevant information as possible to ensure that the common representation effectively extracts semantic information. Furthermore, the model can serve as a plug-and-play missing-data recovery module for multi-view clustering models. Through extensive empirical studies, we demonstrate the effectiveness of our approach in generating missing views. In clustering tasks, our method consistently outperforms state-of-the-art (SOTA) techniques in terms of accuracy, normalized mutual information, and purity, showcasing its superiority in both recovery and clustering performance.
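The abstract describes three information-theoretic principles rather than a concrete algorithm, but a toy objective can illustrate how such terms might be combined. Everything below (the function name, the use of mean-squared error as a proxy for mutual information, the energy penalty standing in for compressibility) is an illustrative assumption, not the LOGIC model.

```python
import numpy as np

def logic_style_objective(z_common, view_recons, views, view_feats, beta=1e-3):
    """Toy objective loosely following the abstract's three principles.
    comprehensiveness: reconstruct every view from the common representation
                       (a common proxy for maximizing mutual information),
    consensus:         align the per-view latent features with one another,
    compressibility:   penalize the energy of the common representation
                       (a crude stand-in for an information-bottleneck term)."""
    mse = lambda a, b: float(np.mean((a - b) ** 2))
    recon = sum(mse(r, v) for r, v in zip(view_recons, views))
    consensus = sum(mse(view_feats[i], view_feats[j])
                    for i in range(len(view_feats))
                    for j in range(i + 1, len(view_feats)))
    compress = beta * float(np.mean(z_common ** 2))
    return recon + consensus + compress
```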
Affiliation(s)
- Yanghang Zheng
- School of Automation, Guangdong University of Technology, Guangzhou, 510006, China; Key Laboratory of Intelligent Information Processing and System Integration of IoT, Ministry of Education, Guangzhou, 510006, China.
- Guoxu Zhou
- School of Automation, Guangdong University of Technology, Guangzhou, 510006, China; Key Laboratory of Intelligent Detection and The Internet of Things in Manufacturing, Ministry of Education, Guangzhou, 510006, China.
- Haonan Huang
- School of Automation, Guangdong University of Technology, Guangzhou, 510006, China; RIKEN AIP, Tokyo, Japan.
- Xintao Luo
- Guangdong Key Laboratory of IoT Information Technology, Guangdong University of Technology, Guangzhou, 510006, China; Guangdong-HongKong-Macao Joint Laboratory for Smart Discrete Manufacturing, Guangzhou, 510006, China.
- Zhenhao Huang
- School of Automation, Guangdong University of Technology, Guangzhou, 510006, China; 111 Center for Intelligent Batch Manufacturing Based on IoT Technology, Guangzhou, 510006, China.
- Qibin Zhao
- School of Automation, Guangdong University of Technology, Guangzhou, 510006, China; RIKEN AIP, Tokyo, Japan.
4
Wu Y, Jin Y. Efficient enhancement of low-rank tensor completion via thin QR decomposition. Front Big Data 2024; 7:1382144. PMID: 39015435. PMCID: PMC11250652. DOI: 10.3389/fdata.2024.1382144.
Abstract
Low-rank tensor completion (LRTC), which aims to recover the missing entries of partially observed tensors by exploiting their low-rank structure, has been widely applied to real-world problems. The core tensor nuclear norm minimization (CTNM) method based on Tucker decomposition is one of the common LRTC methods. However, CTNM methods based on Tucker decomposition often incur a large computational cost because the standard factor-matrix update involves multiple singular value decompositions (SVDs) in each iteration. To address this problem, this article proposes an enhanced CTNM method based on thin QR decomposition (CTNM-QR) with lower computational complexity. The proposed method extends CTNM by introducing tensor versions of the auxiliary variables instead of matrices, and solves for the factor matrices using thin QR decomposition rather than the SVD, which reduces the computational cost and improves tensor completion accuracy. In addition, the convergence and complexity of CTNM-QR are analyzed. Extensive experiments on synthetic data, real color images, and brain MRI data at different missing rates demonstrate that the proposed method not only achieves better completion accuracy and visual quality, but also runs more efficiently than most state-of-the-art LRTC methods.
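The central computational point is that a thin (reduced) QR factorization can supply an orthonormal column basis at lower cost than an economy SVD. Below is a minimal sketch, under the assumption (not stated in the abstract) that only such a basis is needed in the factor update; it is not the CTNM-QR algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5000, 50))          # a tall matrix arising in a factor update

# Thin (reduced) QR: Q has orthonormal columns; no singular values are computed.
Q, R = np.linalg.qr(A, mode='reduced')

# Costlier alternative used by standard CTNM-style updates: an economy SVD.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Q and U span the same column space, so the cheaper QR basis can replace U here.
print(np.allclose(Q @ (Q.T @ U), U, atol=1e-8))   # True: U lies in span(Q)
```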
Affiliation(s)
- Yunzhi Jin
- Yunnan Key Laboratory of Statistical Modeling and Data Analysis, Yunnan University, Kunming, China
5
Zhang D, Huang H, Zhao Q, Zhou G. Generalized latent multi-view clustering with tensorized bipartite graph. Neural Netw 2024; 175:106282. PMID: 38599137. DOI: 10.1016/j.neunet.2024.106282.
Abstract
Tensor-based multi-view spectral clustering algorithms use tensors to model the structure of multi-dimensional data to take advantage of the complementary information and high-order correlations embedded in the graph, thus achieving impressive clustering performance. However, these algorithms use linear models to obtain consensus, which prevents the learned consensus from adequately representing the nonlinear structure of complex data. In order to address this issue, we propose a method called Generalized Latent Multi-View Clustering with Tensorized Bipartite Graph (GLMC-TBG). Specifically, in this paper we introduce neural networks to learn highly nonlinear mappings that encode nonlinear structures in graphs into latent representations. In addition, multiple views share the same latent consensus through nonlinear interactions. In this way, a more comprehensive common representation from multiple views can be achieved. An Augmented Lagrangian Multiplier with Alternating Direction Minimization (ALM-ADM) framework is designed to optimize the model. Experiments on seven real-world data sets verify that the proposed algorithm is superior to state-of-the-art algorithms.
Affiliation(s)
- Dongping Zhang
- School of Automation, Guangdong University of Technology, Guangzhou 510006, China; Guangdong Key Laboratory of IoT Information Technology, Guangdong University of Technology, Guangzhou 510006, China.
- Haonan Huang
- School of Automation, Guangdong University of Technology, Guangzhou 510006, China; Key Laboratory of Intelligent Information Processing and System Integration of IoT, Ministry of Education, Guangzhou 510006, China; Guangdong-HongKong-Macao Joint Laboratory for Smart Discrete Manufacturing, Guangzhou 510006, China.
- Qibin Zhao
- School of Automation, Guangdong University of Technology, Guangzhou 510006, China; Center for Advanced Intelligence Project (AIP), RIKEN, Tokyo 103-0027, Japan.
- Guoxu Zhou
- School of Automation, Guangdong University of Technology, Guangzhou 510006, China; Key Laboratory of Intelligent Detection and The Internet of Things in Manufacturing, Ministry of Education, Guangzhou 510006, China.
6
Zeng J, Zhou G, Qiu Y, Li C, Zhao Q. Bayesian tensor network structure search and its application to tensor completion. Neural Netw 2024; 175:106290. PMID: 38626616. DOI: 10.1016/j.neunet.2024.106290.
Abstract
Tensor network (TN) decomposition has demonstrated remarkable efficacy in the compact representation of high-order data. In contrast to TN methods with pre-determined structures, the recently introduced tensor network structure search (TNSS) methods automatically learn a compact TN structure from the data and are gaining increasing attention. Nonetheless, TNSS requires time-consuming manual adjustment of the penalty parameters that control model complexity to achieve good performance, especially in the presence of missing or noisy data. To provide an effective solution to this problem, in this paper we propose a parameter-tuning-free TNSS algorithm based on Bayesian modeling, aiming to conduct TNSS in a fully data-driven manner. Specifically, the uncertainty in the data corruption is incorporated into the prior of the probabilistic model. For TN structure determination, we reframe it as a rank learning problem for the fully-connected tensor network (FCTN), integrating the generalized inverse Gaussian (GIG) distribution for low-rank promotion. To eliminate the need for hyperparameter tuning, we adopt a fully Bayesian approach and propose an efficient Markov chain Monte Carlo (MCMC) algorithm for posterior sampling. Compared with the previous TNSS method, experimental results demonstrate that the proposed algorithm can effectively and efficiently find the latent TN structures of the data under various missing and noise conditions and achieves the best recovery results. Furthermore, our method exhibits superior performance in tensor completion with real-world data compared to other state-of-the-art tensor-decomposition-based completion methods.
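To illustrate the role of the generalized inverse Gaussian prior mentioned above, the snippet below draws one GIG-distributed scale per rank component with scipy and prunes components whose scale is negligible. The parameter values and the pruning rule are illustrative assumptions; the paper's actual MCMC sampler is not reproduced.

```python
import numpy as np
from scipy.stats import geninvgauss

rng = np.random.default_rng(0)
R = 10                                     # candidate rank (number of components)

# One GIG-distributed scale per component; a negative p with a small b gives a
# density sharply peaked near zero with a heavy tail, the kind of shape used to
# shrink redundant rank components in a Bayesian low-rank model.
p, b = -0.5, 0.1
scales = geninvgauss.rvs(p, b, size=R, random_state=rng)

# Components whose scale is negligible (relative to the largest) are pruned.
keep = scales > 1e-2 * scales.max()
print("estimated rank:", int(keep.sum()), "scales:", np.round(scales, 4))
```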
Affiliation(s)
- Junhua Zeng
- School of Automation, Guangdong University of Technology, Guangzhou, 510006, China; Center for Advanced Intelligence Project (AIP), RIKEN, Tokyo, 103-0027, Japan; Key Laboratory of Intelligent Information Processing and System Integration of IoT, Ministry of Education, Guangzhou, 510006, China.
- Guoxu Zhou
- School of Automation, Guangdong University of Technology, Guangzhou, 510006, China; Key Laboratory of Intelligent Detection and the Internet of Things in Manufacturing, Ministry of Education, Guangzhou, 510006, China.
- Yuning Qiu
- School of Automation, Guangdong University of Technology, Guangzhou, 510006, China; Center for Advanced Intelligence Project (AIP), RIKEN, Tokyo, 103-0027, Japan.
- Chao Li
- Center for Advanced Intelligence Project (AIP), RIKEN, Tokyo, 103-0027, Japan.
- Qibin Zhao
- School of Automation, Guangdong University of Technology, Guangzhou, 510006, China; Center for Advanced Intelligence Project (AIP), RIKEN, Tokyo, 103-0027, Japan.
7
Huang H, Zhou G, Zhao Q, He L, Xie S. Comprehensive Multiview Representation Learning via Deep Autoencoder-Like Nonnegative Matrix Factorization. IEEE Trans Neural Netw Learn Syst 2024; 35:5953-5967. PMID: 37672378. DOI: 10.1109/tnnls.2023.3304626.
Abstract
Learning a comprehensive representation from multiview data is crucial in many real-world applications. Multiview representation learning (MRL) based on nonnegative matrix factorization (NMF) has been widely adopted, projecting high-dimensional data into a lower-dimensional space with great interpretability. However, most prior NMF-based MRL techniques are shallow models that ignore hierarchical information. Although deep matrix factorization (DMF)-based methods have been proposed recently, most of them focus only on the consistency of multiple views and require cumbersome clustering steps. To address these issues, in this article we propose a novel model termed deep autoencoder-like NMF for MRL (DANMF-MRL), which obtains the representation matrix through a deep encoding stage and decodes it back to the original data. In this way, the DANMF-based framework can simultaneously account for multiview consistency and complementarity, allowing for a more comprehensive representation. We further propose a one-step DANMF-MRL, which learns the latent representation and the final clustering label matrix in a unified framework. In this approach, the two steps can negotiate with each other to fully exploit the latent clustering structure, avoid tedious separate clustering steps, and achieve optimal clustering performance. Furthermore, two efficient iterative optimization algorithms are developed to solve the proposed models, both with theoretical convergence analysis. Extensive experiments on five benchmark datasets demonstrate the superiority of our approaches over other state-of-the-art MRL methods.
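As background for the factorization being stacked here, the sketch below shows plain single-layer NMF with the classical multiplicative updates. The deep autoencoder-like variant described in the abstract builds on this block (stacking factors and adding an encoder path) but is not reproduced; the function name and defaults are illustrative.

```python
import numpy as np

def nmf(X, r, iters=200, eps=1e-10, seed=0):
    """Plain single-layer NMF, X (nonnegative) ~= W @ H, via multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update coefficients
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update basis
    return W, H
```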
8
Qin W, Wang H, Zhang F, Ma W, Wang J, Huang T. Nonconvex Robust High-Order Tensor Completion Using Randomized Low-Rank Approximation. IEEE Trans Image Process 2024; 33:2835-2850. PMID: 38598373. DOI: 10.1109/tip.2024.3385284.
Abstract
Within the tensor singular value decomposition (T-SVD) framework, existing robust low-rank tensor completion approaches have achieved great success in various areas of science and engineering. Nevertheless, these methods involve T-SVD-based low-rank approximation, which suffers from high computational costs when dealing with large-scale tensor data. Moreover, most of them are applicable only to third-order tensors. To address these issues, in this article two efficient low-rank tensor approximation approaches that incorporate random projection techniques are first devised under the order-d (d ≥ 3) T-SVD framework. Theoretical error bounds for the proposed randomized algorithms are provided. On this basis, we further investigate the robust high-order tensor completion problem, for which a double nonconvex model along with corresponding fast optimization algorithms with convergence guarantees is developed. Experimental results on large-scale synthetic and real tensor data illustrate that the proposed method outperforms other state-of-the-art approaches in terms of both computational efficiency and estimation accuracy.
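The random-projection idea underlying the paper can be illustrated by the basic randomized SVD for matrices (Gaussian sketch, orthonormalization, small SVD). The order-d T-SVD algorithms in the paper apply this kind of sketching in a transform domain, which the sketch below does not attempt; it is only the standard matrix building block.

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, seed=0):
    """Basic randomized SVD (Halko-style): sketch, orthonormalize, then a small SVD."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, rank + oversample))   # Gaussian test matrix
    Y = A @ Omega                                         # sketch of the range of A
    Q, _ = np.linalg.qr(Y)                                # orthonormal basis for the sketch
    B = Q.T @ A                                           # small (rank+oversample) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :rank], s[:rank], Vt[:rank, :]
```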
9
Zeng J, Qiu Y, Ma Y, Wang A, Zhao Q. A Novel Tensor Ring Sparsity Measurement for Image Completion. Entropy (Basel) 2024; 26:105. PMID: 38392360. PMCID: PMC10887661. DOI: 10.3390/e26020105.
Abstract
As a promising data analysis technique, sparse modeling has gained widespread traction in the field of image processing, particularly for image recovery. The matrix rank, serving as a measure of data sparsity, quantifies the sparsity of the Kronecker basis representation of data in matrix format. Nevertheless, in practical scenarios much of the data is intrinsically multi-dimensional, so representing it in matrix format inevitably yields sub-optimal outcomes. Tensor decomposition (TD), as a high-order generalization of matrix decomposition, has been widely used to analyze multi-dimensional data. As a direct generalization of the matrix rank, low-rank tensor modeling has been developed for multi-dimensional data analysis and has achieved great success. Despite its efficacy, the connection between the TD rank and the sparsity of the tensor data is not direct. In this work, we introduce a novel tensor ring sparsity measurement (TRSM) for measuring the sparsity of a tensor. This metric relies on the tensor ring (TR) Kronecker basis representation of the tensor, providing a unified interpretation akin to matrix sparsity measurements, wherein the Kronecker basis serves as the foundational representation component. Moreover, TRSM can be efficiently computed as the product of the ranks of the mode-2 unfolded TR-cores. To enhance the practical performance of TRSM, the folded-concave minimax concave penalty (MCP) is introduced as a nonconvex relaxation. Lastly, we extend TRSM to the tensor completion problem and solve it using an alternating direction method of multipliers (ADMM) scheme. Experiments on image and video data completion demonstrate the effectiveness of the proposed method.
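The abstract states that TRSM is the product of the ranks of the mode-2 unfolded TR-cores, which is straightforward to compute once the TR cores are available. In the sketch below the core layout (r_prev, n_k, r_next) and the rank tolerance are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def trsm(cores, tol=1e-10):
    """Toy TR sparsity measure: product of the ranks of the mode-2 unfolded TR-cores."""
    ranks = []
    for G in cores:                      # assume each core G has shape (r_prev, n_k, r_next)
        r_prev, n_k, r_next = G.shape
        unfold2 = np.transpose(G, (1, 0, 2)).reshape(n_k, r_prev * r_next)
        s = np.linalg.svd(unfold2, compute_uv=False)
        ranks.append(int((s > tol * s[0]).sum()))
    return int(np.prod(ranks))
```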
Affiliation(s)
- Junhua Zeng
- School of Automation, Guangdong University of Technology, Guangzhou 510006, China
- RIKEN Center for Advanced Intelligence Project (AIP), Tokyo 103-0027, Japan
- Yuning Qiu
- School of Automation, Guangdong University of Technology, Guangzhou 510006, China
- RIKEN Center for Advanced Intelligence Project (AIP), Tokyo 103-0027, Japan
- Yumeng Ma
- School of Automation, Guangdong University of Technology, Guangzhou 510006, China
- Andong Wang
- RIKEN Center for Advanced Intelligence Project (AIP), Tokyo 103-0027, Japan
- Qibin Zhao
- School of Automation, Guangdong University of Technology, Guangzhou 510006, China
- RIKEN Center for Advanced Intelligence Project (AIP), Tokyo 103-0027, Japan
10
Shang R, Chi H, Li Y, Jiao L. Adaptive graph regularization and self-expression for noise-aware feature selection. Neurocomputing 2023. DOI: 10.1016/j.neucom.2023.03.036.