1. Qiu Y, Zhou G, Li C, Mandic D, Zhao Q. Tensor ring rank determination using odd-dimensional unfolding. Neural Netw 2025;183:106947. PMID: 39637827. DOI: 10.1016/j.neunet.2024.106947. Received: 06/08/2024; Revised: 09/24/2024; Accepted: 11/19/2024.
Abstract
While tensor ring (TR) decomposition methods have been extensively studied, the determination of TR-ranks remains a challenging problem, with existing methods being typically sensitive to the determination of the starting rank (i.e., the first rank to be optimized). Moreover, current methods often fail to adaptively determine TR-ranks in the presence of noisy and incomplete data, and exhibit computational inefficiencies when handling high-dimensional data. To address these issues, we propose an odd-dimensional unfolding method for the effective determination of TR-ranks. This is achieved by leveraging the symmetry of the TR model and the bound rank relationship in TR decomposition. In addition, we employ the singular value thresholding algorithm to facilitate the adaptive determination of TR-ranks and use randomized sketching techniques to enhance the efficiency and scalability of the method. Extensive experimental results in rank identification, data denoising, and completion demonstrate the potential of our method for a broad range of applications.
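As a loose illustration of the singular value thresholding step mentioned in the abstract (a generic sketch, not the authors' implementation; the matrix sizes and the threshold `tau` are invented for the demo):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: soft-threshold the singular values of M at tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
# rank-3 matrix plus small dense noise
L = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
M = L + 0.01 * rng.standard_normal((50, 40))
X = svt(M, tau=1.0)
# the noise singular values (well below tau) are zeroed out,
# so the thresholded matrix is exactly rank 3
print(np.linalg.matrix_rank(X))
```

In rank-determination settings, the number of singular values surviving the threshold serves as the adaptive rank estimate.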
Affiliation(s)
- Yichun Qiu: School of Automation, Guangdong University of Technology, Guangzhou 510006, China; Center for Advanced Intelligence Project (AIP), RIKEN, Tokyo 103-0027, Japan.
- Guoxu Zhou: School of Automation, Guangdong University of Technology, Guangzhou 510006, China.
- Chao Li: Center for Advanced Intelligence Project (AIP), RIKEN, Tokyo 103-0027, Japan; Faculty of Health Data Science, Juntendo University, Tokyo 113-8421, Japan.
- Danilo Mandic: Department of Electrical and Electronic Engineering, Imperial College London, London SW7 2BT, United Kingdom.
- Qibin Zhao: School of Automation, Guangdong University of Technology, Guangzhou 510006, China; Center for Advanced Intelligence Project (AIP), RIKEN, Tokyo 103-0027, Japan.
2. Qiu Y, Zhou G, Wang A, Zhao Q, Xie S. Balanced Unfolding Induced Tensor Nuclear Norms for High-Order Tensor Completion. IEEE Transactions on Neural Networks and Learning Systems 2025;36:4724-4737. PMID: 38656849. DOI: 10.1109/tnnls.2024.3373384.
Abstract
The recently proposed tensor tubal rank has achieved remarkable success in real-world tensor data completion. However, existing works usually fix the transform orientation along the third mode and may fail to take multidimensional low-tubal-rank structure into account. To alleviate these bottlenecks, we introduce two unfolding-induced tensor nuclear norms (TNNs) for the tensor completion (TC) problem, which naturally extend the tensor tubal rank to high-order data. Specifically, we show how multidimensional low-tubal-rank structure can be captured by a novel balanced unfolding strategy, upon which two TNNs, namely overlapped TNN (OTNN) and latent TNN (LTNN), are developed. We also establish the direct relationship between the tubal rank of an unfolded tensor and existing tensor network (TN) ranks, e.g., the CANDECOMP/PARAFAC (CP) rank, Tucker rank, and tensor ring (TR) rank, to demonstrate its efficiency and practicality. Two efficient TC models are then proposed with theoretical guarantees, obtained by analyzing a unified nonasymptotic upper bound. To solve the optimization problems, we develop two algorithms based on the alternating direction method of multipliers (ADMM). Experimental findings on synthetic and real-world tensors, including facial images, light field images, and video sequences, demonstrate the superior performance of the proposed models.
3. Lin CH, Liu Y, Chi CY, Hsu CC, Ren H, Quek TQS. Hyperspectral Tensor Completion Using Low-Rank Modeling and Convex Functional Analysis. IEEE Transactions on Neural Networks and Learning Systems 2024;35:10736-10750. PMID: 37027554. DOI: 10.1109/tnnls.2023.3243808.
Abstract
Hyperspectral tensor completion (HTC) for remote sensing, critical for advancing space exploration and other satellite imaging technologies, has drawn considerable attention from the machine learning community. A hyperspectral image (HSI) contains a wide range of narrowly spaced spectral bands, forming unique electromagnetic signatures for distinct materials, and thus plays an irreplaceable role in remote material identification. Nevertheless, remotely acquired HSIs are of low data purity and are quite often incompletely observed or corrupted during transmission. Therefore, completing the 3-D hyperspectral tensor, involving two spatial dimensions and one spectral dimension, is a crucial signal processing task that facilitates subsequent applications. Benchmark HTC methods rely on either supervised learning or nonconvex optimization. As reported in the recent machine learning literature, the John ellipsoid (JE) in functional analysis is a fundamental topology for effective hyperspectral analysis. We therefore adopt this key topology in this work, but doing so induces a dilemma: computing the JE requires complete information about the entire HSI tensor, which is unavailable under the HTC problem setting. We resolve this dilemma, decouple HTC into convex subproblems to ensure computational efficiency, and show that our algorithm achieves state-of-the-art HTC performance. We also demonstrate that our method improves subsequent land cover classification accuracy on the recovered hyperspectral tensor.
4. Huang H, Zhou G, Zhao Q, He L, Xie S. Comprehensive Multiview Representation Learning via Deep Autoencoder-Like Nonnegative Matrix Factorization. IEEE Transactions on Neural Networks and Learning Systems 2024;35:5953-5967. PMID: 37672378. DOI: 10.1109/tnnls.2023.3304626.
Abstract
Learning a comprehensive representation from multiview data is crucial in many real-world applications. Multiview representation learning (MRL) based on nonnegative matrix factorization (NMF) has been widely adopted, as it projects high-dimensional data into a lower-dimensional space with great interpretability. However, most prior NMF-based MRL techniques are shallow models that ignore hierarchical information. Although deep matrix factorization (DMF)-based methods have been proposed recently, most of them focus only on the consistency of multiple views and involve cumbersome clustering steps. To address these issues, in this article we propose a novel model termed deep autoencoder-like NMF for MRL (DANMF-MRL), which obtains the representation matrix through a deep encoding stage and decodes it back to the original data. In this way, through a DANMF-based framework, we can simultaneously consider multiview consistency and complementarity, allowing for a more comprehensive representation. We further propose a one-step DANMF-MRL, which learns the latent representation and the final clustering label matrix in a unified framework. In this approach, the two steps can negotiate with each other to fully exploit the latent clustering structure, avoid the previous tedious clustering steps, and achieve optimal clustering performance. Furthermore, two efficient iterative optimization algorithms are developed to solve the proposed models, both with theoretical convergence analysis. Extensive experiments on five benchmark datasets demonstrate the superiority of our approaches over other state-of-the-art MRL methods.
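For readers unfamiliar with the shallow building block that deep (autoencoder-like) NMF models stack, here is plain single-layer NMF with Lee-Seung multiplicative updates (a generic illustration only, not the DANMF-MRL algorithm; the sizes, rank `k`, and iteration count are invented):

```python
import numpy as np

def nmf(V, k, iters=500, eps=1e-9, seed=0):
    """Factor a nonnegative matrix V ≈ W @ H with multiplicative updates."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], k)) + eps
    H = rng.random((k, V.shape[1])) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update the codes
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update the basis
    return W, H

rng = np.random.default_rng(1)
V = rng.random((30, 4)) @ rng.random((4, 20))  # exactly rank-4, nonnegative
W, H = nmf(V, k=4)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
# the factors stay nonnegative and the reconstruction error is small
```

The multiplicative form keeps `W` and `H` nonnegative by construction, which is what gives NMF-based representations their interpretability.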
5. Zeng J, Qiu Y, Ma Y, Wang A, Zhao Q. A Novel Tensor Ring Sparsity Measurement for Image Completion. Entropy (Basel) 2024;26:105. PMID: 38392360. PMCID: PMC10887661. DOI: 10.3390/e26020105. Received: 12/12/2023; Revised: 01/15/2024; Accepted: 01/22/2024.
Abstract
As a promising data analysis technique, sparse modeling has gained widespread traction in the field of image processing, particularly for image recovery. The matrix rank, serving as a measure of data sparsity, quantifies the sparsity within the Kronecker basis representation of a given piece of data in matrix format. Nevertheless, in practical scenarios much of the data is intrinsically multi-dimensional, and thus using a matrix format for data representation will inevitably yield sub-optimal outcomes. Tensor decomposition (TD), as a high-order generalization of matrix decomposition, has been widely used to analyze multi-dimensional data. As a direct generalization of the matrix rank, low-rank tensor modeling has been developed for multi-dimensional data analysis and has achieved great success. Despite its efficacy, the connection between the TD rank and the sparsity of tensor data is not direct. In this work, we introduce a novel tensor ring sparsity measurement (TRSM) for measuring the sparsity of a tensor. This metric relies on the tensor ring (TR) Kronecker basis representation of the tensor, providing a unified interpretation akin to matrix sparsity measurements, wherein the Kronecker basis serves as the foundational representation component. Moreover, the TRSM can be efficiently computed as the product of the ranks of the mode-2 unfolded TR-cores. To enhance the practical performance of TRSM, the folded-concave penalty of the minimax concave penalty is introduced as a nonconvex relaxation. Lastly, we extend the TRSM to the tensor completion problem and solve it with an alternating direction method of multipliers scheme. Experiments on image and video data completion demonstrate the effectiveness of the proposed method.
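To make the TR notions concrete, here is a small sketch of contracting TR-cores into a full tensor and evaluating the product-of-ranks quantity described above (a generic illustration under invented core shapes and TR-ranks, not the paper's code):

```python
import numpy as np

def tr_to_full(cores):
    """Contract TR-cores G_k of shape (r_{k-1}, n_k, r_k) into the full tensor
    X[i1,...,iN] = trace(G_1[:, i1, :] @ ... @ G_N[:, iN, :])."""
    out = cores[0]
    for G in cores[1:]:
        # absorb the next core along the shared bond dimension
        out = np.einsum('aib,bjc->aijc',
                        out.reshape(out.shape[0], -1, out.shape[-1]), G)
    # close the ring by tracing over the matching first/last bond
    full = np.einsum('aia->i', out.reshape(out.shape[0], -1, out.shape[-1]))
    return full.reshape([G.shape[1] for G in cores])

rng = np.random.default_rng(2)
ranks, dims = [2, 3, 4], [5, 6, 7]
cores = [rng.standard_normal((ranks[k], dims[k], ranks[(k + 1) % 3]))
         for k in range(3)]
X = tr_to_full(cores)                     # full tensor, shape (5, 6, 7)
# product of the ranks of the mode-2 unfolded cores (the TRSM-style quantity)
trsm = np.prod([np.linalg.matrix_rank(G.transpose(1, 0, 2).reshape(G.shape[1], -1))
                for G in cores])
```

Each mode-2 unfolding reshapes a core `(r_{k-1}, n_k, r_k)` into an `n_k × (r_{k-1} r_k)` matrix, so the product of their ranks is cheap to evaluate relative to decomposing the full tensor.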
Affiliation(s)
- Junhua Zeng: School of Automation, Guangdong University of Technology, Guangzhou 510006, China; RIKEN Center for Advanced Intelligence Project (AIP), Tokyo 103-0027, Japan.
- Yuning Qiu: School of Automation, Guangdong University of Technology, Guangzhou 510006, China; RIKEN Center for Advanced Intelligence Project (AIP), Tokyo 103-0027, Japan.
- Yumeng Ma: School of Automation, Guangdong University of Technology, Guangzhou 510006, China.
- Andong Wang: RIKEN Center for Advanced Intelligence Project (AIP), Tokyo 103-0027, Japan.
- Qibin Zhao: School of Automation, Guangdong University of Technology, Guangzhou 510006, China; RIKEN Center for Advanced Intelligence Project (AIP), Tokyo 103-0027, Japan.
6. Yu J, Zhou G, Sun W, Xie S. Robust to Rank Selection: Low-Rank Sparse Tensor-Ring Completion. IEEE Transactions on Neural Networks and Learning Systems 2023;34:2451-2465. PMID: 34478384. DOI: 10.1109/tnnls.2021.3106654.
Abstract
Tensor-ring (TR) decomposition has recently been studied and applied to low-rank tensor completion due to its powerful ability to represent high-order tensors. However, most existing TR-based methods tend to deteriorate when the selected rank is larger than the true one. To address this issue, this article proposes a new low-rank sparse TR completion method that imposes Frobenius norm regularization on the latent space. Specifically, we theoretically establish that the proposed method can exploit the low rankness and Kronecker-basis-representation (KBR)-based sparsity of the target tensor using the Frobenius norm of the latent TR-cores. We optimize the proposed TR completion with a block coordinate descent (BCD) algorithm and design a modified TR decomposition to initialize it. Extensive experimental results on synthetic and visual data demonstrate that the proposed method achieves better results than conventional TR-based completion methods and other state-of-the-art methods, while remaining quite robust even as the selected TR-rank increases.
7. A general multi-factor norm based low-rank tensor completion framework. Appl Intell 2023. DOI: 10.1007/s10489-023-04477-9.
8. Jin D, Yang M, Qin Z, Peng J, Ying S. A Weighting Method for Feature Dimension by Semisupervised Learning With Entropy. IEEE Transactions on Neural Networks and Learning Systems 2023;34:1218-1227. PMID: 34546928. DOI: 10.1109/tnnls.2021.3105127.
Abstract
In this article, a semisupervised, entropy-based weighting method for feature dimensions is proposed for classification, dimension reduction, and correlation analysis. For real-world data, different feature dimensions usually differ in importance. Generally, data in the same class are expected to be similar, so their entropy should be small; data in different classes are expected to be dissimilar, so their entropy should be large. Accordingly, we propose a way to construct weights for the feature dimensions from the whole entropy and the inner-class entropies. The weights indicate the contribution of their corresponding feature dimensions to classification. They can be used to improve classification performance via a weighted distance metric, and can be applied to dimension reduction and correlation analysis as well. Numerical experiments comparing the proposed method with several representative methods demonstrate that it is feasible and efficient for classification, dimension reduction, and correlation analysis.
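The whole-entropy versus inner-class-entropy idea can be sketched generically as an information-gain-style score (the exact weighting formula is the paper's; this version, with its histogram entropy estimator, bin count, and toy data, is an invented approximation of the idea):

```python
import numpy as np

def entropy(x, bins=8):
    """Histogram estimate of the Shannon entropy of a 1-D sample."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / len(x)
    return -np.sum(p * np.log(p))

def feature_weights(X, y, bins=8):
    """Score each feature by whole-sample entropy minus mean inner-class entropy:
    features that are predictable within a class but vary across classes
    receive large weights."""
    w = np.zeros(X.shape[1])
    for d in range(X.shape[1]):
        whole = entropy(X[:, d], bins)
        inner = np.mean([entropy(X[y == c, d], bins) for c in np.unique(y)])
        w[d] = max(whole - inner, 0.0)
    return w / w.sum() if w.sum() > 0 else w

rng = np.random.default_rng(3)
y = np.repeat([0, 1], 100)
X = rng.standard_normal((200, 3))
X[:, 0] += 3.0 * y            # only feature 0 separates the two classes
w = feature_weights(X, y)     # feature 0 receives the largest weight
```

A weighted distance metric then simply scales each squared coordinate difference by `w[d]` before summing.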
9. Jiang TX, Zhao XL, Zhang H, Ng MK. Dictionary Learning With Low-Rank Coding Coefficients for Tensor Completion. IEEE Transactions on Neural Networks and Learning Systems 2023;34:932-946. PMID: 34464263. DOI: 10.1109/tnnls.2021.3104837.
Abstract
In this article, we propose a novel tensor learning and coding model for third-order data completion. The aim of our model is to learn a data-adaptive dictionary from the given observations and to determine the coding coefficients of third-order tensor tubes. In the completion process, we minimize the low-rankness of each tensor slice containing the coding coefficients. Compared with a traditional predefined transform basis, the advantages of the proposed model are that: 1) the dictionary can be learned from the given data observations, so the basis can be constructed more adaptively and accurately; and 2) the low-rankness of the coding coefficients allows dictionary features to be combined linearly in a more effective way. We also develop a multiblock proximal alternating minimization algorithm for solving this tensor learning and coding model and show that the sequence generated by the algorithm globally converges to a critical point. Extensive experimental results on real datasets such as videos, hyperspectral images, and traffic data demonstrate these advantages and show that the proposed tensor learning and coding method performs significantly better than other tensor completion methods in terms of several evaluation metrics.
10. Low tensor-ring rank completion: parallel matrix factorization with smoothness on latent space. Neural Comput Appl 2022. DOI: 10.1007/s00521-022-08023-5.
11. Su L, Liu J, Tian X, Huang K, Tan S. Iterative tensor eigen rank minimization for low-rank tensor completion. Inf Sci (N Y) 2022. DOI: 10.1016/j.ins.2022.10.061.
12. Qiu Y, Zhou G, Zhao Q, Xie S. Noisy Tensor Completion via Low-Rank Tensor Ring. IEEE Transactions on Neural Networks and Learning Systems 2022;PP:1127-1141. PMID: 35714084. DOI: 10.1109/tnnls.2022.3181378.
Abstract
Tensor completion is a fundamental tool for incomplete data analysis, where the goal is to predict missing entries from partial observations. However, existing methods often make the explicit or implicit assumption that the observed entries are noise-free, in order to provide a theoretical guarantee of exact recovery of missing entries, which is quite restrictive in practice. To remedy this drawback, this article proposes a novel noisy tensor completion model, which addresses the inability of existing works to handle the degeneration of high-order and noisy observations. Specifically, the tensor ring nuclear norm (TRNN) and a least-squares estimator are adopted to regularize the underlying tensor and the observed entries, respectively. In addition, a nonasymptotic upper bound on the estimation error is provided to characterize the statistical performance of the proposed estimator. Two efficient algorithms with convergence guarantees are developed to solve the optimization problem, one of which is specially tailored to large-scale tensors by equivalently replacing the minimization of the TRNN of the original tensor with that of a much smaller one in a heterogeneous tensor decomposition framework. Experimental results on both synthetic and real-world data demonstrate the effectiveness and efficiency of the proposed model in recovering noisy incomplete tensor data, compared with state-of-the-art tensor completion models.
13. Fast hypergraph regularized nonnegative tensor ring decomposition based on low-rank approximation. Appl Intell 2022. DOI: 10.1007/s10489-022-03346-1.
14. Qin W, Wang H, Zhang F, Wang J, Luo X, Huang T. Low-Rank High-Order Tensor Completion With Applications in Visual Data. IEEE Transactions on Image Processing 2022;31:2433-2448. PMID: 35259105. DOI: 10.1109/tip.2022.3155949.
Abstract
Recently, tensor singular value decomposition (t-SVD)-based low-rank tensor completion (LRTC) has achieved unprecedented success in addressing various pattern analysis problems. However, existing studies mostly focus on third-order tensors, while order-d (d ≥ 4) tensors are commonly encountered in real-world applications, such as fourth-order color videos, fourth-order hyperspectral videos, fifth-order light-field images, and sixth-order bidirectional texture functions. To address this critical issue, this paper establishes an order-d tensor recovery framework, including the model, algorithm, and theory, by developing a novel algebraic foundation for the order-d t-SVD, thereby achieving exact completion of any order-d tensor of low t-SVD rank with missing values, with overwhelming probability. Empirical studies on synthetic data and real-world visual data illustrate that, compared with other state-of-the-art recovery frameworks, the proposed one achieves highly competitive performance in terms of both qualitative and quantitative metrics. In particular, even when the observed data density becomes low, i.e., about 10%, the proposed recovery framework remains significantly better than its peers. The code of our algorithm is released at https://github.com/Qinwenjinswu/TIP-Code.
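For reference, here is a minimal sketch of the third-order t-SVD machinery that such frameworks generalize: the t-product and the tubal rank, both computed in the FFT domain (a standard textbook construction, not the paper's order-d code; the shapes are invented):

```python
import numpy as np

def t_product(A, B):
    """t-product of (n1, m, n3) and (m, n2, n3) tensors: slice-wise matrix
    products after an FFT along the third mode."""
    Fa, Fb = np.fft.fft(A, axis=2), np.fft.fft(B, axis=2)
    return np.real(np.fft.ifft(np.einsum('ijk,jlk->ilk', Fa, Fb), axis=2))

def tubal_rank(T, tol=1e-9):
    """Maximum matrix rank over the frontal slices in the FFT domain."""
    F = np.fft.fft(T, axis=2)
    return max(np.linalg.matrix_rank(F[:, :, k], tol=tol)
               for k in range(T.shape[2]))

rng = np.random.default_rng(4)
A = rng.standard_normal((8, 2, 5))
B = rng.standard_normal((2, 9, 5))
C = t_product(A, B)   # tubal rank at most 2 by construction
```

The order-d extensions replace the single FFT along mode 3 with transforms along all higher modes, but the slice-wise low-rank principle is the same.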