1. He Y, Atia GK. Robust Low-Tubal-Rank Tensor Completion Based on Tensor Factorization and Maximum Correntropy Criterion. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:14603-14617. [PMID: 37279124] [DOI: 10.1109/tnnls.2023.3280086]
Abstract
The goal of tensor completion is to recover a tensor from a subset of its entries, often by exploiting its low-rank property. Among several useful definitions of tensor rank, the low tubal rank was shown to give a valuable characterization of the inherent low-rank structure of a tensor. While some low-tubal-rank tensor completion algorithms with favorable performance have been recently proposed, these algorithms utilize second-order statistics to measure the error residual, which may not work well when the observed entries contain large outliers. In this article, we propose a new objective function for low-tubal-rank tensor completion, which uses correntropy as the error measure to mitigate the effect of the outliers. To efficiently optimize the proposed objective, we leverage a half-quadratic minimization technique whereby the optimization is transformed to a weighted low-tubal-rank tensor factorization problem. Subsequently, we propose two simple and efficient algorithms to obtain the solution and provide their convergence and complexity analysis. Numerical results using both synthetic and real data demonstrate the robust and superior performance of the proposed algorithms.
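
To make the correntropy idea above concrete, here is a minimal sketch, assuming a Gaussian kernel and a toy scalar regression rather than the authors' tensor factorization: under half-quadratic minimization, the correntropy objective reduces to an iteratively reweighted least-squares update in which outliers receive vanishing weights. The kernel width sigma and the model are illustrative assumptions.

```python
import numpy as np

def correntropy_weights(residual, sigma=1.0):
    # Gaussian-kernel correntropy: the half-quadratic bound assigns each
    # entry a weight that shrinks as its residual grows, so gross outliers
    # contribute little to the subsequent weighted least-squares fit.
    return np.exp(-residual ** 2 / (2 * sigma ** 2))

# Toy example: fit y = a * x with 5% gross outliers.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 3.0 * x + rng.normal(scale=0.1, size=200)
y[:10] += 20.0                                    # inject outliers

a = 0.0
for _ in range(20):
    w = correntropy_weights(y - a * x)            # fix the weights, ...
    a = np.sum(w * x * y) / np.sum(w * x ** 2)    # ... then solve weighted LS
print(round(a, 3))                                # close to 3.0 despite outliers
```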

2. Zeng H, Huang S, Chen Y, Liu S, Luong HQ, Philips W. Tensor Completion Using Bilayer Multimode Low-Rank Prior and Total Variation. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:13297-13311. [PMID: 37195853] [DOI: 10.1109/tnnls.2023.3266841]
Abstract
In this article, we propose a novel bilayer low-rankness measure and two models based on it to recover a low-rank (LR) tensor. The global low-rankness of the underlying tensor is first encoded by LR matrix factorizations (MFs) of the all-mode matricizations, which can exploit multiorientational spectral low-rankness. Presumably, the factor matrices of the all-mode decomposition are themselves LR, since local low-rankness exists in the within-mode correlation. To describe the refined local LR structures of the factors/subspace in the decomposed subspace, a double nuclear norm scheme is designed to explore the so-called second-layer low-rankness. By simultaneously representing the bilayer low-rankness of all modes of the underlying tensor, the proposed methods aim to model multiorientational correlations for arbitrary N-way (N ≥ 3) tensors. A block successive upper-bound minimization (BSUM) algorithm is designed to solve the optimization problem. Subsequence convergence of our algorithms can be established, and the iterates generated by our algorithms converge to coordinatewise minimizers under some mild conditions. Experiments on several types of public datasets show that our algorithm can recover a variety of LR tensors from significantly fewer samples than its counterparts.

3. Luo Q, Li W, Xiao M. Bayesian Dictionary Learning on Robust Tubal Transformed Tensor Factorization. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:11091-11105. [PMID: 37028082] [DOI: 10.1109/tnnls.2023.3248156]
Abstract
The recent study of tensor singular value decomposition (t-SVD), which performs the Fourier transform on the tubes of a third-order tensor, has achieved promising performance on multidimensional data recovery problems. However, a fixed transformation such as the discrete Fourier transform or the discrete cosine transform cannot adapt to different datasets and is thus not flexible enough to exploit the low-rank and sparse properties of the variety of multidimensional datasets. In this article, we consider a tube as an atom of a third-order tensor and construct a data-driven dictionary from the observed noisy data along the tubes of the given tensor. Then, a Bayesian dictionary learning (DL) model with tensor tubal transformed factorization, aiming to identify the underlying low-tubal-rank structure of the tensor effectively via the data-adaptive dictionary, is developed to solve the tensor robust principal component analysis (TRPCA) problem. With the defined pagewise tensor operators, a variational Bayesian DL algorithm is established that updates the posterior distributions instantaneously along the third dimension to solve the TRPCA problem. Extensive experiments on real-world applications, such as color image and hyperspectral image denoising and background/foreground separation, demonstrate both the effectiveness and the efficiency of the proposed approach in terms of various standard metrics.
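
For context on the fixed Fourier transform that this paper replaces with a learned dictionary, the following sketch (an illustration of the standard t-SVD algebra, not the paper's code) computes the t-product and the tubal rank by applying the FFT along the tubes and working on the frontal slices in the transform domain.

```python
import numpy as np

def t_product(A, B):
    # t-product underlying the t-SVD: FFT along the tube (third) axis,
    # multiply matching frontal slices, then invert the transform.
    Ah, Bh = np.fft.fft(A, axis=2), np.fft.fft(B, axis=2)
    Ch = np.einsum('ijk,jlk->ilk', Ah, Bh)
    return np.real(np.fft.ifft(Ch, axis=2))

def tubal_rank(T, tol=1e-8):
    # Tubal rank = max matrix rank over the Fourier-domain frontal slices.
    That = np.fft.fft(T, axis=2)
    return max(np.linalg.matrix_rank(That[:, :, k], tol)
               for k in range(T.shape[2]))

rng = np.random.default_rng(0)
T = t_product(rng.normal(size=(8, 2, 5)), rng.normal(size=(2, 9, 5)))
print(tubal_rank(T))   # 2 by construction
```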

4. Wang S, Nie F, Wang Z, Wang R, Li X. Robust Principal Component Analysis via Joint Reconstruction and Projection. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:7175-7189. [PMID: 36367910] [DOI: 10.1109/tnnls.2022.3214307]
Abstract
Principal component analysis (PCA) is one of the most widely used unsupervised dimensionality reduction algorithms, but it is very sensitive to outliers because the squared l2-norm is used as the distance metric. Recently, many scholars have devoted themselves to alleviating this difficulty. They learn the projection matrix starting from either the minimum reconstruction error or the maximum projection variance, which ignores a serious problem: the original PCA learns the projection matrix by minimizing the reconstruction error and maximizing the projection variance simultaneously, whereas these methods consider only one of the two, imposing various limitations on model performance. To solve this problem, we propose a novel robust principal component analysis via joint reconstruction and projection, namely RPCA-RP, which combines the reconstruction error and the projection variance to fully mine the potential information in the data. Furthermore, we carefully design a discrete weight for the model to implicitly distinguish normal data from outliers, so as to easily remove outliers and improve the robustness of the method. In addition, we unexpectedly discovered that our method has anomaly detection capabilities. Subsequently, an effective iterative algorithm is developed to solve this problem, along with the related theoretical analysis. Extensive experimental results on several real-world datasets and an RGB large-scale dataset demonstrate the superiority of our method.
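
The complementarity this abstract describes can be checked numerically: for centered data and a column-orthonormal projection W, total variance splits exactly into projected variance plus reconstruction error. A minimal sketch of that identity follows, with illustrative random data; this is not the RPCA-RP algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
X -= X.mean(axis=0)                         # center the data

# Any column-orthonormal W works; take the top-3 right singular vectors.
W = np.linalg.svd(X, full_matrices=False)[2][:3].T

recon_err = np.sum((X - X @ W @ W.T) ** 2)  # what "minimum error" PCA minimizes
proj_var = np.sum((X @ W) ** 2)             # what "maximum variance" PCA maximizes
print(np.isclose(recon_err + proj_var, np.sum(X ** 2)))  # True: two sides of
# one objective, which is why optimizing only one of them is limiting
```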

5. Xie M, Liu X, Yang X, Cai W. Multichannel Image Completion With Mixture Noise: Adaptive Sparse Low-Rank Tensor Subspace Meets Nonlocal Self-Similarity. IEEE Transactions on Cybernetics 2023; 53:7521-7534. [PMID: 35580099] [DOI: 10.1109/tcyb.2022.3169800]
Abstract
Multichannel image completion with mixture noise is a common but complex problem in the fields of machine learning, image processing, and computer vision. Most existing algorithms are devoted to exploring global low-rank information and fail to optimize local and joint-mode structures, which may lead to oversmoothed restoration results or lower-quality restoration details. In this study, we propose a novel model for multichannel image completion with mixture noise based on an adaptive sparse low-rank tensor subspace and nonlocal self-similarity (ASLTS-NS). In the proposed model, a nonlocal similar patch matching framework cooperating with Tucker decomposition is used to explore information on global and joint modes and to optimize the local structure for improved restoration quality. To enhance the robustness of the low-rank decomposition to missing data and mixture noise, we present an adaptive sparse low-rank regularization that constructs a robust tensor subspace, self-weighting the importance of different modes and capturing a stable inherent structure. In addition, joint tensor Frobenius and l1 regularizations are exploited to control the two different types of noise. Based on the alternating direction method of multipliers (ADMM), a convergent learning algorithm is designed to solve this model. Experimental results on three different types of multichannel image sets demonstrate the advantages of ASLTS-NS under five complex scenarios.

6. Yu J, Zhou G, Sun W, Xie S. Robust to Rank Selection: Low-Rank Sparse Tensor-Ring Completion. IEEE Transactions on Neural Networks and Learning Systems 2023; 34:2451-2465. [PMID: 34478384] [DOI: 10.1109/tnnls.2021.3106654]
Abstract
Tensor-ring (TR) decomposition has recently been studied and applied to low-rank tensor completion owing to its powerful ability to represent high-order tensors. However, most existing TR-based methods tend to deteriorate when the selected rank is larger than the true one. To address this issue, this article proposes a new low-rank sparse TR completion method that imposes Frobenius norm regularization on the latent space. Specifically, we theoretically establish that the proposed method is capable of exploiting the low-rankness and Kronecker-basis-representation (KBR)-based sparsity of the target tensor using the Frobenius norm of the latent TR-cores. We optimize the proposed TR completion by a block coordinate descent (BCD) algorithm and design a modified TR decomposition to initialize this algorithm. Extensive experimental results on synthetic data and visual data demonstrate that the proposed method achieves better results than conventional TR-based completion methods and other state-of-the-art methods and, meanwhile, remains quite robust even as the selected TR rank increases.

7. Jiang TX, Zhao XL, Zhang H, Ng MK. Dictionary Learning With Low-Rank Coding Coefficients for Tensor Completion. IEEE Transactions on Neural Networks and Learning Systems 2023; 34:932-946. [PMID: 34464263] [DOI: 10.1109/tnnls.2021.3104837]
Abstract
In this article, we propose a novel tensor learning and coding model for third-order data completion. The aim of our model is to learn a data-adaptive dictionary from the given observations and to determine the coding coefficients of third-order tensor tubes. In the completion process, we minimize the low-rankness of each tensor slice containing the coding coefficients. Compared with a traditional predefined transform basis, the advantages of the proposed model are that: 1) the dictionary can be learned from the given data observations so that the basis is constructed more adaptively and accurately and 2) the low-rankness of the coding coefficients allows the linear combination of dictionary features to be more effective. We also develop a multiblock proximal alternating minimization algorithm for solving the tensor learning and coding model and show that the generated sequence globally converges to a critical point. Extensive experimental results on real datasets such as videos, hyperspectral images, and traffic data demonstrate these advantages and show that the performance of the proposed tensor learning and coding method is significantly better than that of other tensor completion methods in terms of several evaluation metrics.

8. Wu F, Li C, Li Y, Tang N. Robust Low-Rank Tensor Completion via New Regularized Model with Approximate SVD. Inf Sci (N Y) 2023. [DOI: 10.1016/j.ins.2023.02.012]

9. Bayesian robust tensor completion via CP decomposition. Pattern Recognit Lett 2022. [DOI: 10.1016/j.patrec.2022.10.005]

10. Xue J, Zhao Y, Huang S, Liao W, Chan JCW, Kong SG. Multilayer Sparsity-Based Tensor Decomposition for Low-Rank Tensor Completion. IEEE Transactions on Neural Networks and Learning Systems 2022; 33:6916-6930. [PMID: 34143740] [DOI: 10.1109/tnnls.2021.3083931]
Abstract
Existing methods for tensor completion (TC) have a limited ability to characterize low-rank (LR) structures. To depict the complex hierarchical knowledge with implicit sparsity attributes hidden in a tensor, we propose a new multilayer sparsity-based tensor decomposition (MLSTD) for low-rank tensor completion (LRTC). The method encodes the structured sparsity of a tensor by a multiple-layer representation. Specifically, we use the CANDECOMP/PARAFAC (CP) model to decompose a tensor into an ensemble of rank-1 tensors, and the number of rank-1 components is easily interpreted as the first-layer sparsity measure. Presumably, the factor matrices are smooth, since a local piecewise property exists in the within-mode correlation. In the subspace, this local smoothness can be regarded as the second-layer sparsity. To describe the refined structures of factor/subspace sparsity, we introduce a new sparsity insight of subspace smoothness: a self-adaptive low-rank matrix factorization (LRMF) scheme, called the third-layer sparsity. Through this progressive description of the sparsity structure, we formulate an MLSTD model and embed it into the LRTC problem. An effective alternating direction method of multipliers (ADMM) algorithm is then designed for the MLSTD minimization problem. Various experiments on RGB images, hyperspectral images (HSIs), and videos substantiate that the proposed LRTC methods are superior to state-of-the-art methods.
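
For readers unfamiliar with the first-layer measure, here is a minimal sketch of the CP model assumed above, with illustrative sizes and rank: a third-order tensor assembled as a sum of R rank-1 terms, where R plays the role of the first-layer sparsity measure.

```python
import numpy as np

rng = np.random.default_rng(0)
R = 3                                  # number of rank-1 components
A = rng.normal(size=(6, R))            # one factor matrix per mode
B = rng.normal(size=(7, R))
C = rng.normal(size=(8, R))

# CP model: T[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]
T = np.einsum('ir,jr,kr->ijk', A, B, C)
print(T.shape)                         # (6, 7, 8)
```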

11. Shi Q, Cheung YM, Lou J. Robust Tensor SVD and Recovery With Rank Estimation. IEEE Transactions on Cybernetics 2022; 52:10667-10682. [PMID: 33872172] [DOI: 10.1109/tcyb.2021.3067676]
Abstract
Tensor singular value decomposition (t-SVD) has recently become increasingly popular for tensor recovery under partial and/or corrupted observations. However, existing t-SVD-based methods neither make use of a rank prior nor provide an accurate rank estimation (RE), which limits their recovery performance. From a practical perspective, the tensor RE problem is nontrivial and difficult to solve. In this article, we therefore aim to determine the correct rank of an intrinsic low-rank tensor from corrupted observations based on t-SVD and to further improve the recovery results with the estimated rank. Specifically, we first establish the equivalence between the tensor nuclear norm (TNN) of a tensor and that of its f-diagonal tensor. We then simultaneously minimize the reconstruction error and the TNN of the f-diagonal tensor, leading to RE. Subsequently, we relax our model by removing the TNN regularizer to improve the recovery performance. Furthermore, we consider more general cases in the presence of missing data and/or gross corruptions by proposing robust tensor principal component analysis and robust tensor completion with RE. The robust methods achieve successful recovery by refining the models with the correct estimated ranks. Experimental results show that the proposed methods outperform the state-of-the-art methods with significant improvements.
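
For reference, the tensor nuclear norm that the model first minimizes and later removes has a compact Fourier-domain definition under the t-SVD. A minimal sketch, assuming the common 1/n3 scaling convention:

```python
import numpy as np

def tensor_nuclear_norm(T):
    # Sum of singular values of the Fourier-domain frontal slices,
    # scaled by the tube length n3 (a common TNN convention).
    That = np.fft.fft(T, axis=2)
    n3 = T.shape[2]
    return sum(np.linalg.svd(That[:, :, k], compute_uv=False).sum()
               for k in range(n3)) / n3

T = np.random.default_rng(0).normal(size=(5, 6, 7))
print(tensor_nuclear_norm(T))
```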

12. Online subspace learning and imputation by Tensor-Ring decomposition. Neural Netw 2022; 153:314-324. [PMID: 35772252] [DOI: 10.1016/j.neunet.2022.05.023]
Abstract
This paper considers the completion of partially observed high-order streaming data, which is cast as an online low-rank tensor completion problem. Although online low-rank tensor completion has drawn much attention in recent years, most existing methods are based on traditional decompositions such as CP and Tucker. Inspired by the advantages of Tensor Ring decomposition over traditional decompositions in expressing high-order data and its superiority in missing-value estimation, this paper proposes two online subspace learning and imputation methods based on Tensor Ring decomposition. Specifically, we first propose an online Tensor Ring subspace learning and imputation model by formulating exponentially weighted least squares with Frobenius norm regularization of the TR-cores. Two commonly used optimization algorithms, alternating recursive least squares and stochastic gradient descent, are then developed to solve the proposed model. Numerical experiments show that the proposed methods are more effective at exploiting time-varying subspaces than conventional Tensor Ring completion methods. Moreover, the proposed methods obtain better results than state-of-the-art online methods in streaming data completion under varying missing ratios and noise levels.

13. Xie M, Liu X, Yang X. A Nonlocal Self-Similarity-Based Weighted Tensor Low-Rank Decomposition for Multichannel Image Completion With Mixture Noise. IEEE Transactions on Neural Networks and Learning Systems 2022; PP:73-87. [PMID: 35544496] [DOI: 10.1109/tnnls.2022.3172184]
Abstract
Multichannel image completion with mixture noise is a challenging problem in the fields of machine learning, computer vision, image processing, and data mining. Traditional image completion models are not appropriate for dealing with this problem directly, since their reconstruction priors may not match the corruption priors. To address this issue, we propose a novel nonlocal self-similarity-based weighted tensor low-rank decomposition (NSWTLD) model that achieves both global optimization and local enhancement. In the proposed model, based on the corruption priors and the reconstruction priors, a pixel weighting strategy is given to characterize the joint effects of missing data, Gaussian noise, and impulse noise. To discover and utilize accurate nonlocal self-similarity information to enhance the restoration quality of details, the traditional nonlocal learning framework is optimized by improving the index determination of patch groups and by handling the strip noise caused by patch overlapping. In addition, an efficient and convergent algorithm is presented to solve the NSWTLD model. Comprehensive experiments are conducted on four types of multichannel images under various corruption scenarios. The results demonstrate the efficiency and effectiveness of the proposed model.

14. Unifying tensor factorization and tensor nuclear norm approaches for low-rank tensor completion. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.06.020]

15. Yu J, Zhou G, Li C, Zhao Q, Xie S. Low Tensor-Ring Rank Completion by Parallel Matrix Factorization. IEEE Transactions on Neural Networks and Learning Systems 2021; 32:3020-3033. [PMID: 32749967] [DOI: 10.1109/tnnls.2020.3009210]
Abstract
Tensor-ring (TR) decomposition has recently attracted considerable attention for solving the low-rank tensor completion (LRTC) problem. However, due to an unbalanced unfolding scheme used during the update of core tensors, the conventional TR-based completion methods usually require a large TR rank to achieve optimal performance, which leads to a high computational cost in practical applications. To overcome this drawback, this article proposes a new method to exploit the low-TR-rank structure. Specifically, we first introduce a balanced unfolding operation called tensor circular unfolding, by which the relationship between the TR rank and the ranks of the tensor unfoldings is theoretically established. Using this new unfolding operation, we further propose an algorithm that exploits the low-TR-rank structure by performing parallel low-rank matrix factorizations on all circularly unfolded matrices. To tackle nonuniform missing patterns, we apply a row weighting trick to each circularly unfolded matrix, which significantly improves the adaptability to various types of missing patterns. Extensive experiments demonstrate that the proposed algorithm achieves outstanding performance using a much smaller TR rank than the conventional TR-based completion algorithms, while the computational cost is reduced substantially.
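
To illustrate what a balanced unfolding looks like, here is a hypothetical helper (the paper's exact index convention may differ): the modes are cyclically reordered so that d consecutive modes form the rows of the matricization, keeping row and column sizes balanced.

```python
import numpy as np

def circular_unfold(T, k, d):
    # Cyclically reorder the modes so that the d modes ending at mode k
    # index the rows; the remaining modes index the columns. This keeps
    # row and column sizes balanced, unlike the classical mode-k unfolding.
    N = T.ndim
    order = [(k - d + 1 + i) % N for i in range(N)]
    Tp = np.transpose(T, order)
    return Tp.reshape(int(np.prod(Tp.shape[:d])), -1)

T = np.arange(2 * 3 * 4 * 5).reshape(2, 3, 4, 5)
print(circular_unfold(T, k=2, d=2).shape)   # (12, 10): modes 1,2 vs 3,0
```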

16. Zhou Y, Cheung YM. Bayesian Low-Tubal-Rank Robust Tensor Factorization with Multi-Rank Determination. IEEE Transactions on Pattern Analysis and Machine Intelligence 2021; 43:62-76. [PMID: 31226066] [DOI: 10.1109/tpami.2019.2923240]
Abstract
Robust tensor factorization is a fundamental problem in machine learning and computer vision, which aims at decomposing tensors into low-rank and sparse components. However, existing methods either suffer from limited modeling power in preserving low-rank structures, or have difficulties in determining the target tensor rank and the trade-off between the low-rank and sparse components. To address these problems, we propose a fully Bayesian treatment of robust tensor factorization along with a generalized sparsity-inducing prior. By adapting the recently proposed low-tubal-rank model in a generative manner, our method is effective in preserving low-rank structures. Moreover, benefiting from the proposed prior and the Bayesian framework, the proposed method can automatically determine the tensor rank while inferring the trade-off between the low-rank and sparse components. For model estimation, we develop a variational inference algorithm, and further improve its efficiency by reformulating the variational updates in the frequency domain. Experimental results on both synthetic and real-world datasets demonstrate the effectiveness of the proposed method in multi-rank determination as well as its superiority in image denoising and background modeling over state-of-the-art approaches.

17. Phan AH, Cichocki A, Uschmajew A, Tichavsky P, Luta G, Mandic DP. Tensor Networks for Latent Variable Analysis: Novel Algorithms for Tensor Train Approximation. IEEE Transactions on Neural Networks and Learning Systems 2020; 31:4622-4636. [PMID: 32031950] [DOI: 10.1109/tnnls.2019.2956926]
Abstract
Decompositions of tensors into factor matrices, which interact through a core tensor, have found numerous applications in signal processing and machine learning. A more general tensor model that represents data as an ordered network of subtensors of order 2 or order 3 has, so far, not been widely considered in these fields, although this so-called tensor network (TN) decomposition has long been studied in quantum physics and scientific computing. In this article, we present novel algorithms and applications of TN decompositions, with a particular focus on the tensor train (TT) decomposition and its variants. The novel algorithms developed for the TT decomposition update, in an alternating way, one or several core tensors at each iteration and exhibit enhanced mathematical tractability and scalability for large-scale data tensors. For rigor, the cases of given ranks, a given approximation error, and a given error bound are all considered. The proposed algorithms provide well-balanced TT decompositions and are tested in the classic paradigms of blind source separation from a single mixture, denoising, and feature extraction, achieving superior performance over the widely used truncated algorithms for TT decomposition.
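
As background for the alternating core updates described above, the following is a compact sketch of the classic truncated TT-SVD, which produces a train of cores of order at most 3; this is the standard construction, not the paper's new algorithms.

```python
import numpy as np

def tt_svd(T, eps=1e-10):
    # Peel off one TT core at a time with truncated SVDs; every core has
    # order at most 3, matching the tensor-train format described above.
    shape, cores, r = T.shape, [], 1
    C = T.reshape(r * shape[0], -1)
    for n in range(len(shape) - 1):
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        rank = max(1, int(np.sum(s > eps * s[0])))
        cores.append(U[:, :rank].reshape(r, shape[n], rank))
        r = rank
        C = (s[:rank, None] * Vt[:rank]).reshape(r * shape[n + 1], -1)
    cores.append(C.reshape(r, shape[-1], 1))
    return cores

rng = np.random.default_rng(0)
T = rng.normal(size=(3, 4, 5, 6))
print([G.shape for G in tt_svd(T)])   # cores G_n of shape (r_{n-1}, I_n, r_n)
```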

18. Xue J, Zhao Y, Liao W, Chan JCW, Kong SG. Enhanced Sparsity Prior Model for Low-Rank Tensor Completion. IEEE Transactions on Neural Networks and Learning Systems 2020; 31:4567-4581. [PMID: 31880566] [DOI: 10.1109/tnnls.2019.2956153]
Abstract
Conventional tensor completion (TC) methods generally assume that the sparsity of tensor-valued data lies in a global subspace. This so-called global sparsity prior is measured by the tensor nuclear norm. Such an assumption is not reliable for recovering low-rank (LR) tensor data, especially when considerable elements of the data are missing. To mitigate this weakness, this article presents an enhanced sparsity prior model for low-rank tensor completion (LRTC) that uses both local and global sparsity information in a latent LR tensor. Specifically, we adopt a doubly weighted strategy for the nuclear norm along each mode to characterize the global sparsity prior of the tensor. Unlike traditional tensor-based local sparsity descriptions, the proposed factor gradient sparsity prior in the Tucker decomposition model describes the underlying subspace local smoothness in real-world tensor objects, simultaneously characterizing the local piecewise structure over all dimensions. Moreover, there is no need to minimize the rank of a tensor for the proposed local sparsity prior. Extensive experiments on synthetic data, real-world hyperspectral images, and face modeling data demonstrate that the proposed model outperforms state-of-the-art techniques in terms of prediction capability and efficiency.
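
A small sketch of the global ingredient, without the paper's double weighting (these are the plain per-mode quantities its weights act on): the nuclear norm of each mode-n matricization.

```python
import numpy as np

def mode_n_nuclear_norms(T):
    # Unfold along each mode and sum the singular values; per-mode weights
    # on these quantities yield a doubly weighted global prior.
    return [np.linalg.svd(np.moveaxis(T, n, 0).reshape(T.shape[n], -1),
                          compute_uv=False).sum()
            for n in range(T.ndim)]

T = np.random.default_rng(0).normal(size=(4, 5, 6))
print(mode_n_nuclear_norms(T))   # one value per mode
```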

19. Xu Y, Wu Z, Chanussot J, Wei Z. Hyperspectral Images Super-Resolution via Learning High-Order Coupled Tensor Ring Representation. IEEE Transactions on Neural Networks and Learning Systems 2020; 31:4747-4760. [PMID: 31902776] [DOI: 10.1109/tnnls.2019.2957527]
Abstract
Hyperspectral image (HSI) super-resolution is a hot topic in remote sensing and computer vision. Recently, tensor analysis has proven to be an efficient technology for HSI processing. However, existing tensor-based methods for HSI super-resolution are unable to capture the high-order correlations in an HSI. In this article, we propose to learn a high-order coupled tensor ring (TR) representation for HSI super-resolution. The proposed method first tensorizes the HSI to be estimated into a high-order tensor in which multiscale spatial structures and the original spectral structure are represented. Then, a coupled TR representation model is proposed to fuse the low-resolution HSI (LR-HSI) and the high-resolution multispectral image (HR-MSI). In the proposed model, some latent core tensors in the TRs of the LR-HSI and the HR-MSI are shared, and we use the relationship between the spectral core tensors to reconstruct the HSI. In addition, graph-Laplacian regularization is introduced on the spectral core tensors to preserve the spectral information. To enhance the robustness of the proposed model, Frobenius norm regularizations are introduced on the other core tensors. Experimental results on both synthetic and real datasets show that the proposed method achieves state-of-the-art super-resolution performance.

20. Phan AH, Cichocki A, Oseledets I, Calvi GG, Ahmadi-Asl S, Mandic DP. Tensor Networks for Latent Variable Analysis: Higher Order Canonical Polyadic Decomposition. IEEE Transactions on Neural Networks and Learning Systems 2020; 31:2174-2188. [PMID: 31449033] [DOI: 10.1109/tnnls.2019.2929063]
Abstract
The canonical polyadic decomposition (CPD) is a convenient and intuitive tool for tensor factorization; however, for higher order tensors it often exhibits high computational cost and permutation of tensor entries, and these undesirable effects grow exponentially with the tensor order. Prior compression of the tensor in hand can reduce the computational cost of CPD, but this is applicable only when the rank R of the decomposition does not exceed the tensor dimensions. To resolve these issues, we present a novel method for the CPD of higher order tensors, which rests upon a simple tensor network of representative interconnected core tensors of order no higher than 3. For rigor, we develop an exact conversion scheme from the core tensors to the factor matrices in the CPD and an iterative algorithm of low complexity to estimate these factor matrices for the inexact case. Comprehensive simulations over a variety of scenarios support the proposed approach.

21. Remote Sensing Image Denoising via Low-Rank Tensor Approximation and Robust Noise Modeling. Remote Sensing 2020. [DOI: 10.3390/rs12081278]
Abstract
Noise removal is a fundamental problem in remote sensing image processing. Most existing methods, however, have not yet attained sufficient robustness in practice, due to more or less neglecting the intrinsic structures of remote sensing images and/or underestimating the complexity of realistic noise. In this paper, we propose a new remote sensing image denoising method by integrating intrinsic image characterization and robust noise modeling. Specifically, we use low-Tucker-rank tensor approximation to capture the global multi-factor correlation within the underlying image, and adopt a non-identical and non-independent distributed mixture of Gaussians (non-i.i.d. MoG) assumption to encode the statistical configurations of the embedded noise. Then, we incorporate the proposed image and noise priors into a full Bayesian generative model and design an efficient variational Bayesian algorithm to infer all involved variables by closed-form equations. Moreover, adaptive strategies for the selection of hyperparameters are further developed to make our algorithm free from burdensome hyperparameter-tuning. Extensive experiments on both simulated and real multispectral/hyperspectral images demonstrate the superiority of the proposed method over the compared state-of-the-art ones.

22. Dian R, Li S, Fang L. Learning a Low Tensor-Train Rank Representation for Hyperspectral Image Super-Resolution. IEEE Transactions on Neural Networks and Learning Systems 2019; 30:2672-2683. [PMID: 30624229] [DOI: 10.1109/tnnls.2018.2885616]
Abstract
Hyperspectral images (HSIs) offer high spectral resolution but only low spatial resolution. In contrast, multispectral images (MSIs), with much lower spectral resolution, can be obtained at higher spatial resolution. Therefore, fusing a high-spatial-resolution MSI (HR-MSI) with a low-spatial-resolution HSI of the same scene has become a very popular HSI super-resolution scheme. In this paper, a novel low tensor-train (TT) rank (LTTR)-based HSI super-resolution method is proposed, in which an LTTR prior is designed to learn the correlations among the spatial, spectral, and nonlocal modes of the nonlocal similar high-spatial-resolution HSI (HR-HSI) cubes. First, we cluster the HR-MSI cubes into many groups based on their similarities, and the HR-HSI cubes are clustered according to the cluster structure learned from the HR-MSI cubes. The HR-HSI cubes in each group are highly similar to each other and can constitute a 4-D tensor whose four modes are highly correlated. Therefore, we impose the LTTR constraint on these 4-D tensors, which can effectively learn the correlations among the spatial, spectral, and nonlocal modes thanks to the well-balanced matricization scheme of the TT rank. We formulate the super-resolution problem as a TT-rank-regularized optimization problem, which is solved via the alternating direction method of multipliers. Experiments on HSI datasets indicate the effectiveness of the LTTR-based method.

23. Shi Q, Cheung YM, Zhao Q, Lu H. Feature Extraction for Incomplete Data Via Low-Rank Tensor Decomposition With Feature Regularization. IEEE Transactions on Neural Networks and Learning Systems 2019; 30:1803-1817. [PMID: 30371391] [DOI: 10.1109/tnnls.2018.2873655]
Abstract
Multidimensional data (i.e., tensors) with missing entries are common in practice. Extracting features from incomplete tensors is an important yet challenging problem in many fields such as machine learning, pattern recognition, and computer vision. Although the missing entries can be recovered by tensor completion techniques, these completion methods focus only on missing data estimation instead of effective feature extraction. To the best of our knowledge, the problem of feature extraction from incomplete tensors has yet to be well explored in the literature. In this paper, we therefore tackle this problem within the unsupervised learning environment. Specifically, we incorporate low-rank tensor decomposition with feature variance maximization (TDVM) in a unified framework. Based on orthogonal Tucker and CP decompositions, we design two TDVM methods, TDVM-Tucker and TDVM-CP, to learn low-dimensional features viewing the core tensors of the Tucker model as features and viewing the weight vectors of the CP model as features. TDVM explores the relationship among data samples via maximizing feature variance and simultaneously estimates the missing entries via low-rank Tucker/CP approximation, leading to informative features extracted directly from observed entries. Furthermore, we generalize the proposed methods by formulating a general model that incorporates feature regularization into low-rank tensor approximation. In addition, we develop a joint optimization scheme to solve the proposed methods by integrating the alternating direction method of multipliers with the block coordinate descent method. Finally, we evaluate our methods on six real-world image and video data sets under a newly designed multiblock missing setting. The extracted features are evaluated in face recognition, object/action classification, and face/gait clustering. Experimental results demonstrate the superior performance of the proposed methods compared with the state-of-the-art approaches.