1
Li Y, Tan J, Yang P, Zhou G, Zhao Q. Low-Rank, High-Order Tensor Completion via t-Product-Induced Tucker (tTucker) Decomposition. Neural Comput 2025; 37:1171-1192. [PMID: 40262741 DOI: 10.1162/neco_a_01756] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2024] [Accepted: 02/03/2025] [Indexed: 04/24/2025]
Abstract
Recently, tensor singular value decomposition (t-SVD)-based methods were proposed to solve the low-rank tensor completion (LRTC) problem, and they have achieved unprecedented success on image and video inpainting tasks. However, the t-SVD is limited to processing third-order tensors. When faced with higher-order tensors, it reshapes them into third-order tensors, destroying interdimensional correlations. To address this limitation, this letter introduces a t-product-induced Tucker decomposition (tTucker) model that replaces the mode product in Tucker decomposition with the t-product, jointly extending the ideas of t-SVD and high-order SVD. This letter defines the rank of the tTucker decomposition and presents an LRTC model that minimizes the induced Schatten-p norm. An efficient alternating direction method of multipliers (ADMM) algorithm is developed to optimize the proposed LRTC model, and its effectiveness is demonstrated through experiments on both synthetic and real data sets, showcasing excellent performance.
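As context for the t-product operation that the tTucker model builds on, here is a minimal NumPy sketch of the standard third-order t-product (circular convolution along the third mode, computed slice-wise in the Fourier domain); it is a generic illustration, not the authors' implementation.

```python
import numpy as np

def t_product(A, B):
    """t-product of A (n1 x n2 x n3) with B (n2 x n4 x n3):
    matrix products of frontal slices in the FFT domain along mode 3."""
    n3 = A.shape[2]
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.empty((A.shape[0], B.shape[1], n3), dtype=complex)
    for k in range(n3):
        Cf[:, :, k] = Af[:, :, k] @ Bf[:, :, k]   # slice-wise product
    return np.real(np.fft.ifft(Cf, axis=2))       # back to the original domain

A = np.random.rand(4, 3, 5)
B = np.random.rand(3, 2, 5)
C = t_product(A, B)   # shape (4, 2, 5)
```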
Affiliation(s)
- Yaodong Li
- School of Automation, Guangdong University of Technology, Guangzhou 510006, P.R.C.
- Key Laboratory of Intelligent Detection and the Internet of Things in Manufacturing, Ministry of Education, Guangzhou 510006, P.R.C.
- Jun Tan
- School of Automation, Guangdong University of Technology, Guangzhou 510006, P.R.C.
- Peilin Yang
- School of Automation, Guangdong University of Technology, Guangzhou 510006, P.R.C.
- Guoxu Zhou
- School of Automation, Guangdong University of Technology, Guangzhou 510006, P.R.C.
- Key Laboratory of Intelligent Detection and the Internet of Things in Manufacturing, Ministry of Education, Guangzhou 510006, P.R.C.
- Qibin Zhao
- RIKEN Center for Advanced Intelligence Project, Wako-shi 351-0198, Japan
- School of Automation, Guangdong University of Technology, Guangzhou 510006, P.R.C.
2
Yang Z, Yang LT, Wang H, Zhao H, Liu D. Bayesian Nonnegative Tensor Completion With Automatic Rank Determination. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2025; 34:2036-2051. [PMID: 40053614 DOI: 10.1109/tip.2024.3459647] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/09/2025]
Abstract
Nonnegative CANDECOMP/PARAFAC (CP) factorization of incomplete tensors is a powerful technique for finding meaningful and physically interpretable latent factor matrices to achieve nonnegative tensor completion. However, most existing nonnegative CP models rely on manually predefined tensor ranks, which introduces uncertainty and leads the models to overfit or underfit. Although CP models cast in a probabilistic framework can estimate the rank better, they lack the ability to learn nonnegative factors from incomplete data. In addition, existing approaches tend to focus on point estimation and ignore uncertainty estimation. To address these issues within a unified framework, we propose a fully Bayesian treatment of nonnegative tensor completion with automatic rank determination. Benefitting from the Bayesian framework and the hierarchical sparsity-inducing priors, the model can provide uncertainty estimates of nonnegative latent factors and effectively obtain low-rank structures from incomplete tensors. Additionally, the proposed model can mitigate problems of parameter selection and overfitting. For model learning, we develop two fully Bayesian inference methods for posterior estimation and propose a hybrid computing strategy that significantly reduces the time overhead for large-scale data. Extensive simulations on synthetic data demonstrate that our model can recover missing data with high precision and automatically estimate the CP rank from incomplete tensors. Moreover, results from real-world applications demonstrate that our model is superior to state-of-the-art methods in image and video inpainting. The code is available at https://github.com/zecanyang/BNTC.
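For orientation only, a small sketch of the rank-R CP reconstruction that underlies such models, with a masked residual as used in completion; the nonnegative factors and mask below are synthetic stand-ins, not the authors' Bayesian inference code.

```python
import numpy as np

def cp_reconstruct(U1, U2, U3):
    """Rebuild an order-3 tensor from CP factor matrices of shape (I_k, R)."""
    return np.einsum('ir,jr,kr->ijk', U1, U2, U3)

I, J, K, R = 10, 12, 8, 3
U1, U2, U3 = (np.abs(np.random.rand(d, R)) for d in (I, J, K))  # nonnegative factors
X_hat = cp_reconstruct(U1, U2, U3)

mask = np.random.rand(I, J, K) < 0.3           # Boolean pattern of observed entries
X_obs = X_hat + 0.01 * np.random.randn(I, J, K)
fit = np.linalg.norm((X_obs - X_hat)[mask])    # data fit on observed entries only
```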
3
Qiu Y, Zhou G, Wang A, Zhao Q, Xie S. Balanced Unfolding Induced Tensor Nuclear Norms for High-Order Tensor Completion. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2025; 36:4724-4737. [PMID: 38656849 DOI: 10.1109/tnnls.2024.3373384] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/26/2024]
Abstract
The recently proposed tensor tubal rank has achieved extraordinary success in real-world tensor data completion. However, existing works usually fix the transform orientation along the third mode and may fail to take multidimensional low-tubal-rank structure into account. To alleviate these bottlenecks, we introduce two unfolding-induced tensor nuclear norms (TNNs) for the tensor completion (TC) problem, which naturally extend the tensor tubal rank to high-order data. Specifically, we show how multidimensional low-tubal-rank structure can be captured by utilizing a novel balanced unfolding strategy, upon which two TNNs, namely, the overlapped TNN (OTNN) and the latent TNN (LTNN), are developed. We also show the immediate relationship between the tubal rank of an unfolding tensor and existing tensor network (TN) ranks, e.g., the CANDECOMP/PARAFAC (CP) rank, Tucker rank, and tensor ring (TR) rank, to demonstrate its efficiency and practicality. Two efficient TC models are then proposed with theoretical guarantees by analyzing a unified nonasymptotic upper bound. To solve the optimization problems, we develop two alternating direction method of multipliers (ADMM)-based algorithms. The proposed models have been demonstrated to exhibit superior performance based on experimental findings involving synthetic and real-world tensors, including facial images, light field images, and video sequences.
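As a reference point for the tubal-rank machinery these norms extend, a short sketch of the standard third-order tensor nuclear norm (sum of nuclear norms of FFT-domain frontal slices, normalized by n3); the balanced-unfolding norms proposed in the paper build on this but are not reproduced here.

```python
import numpy as np

def tensor_nuclear_norm(X):
    """Standard TNN of a third-order tensor: nuclear norms of the
    Fourier-domain frontal slices, averaged over the third mode."""
    Xf = np.fft.fft(X, axis=2)
    return sum(np.linalg.norm(Xf[:, :, k], ord='nuc')
               for k in range(X.shape[2])) / X.shape[2]

X = np.random.rand(6, 5, 4)
print(tensor_nuclear_norm(X))
```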
4
Sirpal P, Sikora WA, Refai HH. Multiscale neural dynamics in sleep transition volatility across age scales: a multimodal EEG-EMG-EOG analysis of temazepam effects. GeroScience 2025; 47:205-226. [PMID: 39276251 PMCID: PMC11872868 DOI: 10.1007/s11357-024-01342-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2024] [Accepted: 09/05/2024] [Indexed: 09/16/2024] Open
Abstract
Recent advances in computational modeling techniques have facilitated a more nuanced understanding of sleep neural dynamics across the lifespan. In this study, we tensorize multiscale multimodal electroencephalogram (EEG), electromyogram (EMG), and electrooculogram (EOG) signals and apply Generalized Autoregressive Conditional Heteroskedasticity (GARCH) modeling to quantify interactions between age scales and the use of pharmacological sleep aids on sleep stage transitions. Our cohort consists of 22 subjects in a crossover design study, where each subject received both a sleep aid and a placebo in different sessions. To understand these effects across the lifespan, three evenly distributed age groups were formed: 18-29, 30-49, and 50-66 years. The methodological framework implemented here employs tensor-based machine learning techniques to compute continuous wavelet transform time-frequency features and utilizes a GARCH model to quantify sleep signal volatility across age scales. Support Vector Machines are used for feature ranking, and our analysis captures interactions between signal entropy, age, and sleep aid status across frequency bands, sleep transitions, and sleep stages. GARCH model results reveal statistically significant volatility clustering in EEG, EMG, and EOG signals, particularly during transitions between REM and non-REM sleep. Notably, volatility was higher in the 50-66 age group compared to the 18-29 age group, with marked fluctuations during transitions from deep sleep to REM sleep (standard deviation of 0.35 in the older group vs. 0.30 in the 18-29 age group, p < 0.05). Statistical comparisons of volatility across frequency bands, age scales, and sleep stages highlight significant differences attributable to sleep aid use. Mean conditional volatility parameterization of the GARCH model reveals directional influences, with a causality index of 0.75 from frontal to occipital regions during REM sleep transition periods. Our methodological framework identifies distinct neural behavior patterns across age groups associated with each sleep stage and transition, offering insights into the development of targeted interventions for sleep regularity across the lifespan.
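To illustrate the volatility-modeling step only, a hedged sketch of fitting a GARCH(1,1) model to a derived feature series with the `arch` package; the series below is synthetic, whereas the study fits the model to wavelet-derived features of tensorized EEG/EMG/EOG signals.

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
feature_series = rng.standard_normal(500).cumsum()   # stand-in for a per-epoch EEG feature
returns = np.diff(feature_series)

# GARCH(1,1): h_t = omega + alpha * e_{t-1}^2 + beta * h_{t-1}
model = arch_model(returns, mean='Constant', vol='GARCH', p=1, q=1)
result = model.fit(disp='off')
print(result.params)                        # omega, alpha[1], beta[1]
volatility = result.conditional_volatility  # per-epoch conditional volatility estimate
```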
Affiliation(s)
- Parikshat Sirpal
- School of Electrical and Computer Engineering, University of Oklahoma, Gallogly College of Engineering, Norman, OK, 73019, USA.
- William A Sikora
- School of Biomedical Engineering, University of Oklahoma, Gallogly College of Engineering, Norman, OK, 73019, USA.
- Hazem H Refai
- School of Electrical and Computer Engineering, University of Oklahoma, Gallogly College of Engineering, Norman, OK, 73019, USA.
- School of Biomedical Engineering, University of Oklahoma, Gallogly College of Engineering, Norman, OK, 73019, USA.
5
Lv W, Zhang C, Li H, Jia X, Chen C. Joint Projection Learning and Tensor Decomposition-Based Incomplete Multiview Clustering. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2024; 35:17559-17570. [PMID: 37639411 DOI: 10.1109/tnnls.2023.3306006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/31/2023]
Abstract
Incomplete multiview clustering (IMVC) has received increasing attention since, in reality, some views of samples are often incomplete. Most existing methods learn similarity subgraphs from original incomplete multiview data and seek complete graphs by exploring the incomplete subgraphs of each view for spectral clustering. However, the graphs constructed on the original high-dimensional data may be suboptimal due to feature redundancy and noise. Besides, previous methods generally ignored the graph noise caused by interclass and intraclass structure variation during the transformation from incomplete graphs to complete graphs. To address these problems, we propose a novel joint projection learning and tensor decomposition (JPLTD)-based method for IMVC. Specifically, to alleviate the influence of redundant features and noise in high-dimensional data, JPLTD introduces an orthogonal projection matrix to project the high-dimensional features into a lower-dimensional space for compact feature learning. Meanwhile, based on the lower-dimensional space, the similarity graphs corresponding to instances of different views are learned, and JPLTD stacks these graphs into a third-order low-rank tensor to explore the high-order correlations across different views. We further consider the graph noise of projected data caused by missing samples and use a tensor-decomposition-based graph filter for robust clustering. JPLTD decomposes the original tensor into an intrinsic tensor and a sparse tensor. The intrinsic tensor models the true data similarities. An effective optimization algorithm is adopted to solve the JPLTD model. Comprehensive experiments on several benchmark datasets demonstrate that JPLTD outperforms the state-of-the-art methods. The code of JPLTD is available at https://github.com/weilvNJU/JPLTD.
6
Wu T, Fan J. Smooth Tensor Product for Tensor Completion. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2024; 33:6483-6496. [PMID: 39527431 DOI: 10.1109/tip.2024.3489272] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2024]
Abstract
Low-rank tensor completion (LRTC) has shown promise in processing incomplete visual data, yet it often overlooks the inherent local smooth structures in images and videos. Recent advances in LRTC, integrating total variation regularization to capitalize on the local smoothness, have yielded notable improvements. Nonetheless, these methods are limited to exploiting local smoothness within the original data space, neglecting the latent factor space of tensors. More seriously, there is a lack of theoretical backing for the role of local smoothness in enhancing recovery performance. In response, this paper introduces an innovative tensor completion model that concurrently leverages the global low-rank structure of the original tensor and the local smooth structure of its factor tensors. Our objective is to learn a low-rank tensor that decomposes into two factor tensors, each exhibiting sufficient local smoothness. We propose an efficient alternating direction method of multipliers to optimize our model. Further, we establish generalization error bounds for smooth factor-based tensor completion methods across various decomposition frameworks. These bounds are significantly tighter than existing baselines. We conduct extensive inpainting experiments on color images, multispectral images, and videos, which demonstrate the efficacy and superiority of our method. Additionally, our approach shows a low sensitivity to hyper-parameter settings, enhancing its convenience and reliability for practical applications.
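A minimal sketch of the kind of first-order-difference smoothness penalty that can be placed on a factor (here a plain matrix factor, with the difference taken along its rows); this is a generic illustration of factor smoothness under assumed shapes, not the paper's regularizer or its error bounds.

```python
import numpy as np

def smoothness_penalty(F):
    """Squared l2 norm of row-wise first-order differences of a factor F:
    small values mean adjacent rows (e.g., adjacent pixels) vary smoothly."""
    return np.sum(np.diff(F, axis=0) ** 2)

F_smooth = np.cumsum(0.01 * np.random.randn(100, 5), axis=0)  # slowly varying factor
F_rough = np.random.randn(100, 5)
print(smoothness_penalty(F_smooth), smoothness_penalty(F_rough))
```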
7
He C, Xu Y, Wu Z, Zheng S, Wei Z. Multi-Dimensional Visual Data Restoration: Uncovering the Global Discrepancy in Transformed High-Order Tensor Singular Values. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2024; 33:6409-6424. [PMID: 39495680 DOI: 10.1109/tip.2024.3475738] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/06/2024]
Abstract
The recently proposed high-order tensor algebraic framework generalizes the tensor singular value decomposition (t-SVD) induced by the invertible linear transform from order-3 to order-d. However, the derived order-d t-SVD rank essentially ignores the implicit global discrepancy in the quantity distribution of non-zero transformed high-order singular values across the higher modes of tensors. This oversight leads to suboptimal restoration in processing real-world multi-dimensional visual datasets. To address this challenge, in this study, we look in-depth at the intrinsic properties of practical visual data tensors, and put our efforts into faithfully measuring their high-order low-rank nature. Technically, we first present a novel order-d tensor rank definition. This rank function effectively captures the aforementioned discrepancy property observed in real visual data tensors and is thus called the discrepant t-SVD rank. Subsequently, we introduce a nonconvex regularizer to facilitate the construction of the corresponding discrepant t-SVD rank minimization regime. The results show that the investigated low-rank approximation has a closed-form solution and avoids dilemmas caused by the previous convex optimization approach. Based on this new regime, we meticulously develop two models for typical restoration tasks: high-order tensor completion and high-order tensor robust principal component analysis. Numerical examples on order-4 hyperspectral videos, order-4 color videos, and order-5 light field images substantiate that our methods outperform state-of-the-art tensor-represented competitors. Finally, taking a fundamental order-3 hyperspectral tensor restoration task as an example, we further demonstrate the effectiveness of our new rank minimization regime for more practical applications. The source codes of the proposed methods are available at https://github.com/CX-He/DTSVD.git.
8
He Y, Atia GK. Robust Low-Tubal-Rank Tensor Completion Based on Tensor Factorization and Maximum Correntropy Criterion. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2024; 35:14603-14617. [PMID: 37279124 DOI: 10.1109/tnnls.2023.3280086] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
The goal of tensor completion is to recover a tensor from a subset of its entries, often by exploiting its low-rank property. Among several useful definitions of tensor rank, the low tubal rank was shown to give a valuable characterization of the inherent low-rank structure of a tensor. While some low-tubal-rank tensor completion algorithms with favorable performance have been recently proposed, these algorithms utilize second-order statistics to measure the error residual, which may not work well when the observed entries contain large outliers. In this article, we propose a new objective function for low-tubal-rank tensor completion, which uses correntropy as the error measure to mitigate the effect of the outliers. To efficiently optimize the proposed objective, we leverage a half-quadratic minimization technique whereby the optimization is transformed to a weighted low-tubal-rank tensor factorization problem. Subsequently, we propose two simple and efficient algorithms to obtain the solution and provide their convergence and complexity analysis. Numerical results using both synthetic and real data demonstrate the robust and superior performance of the proposed algorithms.
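A brief sketch of the correntropy-induced reweighting that half-quadratic minimization produces: residuals are weighted by a Gaussian kernel, so gross outliers contribute almost nothing. The kernel width and residuals below are illustrative, not taken from the paper.

```python
import numpy as np

def correntropy_weights(residual, sigma=1.0):
    """Half-quadratic weights from the correntropy (Welsch) loss:
    w = exp(-r^2 / (2 sigma^2)); large residuals get near-zero weight."""
    return np.exp(-residual ** 2 / (2.0 * sigma ** 2))

r = np.array([0.1, -0.2, 5.0, 0.05, 8.0])   # two gross outliers
print(np.round(correntropy_weights(r, sigma=0.5), 4))
```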
9
Sedighin F. Tensor Methods in Biomedical Image Analysis. JOURNAL OF MEDICAL SIGNALS & SENSORS 2024; 14:16. [PMID: 39100745 PMCID: PMC11296571 DOI: 10.4103/jmss.jmss_55_23] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2023] [Revised: 12/20/2023] [Accepted: 12/28/2023] [Indexed: 08/06/2024]
Abstract
In the past decade, tensors have become increasingly attractive in different areas of signal and image processing. The main reason is the inefficiency of matrices in representing and analyzing multimodal and multidimensional datasets. Matrices cannot preserve the multidimensional correlation of elements in higher-order datasets, and this greatly reduces the effectiveness of matrix-based approaches in analyzing multidimensional datasets. Tensor-based approaches, by contrast, have demonstrated promising performance. Together, these factors have encouraged researchers to move from matrices to tensors. Among different signal and image processing applications, analyzing biomedical signals and images is of particular importance. This is due to the need to extract accurate information from biomedical datasets, which directly affects patients' health. In addition, in many cases several datasets are recorded simultaneously from a patient. A common example is recording the electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) of a patient with schizophrenia. In such a situation, tensors seem to be among the most effective methods for the simultaneous exploitation of two (or more) datasets. Therefore, several tensor-based methods have been developed for analyzing biomedical datasets. Considering this reality, in this paper we aim to provide a comprehensive review of tensor-based methods in biomedical image analysis. The presented study and classification of different methods and applications show the importance of tensors in biomedical image enhancement and open new ways for future studies.
Affiliation(s)
- Farnaz Sedighin
- Medical Image and Signal Processing Research Center, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
10
Wu Y, Jin Y. Efficient enhancement of low-rank tensor completion via thin QR decomposition. Front Big Data 2024; 7:1382144. [PMID: 39015435 PMCID: PMC11250652 DOI: 10.3389/fdata.2024.1382144] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2024] [Accepted: 04/30/2024] [Indexed: 07/18/2024] Open
Abstract
Low-rank tensor completion (LRTC), which aims to recover the missing entries of a partially observed tensor by utilizing its low-rank structure, has been widely used in various real-world problems. The core tensor nuclear norm minimization (CTNM) method based on Tucker decomposition is one of the common LRTC methods. However, CTNM methods based on Tucker decomposition often have a large computing cost because the general factor-matrix solving technique involves multiple singular value decompositions (SVDs) in each iteration. To address this problem, this article enhances the method and proposes an effective CTNM method based on thin QR decomposition (CTNM-QR) with lower computational complexity. The proposed method extends CTNM by introducing tensor versions of the auxiliary variables instead of matrices, while using the thin QR decomposition to solve the factor matrices rather than the SVD, which reduces the computational complexity and improves the tensor completion accuracy. In addition, the convergence and complexity of the CTNM-QR method are analyzed further. Numerous experiments on synthetic data, real color images, and brain MRI data at different missing rates demonstrate that the proposed method not only outperforms most state-of-the-art LRTC methods in completion accuracy and visual quality, but also runs more efficiently.
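For reference, the thin (reduced) QR factorization that replaces the per-iteration SVDs is a single NumPy call; the tall factor below is an illustrative stand-in for an intermediate matrix in such an algorithm, not the authors' code.

```python
import numpy as np

A = np.random.rand(500, 20)               # tall intermediate factor (I_k x r)
Q, R = np.linalg.qr(A, mode='reduced')    # thin QR: Q is 500 x 20, R is 20 x 20
print(np.allclose(Q.T @ Q, np.eye(20)))   # orthonormal columns without an SVD
```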
Affiliation(s)
- Yunzhi Jin
- Yunnan Key Laboratory of Statistical Modeling and Data Analysis, Yunnan University, Kunming, China
11
Xie M, Liu X, Yang X, Cai W. Multichannel Image Completion With Mixture Noise: Adaptive Sparse Low-Rank Tensor Subspace Meets Nonlocal Self-Similarity. IEEE TRANSACTIONS ON CYBERNETICS 2023; 53:7521-7534. [PMID: 35580099 DOI: 10.1109/tcyb.2022.3169800] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Multichannel image completion with mixture noise is a common but complex problem in the fields of machine learning, image processing, and computer vision. Most existing algorithms are devoted to exploring global low-rank information and fail to optimize local and joint-mode structures, which may lead to oversmoothed restoration results or lower-quality restored details. In this study, we propose a novel model for multichannel image completion with mixture noise based on an adaptive sparse low-rank tensor subspace and nonlocal self-similarity (ASLTS-NS). In the proposed model, a nonlocal similar patch matching framework cooperating with Tucker decomposition is used to explore the information of global and joint modes and to optimize the local structure for improving restoration quality. In order to enhance the robustness of low-rank decomposition to data missing and mixture noise, we present an adaptive sparse low-rank regularization to construct a robust tensor subspace for self-weighting the importance of different modes and capturing a stable inherent structure. In addition, joint tensor Frobenius and l1 regularizations are exploited to control two different types of noise. Based on the alternating direction method of multipliers (ADMM), a convergent learning algorithm is designed to solve this model. Experimental results on three different types of multichannel image sets demonstrate the advantages of ASLTS-NS under five complex scenarios.
12
Yu J, Zhou G, Sun W, Xie S. Robust to Rank Selection: Low-Rank Sparse Tensor-Ring Completion. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2023; 34:2451-2465. [PMID: 34478384 DOI: 10.1109/tnnls.2021.3106654] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Tensor-ring (TR) decomposition was recently studied and applied for low-rank tensor completion due to its powerful representation ability of high-order tensors. However, most of the existing TR-based methods tend to suffer from deterioration when the selected rank is larger than the true one. To address this issue, this article proposes a new low-rank sparse TR completion method by imposing the Frobenius norm regularization on its latent space. Specifically, we theoretically establish that the proposed method is capable of exploiting the low rankness and Kronecker-basis-representation (KBR)-based sparsity of the target tensor using the Frobenius norm of latent TR-cores. We optimize the proposed TR completion by block coordinate descent (BCD) algorithm and design a modified TR decomposition for the initialization of this algorithm. Extensive experimental results on synthetic data and visual data have demonstrated that the proposed method is able to achieve better results compared to the conventional TR-based completion methods and other state-of-the-art methods and, meanwhile, is quite robust even if the selected TR-rank increases.
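To make the TR format concrete, a small sketch that contracts tensor-ring cores G_k of shape (r_k, n_k, r_{k+1}), with r_{d+1} = r_1, back into the full tensor; the core shapes are illustrative and this is not the paper's completion algorithm.

```python
import numpy as np

def tr_to_full(cores):
    """Contract TR cores G_k of shape (r_k, n_k, r_{k+1}) into the full tensor."""
    full = cores[0]                                     # (r1, n1, r2)
    for G in cores[1:]:
        full = np.tensordot(full, G, axes=([-1], [0]))  # chain along rank indices
    return np.trace(full, axis1=0, axis2=-1)            # close the ring: r_{d+1} = r1

cores = [np.random.rand(3, 4, 2), np.random.rand(2, 5, 4), np.random.rand(4, 6, 3)]
print(tr_to_full(cores).shape)   # (4, 5, 6)
```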
13
Deng L, Xiao M. A New Automatic Hyperparameter Recommendation Approach Under Low-Rank Tensor Completion Framework. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2023; 45:4038-4050. [PMID: 35914040 DOI: 10.1109/tpami.2022.3195658] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Hyperparameter optimization (HPO), characterized by hyperparameter tuning, is not only a critical step for effective modeling but also the most time-consuming process in machine learning. Traditional search-based algorithms tend to require extensive configuration evaluations in each round to select the desirable hyperparameters, and they are often very inefficient when implemented on large-scale tasks. In this paper, we study the HPO problem via a meta-learning (MtL) approach under the low-rank tensor completion (LRTC) framework. Our proposed approach predicts the performance of hyperparameters on new problems from their performance on previous ones, so that suitable hyperparameters can be attained with better efficiency. Different from existing approaches, the hyperparameter performance space is instantiated under a tensor framework that can preserve the spatial structure and reflect the correlations among adjacent hyperparameters. When some partial evaluations are available for a new problem, the task of estimating the performance of the unevaluated hyperparameters can be formulated as a tensor completion (TC) problem. Toward the completion purpose, we develop an LRTC algorithm utilizing the sum of nuclear norm (SNN) model. A kernelized version is further developed to capture the nonlinear structure of the performance space. In addition, a corresponding coupled matrix factorization (CMF) algorithm is established to make the predictions depend solely on the meta-features, avoiding additional hyperparameter evaluations. Finally, a strategy integrating LRTC and CMF is provided to further enhance the recommendation capacity. We test the recommendation performance of our proposed methods for classical SVMs and state-of-the-art deep neural networks such as the vision transformer (ViT) and residual network (ResNet), and the obtained results demonstrate the effectiveness of our approaches under various evaluation metrics in comparison with baselines commonly used for MtL.
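As a pointer to the SNN model mentioned above, a compact sketch of the sum-of-nuclear-norms value of a tensor, computed from its mode-k unfoldings; the weights and shapes are illustrative, and the kernelized and CMF variants are not shown.

```python
import numpy as np

def sum_nuclear_norm(X, weights=None):
    """SNN surrogate for Tucker rank: weighted sum of nuclear norms
    of the mode-k unfoldings of X."""
    d = X.ndim
    weights = [1.0] * d if weights is None else weights
    total = 0.0
    for k in range(d):
        Xk = np.moveaxis(X, k, 0).reshape(X.shape[k], -1)   # mode-k unfolding
        total += weights[k] * np.linalg.norm(Xk, ord='nuc')
    return total

X = np.random.rand(8, 9, 10)
print(sum_nuclear_norm(X))
```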
14
A general multi-factor norm based low-rank tensor completion framework. APPL INTELL 2023. [DOI: 10.1007/s10489-023-04477-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/06/2023]
15
Liu X, Tang G. Color Image Restoration Using Sub-Image Based Low-Rank Tensor Completion. SENSORS (BASEL, SWITZERLAND) 2023; 23:1706. [PMID: 36772745 PMCID: PMC9919421 DOI: 10.3390/s23031706] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/10/2022] [Revised: 01/18/2023] [Accepted: 01/19/2023] [Indexed: 06/18/2023]
Abstract
Many restoration methods use the low-rank constraint of high-dimensional image signals to recover corrupted images. These signals are usually represented by tensors, which can maintain their inherent relevance. An image under this simple tensor representation has a certain low-rank property, but not a strong one. In order to enhance the low-rank property, we propose a novel method called sub-image based low-rank tensor completion (SLRTC) for image restoration. We first sample a color image to obtain sub-images, and adopt these sub-images instead of the original single image to form a tensor. Then we conduct a mode permutation on this tensor. Next, we exploit the tensor nuclear norm defined based on the tensor singular value decomposition (t-SVD) to build the low-rank completion model. Finally, we perform the tensor singular value thresholding (t-SVT)-based standard alternating direction method of multipliers (ADMM) algorithm to solve the aforementioned model. Experimental results have shown that compared with state-of-the-art tensor completion techniques, the proposed method can provide superior results in terms of objective and subjective assessment.
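A hedged sketch of the t-SVT proximal operator used inside such ADMM schemes: soft-threshold the singular values of each Fourier-domain frontal slice and transform back; the threshold value is illustrative.

```python
import numpy as np

def t_svt(X, tau):
    """Tensor singular value thresholding of a third-order tensor."""
    Xf = np.fft.fft(X, axis=2)
    Yf = np.empty_like(Xf)
    for k in range(X.shape[2]):
        U, s, Vh = np.linalg.svd(Xf[:, :, k], full_matrices=False)
        Yf[:, :, k] = (U * np.maximum(s - tau, 0.0)) @ Vh   # soft-threshold singular values
    return np.real(np.fft.ifft(Yf, axis=2))

X = np.random.rand(8, 8, 4)
X_low = t_svt(X, tau=0.5)
```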
Affiliation(s)
- Xiaohua Liu
- College of Electronic and Optical Engineering & College of Flexible Electronics (Future Technology), Nanjing University of Posts and Telecommunications, Nanjing 210023, China
- Guijin Tang
- Jiangsu Key Laboratory of Image Processing and Image Communication, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
16
Zhang H, Zhao XL, Jiang TX, Ng MK, Huang TZ. Multiscale Feature Tensor Train Rank Minimization for Multidimensional Image Recovery. IEEE TRANSACTIONS ON CYBERNETICS 2022; 52:13395-13410. [PMID: 34543216 DOI: 10.1109/tcyb.2021.3108847] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
General tensor-based methods can recover the missing values of multidimensional images by exploiting low-rankness at the pixel level. However, especially when a considerable number of pixels of an image are missing, the low-rankness is not reliable at the pixel level, resulting in some details being lost in the results, which hinders the performance of subsequent image applications (e.g., image recognition and segmentation). In this article, we suggest a novel multiscale feature (MSF) tensorization by exploiting the MSFs of multidimensional images, which not only helps to recover the missing values on a higher level, that is, the feature level, but also benefits subsequent image applications. By exploiting the low-rankness of the resulting MSF tensor constructed by the new tensorization, we propose the convex and nonconvex MSF tensor train rank minimization (MSF-TT) to conjointly recover the MSF tensor and the corresponding original tensor in a unified framework. We develop the alternating direction method of multipliers (ADMM) to solve the convex MSF-TT and the proximal alternating minimization (PAM) to solve the nonconvex MSF-TT. Moreover, we establish the theoretical guarantee of convergence for the PAM algorithm. Numerical examples of real-world multidimensional images show that the proposed MSF-TT outperforms other compared approaches in image recovery and that the recovered MSF tensor can benefit subsequent image recognition.
17
Wei P, Wang X, Wei Y. Neural Network Models for Time-Varying Tensor Complementarity Problems. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.12.008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
18
Guo J, Sun Y, Gao J, Hu Y, Yin B. Multi-Attribute Subspace Clustering via Auto-Weighted Tensor Nuclear Norm Minimization. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2022; 31:7191-7205. [PMID: 36355733 DOI: 10.1109/tip.2022.3220949] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Self-expressiveness-based subspace clustering methods have received wide attention for unsupervised learning tasks. However, most existing subspace clustering methods consider the data features as a whole and focus only on a single self-representation. These approaches ignore the intrinsic multi-attribute information embedded in the original data features and result in a one-attribute self-representation. This paper proposes a novel multi-attribute subspace clustering (MASC) model that understands data from multiple attributes. MASC simultaneously learns multiple subspace representations corresponding to each specific attribute by exploiting the intrinsic multi-attribute features drawn from the original data. In order to better capture the high-order correlation among multi-attribute representations, we represent them as a tensor with low-rank structure and propose the auto-weighted tensor nuclear norm (AWTNN) as a superior low-rank tensor approximation. In particular, the non-convex AWTNN fully considers the difference between singular values through the implicit and adaptive weight splitting during the AWTNN optimization procedure. We further develop an efficient algorithm to optimize the non-convex and multi-block MASC model and establish convergence guarantees. A more comprehensive subspace representation can be obtained by aggregating these multi-attribute representations, which can be used to construct a clustering-friendly affinity matrix. Extensive experiments on eight real-world databases reveal that the proposed MASC exhibits superior performance over other subspace clustering methods.
19
Shi Q, Cheung YM, Lou J. Robust Tensor SVD and Recovery With Rank Estimation. IEEE TRANSACTIONS ON CYBERNETICS 2022; 52:10667-10682. [PMID: 33872172 DOI: 10.1109/tcyb.2021.3067676] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Tensor singular value decomposition (t-SVD) has recently become increasingly popular for tensor recovery under partial and/or corrupted observations. However, the existing t-SVD-based methods neither make use of a rank prior nor provide an accurate rank estimation (RE), which would limit their recovery performance. From the practical perspective, the tensor RE problem is nontrivial and difficult to solve. In this article, we, therefore, aim to determine the correct rank of an intrinsic low-rank tensor from corrupted observations based on t-SVD and further improve recovery results with the estimated rank. Specifically, we first induce the equivalence of the tensor nuclear norm (TNN) of a tensor and its f-diagonal tensor. We then simultaneously minimize the reconstruction error and TNN of the f-diagonal tensor, leading to RE. Subsequently, we relax our model by removing the TNN regularizer to improve the recovery performance. Furthermore, we consider more general cases in the presence of missing data and/or gross corruptions by proposing robust tensor principal component analysis and robust tensor completion with RE. The robust methods can achieve successful recovery by refining the models with correct estimated ranks. Experimental results show that the proposed methods outperform the state-of-the-art methods with significant improvements.
20
Hou J, Zhang F, Qiu H, Wang J, Wang Y, Meng D. Robust Low-Tubal-Rank Tensor Recovery From Binary Measurements. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2022; 44:4355-4373. [PMID: 33656988 DOI: 10.1109/tpami.2021.3063527] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Low-rank tensor recovery (LRTR) is a natural extension of low-rank matrix recovery (LRMR) to high-dimensional arrays, which aims to reconstruct an underlying tensor X from incomplete linear measurements [Formula: see text]. However, LRTR ignores the error caused by quantization, limiting its application when the quantization is low-level. In this work, we take into account the impact of extreme quantization and suppose the quantizer degrades into a comparator that only acquires the signs of [Formula: see text]. We still hope to recover X from these binary measurements. Under the tensor Singular Value Decomposition (t-SVD) framework, two recovery methods are proposed: the first is a tensor hard singular tube thresholding method; the second is a constrained tensor nuclear norm minimization method. These methods can recover a real n1×n2×n3 tensor X with tubal rank r from m random Gaussian binary measurements with errors decaying at a polynomial speed of the oversampling factor λ:=m/((n1+n2)n3r). To improve the convergence rate, we develop a new quantization scheme under which the convergence rate can be accelerated to an exponential function of λ. Numerical experiments verify our results, and the applications to real-world data demonstrate the promising performance of the proposed methods.
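To fix ideas about the measurement model, a tiny sketch that builds a low-tubal-rank tensor via a t-product and keeps only the signs of random Gaussian measurements; the dimensions are illustrative, and the recovery step itself is the subject of the paper.

```python
import numpy as np

n1, n2, n3, r, m = 10, 10, 5, 2, 400
A = np.random.randn(n1, r, n3)                     # thin tensors whose t-product
B = np.random.randn(r, n2, n3)                     # has tubal rank at most r
Af, Bf = np.fft.fft(A, axis=2), np.fft.fft(B, axis=2)
Xf = np.stack([Af[:, :, k] @ Bf[:, :, k] for k in range(n3)], axis=2)
X = np.real(np.fft.ifft(Xf, axis=2))

sensing = np.random.randn(m, n1, n2, n3)           # m Gaussian sensing tensors
y = np.sign(np.tensordot(sensing, X, axes=([1, 2, 3], [0, 1, 2])))  # one-bit measurements
```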
21
Tian X, Xie K, Zhang H. A Low-Rank Tensor Decomposition Model With Factors Prior and Total Variation for Impulsive Noise Removal. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2022; 31:4776-4789. [PMID: 35482697 DOI: 10.1109/tip.2022.3169694] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Image restoration is a long-standing problem in signal processing and low-level computer vision. Previous studies have shown that imposing a low-rank Tucker decomposition (TKD) constraint can produce impressive performance. However, TKD-based schemes may suffer from overfitting or underfitting because of incorrectly predefined ranks. To address this issue, we prove that the n-rank is upper bounded by the rank of each Tucker factor matrix. Using this relationship, we propose a formulation that imposes nuclear norm regularization on the latent factors of the TKD, which avoids the burden of rank selection and reduces the computational cost when dealing with large-scale tensors. In this formulation, we adopt the Minimax Concave Penalty to remove the impulsive noise instead of the l1-norm, which may deviate from both the data-acquisition model and the prior model. Moreover, we employ an anisotropic total variation regularization to explore the piecewise smooth structure in both the spatial and spectral domains. To solve this problem, we design a symmetric Gauss-Seidel (sGS) based alternating direction method of multipliers (ADMM) algorithm. Compared to the directly extended ADMM, our algorithm can achieve higher accuracy since more structural information is utilized. Finally, we conduct experiments on three kinds of datasets; the numerical results demonstrate the superiority of the proposed method. In particular, the average PSNR of the proposed method improves by about 1-5 dB at each noise level for color images.
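For clarity on the robust loss, a short sketch of the Minimax Concave Penalty, which grows like lam*|x| near zero but flattens to a constant, so large (likely impulsive) errors are not over-penalized the way they are under the l1 norm; parameter values are illustrative.

```python
import numpy as np

def mcp(x, lam=1.0, gamma=3.0):
    """Minimax Concave Penalty evaluated elementwise."""
    ax = np.abs(x)
    inner = lam * ax - ax ** 2 / (2.0 * gamma)   # region |x| <= gamma * lam
    outer = 0.5 * gamma * lam ** 2               # constant region |x| > gamma * lam
    return np.where(ax <= gamma * lam, inner, outer)

x = np.linspace(-6, 6, 7)
print(mcp(x))        # flattens for large |x|, unlike lam * |x|
```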
22
Luo YS, Zhao XL, Jiang TX, Chang Y, Ng MK, Li C. Self-Supervised Nonlinear Transform-Based Tensor Nuclear Norm for Multi-Dimensional Image Recovery. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2022; 31:3793-3808. [PMID: 35609097 DOI: 10.1109/tip.2022.3176220] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Recently, transform-based tensor nuclear norm (TNN) minimization methods have received increasing attention for recovering third-order tensors in multi-dimensional imaging problems. The main idea of these methods is to perform the linear transform along the third mode of third-order tensors and then minimize the nuclear norm of frontal slices of the transformed tensor. The main aim of this paper is to propose a nonlinear multilayer neural network to learn a nonlinear transform by solely using the observed tensor in a self-supervised manner. The proposed network makes use of the low-rank representation of the transformed tensor and data-fitting between the observed tensor and the reconstructed tensor to learn the nonlinear transform. Extensive experimental results on different data and different tasks including tensor completion, background subtraction, robust tensor completion, and snapshot compressive imaging demonstrate the superior performance of the proposed method over state-of-the-art methods.
23
Xie M, Liu X, Yang X. A Nonlocal Self-Similarity-Based Weighted Tensor Low-Rank Decomposition for Multichannel Image Completion With Mixture Noise. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2022; PP:73-87. [PMID: 35544496 DOI: 10.1109/tnnls.2022.3172184] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Multichannel image completion with mixture noise is a challenging problem in the fields of machine learning, computer vision, image processing, and data mining. Traditional image completion models are not appropriate to deal with this problem directly since their reconstruction priors may mismatch corruption priors. To address this issue, we propose a novel nonlocal self-similarity-based weighted tensor low-rank decomposition (NSWTLD) model that can achieve global optimization and local enhancement. In the proposed model, based on the corruption priors and the reconstruction priors, a pixel weighting strategy is given to characterize the joint effects of missing data, the Gaussian noise, and the impulse noise. To discover and utilize the accurate nonlocal self-similarity information to enhance the restoration quality of the details, the traditional nonlocal learning framework is optimized by employing improved index determination of patch group and handling strip noise caused by patch overlapping. In addition, an efficient and convergent algorithm is presented to solve the NSWTLD model. Comprehensive experiments are conducted on four types of multichannel images under various corruption scenarios. The results demonstrate the efficiency and effectiveness of the proposed model.
24
Yang L, Miao J, Kou KI. Quaternion-based color image completion via logarithmic approximation. Inf Sci (N Y) 2022. [DOI: 10.1016/j.ins.2021.12.055] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
25
Qin W, Wang H, Zhang F, Wang J, Luo X, Huang T. Low-Rank High-Order Tensor Completion With Applications in Visual Data. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2022; 31:2433-2448. [PMID: 35259105 DOI: 10.1109/tip.2022.3155949] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Recently, tensor Singular Value Decomposition (t-SVD)-based low-rank tensor completion (LRTC) has achieved unprecedented success in addressing various pattern analysis issues. However, existing studies mostly focus on third-order tensors, while order-d (d ≥ 4) tensors are commonly encountered in real-world applications, like fourth-order color videos, fourth-order hyperspectral videos, fifth-order light-field images, and sixth-order bidirectional texture functions. Aiming at addressing this critical issue, this paper establishes an order-d tensor recovery framework, including the model, algorithm, and theory, by innovatively developing a novel algebraic foundation for the order-d t-SVD, thereby achieving exact completion for any order-d tensor of low t-SVD rank with missing values with overwhelming probability. Empirical studies on synthetic data and real-world visual data illustrate that, compared with other state-of-the-art recovery frameworks, the proposed one achieves highly competitive performance in terms of both qualitative and quantitative metrics. In particular, as the observed data density becomes low, i.e., about 10%, the proposed recovery framework is still significantly better than its peers. The code of our algorithm is released at https://github.com/Qinwenjinswu/TIP-Code.
26
Li Z, Tang C, Zheng X, Liu X, Zhang W, Zhu E. High-Order Correlation Preserved Incomplete Multi-View Subspace Clustering. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2022; 31:2067-2080. [PMID: 35188891 DOI: 10.1109/tip.2022.3147046] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Incomplete multi-view clustering aims to exploit the information of multiple incomplete views to partition data into their clusters. Existing methods only utilize the pair-wise sample correlation and pair-wise view correlation to improve the clustering performance but neglect the high-order correlation of samples and that of views. To address this issue, we propose a high-order correlation preserved incomplete multi-view subspace clustering (HCP-IMSC) method which effectively recovers the missing views of samples and the subspace structure of incomplete multi-view data. Specifically, multiple affinity matrices constructed from the incomplete multi-view data are treated as a third-order low rank tensor with a tensor factorization regularization which preserves the high-order view correlation and sample correlation. Then, a unified affinity matrix can be obtained by fusing the view-specific affinity matrices in a self-weighted manner. A hypergraph is further constructed from the unified affinity matrix to preserve the high-order geometrical structure of the data with incomplete views. Then, the samples with missing views are restricted to be reconstructed by their neighbor samples under the hypergraph-induced hyper-Laplacian regularization. Furthermore, the learning of view-specific affinity matrices as well as the unified one, tensor factorization, and hyper-Laplacian regularization are integrated into a unified optimization framework. An iterative algorithm is designed to solve the resultant model. Experimental results on various benchmark datasets indicate the superiority of the proposed method. The code is implemented by using MATLAB R2018a and MindSpore library: https://github.com/ChangTang/HCP-IMSC.
27
Miao J, Kou KI. Color Image Recovery Using Low-Rank Quaternion Matrix Completion Algorithm. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2021; 31:190-201. [PMID: 34807825 DOI: 10.1109/tip.2021.3128321] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
As a new color image representation tool, quaternion has achieved excellent results in color image processing problems. In this paper, we propose a novel low-rank quaternion matrix completion algorithm to recover missing data of a color image. Motivated by two kinds of low-rank approximation approaches (low-rank decomposition and nuclear norm minimization) in traditional matrix-based methods, we combine the two approaches in our quaternion matrix-based model. Furthermore, the nuclear norm of the quaternion matrix is replaced by the sum of the Frobenius norm of its two low-rank factor quaternion matrices. Based on the relationship between the quaternion matrix and its equivalent complex matrix, the problem eventually is converted from the quaternion number domain to the complex number domain. An alternating minimization method is applied to solve the model. Simulation results on color image recovery show the superior performance and efficiency of the proposed algorithm over some tensor-based and quaternion-based ones.
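A hedged sketch of the quaternion-to-complex bridge such methods rely on: a quaternion matrix Q = A + B j, with A and B complex, is represented by its complex adjoint matrix so that ordinary complex linear algebra can be reused; the pure-quaternion pixel encoding below is illustrative, not the authors' code.

```python
import numpy as np

def complex_adjoint(A, B):
    """Complex adjoint of the quaternion matrix Q = A + B*j,
    where A = Q0 + Q1*i and B = Q2 + Q3*i are complex matrices."""
    return np.block([[A, B],
                     [-np.conj(B), np.conj(A)]])

R, G, Bc = (np.random.rand(4, 4) for _ in range(3))  # RGB channels as imaginary parts
A = 0 + 1j * R           # real part zero for pure quaternion pixels
B = G + 1j * Bc
chi = complex_adjoint(A, B)   # 8 x 8 complex matrix
```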
28
Zhang T, Zhao J, Sun Q, Zhang B, Chen J, Gong M. Low-rank tensor completion via combined Tucker and Tensor Train for color image recovery. APPL INTELL 2021. [DOI: 10.1007/s10489-021-02833-1] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
29
Unifying tensor factorization and tensor nuclear norm approaches for low-rank tensor completion. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.06.020] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
30
31
32
Zhou P, Lu C, Feng J, Lin Z, Yan S. Tensor Low-Rank Representation for Data Recovery and Clustering. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2021; 43:1718-1732. [PMID: 31751228 DOI: 10.1109/tpami.2019.2954874] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Multi-way or tensor data analysis has attracted increasing attention recently, with many important applications in practice. This article develops a tensor low-rank representation (TLRR) method, which is the first approach that can exactly recover the clean data of intrinsic low-rank structure and accurately cluster them as well, with provable performance guarantees. In particular, for tensor data with arbitrary sparse corruptions, TLRR can exactly recover the clean data under mild conditions; meanwhile, TLRR can exactly verify their true origin tensor subspaces and hence cluster them accurately. The TLRR objective function can be optimized via efficient convex programming with convergence guarantees. Besides, we provide two simple yet effective dictionary construction methods, the simple TLRR (S-TLRR) and robust TLRR (R-TLRR), to handle slightly and severely corrupted data, respectively. Experimental results on two computer vision data analysis tasks, image/video recovery and face clustering, clearly demonstrate the superior performance, efficiency, and robustness of our developed method over state-of-the-art methods, including the popular LRR and SSC.
33
Wang JL, Huang TZ, Zhao XL, Jiang TX, Ng MK. Multi-Dimensional Visual Data Completion via Low-Rank Tensor Representation Under Coupled Transform. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2021; 30:3581-3596. [PMID: 33684037 DOI: 10.1109/tip.2021.3062995] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
This paper addresses the tensor completion problem, which aims to recover missing information of multi-dimensional images. How to represent a low-rank structure embedded in the underlying data is the key issue in tensor completion. In this work, we suggest a novel low-rank tensor representation based on a coupled transform, which fully exploits the spatial multi-scale nature and redundancy in the spatial and spectral/temporal dimensions, leading to a better low tensor multi-rank approximation. More precisely, this representation is achieved by using a two-dimensional framelet transform for the two spatial dimensions, a one/two-dimensional Fourier transform for the temporal/spectral dimension, and then a Karhunen-Loève transform (via singular value decomposition) for the transformed tensor. Based on this low-rank tensor representation, we formulate a novel low-rank tensor completion model for recovering missing information in multi-dimensional visual data, which leads to a convex optimization problem. To tackle the proposed model, we develop an alternating direction method of multipliers (ADMM) algorithm tailored for the structured optimization problem. Numerical examples on color images, multispectral images, and videos illustrate that the proposed method outperforms many state-of-the-art methods in qualitative and quantitative aspects.
34
Long Z, Zhu C, Liu J, Liu Y. Bayesian Low Rank Tensor Ring for Image Recovery. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2021; 30:3568-3580. [PMID: 33656994 DOI: 10.1109/tip.2021.3062195] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Low-rank tensor-ring-based data recovery can recover missing image entries in signal acquisition and transformation. The recently proposed tensor ring (TR) based completion algorithms generally solve the low-rank optimization problem by the alternating least squares method with predefined ranks, which may easily lead to overfitting when the unknown ranks are set too large and only a few measurements are available. In this article, we present a Bayesian low-rank tensor ring completion method for image recovery that automatically learns the low-rank structure of the data. A multiplicative interaction model is developed for low-rank tensor ring approximation, where a sparsity-inducing hierarchical prior is placed over the horizontal and frontal slices of the core factors. Compared with most of the existing methods, the proposed one is free of parameter tuning, and the TR ranks can be obtained by Bayesian inference. Numerical experiments, including synthetic data, real-world color images, and the YaleFace dataset, show that the proposed method outperforms state-of-the-art ones, especially in terms of recovery accuracy.
35
Chen L, Jiang X, Liu X, Zhou Z. Logarithmic Norm Regularized Low-Rank Factorization for Matrix and Tensor Completion. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2021; 30:3434-3449. [PMID: 33651693 DOI: 10.1109/tip.2021.3061908] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Matrix and tensor completion aim to recover the incomplete two- and higher-dimensional observations using the low-rank property. Conventional techniques usually minimize the convex surrogate of rank (such as the nuclear norm), which, however, leads to the suboptimal solution for the low-rank recovery. In this paper, we propose a new definition of matrix/tensor logarithmic norm to induce a sparsity-driven surrogate for rank. More importantly, the factor matrix/tensor norm surrogate theorems are derived, which are capable of factoring the norm of large-scale matrix/tensor into those of small-scale matrices/tensors equivalently. Based upon surrogate theorems, we propose two new algorithms called Logarithmic norm Regularized Matrix Factorization (LRMF) and Logarithmic norm Regularized Tensor Factorization (LRTF). These two algorithms incorporate the logarithmic norm regularization with the matrix/tensor factorization and hence achieve more accurate low-rank approximation and high computational efficiency. The resulting optimization problems are solved using the framework of alternating minimization with the proof of convergence. Simulation results on both synthetic and real-world data demonstrate the superior performance of the proposed LRMF and LRTF algorithms over the state-of-the-art algorithms in terms of accuracy and efficiency.
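To make the surrogate tangible, a small sketch of the matrix logarithmic norm used in place of the nuclear norm, with a small epsilon to keep the log finite at zero singular values; values are illustrative and the factored large-scale variants are not shown.

```python
import numpy as np

def log_norm(M, eps=1e-3):
    """Sum of log(sigma_i + eps): a nonconvex, tighter surrogate for rank
    than the nuclear norm sum(sigma_i)."""
    s = np.linalg.svd(M, compute_uv=False)
    return np.sum(np.log(s + eps))

M = np.random.rand(20, 5) @ np.random.rand(5, 30)   # rank-5 matrix
print(log_norm(M), np.linalg.norm(M, ord='nuc'))
```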
|
36
|
Xiao X, Chen Y, Gong YJ, Zhou Y. Prior Knowledge Regularized Multiview Self-Representation and its Applications. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2021; 32:1325-1338. [PMID: 32310792 DOI: 10.1109/tnnls.2020.2984625] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
To learn the self-representation matrices/tensor that encodes the intrinsic structure of the data, existing multiview self-representation models consider only the multiview features and, thus, impose equal membership preference across samples. However, this is inappropriate in real scenarios since the prior knowledge, e.g., explicit labels, semantic similarities, and weak-domain cues, can provide useful insights into the underlying relationship of samples. Based on this observation, this article proposes a prior knowledge regularized multiview self-representation (P-MVSR) model, in which the prior knowledge, multiview features, and high-order cross-view correlation are jointly considered to obtain an accurate self-representation tensor. The general concept of "prior knowledge" is defined as the complement of multiview features, and the core of P-MVSR is to take advantage of the membership preference, which is derived from the prior knowledge, to purify and refine the discovered membership of the data. Moreover, P-MVSR adopts the same optimization procedure to handle different prior knowledge and, thus, provides a unified framework for weakly supervised clustering and semisupervised classification. Extensive experiments on real-world databases demonstrate the effectiveness of the proposed P-MVSR model.
|
37
|
Zhou P, Yuan XT, Yan S, Feng J. Faster First-Order Methods for Stochastic Non-Convex Optimization on Riemannian Manifolds. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2021; 43:459-472. [PMID: 31398110 DOI: 10.1109/tpami.2019.2933841] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
First-order non-convex Riemannian optimization algorithms have gained recent popularity in structured machine learning problems including principal component analysis and low-rank matrix completion. The current paper presents an efficient Riemannian Stochastic Path Integrated Differential EstimatoR (R-SPIDER) algorithm to solve the finite-sum and online Riemannian non-convex minimization problems. At the core of R-SPIDER is a recursive semi-stochastic gradient estimator that can accurately estimate Riemannian gradient under not only exponential mapping and parallel transport, but also general retraction and vector transport operations. Compared with prior Riemannian algorithms, such a recursive gradient estimation mechanism endows R-SPIDER with lower computational cost in first-order oracle complexity. Specifically, for finite-sum problems with n components, R-SPIDER is proved to converge to an ϵ-approximate stationary point within [Formula: see text] stochastic gradient evaluations, beating the best-known complexity [Formula: see text]; for online optimization, R-SPIDER is shown to converge with [Formula: see text] complexity which is, to the best of our knowledge, the first non-asymptotic result for online Riemannian optimization. For the special case of gradient dominated functions, we further develop a variant of R-SPIDER with improved linear rate of convergence. Extensive experimental results demonstrate the advantage of the proposed algorithms over the state-of-the-art Riemannian non-convex optimization methods.
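The following is a heavily simplified, hypothetical sketch of a SPIDER-style recursive gradient estimator on the unit sphere, using tangent-space projection as the vector transport and normalization as the retraction; it only illustrates the structure of the estimator and does not reproduce R-SPIDER or its complexity guarantees, and all names are illustrative.

```python
import numpy as np

def proj(x, v):
    """Project v onto the tangent space of the unit sphere at x."""
    return v - x * (x @ v)

def retract(x, v):
    """Sphere retraction: take a step and renormalize."""
    y = x + v
    return y / np.linalg.norm(y)

def egrad(rows, x):
    """Euclidean gradient of f(x) = -mean_i (a_i^T x)^2 over the given rows."""
    return -2.0 * rows.T @ (rows @ x) / len(rows)

def spider_sphere(A, iters=200, q=10, batch=16, step=0.05, seed=0):
    """Leading component of A via a SPIDER-style estimator on the sphere:
    a full Riemannian gradient every q steps, and in between a minibatch
    correction added to the previous estimate transported to the new point."""
    rng = np.random.default_rng(seed)
    n, _ = A.shape
    x = rng.standard_normal(A.shape[1]); x /= np.linalg.norm(x)
    v = proj(x, egrad(A, x))
    for t in range(iters):
        x_new = retract(x, -step * v)
        if (t + 1) % q == 0:
            v = proj(x_new, egrad(A, x_new))                  # periodic full refresh
        else:
            idx = rng.integers(0, n, size=batch)
            diff = egrad(A[idx], x_new) - egrad(A[idx], x)    # minibatch correction
            v = proj(x_new, diff + v)                          # transport + update
        x = x_new
    return x

x_hat = spider_sphere(np.random.randn(500, 20))
```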
|
38
|
Pu W, Wang X, Wu J, Huang Y, Yang J. Video SAR Imaging Based on Low-Rank Tensor Recovery. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2021; 32:188-202. [PMID: 32175878 DOI: 10.1109/tnnls.2020.2978017] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Due to its ability to form continuous images of a ground scene of interest, video synthetic aperture radar (SAR) has been studied in recent years. However, as video SAR needs to reconstruct many frames, the data volume is enormous and the imaging process is computationally expensive, which limits its applications. In this article, we exploit the redundancy of multiframe video SAR data, which can be modeled as a low-rank tensor, and formulate the video SAR imaging process as a low-rank tensor recovery problem, which is solved by an efficient alternating minimization method. We empirically compare the proposed method with several state-of-the-art video SAR imaging algorithms, including the fast back-projection (FBP) method and the compressed sensing (CS)-based method. Experiments on both simulated and real data show that the proposed low-rank tensor-based method requires significantly fewer data samples while achieving similar or better imaging performance.
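The alternating-minimization recovery step referred to here can be pictured with a generic toy completion routine like the one below (ridge-regularized least squares on the observed entries of a matrix unfolding); it is a stand-in for the idea, not the paper's video SAR formulation, and all names are illustrative.

```python
import numpy as np

def als_lowrank_complete(Y, mask, r=5, iters=50, lam=1e-3):
    """Toy alternating-minimization completion: fit Y ~ U @ V.T on observed
    entries only (mask == 1), solving a ridge least-squares problem for one
    factor while the other is held fixed."""
    m, n = Y.shape
    rng = np.random.default_rng(0)
    U, V = rng.standard_normal((m, r)), rng.standard_normal((n, r))
    for _ in range(iters):
        for i in range(m):                      # update rows of U
            cols = mask[i] > 0
            A = V[cols]
            U[i] = np.linalg.solve(A.T @ A + lam * np.eye(r), A.T @ Y[i, cols])
        for j in range(n):                      # update rows of V
            rows = mask[:, j] > 0
            A = U[rows]
            V[j] = np.linalg.solve(A.T @ A + lam * np.eye(r), A.T @ Y[rows, j])
    return U @ V.T

# toy usage: recover a rank-6 matrix from 40% of its entries
Y = np.random.randn(40, 6) @ np.random.randn(6, 30)
mask = (np.random.rand(40, 30) < 0.4).astype(float)
Y_hat = als_lowrank_complete(Y * mask, mask, r=6)
```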
|
39
|
Zhou Y, Cheung YM. Bayesian Low-Tubal-Rank Robust Tensor Factorization with Multi-Rank Determination. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2021; 43:62-76. [PMID: 31226066 DOI: 10.1109/tpami.2019.2923240] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Robust tensor factorization is a fundamental problem in machine learning and computer vision, which aims at decomposing tensors into low-rank and sparse components. However, existing methods either suffer from limited modeling power in preserving low-rank structures, or have difficulties in determining the target tensor rank and the trade-off between the low-rank and sparse components. To address these problems, we propose a fully Bayesian treatment of robust tensor factorization along with a generalized sparsity-inducing prior. By adapting the recently proposed low-tubal-rank model in a generative manner, our method is effective in preserving low-rank structures. Moreover, benefiting from the proposed prior and the Bayesian framework, the proposed method can automatically determine the tensor rank while inferring the trade-off between the low-rank and sparse components. For model estimation, we develop a variational inference algorithm, and further improve its efficiency by reformulating the variational updates in the frequency domain. Experimental results on both synthetic and real-world datasets demonstrate the effectiveness of the proposed method in multi-rank determination as well as its superiority in image denoising and background modeling over state-of-the-art approaches.
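For readers unfamiliar with the low-tubal-rank model, the t-product that underlies it is easy to state in code: frontal-slice matrix products in the Fourier domain along the third mode. The sketch below shows only that operation, not the paper's Bayesian factorization or its variational inference.

```python
import numpy as np

def t_product(A, B):
    """t-product of third-order tensors, the operation underlying the
    low-tubal-rank model: elementwise matrix products of the frontal
    slices in the FFT domain along the third (tube) dimension.

    A : (n1, n2, n3), B : (n2, n4, n3)  ->  result (n1, n4, n3)
    """
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.einsum('ijk,jlk->ilk', Af, Bf)     # per-slice matrix products
    return np.real(np.fft.ifft(Cf, axis=2))

A = np.random.randn(4, 3, 5)
B = np.random.randn(3, 2, 5)
print(t_product(A, B).shape)   # (4, 2, 5)
```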
|
40
|
Chen L, Jiang X, Liu X, Zhou Z. Robust Low-Rank Tensor Recovery via Nonconvex Singular Value Minimization. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2020; PP:9044-9059. [PMID: 32946392 DOI: 10.1109/tip.2020.3023798] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Tensor robust principal component analysis via tensor nuclear norm (TNN) minimization has recently been proposed to recover a low-rank tensor corrupted with sparse noise/outliers. TNN has been shown to be a convex surrogate of rank. However, it tends to over-penalize large singular values and thus usually results in biased solutions. To handle this issue, we propose a new definition of the tensor logarithmic norm (TLN) as a nonconvex surrogate of rank, which can simultaneously decrease the penalization on larger singular values and increase that on smaller ones to preserve the low-rank structure of a tensor. Then, the strategy of tensor factorization is combined with the minimization of the TLN to improve computational performance. To handle impulsive scenarios, we propose a nonconvex ℓp-ball projection scheme with 0 < p < 1 instead of the conventional convex scheme with p = 1, which enhances robustness against outliers. By incorporating the TLN minimization and the ℓp-ball projection, we finally propose two low-rank recovery algorithms, whose resulting optimization problems are efficiently solved by the alternating direction method of multipliers (ADMM) with convergence guarantees. The proposed algorithms are applied to synthetic data recovery and real-world image and video restoration. Experimental results demonstrate the superior performance of the proposed methods over several state-of-the-art algorithms in terms of tensor recovery accuracy and computational efficiency.
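A rough sketch of the kind of nonconvex singular-value shrinkage this suggests is given below: a one-step reweighting in which the shrinkage amount decays as 1/(sigma + eps), so large singular values are shrunk much less than under plain soft-thresholding. It approximates the spirit of a log-penalty proximal step and is not the paper's exact TLN operator or its ℓp-ball projection; names and defaults are illustrative.

```python
import numpy as np

def log_weighted_svt(M, lam, eps=1e-3):
    """One-step reweighted singular-value shrinkage: each singular value is
    reduced by roughly lam / (sigma + eps), in contrast to the uniform
    shrinkage used by nuclear-norm soft-thresholding."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_new = np.maximum(s - lam / (s + eps), 0.0)
    return (U * s_new) @ Vt

M = np.random.randn(50, 40)
M_low = log_weighted_svt(M, lam=2.0)
```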
|
41
|
|
42
|
Hyperspectral Image Recovery Using Non-Convex Low-Rank Tensor Approximation. REMOTE SENSING 2020. [DOI: 10.3390/rs12142264] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Low-rank tensors have received increasing attention in hyperspectral image (HSI) recovery. Minimizing the tensor nuclear norm, as a low-rank approximation method, often leads to modeling bias. To achieve an unbiased approximation and improve robustness, this paper develops a non-convex relaxation approach for low-rank tensor approximation. Firstly, a non-convex approximation of the tensor nuclear norm (NCTNN) is introduced for low-rank tensor completion. Secondly, a non-convex tensor robust principal component analysis (NCTRPCA) method is proposed, which aims at exactly recovering a low-rank tensor corrupted by mixed noise. The two proposed models are solved efficiently by the alternating direction method of multipliers (ADMM). Three HSI datasets are employed to exhibit the superiority of the proposed models over low-rank penalization methods in terms of accuracy and robustness.
|
43
|
Yin K, Afshar A, Ho JC, Cheung WK, Zhang C, Sun J. LogPar: Logistic PARAFAC2 Factorization for Temporal Binary Data with Missing Values. KDD : PROCEEDINGS. INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING 2020; 2020:1625-1635. [PMID: 34109054 DOI: 10.1145/3394486.3403213] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
Binary data with one-class missing values are ubiquitous in real-world applications. They can be represented by irregular tensors with varying sizes in one dimension, where value one means presence of a feature while zero means unknown (i.e., either presence or absence of a feature). Learning accurate low-rank approximations from such binary irregular tensors is a challenging task. However, none of the existing models developed for factorizing irregular tensors take the missing values into account, and they assume Gaussian distributions, resulting in a distribution mismatch when applied to binary data. In this paper, we propose Logistic PARAFAC2 (LogPar) by modeling the binary irregular tensor with Bernoulli distribution parameterized by an underlying real-valued tensor. Then we approximate the underlying tensor with a positive-unlabeled learning loss function to account for the missing values. We also incorporate uniqueness and temporal smoothness regularization to enhance the interpretability. Extensive experiments using large-scale real-world datasets show that LogPar outperforms all baselines in both irregular tensor completion and downstream predictive tasks. For the irregular tensor completion, LogPar achieves up to 26% relative improvement compared to the best baseline. Besides, LogPar obtains relative improvement of 13.2% for heart failure prediction and 14% for mortality prediction on average compared to the state-of-the-art PARAFAC2 models.
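As a toy illustration of the positive-unlabeled idea described here, the loss below treats ones as observed positives with full weight and zeros as unknowns with a small weight, with Bernoulli parameters given by a real-valued tensor; the weights, names, and exact form are assumptions for illustration, not LogPar's loss.

```python
import numpy as np

def pu_logistic_loss(X_hat, B, pos_weight=1.0, unl_weight=0.2):
    """Toy positive-unlabeled logistic loss for a binary tensor B (1 = observed
    presence, 0 = unknown), parameterized by a real-valued tensor X_hat:
    observed ones get the full Bernoulli log-likelihood weight, while zeros
    (unknowns) are down-weighted instead of being treated as true negatives."""
    p = 1.0 / (1.0 + np.exp(-X_hat))                    # Bernoulli parameters
    eps = 1e-12
    loss_pos = -np.log(p + eps) * (B == 1)
    loss_unl = -np.log(1.0 - p + eps) * (B == 0)
    return pos_weight * loss_pos.sum() + unl_weight * loss_unl.sum()

B = (np.random.rand(20, 15, 6) < 0.1).astype(int)   # sparse binary tensor
X_hat = np.random.randn(20, 15, 6)                   # a low-rank model's output would go here
print(pu_logistic_loss(X_hat, B))
```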
Affiliation(s)
- Jimeng Sun
- University of Illinois, Urbana-Champaign
|
44
|
Jiang M, Shen Q, Li Y, Yang X, Zhang J, Wang Y, Xia L. Improved robust tensor principal component analysis for accelerating dynamic MR imaging reconstruction. Med Biol Eng Comput 2020; 58:1483-1498. [DOI: 10.1007/s11517-020-02161-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2019] [Accepted: 03/12/2020] [Indexed: 11/30/2022]
|
45
|
Lu GF, Yu QR, Wang Y, Tang G. Hyper-Laplacian regularized multi-view subspace clustering with low-rank tensor constraint. Neural Netw 2020; 125:214-223. [PMID: 32146353 DOI: 10.1016/j.neunet.2020.02.014] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2019] [Revised: 02/02/2020] [Accepted: 02/21/2020] [Indexed: 11/29/2022]
Abstract
In this paper, we propose a novel hyper-Laplacian regularized multi-view subspace clustering with low-rank tensor constraint method, referred to as HLR-MSCLRT. In the HLR-MSCLRT model, the subspace representation matrices of different views are stacked as a tensor, so that the high-order correlations among the data can be captured. To reduce the redundant information in the learned subspace representations, a low-rank constraint is imposed on the constructed tensor. Since real-world data often reside in multiple nonlinear subspaces, the HLR-MSCLRT model utilizes hyper-Laplacian graph regularization to preserve the local geometric structure embedded in a high-dimensional ambient space. An efficient algorithm is also presented to solve the optimization problem of the HLR-MSCLRT model. Experimental results on several data sets show that the proposed HLR-MSCLRT model outperforms many state-of-the-art multi-view clustering approaches.
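The graph-smoothness term used by such models can be written compactly; below is a plain graph-Laplacian version of the regularizer (the paper uses a hyper-Laplacian/hypergraph generalization), with illustrative names.

```python
import numpy as np

def laplacian_reg(Z, W):
    """Graph-smoothness regularizer tr(Z L Z^T) with L = D - W, where W is a
    symmetric affinity matrix over samples (columns of Z): the term is small
    when samples with large affinity have similar representations."""
    L = np.diag(W.sum(axis=1)) - W
    return np.trace(Z @ L @ Z.T)

W = np.random.rand(30, 30); W = (W + W.T) / 2   # toy symmetric affinities
Z = np.random.randn(5, 30)                       # 5-dim representation per sample
print(laplacian_reg(Z, W))
```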
Affiliation(s)
- Gui-Fu Lu, School of Computer Science and Information, AnHui Polytechnic University, WuHu, AnHui 241000, China
- Qin-Ru Yu, School of Computer Science and Information, AnHui Polytechnic University, WuHu, AnHui 241000, China
- Yong Wang, School of Computer Science and Information, AnHui Polytechnic University, WuHu, AnHui 241000, China
- Ganyi Tang, School of Computer Science and Information, AnHui Polytechnic University, WuHu, AnHui 241000, China
|
46
|
|
47
|
Lu L, Ren X, Yeh KH, Tan Z, Chanussot J. Exploring coupled images fusion based on joint tensor decomposition. HUMAN-CENTRIC COMPUTING AND INFORMATION SCIENCES 2020. [DOI: 10.1186/s13673-020-00215-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Data fusion has always been a hot research topic in human-centric computing and has been extended with the development of artificial intelligence. Generally, a coupled data fusion algorithm utilizes information from one data set to improve the estimation accuracy and explain the related latent variables of other coupled datasets. This paper proposes several coupled image decomposition algorithms based on the coupled matrix and tensor factorization-optimization (CMTF-OPT) algorithm and the flexible coupling algorithm, termed the coupled image factorization-optimization (CIF-OPT) algorithm and the modified flexible coupling algorithm, respectively. Theory and experiments show that the CIF-OPT algorithm is robust under the influence of different noises. In particular, the CIF-OPT algorithm can accurately restore an image with some missing data elements. Moreover, the flexible coupling model has better estimation performance than hard coupling. For high-dimensional images, this paper adopts a compressed data decomposition algorithm that not only works better than the uncoupled ALS algorithm as the image noise level increases, but also saves time and cost compared to the uncompressed algorithm.
|
48
|
Mu Y, Wang P, Lu L, Zhang X, Qi L. Weighted tensor nuclear norm minimization for tensor completion using tensor-SVD. Pattern Recognit Lett 2020. [DOI: 10.1016/j.patrec.2018.12.012] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
|
49
|
Image Completion with Hybrid Interpolation in Tensor Representation. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10030797] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
The issue of image completion has developed considerably over the last two decades, and many computational strategies have been proposed to fill in missing regions of an incomplete image. When the incomplete image contains many small, irregular missing areas, a good alternative seems to be matrix or tensor decomposition algorithms that yield low-rank approximations. However, this approach relies on heuristic rank adaptation techniques, especially for images with many details. To tackle the obstacles of low-rank completion methods, we propose to model the incomplete images with overlapping blocks of Tucker decomposition representations, where the factor matrices are determined by a hybrid of Gaussian radial basis function and polynomial interpolation. Experiments, carried out on various image completion and resolution up-scaling problems, demonstrate that our approach considerably outperforms the baseline and state-of-the-art low-rank completion methods.
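A toy version of the hybrid interpolation idea (a low-order polynomial trend plus a Gaussian RBF fit to its residual) is sketched below, applied directly to a small image block for brevity rather than to Tucker factor matrices as in the paper; all names and parameters are illustrative.

```python
import numpy as np

def hybrid_interpolate(block, mask, sigma=2.0):
    """Fill missing pixels (mask == 0) of a small block with a linear
    polynomial trend fitted to the known pixels plus a Gaussian RBF
    interpolant fitted to the residual of that trend."""
    h, w = block.shape
    yy, xx = np.mgrid[0:h, 0:w]
    P = np.column_stack([np.ones(h * w), xx.ravel(), yy.ravel()])
    known = mask.ravel() > 0
    vals = block.ravel().astype(float)
    coef, *_ = np.linalg.lstsq(P[known], vals[known], rcond=None)   # polynomial trend
    resid = vals[known] - P[known] @ coef
    pts = np.column_stack([xx.ravel(), yy.ravel()]).astype(float)
    d2 = ((pts[known][:, None, :] - pts[known][None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    w_rbf = np.linalg.solve(K + 1e-6 * np.eye(K.shape[0]), resid)   # RBF weights
    d2_all = ((pts[:, None, :] - pts[known][None, :, :]) ** 2).sum(-1)
    pred = P @ coef + np.exp(-d2_all / (2 * sigma ** 2)) @ w_rbf
    out = vals.copy()
    out[~known] = pred[~known]
    return out.reshape(h, w)

block = np.add.outer(np.arange(8), np.arange(8)).astype(float)   # simple gradient image
mask = (np.random.rand(8, 8) > 0.3).astype(float)
filled = hybrid_interpolate(block, mask)
```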
|
50
|
Stable Tensor Principal Component Pursuit: Error Bounds and Efficient Algorithms. SENSORS 2019; 19:s19235335. [PMID: 31817050 PMCID: PMC6928658 DOI: 10.3390/s19235335] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/13/2019] [Revised: 11/28/2019] [Accepted: 11/29/2019] [Indexed: 11/19/2022]
Abstract
The rapid development of sensor technology gives rise to huge amounts of tensor (i.e., multi-dimensional array) data. For various reasons, such as sensor failures and communication loss, tensor data may be corrupted not only by small noise but also by gross corruptions. This paper studies Stable Tensor Principal Component Pursuit (STPCP), which aims to recover a tensor from its corrupted observations. Specifically, we propose an STPCP model based on the recently proposed tubal nuclear norm (TNN), which has shown superior performance in comparison with other tensor nuclear norms. Theoretically, we rigorously prove that under tensor incoherence conditions, the underlying tensor and the sparse corruption tensor can be stably recovered. Algorithmically, we first develop an ADMM algorithm and then accelerate it by designing a new algorithm based on orthogonal tensor factorization. The superiority and efficiency of the proposed algorithms are demonstrated through experiments on both synthetic and real data sets.
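For reference, the tubal nuclear norm mentioned above is commonly computed as the scaled sum of the nuclear norms of the Fourier-domain frontal slices; a minimal sketch under that common convention (the exact scaling may differ between papers):

```python
import numpy as np

def tubal_nuclear_norm(X):
    """Tubal nuclear norm of a third-order tensor: sum of the nuclear norms of
    the frontal slices in the Fourier domain along mode 3, scaled by 1/n3
    (one common convention)."""
    Xf = np.fft.fft(X, axis=2)
    nn = sum(np.linalg.svd(Xf[:, :, k], compute_uv=False).sum()
             for k in range(X.shape[2]))
    return nn / X.shape[2]

X = np.random.randn(10, 8, 6)
print(tubal_nuclear_norm(X))
```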
|