1. Wang Z, Hu D, Liu Z, Gao C, Wang Z. Iteratively Capped Reweighting Norm Minimization with Global Convergence Guarantee for Low-Rank Matrix Learning. IEEE Transactions on Pattern Analysis and Machine Intelligence 2024;PP:1923-1940. PMID: 40030450. DOI: 10.1109/tpami.2024.3512458.
Abstract
In recent years, a large number of studies have shown that low-rank matrix learning (LRML) has become a popular approach in machine learning and computer vision, with many important applications such as image inpainting, subspace clustering, and recommender systems. The latest LRML methods resort to using surrogate functions as convex or nonconvex relaxations of the rank function. However, most of these methods ignore the differences between rank components and can only yield suboptimal solutions. To alleviate this problem, in this paper we propose a novel nonconvex regularizer called capped reweighting norm minimization (CRNM), which not only considers the different contributions of different rank components, but also adaptively truncates sequential singular values. With it, a general LRML model is obtained. Meanwhile, under some mild conditions, the global optimum of the CRNM-regularized least squares subproblem can be obtained in closed form. Through analysis of the theoretical properties of CRNM, we develop a computationally efficient optimization method with a convergence guarantee to solve the general LRML model. More importantly, by using the Kurdyka-Łojasiewicz (KŁ) inequality, its local and global convergence properties are established. Finally, we show that the proposed nonconvex regularizer and optimization approach are suitable for different low-rank tasks, such as matrix completion and subspace clustering. Extensive experimental results demonstrate that the constructed models and methods provide significant advantages over several state-of-the-art low-rank matrix learning models and methods.
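The exact form of the CRNM penalty is not reproduced in this abstract, so the following is only a minimal numpy sketch of the general idea it describes: a proximal-style step that reweights small singular values (penalizing them more heavily) while capping, i.e. leaving untouched, the leading ones. The weighting scheme and the `lam`/`cap` parameters are illustrative assumptions, not the paper's operator.

```python
import numpy as np

def capped_reweighted_svt(Y, lam, cap, eps=1e-6):
    """Illustrative capped, reweighted singular-value shrinkage step."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    weights = lam / (s + eps)            # smaller singular values get larger weights
    s_new = np.where(s > cap,            # values above the cap: left untouched
                     s,
                     np.maximum(s - weights, 0.0))  # the rest: weighted soft-shrinkage
    return U @ np.diag(s_new) @ Vt
```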
2. Li X, Zhang H, Zhang R. Matrix Completion via Non-Convex Relaxation and Adaptive Correlation Learning. IEEE Transactions on Pattern Analysis and Machine Intelligence 2023;45:1981-1991. PMID: 35254976. DOI: 10.1109/tpami.2022.3157083.
Abstract
Existing matrix completion methods focus on optimizing relaxations of the rank function, such as the nuclear norm and the Schatten-p norm. They usually need many iterations to converge. Moreover, most existing models exploit only the low-rank property of matrices, and the few methods that incorporate other knowledge are quite time-consuming in practice. To address these issues, we propose a novel non-convex surrogate that can be optimized by closed-form solutions, such that it empirically converges within dozens of iterations. Moreover, the optimization is parameter-free and its convergence is proved. Compared with relaxations of the rank, the surrogate is motivated by optimizing an upper bound of the rank. We theoretically validate that it is equivalent to existing matrix completion models. Beyond the low-rank assumption, we also exploit column-wise correlation for matrix completion, and thus develop an adaptive correlation learning scheme that is scaling-invariant. More importantly, after incorporating the correlation learning, the model can still be solved by closed-form solutions, so it still converges quickly. Experiments show the effectiveness of the non-convex surrogate and adaptive correlation learning.
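The abstract's central computational claim is that every iteration admits a closed-form solution. The paper's surrogate and correlation term are not given here, so the sketch below shows only the generic pattern with plain nuclear-norm thresholding: each pass is one SVD plus a shrinkage, followed by re-imposing the observed entries. `lam` and the iteration count are illustrative choices.

```python
import numpy as np

def complete_by_svt(M, mask, lam=1.0, iters=50):
    """Generic iterative matrix completion; mask marks observed entries of M."""
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt  # closed-form shrinkage step
        X = np.where(mask, M, X)                        # keep observed entries fixed
    return X
```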
3. Lu Y, Wang W, Zeng B, Lai Z, Shen L, Li X. Canonical Correlation Analysis With Low-Rank Learning for Image Representation. IEEE Transactions on Image Processing 2022;31:7048-7062. PMID: 36346858. DOI: 10.1109/tip.2022.3219235.
Abstract
As a multivariate data analysis tool, canonical correlation analysis (CCA) has been widely used in computer vision and pattern recognition. However, CCA uses the Euclidean distance as its metric, which is sensitive to noise and outliers in the data. Furthermore, CCA demands that the two training sets have the same number of training samples, which limits the performance of CCA-based methods. To overcome these limitations, two novel canonical correlation learning methods based on low-rank learning are proposed in this paper for image representation, named robust canonical correlation analysis (robust-CCA) and low-rank representation canonical correlation analysis (LRR-CCA). By introducing two regular matrices, the numbers of training samples in the two training sets can be set to any values without limitation in the two proposed methods. Specifically, robust-CCA uses low-rank learning to remove the noise in the data and extracts maximally correlated features from the two learned clean data matrices. The nuclear norm and the L1-norm are used as constraints on the learned clean matrices and noise matrices, respectively. LRR-CCA introduces low-rank representation into CCA to ensure that the correlative features can be obtained in the low-rank representation. To verify the performance of the proposed methods, extensive experiments are conducted on five public image databases. The experimental results demonstrate that the proposed methods outperform state-of-the-art CCA-based and low-rank learning methods.
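For context on the baseline being extended: classical CCA (including the equal-sample-size requirement the paper removes, since X and Y must be paired row-for-row) reduces to an SVD of the whitened cross-covariance. A minimal numpy sketch follows, with a small ridge term added for invertibility; it implements only the standard method, not the paper's robust or LRR variants.

```python
import numpy as np

def cca(X, Y, k):
    """Classical CCA: top-k canonical direction pairs for paired views X, Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + 1e-6 * np.eye(X.shape[1])  # ridge for invertibility
    Cyy = Y.T @ Y / n + 1e-6 * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    Rx = np.linalg.cholesky(Cxx)                   # whitening factors
    Ry = np.linalg.cholesky(Cyy)
    K = np.linalg.solve(Rx, Cxy) @ np.linalg.inv(Ry).T
    U, s, Vt = np.linalg.svd(K)                    # s holds canonical correlations
    Wx = np.linalg.solve(Rx.T, U[:, :k])
    Wy = np.linalg.solve(Ry.T, Vt[:k].T)
    return Wx, Wy
```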
4. Ma X, Li Z, Wang H. Deep Matrix Factorization Based on Convolutional Neural Networks for Image Inpainting. Entropy (Basel, Switzerland) 2022;24:1500. PMID: 37420520. DOI: 10.3390/e24101500.
Abstract
In this work, we formulate image inpainting as a matrix completion problem. Traditional matrix completion methods are generally based on linear models that assume the matrix is low-rank. When the original matrix is large-scale and the observed elements are few, they easily overfit and their performance decreases significantly. Recently, researchers have tried to apply deep learning and nonlinear techniques to matrix completion. However, most existing deep learning-based methods restore each column or row of the matrix independently, which loses the global structure of the matrix and therefore does not achieve the expected results in image inpainting. In this paper, we propose a deep matrix factorization completion network (DMFCNet) for image inpainting that combines deep learning with a traditional matrix completion model. The main idea of DMFCNet is to map the iterative variable updates of a traditional matrix completion model into a fixed-depth neural network. The potential relationships among observed matrix data are learned in a trainable end-to-end manner, which leads to a high-performance and easy-to-deploy nonlinear solution. Experimental results show that DMFCNet provides higher matrix completion accuracy than state-of-the-art matrix completion methods in a shorter running time.
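DMFCNet's architecture is not specified in this abstract; the PyTorch sketch below illustrates only the stated idea of mapping iterative updates into a fixed-depth network: each stage applies a learned refinement and then re-imposes the observed pixels. The depth, the single convolution per stage, and the data-consistency step are assumptions for illustration, not the paper's design.

```python
import torch
import torch.nn as nn

class UnrolledCompletionNet(nn.Module):
    """Unrolled-iteration sketch (not the DMFCNet architecture)."""
    def __init__(self, depth=5):
        super().__init__()
        self.stages = nn.ModuleList(
            nn.Conv2d(1, 1, kernel_size=3, padding=1) for _ in range(depth)
        )

    def forward(self, m, mask):
        x = m * mask                       # start from the observed entries
        for conv in self.stages:
            x = x + conv(x)                # learned refinement step
            x = mask * m + (1 - mask) * x  # data-consistency projection
        return x
```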
Affiliation(s)
- Xiaoxuan Ma: School of Electrical and Information Engineering, Beijing University of Civil Engineering and Architecture, Beijing 100044, China
- Zhiwen Li: School of Electrical and Information Engineering, Beijing University of Civil Engineering and Architecture, Beijing 100044, China
- Hengyou Wang: School of Science, Beijing University of Civil Engineering and Architecture, Beijing 100044, China
5. Wang Z, Liu Y, Luo X, Wang J, Gao C, Peng D, Chen W. Large-Scale Affine Matrix Rank Minimization With a Novel Nonconvex Regularizer. IEEE Transactions on Neural Networks and Learning Systems 2022;33:4661-4675. PMID: 33646960. DOI: 10.1109/tnnls.2021.3059711.
Abstract
Low-rank minimization aims to recover a matrix of minimum rank subject to a linear system constraint. It arises in various data analysis and machine learning areas, such as recommender systems, video denoising, and signal processing. Nuclear norm minimization is the dominant approach for handling it. However, such a method ignores the differences among the singular values of the target matrix. To address this issue, nonconvex low-rank regularizers have been widely used. Unfortunately, existing methods suffer from various drawbacks, such as inefficiency and inaccuracy. To alleviate these problems, this article proposes a flexible model with a novel nonconvex regularizer. Such a model not only promotes low-rankness but also can be solved much faster and more accurately. With it, the original low-rank problem can be equivalently transformed into the resulting optimization problem under the rank restricted isometry property (rank-RIP) condition. Subsequently, Nesterov's rule and inexact proximal strategies are adopted to achieve a novel algorithm that is highly efficient in solving this problem at a convergence rate of O(1/K), with K being the iteration count. Besides, the asymptotic convergence rate is analyzed rigorously using the Kurdyka-Łojasiewicz (KŁ) inequality. Furthermore, we apply the proposed optimization model to typical low-rank problems, including matrix completion, robust principal component analysis (RPCA), and tensor completion. Exhaustive empirical studies on data analysis tasks, i.e., synthetic data analysis, image recovery, personalized recommendation, and background subtraction, indicate that the proposed model outperforms state-of-the-art models in both accuracy and efficiency.
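The abstract names the two algorithmic ingredients: Nesterov's acceleration and a proximal step. The paper's nonconvex proximal operator is not given here, so the numpy sketch below is a generic FISTA-style loop for affine rank minimization, with plain singular-value soft-thresholding standing in for that operator; `A` is assumed to be an explicit measurement matrix acting on the vectorized unknown.

```python
import numpy as np

def apg_lowrank(A, b, shape, lam=1.0, step=0.5, iters=100):
    """Accelerated proximal gradient for min ||A x - b||^2 + lam * pen(mat(x))."""
    x = np.zeros(np.prod(shape))
    z, t = x.copy(), 1.0
    for _ in range(iters):
        g = A.T @ (A @ z - b)                    # gradient of the data term
        Y = (z - step * g).reshape(shape)
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        x_new = (U @ np.diag(np.maximum(s - step * lam, 0.0)) @ Vt).ravel()
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)  # Nesterov extrapolation
        x, t = x_new, t_new
    return x.reshape(shape)
```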
6. Zhang C, Li H, Chen C, Qian Y, Zhou X. Enhanced Group Sparse Regularized Nonconvex Regression for Face Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 2022;44:2438-2452. PMID: 33108280. DOI: 10.1109/tpami.2020.3033994.
Abstract
Regression-analysis-based methods have shown strong robustness and achieved great success in face recognition. In these methods, the convex l1-norm and nuclear norm are usually utilized to approximate the l0-norm and the rank function. However, such convex relaxations may introduce a bias and lead to a suboptimal solution. In this paper, we propose a novel Enhanced Group Sparse regularized Nonconvex Regression (EGSNR) method for robust face recognition. An upper-bounded nonconvex function is introduced to replace the l1-norm for sparsity, which alleviates the bias problem and the adverse effects caused by outliers. To capture the characteristics of complex errors, we propose a mixed model combining the γ-norm and the matrix γ-norm induced from the nonconvex function. Furthermore, an l2,γ-norm based regularizer is designed to directly seek interclass sparsity, or group sparsity, instead of the traditional l2,1-norm. The locality of the data, i.e., the distance between the query sample and multiple subspaces, is also taken into consideration. This enhanced group sparse regularizer enables EGSNR to learn more discriminative representation coefficients. Comprehensive experiments on several popular face datasets demonstrate that the proposed EGSNR outperforms state-of-the-art regression-based methods for robust face recognition.
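To make the "upper-bounded nonconvex function" idea concrete, the sketch below uses a capped-l1 penalty as one common example of such a function; the paper's γ-norm may have a different form, so treat this purely as an illustration of why boundedness reduces the bias that the l1-norm incurs on large coefficients and outliers.

```python
import numpy as np

def capped_l1(x, gamma):
    """Capped-l1 surrogate for the l0-norm: the per-entry penalty saturates
    at gamma, so large coefficients and outliers stop accruing extra cost."""
    return np.minimum(np.abs(x), gamma).sum()
```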
7. Non-convex logarithm embedding subspace weighted graph approach to fault detection with missing measurements. Neurocomputing 2022. DOI: 10.1016/j.neucom.2021.12.065.
8. Zhang Y, Xu K, Liang S, Zhao C. Matrix Completion Based on Low-Rank and Local Features Applied to Images Recovery and Recommendation Systems. IEEE Access 2022;10:97010-97021. DOI: 10.1109/access.2022.3204660.
Affiliation(s)
- Ying Zhang: The Second Affiliated Hospital of Shantou University Medical College, Shantou, China
- Kai Xu: School of Software Engineering, South China University of Technology, Guangzhou, China
- Songfeng Liang: School of Automotive and Transportation Engineering, Shenzhen Polytechnic, Shenzhen, China
- Chen Zhao: The New Energy Automotive Technology Research Institute, Shenzhen Polytechnic, Shenzhen, China
9. Iterative rank-one matrix completion via singular value decomposition and nuclear norm regularization. Inf Sci (N Y) 2021. DOI: 10.1016/j.ins.2021.07.035.
10. Sang X, Lu H, Zhao Q, Zhang F, Lu J. Nonconvex regularizer and latent pattern based robust regression for face recognition. Inf Sci (N Y) 2021. DOI: 10.1016/j.ins.2020.08.016.
11. Nonparametric Tensor Completion Based on Gradient Descent and Nonconvex Penalty. Symmetry (Basel) 2019. DOI: 10.3390/sym11121512.
Abstract
Existing tensor completion methods all require some hyperparameters. However, these hyperparameters determine the performance of each method, and they are difficult to tune. In this paper, we propose a novel nonparametric tensor completion method, which formulates tensor completion as an unconstrained optimization problem and designs an efficient iterative method to solve it. In each iteration, we not only calculate the missing entries with the aid of data correlation, but also account for the low-rankness of the tensor and the convergence speed of the iteration. Our iteration is based on the gradient descent method and approximates the gradient descent direction with tensor matricization and singular value decomposition. Considering the symmetry of every dimension of a tensor, the optimal unfolding direction in each iteration may differ, so we select the optimal unfolding direction by the scaled latent nuclear norm in each iteration. Moreover, we design a formula for the iteration step size based on the nonconvex penalty. During the iterative process, we store the tensor in sparse form and adopt the power method to compute the maximum singular value quickly. Experiments on image inpainting and link prediction show that our method is competitive with six state-of-the-art methods.
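The mode-selection step can be made concrete with a small sketch: score each mode-n matricization and pick the best mode for the next gradient step. The scoring below, the top singular value scaled by the square root of the mode dimension, is only a rough stand-in for the scaled-latent-nuclear-norm criterion the paper uses.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization: mode-n fibers become the rows' entries."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def best_unfolding(T):
    """Pick the unfolding direction with the largest scaled spectral norm."""
    scores = [
        np.linalg.svd(unfold(T, m), compute_uv=False)[0] / np.sqrt(T.shape[m])
        for m in range(T.ndim)
    ]
    return int(np.argmax(scores))
```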
12. A Unified Proximity Algorithm with Adaptive Penalty for Nuclear Norm Minimization. Symmetry (Basel) 2019. DOI: 10.3390/sym11101277.
Abstract
The nuclear norm minimization (NNM) problem is to recover a matrix that minimizes the sum of its singular values while simultaneously satisfying some linear constraints. The alternating direction method (ADM) has recently been used to solve this problem. However, the subproblems in ADM are usually not easily solvable when the linear mappings in the constraints are not identities. In this paper, we propose a proximity algorithm with adaptive penalty (PA-AP). First, we formulate nuclear norm minimization problems into a unified model. To solve this model, we improve the ADM by adding a proximal term to the subproblems that are difficult to solve. An adaptive strategy for the proximity parameters is also put forward for acceleration. By employing subdifferentials and proximity operators, an equivalent fixed-point equation system is constructed, and we use this system to further prove the convergence of the proposed algorithm under certain conditions, e.g., when the preconditioning matrix is symmetric positive definite. Finally, experimental results and comparisons with state-of-the-art methods, e.g., ADM, IADM-CG and IADM-BB, show that the proposed algorithm is effective.
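The core trick, adding a proximal term so that a hard subproblem regains a closed form, can be sketched as follows: when the linear mapping A is not the identity, the exact nuclear-norm subproblem has no closed-form solution, but proximally linearizing the coupled quadratic reduces the update to a single singular-value thresholding. Variable names and the exact update form below are assumptions for illustration, not the paper's iteration.

```python
import numpy as np

def svt(Y, tau):
    """Singular-value soft-thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def proximal_x_update(X, A, b, Lam, rho, mu):
    """One proximally linearized X-update for min ||X||_* s.t. A(vec X) = b."""
    grad = A.T @ (rho * (A @ X.ravel() - b) + Lam)  # gradient of the coupled term
    return svt(X - (grad / mu).reshape(X.shape), 1.0 / mu)
```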
13. Chen Y, Xiao X, Zhou Y. Low-rank quaternion approximation for color image processing. IEEE Transactions on Image Processing 2019;29:1426-1439. PMID: 31545725. DOI: 10.1109/tip.2019.2941319.
Abstract
Low-rank matrix approximation (LRMA)-based methods have achieved great success in grayscale image processing. When handling color images, LRMA either restores each color channel independently using the monochromatic model or processes the concatenation of the three color channels using the concatenation model. However, these two schemes may not make full use of the high correlation among the RGB channels. To address this issue, we propose a novel low-rank quaternion approximation (LRQA) model. It contains two major components: first, instead of modeling a color image pixel as a scalar as in conventional sparse representation and LRMA-based methods, the color image is encoded as a pure quaternion matrix, so that the cross-channel correlation of the color channels can be well exploited; second, LRQA imposes a low-rank constraint on the constructed quaternion matrix. To better estimate the singular values of the underlying low-rank quaternion matrix from its noisy observation, a general model for LRQA is proposed based on several nonconvex functions. Extensive evaluations on color image denoising and inpainting tasks verify that LRQA achieves better performance than several state-of-the-art sparse representation and LRMA-based methods in terms of both quantitative metrics and visual quality.
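A minimal sketch of the pure-quaternion encoding the abstract describes: a color image becomes Q = iR + jG + kB, and quaternion singular values (the quantities a low-rank quaternion model thresholds) can be obtained through the standard complex adjoint representation, in which each singular value of Q appears twice. This is the textbook construction, not necessarily the paper's implementation.

```python
import numpy as np

def quaternion_adjoint(R, G, B):
    """Complex adjoint of the pure-quaternion matrix Q = iR + jG + kB.

    Writing Q = Qa + Qb*j with Qa = iR and Qb = G + iB, the adjoint is the
    2m x 2n complex block matrix below; its singular values are those of Q,
    each repeated twice, so quaternion SVD reduces to an ordinary SVD.
    """
    Qa = 1j * R
    Qb = G + 1j * B
    top = np.hstack([Qa, Qb])
    bot = np.hstack([-np.conj(Qb), np.conj(Qa)])
    return np.vstack([top, bot])

def quaternion_singular_values(R, G, B):
    s = np.linalg.svd(quaternion_adjoint(R, G, B), compute_uv=False)
    return s[::2]  # drop the duplicate of each quaternion singular value
```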