1
Ramos LP, Vu VT, Pettersson MI, Dammert P, Duarte LT, Machado R. Performance Assessment of Change Detection Based on Robust PCA for Wavelength Resolution SAR Images Using Nonidentical Flight Passes. Sensors (Basel) 2025;25:2506. [PMID: 40285198] [PMCID: PMC12031429] [DOI: 10.3390/s25082506]
Abstract
One of the main challenges in Synthetic Aperture Radar (SAR) change detection is the use of SAR images from different flight passes. Depending on the flight pass, objects produce different specular reflections, since their radar cross-sections can differ completely between passes. Consequently, conventional SAR change detection generally requires the flight passes to be close to identical. Wavelength-resolution SAR refers to a SAR system whose spatial resolution is approximately equal to the radar wavelength. This high relative resolution helps stabilize the ground clutter in the SAR images, so the strict requirement of identical flight passes can be relaxed and change detection becomes possible with nonidentical passes. This paper shows that robust principal component analysis (RPCA) is effective for change detection even when wavelength-resolution SAR images are acquired with very different flight passes, presenting experimental results with flight-pass differences of up to 95°. For slightly different passes, e.g., 5°, our method reached a false alarm rate (FAR) of approximately one false alarm per square kilometer at a probability of detection (PD) above 90%. In one particular setting, it achieved a PD of 97.5% at a FAR of 0.917 false alarms per square kilometer, even with SAR images acquired from nonidentical passes.
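For context, the low-rank plus sparse decomposition underlying RPCA-based change detection can be sketched with a generic principal component pursuit solver (inexact augmented Lagrange multipliers). This is only an illustrative NumPy sketch with standard default parameters, not the authors' formulation, SAR preprocessing chain, or detection thresholding:

```python
# Minimal sketch of robust PCA via principal component pursuit (inexact ALM).
# It illustrates the generic low-rank + sparse decomposition that RPCA-based
# change detection builds on; it is NOT the paper's exact formulation.
import numpy as np

def rpca_ialm(D, lam=None, tol=1e-7, max_iter=500):
    """Decompose D into a low-rank part L and a sparse part S (D ~= L + S)."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm_two = np.linalg.norm(D, 2)
    norm_inf = np.abs(D).max() / lam
    Y = D / max(norm_two, norm_inf)          # dual variable initialization
    mu, rho = 1.25 / norm_two, 1.5
    S = np.zeros_like(D)
    for _ in range(max_iter):
        # Low-rank update: singular value thresholding
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0)) @ Vt
        # Sparse update: elementwise soft thresholding
        R = D - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0)
        # Dual ascent and penalty update
        Y = Y + mu * (D - L - S)
        mu = min(mu * rho, 1e7)
        if np.linalg.norm(D - L - S) <= tol * np.linalg.norm(D):
            break
    return L, S

# Hypothetical usage: columns are vectorized reference/surveillance SAR images;
# strong entries of S would then indicate candidate changes.
# D = np.column_stack([img.ravel() for img in sar_stack])
# L, S = rpca_ialm(D)
```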
Affiliations
- Lucas P. Ramos: Department of Telecommunications, Aeronautics Institute of Technology, São José dos Campos 12228-900, Brazil
- Viet T. Vu: Department of Mathematics and Natural Sciences, Blekinge Institute of Technology, 371 79 Karlskrona, Sweden
- Mats I. Pettersson: Department of Mathematics and Natural Sciences, Blekinge Institute of Technology, 371 79 Karlskrona, Sweden
- Leonardo T. Duarte: School of Applied Sciences, State University of Campinas, Limeira 13484-350, Brazil
- Renato Machado: Department of Telecommunications, Aeronautics Institute of Technology, São José dos Campos 12228-900, Brazil
2
Lin CH, Liu Y, Chi CY, Hsu CC, Ren H, Quek TQS. Hyperspectral Tensor Completion Using Low-Rank Modeling and Convex Functional Analysis. IEEE Transactions on Neural Networks and Learning Systems 2024;35:10736-10750. [PMID: 37027554] [DOI: 10.1109/tnnls.2023.3243808]
Abstract
Hyperspectral tensor completion (HTC) for remote sensing, which is critical for advancing space exploration and other satellite imaging technologies, has drawn considerable attention from the machine learning community in recent years. A hyperspectral image (HSI) contains a wide range of narrowly spaced spectral bands, forming unique electromagnetic signatures for distinct materials, and thus plays an irreplaceable role in remote material identification. Nevertheless, remotely acquired HSIs are of low data purity and are quite often incompletely observed or corrupted during transmission. Completing the 3-D hyperspectral tensor, involving two spatial dimensions and one spectral dimension, is therefore a crucial signal processing task for facilitating subsequent applications. Benchmark HTC methods rely on either supervised learning or nonconvex optimization. As reported in the recent machine learning literature, the John ellipsoid (JE) in functional analysis is a fundamental topology for effective hyperspectral analysis. We therefore adopt this key topology in this work, but doing so induces a dilemma: computing the JE requires complete information about the entire HSI tensor, which is unavailable under the HTC problem setting. We resolve this dilemma, decouple HTC into convex subproblems that ensure computational efficiency, and show the state-of-the-art HTC performance of our algorithm. We also demonstrate that our method improves the subsequent land cover classification accuracy on the recovered hyperspectral tensor.
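As a point of reference for the completion task itself, a generic low-rank baseline can be sketched by unfolding the hyperspectral cube along its spectral mode and running a soft-impute style singular value thresholding loop. This is purely illustrative and is not the John-ellipsoid-based convex method proposed in the paper; the function names and parameters are assumptions:

```python
# Generic "soft-impute" baseline for hyperspectral tensor completion:
# unfold the (rows x cols x bands) cube into a bands-by-pixels matrix and
# alternate singular value thresholding with re-insertion of observed entries.
import numpy as np

def soft_impute(M_obs, mask, lam=1.0, n_iters=100):
    """M_obs: observed values (zeros where missing); mask: boolean observed-entry map."""
    X = M_obs.copy()
    for _ in range(n_iters):
        Z = np.where(mask, M_obs, X)                 # keep data, impute the rest
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        X = (U * np.maximum(s - lam, 0)) @ Vt        # singular value soft thresholding
    return X

def complete_hsi(cube, mask, lam=1.0):
    """cube: H x W x B hyperspectral data with missing voxels; mask: observed voxels."""
    H, W, B = cube.shape
    M = cube.reshape(H * W, B).T                     # spectral (mode-3) unfolding: B x (H*W)
    Omega = mask.reshape(H * W, B).T
    X = soft_impute(np.where(Omega, M, 0.0), Omega, lam=lam)
    return X.T.reshape(H, W, B)
```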
3
Zha Z, Wen B, Yuan X, Zhou J, Zhu C, Kot AC. Low-Rankness Guided Group Sparse Representation for Image Restoration. IEEE Transactions on Neural Networks and Learning Systems 2023;34:7593-7607. [PMID: 35130172] [DOI: 10.1109/tnnls.2022.3144630]
Abstract
As a prominent nonlocal image representation model, group sparse representation (GSR) has demonstrated great potential in diverse image restoration tasks. Most existing GSR-based image restoration approaches exploit the nonlocal self-similarity (NSS) prior by clustering similar patches into groups and imposing sparsity on the coefficients of each group, which can effectively preserve image texture information. However, these methods impose only plain sparsity on each individual patch of a group while neglecting other beneficial image properties, e.g., low-rankness (LR), which leads to degraded image restoration results. In this article, we propose a novel low-rankness guided group sparse representation (LGSR) model for highly effective image restoration. The proposed LGSR jointly exploits the sparsity and LR priors of each group of similar patches under a unified framework; the two priors are complementary and together effectively preserve the texture and structure information of natural images. Moreover, we apply an alternating minimization algorithm with an adaptively adjusted parameter scheme to solve the LGSR-based image restoration problem. Extensive experiments demonstrate that the proposed LGSR achieves superior results compared with many popular and state-of-the-art algorithms in various image restoration tasks, including denoising, inpainting, and compressive sensing (CS).
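To make the data structures concrete, the sketch below shows the nonlocal grouping step and a low-rank prior applied to each patch group. The actual LGSR model couples this low-rankness prior with a group-sparsity prior in a single objective solved by alternating minimization, which the simplified group-wise singular value thresholding here does not reproduce:

```python
# Sketch of nonlocal patch grouping plus a low-rank prior on each group.
# Patch size, stride, group size K, and threshold tau are illustrative choices.
import numpy as np

def extract_patches(img, p=8, stride=4):
    """Return flattened p x p patches and their top-left coordinates."""
    H, W = img.shape
    patches, coords = [], []
    for i in range(0, H - p + 1, stride):
        for j in range(0, W - p + 1, stride):
            patches.append(img[i:i + p, j:j + p].ravel())
            coords.append((i, j))
    return np.asarray(patches), coords

def denoise_groups(img, p=8, stride=4, K=32, tau=10.0):
    patches, coords = extract_patches(img, p, stride)
    out = np.zeros_like(img, dtype=float)
    weight = np.zeros_like(img, dtype=float)
    for idx, ref in enumerate(patches):
        # Nonlocal self-similarity: gather the K most similar patches
        d = np.sum((patches - ref) ** 2, axis=1)
        group = patches[np.argsort(d)[:K]].T          # (p*p) x K group matrix
        # Low-rank prior on the group via singular value soft thresholding
        U, s, Vt = np.linalg.svd(group, full_matrices=False)
        clean_ref = ((U * np.maximum(s - tau, 0)) @ Vt)[:, 0]
        i, j = coords[idx]
        out[i:i + p, j:j + p] += clean_ref.reshape(p, p)
        weight[i:i + p, j:j + p] += 1.0
    return out / np.maximum(weight, 1e-8)             # aggregate overlapping patches
```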
4
Convolutional Neural Network Knowledge Graph Link Prediction Model Based on Relational Memory. Computational Intelligence and Neuroscience 2023. [DOI: 10.1155/2023/3909697]
Abstract
A knowledge graph is a collection of fact triples, i.e., a semantic network composed of nodes and edges. Link prediction over knowledge graphs is used to infer the missing parts of triples. Common knowledge graph link prediction models include translation models, semantic matching models, and neural network models. However, translation models and semantic matching models have relatively simple structures and limited expressiveness, while neural network models easily ignore the overall structural characteristics of triples and cannot capture the links between entities and relations in low-dimensional space. In response to these problems, we propose a knowledge graph embedding model based on a relational memory network and a convolutional neural network (RMCNN). We encode triple embedding vectors with a relational memory network and decode them with a convolutional neural network. First, we obtain entity and relation vectors by encoding the latent dependencies between entities and relations together with other critical information, while preserving the translation properties of triples. Then, we stack the encoded head entity, relation, and tail entity embedding vectors into a matrix that serves as the input to the convolutional neural network. Finally, we use the convolutional neural network as the decoder, together with a dimension conversion strategy, to improve the interaction between entity and relation information across more dimensions. Experiments show that our model achieves significant progress and outperforms existing models and methods on several metrics.
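A much-simplified sketch of the CNN decoding idea (scoring a triple by convolving over the stacked head/relation/tail embedding matrix) is shown below. It omits the relational memory encoder and the dimension conversion strategy that are central to RMCNN, and all weights are random placeholders standing in for learned parameters:

```python
# Simplified CNN-style triple scorer over a stacked (head, relation, tail)
# embedding matrix. Embedding tables and filters are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
d, n_filters = 64, 8                       # embedding dimension, number of 1x3 filters

E = rng.normal(size=(1000, d))             # entity embeddings (placeholder)
R = rng.normal(size=(50, d))               # relation embeddings (placeholder)
filters = rng.normal(size=(n_filters, 3))  # each filter spans the (h, r, t) columns
w = rng.normal(size=(n_filters * d,))      # final scoring weights

def score(h_id, r_id, t_id):
    A = np.stack([E[h_id], R[r_id], E[t_id]], axis=1)   # d x 3 triple matrix
    feats = np.maximum(A @ filters.T, 0.0)              # d x n_filters feature maps (ReLU)
    return float(feats.ravel() @ w)                     # flatten and project to a score

print(score(3, 7, 42))   # higher score = more plausible triple (after training)
```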
5
Lu Y, Wang W, Zeng B, Lai Z, Shen L, Li X. Canonical Correlation Analysis With Low-Rank Learning for Image Representation. IEEE Transactions on Image Processing 2022;31:7048-7062. [PMID: 36346858] [DOI: 10.1109/tip.2022.3219235]
Abstract
As a multivariate data analysis tool, canonical correlation analysis (CCA) has been widely used in computer vision and pattern recognition. However, CCA uses the Euclidean distance as its metric, which is sensitive to noise and outliers in the data. Furthermore, CCA demands that the two training sets have the same number of training samples, which limits the performance of CCA-based methods. To overcome these limitations, this paper proposes two novel canonical correlation learning methods based on low-rank learning for image representation: robust canonical correlation analysis (robust-CCA) and low-rank representation canonical correlation analysis (LRR-CCA). By introducing two regular matrices, the numbers of training samples in the two datasets can be set to any values without limitation in both proposed methods. Specifically, robust-CCA uses low-rank learning to remove the noise in the data and extracts maximally correlated features from the two learned clean data matrices; the nuclear norm and the L1-norm are used as constraints on the learned clean matrices and noise matrices, respectively. LRR-CCA introduces low-rank representation into CCA to ensure that the correlated features are obtained in a low-rank representation. To verify the performance of the proposed methods, extensive experiments are conducted on five publicly available image databases. The experimental results demonstrate that the proposed methods outperform state-of-the-art CCA-based and low-rank learning methods.
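For reference, the baseline CCA that the two proposed methods extend can be sketched via whitening and an SVD; the sketch makes visible both the paired-sample requirement and the Euclidean-type covariance computations whose sensitivity to outliers motivates robust-CCA and LRR-CCA (neither of which is implemented here):

```python
# Baseline canonical correlation analysis via whitening and SVD.
import numpy as np

def inv_sqrt_psd(S):
    """Inverse matrix square root of a symmetric positive definite matrix."""
    vals, vecs = np.linalg.eigh(S)
    return vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T

def cca(X, Y, k=2, reg=1e-6):
    """Baseline CCA. X: n x p and Y: n x q must contain the same n paired samples."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])   # regularized covariances
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n
    Wx_white, Wy_white = inv_sqrt_psd(Sxx), inv_sqrt_psd(Syy)
    U, s, Vt = np.linalg.svd(Wx_white @ Sxy @ Wy_white)
    Wx = Wx_white @ U[:, :k]        # canonical projection for X
    Wy = Wy_white @ Vt[:k].T        # canonical projection for Y
    return Wx, Wy, s[:k]            # s[:k] are the canonical correlations
```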
6
Wang S, Chen Z, Du S, Lin Z. Learning Deep Sparse Regularizers With Applications to Multi-View Clustering and Semi-Supervised Classification. IEEE Transactions on Pattern Analysis and Machine Intelligence 2022;44:5042-5055. [PMID: 34018930] [DOI: 10.1109/tpami.2021.3082632]
Abstract
Sparsity-constrained optimization problems are common in machine learning, arising in sparse coding, low-rank minimization, and compressive sensing, among others. However, most previous studies focused on constructing various hand-crafted sparse regularizers, while little work was devoted to learning adaptive sparse regularizers from the input data for specific tasks. In this paper, we propose a deep sparse regularizer learning model that learns data-driven sparse regularizers adaptively. Via the proximal gradient algorithm, we find that learning a sparse regularizer is equivalent to learning a parameterized activation function, which motivates us to learn sparse regularizers within the deep learning framework. We therefore build a neural network composed of multiple blocks, each of which is differentiable and reusable. All blocks contain learnable piecewise linear activation functions that correspond to the sparse regularizer to be learned. The proposed model is trained with backpropagation, and all of its parameters are learned end-to-end. We apply our framework to multi-view clustering and semi-supervised classification tasks to learn a latent compact representation. Experimental results demonstrate the superiority of the proposed framework over state-of-the-art multi-view learning models.
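The key equivalence, that learning a sparse regularizer amounts to learning a parameterized activation function inside a proximal gradient step, can be illustrated with a single unrolled block. The piecewise-linear control points and step size below are untrained placeholders; in the actual model they are learned end-to-end with backpropagation:

```python
# One unrolled proximal-gradient block whose proximal operator is replaced by a
# parameterized piecewise-linear activation. All parameters are placeholders.
import numpy as np

def piecewise_linear(z, knots, values):
    """Odd-symmetric piecewise-linear activation defined by control points
    (knots, values); applied elementwise via interpolation."""
    return np.sign(z) * np.interp(np.abs(z), knots, values)

def unrolled_block(x, A, y, step, knots, values):
    """A gradient step on ||A x - y||^2 followed by the learned activation."""
    grad = A.T @ (A @ x - y)
    return piecewise_linear(x - step * grad, knots, values)

# Hypothetical usage with placeholder parameters resembling a soft threshold.
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 100)) / np.sqrt(30)
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = 3.0
y = A @ x_true
step = 1.0 / np.linalg.norm(A, 2) ** 2          # a safe gradient step size
knots = np.array([0.0, 0.1, 1.0, 10.0])         # learnable in the real model
values = np.array([0.0, 0.0, 0.9, 9.9])         # here: roughly a soft threshold at 0.1
x = np.zeros(100)
for _ in range(50):                             # stack of (shared-parameter) blocks
    x = unrolled_block(x, A, y, step, knots, values)
```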
7
Wang Z, Liu Y, Luo X, Wang J, Gao C, Peng D, Chen W. Large-Scale Affine Matrix Rank Minimization With a Novel Nonconvex Regularizer. IEEE Transactions on Neural Networks and Learning Systems 2022;33:4661-4675. [PMID: 33646960] [DOI: 10.1109/tnnls.2021.3059711]
Abstract
Low-rank minimization aims to recover a matrix of minimum rank subject to linear constraints. It arises in various data analysis and machine learning areas, such as recommender systems, video denoising, and signal processing. Nuclear norm minimization is the dominant approach, but it ignores the differences among the singular values of the target matrix. To address this issue, nonconvex low-rank regularizers have been widely used; unfortunately, existing methods suffer from drawbacks such as inefficiency and inaccuracy. To alleviate these problems, this article proposes a flexible model with a novel nonconvex regularizer that not only promotes low-rankness but can also be solved much faster and more accurately. With it, the original low-rank problem can be equivalently transformed into the resulting optimization problem under the rank restricted isometry property (rank-RIP) condition. Subsequently, Nesterov's rule and inexact proximal strategies are adopted to obtain a novel algorithm that solves this problem highly efficiently at a convergence rate of O(1/K), with K being the iteration count; the asymptotic convergence rate is also analyzed rigorously using the Kurdyka-Łojasiewicz (KŁ) inequality. Furthermore, we apply the proposed optimization model to typical low-rank problems, including matrix completion, robust principal component analysis (RPCA), and tensor completion. Exhaustive empirical studies on data analysis tasks, i.e., synthetic data analysis, image recovery, personalized recommendation, and background subtraction, indicate that the proposed model outperforms state-of-the-art models in both accuracy and efficiency.
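The stated motivation, that nuclear norm minimization shrinks all singular values equally, can be illustrated on matrix completion with a shrinkage that treats singular values non-uniformly. The weighting used below is a common generic choice and is not the specific nonconvex regularizer or the accelerated inexact proximal algorithm proposed in the article:

```python
# Matrix completion with a non-uniform singular value shrinkage: large singular
# values are penalized less (weights ~ 1/(sigma + eps)), unlike the constant
# shrinkage induced by the nuclear norm.
import numpy as np

def nonuniform_svt(Z, lam, eps=1e-2):
    """Shrink each singular value by lam/(sigma + eps) instead of a constant."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    s_shrunk = np.maximum(s - lam / (s + eps), 0.0)
    return (U * s_shrunk) @ Vt

def complete(M_obs, mask, lam=1.0, n_iters=200):
    """Iteratively re-impute missing entries with a non-uniformly shrunk estimate."""
    X = np.where(mask, M_obs, 0.0)
    for _ in range(n_iters):
        Z = np.where(mask, M_obs, X)       # keep observed entries, impute the rest
        X = nonuniform_svt(Z, lam)
    return X
```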
8
Zhang H, Qian F, Shang F, Du W, Qian J, Yang J. Global Convergence Guarantees of (A)GIST for a Family of Nonconvex Sparse Learning Problems. IEEE Transactions on Cybernetics 2022;52:3276-3288. [PMID: 32784147] [DOI: 10.1109/tcyb.2020.3010960]
Abstract
In recent years, generalized iterated shrinkage thresholding (GIST) algorithms have become commonly used first-order optimization methods for sparse learning problems. Nonconvex relaxations of the l0-norm usually achieve better performance than convex ones (e.g., the l1-norm), since the former yield nearly unbiased solutions. To increase computational efficiency, this work further provides an accelerated GIST variant, AGIST, based on an extrapolation acceleration technique, which helps reduce the number of iterations when solving a family of nonconvex sparse learning problems. In addition, we present an algorithmic analysis, including both local and global convergence guarantees as well as other intermediate results, for GIST and AGIST, jointly denoted as (A)GIST, by virtue of the Kurdyka-Łojasiewicz (KŁ) property and some mild assumptions. Numerical experiments on both synthetic data and real-world databases demonstrate that the convergence behavior of the objective function accords with the theoretical properties and that nonconvex sparse learning methods can achieve superior performance over some convex ones.
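A condensed sketch of a GIST-style iteration, and of the extrapolation step that yields an accelerated AGIST-like variant, is given below using the minimax concave penalty (MCP) as one representative nonconvex penalty. Step sizes are fixed for clarity, whereas the actual algorithms rely on line search and the milder assumptions analyzed in the paper:

```python
# Proximal gradient descent on a least-squares loss with a nonconvex MCP penalty,
# plus optional Nesterov-style extrapolation (an AGIST-like variant).
import numpy as np

def prox_mcp(z, t, lam, gamma):
    """Proximal operator of the MCP penalty (requires gamma > t)."""
    return np.where(np.abs(z) <= gamma * lam,
                    np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0) / (1.0 - t / gamma),
                    z)

def agist(A, y, lam=0.1, gamma=3.0, n_iters=300, accelerate=True):
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    t = 1.0 / L
    x = x_prev = np.zeros(A.shape[1])
    for k in range(1, n_iters + 1):
        v = x + (k - 1) / (k + 2) * (x - x_prev) if accelerate else x
        x_prev = x
        grad = A.T @ (A @ v - y)
        x = prox_mcp(v - t * grad, t, lam, gamma)
    return x

# Hypothetical usage on a random sparse regression problem.
rng = np.random.default_rng(0)
A = rng.normal(size=(80, 200)) / np.sqrt(80)
x_true = np.zeros(200)
x_true[:5] = 2.0
x_hat = agist(A, A @ x_true)
```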
10
Iterative rank-one matrix completion via singular value decomposition and nuclear norm regularization. Information Sciences 2021. [DOI: 10.1016/j.ins.2021.07.035]
11
Sang X, Lu H, Zhao Q, Zhang F, Lu J. Nonconvex regularizer and latent pattern based robust regression for face recognition. Information Sciences 2021. [DOI: 10.1016/j.ins.2020.08.016]
12
Qin A, Xian L, Yang Y, Zhang T, Tang YY. Low-Rank Matrix Recovery from Noise via an MDL Framework-Based Atomic Norm. Sensors (Basel) 2020;20:E6111. [PMID: 33121059] [PMCID: PMC7663647] [DOI: 10.3390/s20216111]
Abstract
The recovery of the underlying low-rank structure of clean data corrupted by sparse noise/outliers is attracting increasing interest. However, in many low-level vision problems, neither the exact target rank of the underlying structure nor the particular locations and values of the sparse outliers are known. Thus, conventional methods cannot separate the low-rank and sparse components completely, especially in the case of gross outliers or deficient observations. In this study, we therefore employ the minimum description length (MDL) principle and an atomic norm for low-rank matrix recovery to overcome these limitations. First, we use the atomic norm to find all the candidate atoms of the low-rank and sparse terms, and then we minimize the description length of the model in order to select the appropriate atoms of the low-rank and sparse matrices, respectively. Our experimental analyses show that the proposed approach obtains a higher success rate than state-of-the-art methods, even when the number of observations is limited or the corruption ratio is high. Experimental results on synthetic data and real sensing applications (high dynamic range imaging, background modeling, and removal of noise and shadows) demonstrate the effectiveness, robustness, and efficiency of the proposed method.
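To convey the model selection idea only, the toy sketch below applies a coarse two-part MDL code to choose the rank of a low-rank-plus-noise matrix: parameter cost plus residual coding cost, with the shortest total description length winning. The paper instead selects atoms of both the low-rank and sparse components under an atomic-norm formulation, which is not reproduced here:

```python
# Toy MDL-style rank selection: a BIC-like two-part code over candidate ranks.
import numpy as np

def mdl_rank(M, max_rank=None):
    m, n = M.shape
    max_rank = max_rank or min(m, n) - 1
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    best_r, best_len = 0, np.inf
    for r in range(1, max_rank + 1):
        residual = np.sum(s[r:] ** 2)                    # squared error of the rank-r fit
        k = r * (m + n - r)                              # free parameters of a rank-r model
        data_bits = 0.5 * m * n * np.log(max(residual, 1e-12) / (m * n))
        model_bits = 0.5 * k * np.log(m * n)
        if data_bits + model_bits < best_len:
            best_len, best_r = data_bits + model_bits, r
    return best_r

# Hypothetical check: a rank-3 matrix plus small Gaussian noise.
rng = np.random.default_rng(1)
M = rng.normal(size=(60, 3)) @ rng.normal(size=(3, 50)) + 0.05 * rng.normal(size=(60, 50))
print(mdl_rank(M))   # typically selects rank 3 for this toy example
```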
Affiliations
- Anyong Qin: School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Lina Xian: School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Yongliang Yang: School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Taiping Zhang: College of Computer Science, Chongqing University, Chongqing 400030, China
- Yuan Yan Tang: Faculty of Science and Technology, University of Macau, Macau 999078, China