1. Saffari M, Khodayar M, Khodayar ME, Shahidehpour M. Behind-the-Meter Load and PV Disaggregation via Deep Spatiotemporal Graph Generative Sparse Coding With Capsule Network. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:14573-14587. [PMID: 37339026] [DOI: 10.1109/tnnls.2023.3280078]
Abstract
Rooftop photovoltaic (PV) panels are attracting enormous attention as clean and sustainable energy sources amid rising energy demand, depreciating physical assets, and global environmental challenges. In residential areas, the large-scale integration of these generation resources influences customer load profiles and introduces uncertainty into the distribution system's net load. Since such resources are typically located behind the meter (BtM), accurate estimation of BtM load and PV power is crucial for distribution network operation. This article proposes the spatiotemporal graph sparse coding (SC) capsule network, which incorporates SC into deep generative graph modeling and capsule networks for accurate BtM load and PV generation estimation. A set of neighboring residential units is modeled as a dynamic graph whose edges represent the correlation among their net demands. A generative encoder-decoder model, i.e., spectral graph convolution (SGC) attention peephole long short-term memory (PLSTM), is devised to extract the highly nonlinear spatiotemporal patterns from the formed dynamic graph. To enrich the sparsity of the latent space, a dictionary is then learned in the hidden layer of the proposed encoder-decoder, and the corresponding sparse codes are obtained. This sparse representation is used by a capsule network to estimate the BtM PV generation and the load of all residential units. Experimental results on two real-world energy disaggregation (ED) datasets, Pecan Street and Ausgrid, demonstrate more than 9.8% and 6.3% root mean square error (RMSE) improvements over the state of the art in BtM PV and load estimation, respectively.
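The sparse coding step this abstract refers to is classically posed as minimizing 0.5*||x - D a||^2 + lam*||a||_1 over a learned dictionary D. As a hedged illustration only (the article's actual encoder is a deep spectral-graph model that learns the dictionary jointly; the dictionary, signal, and parameter values below are synthetic assumptions), a minimal ISTA sparse coder looks like this:

```python
import numpy as np

def ista_sparse_code(x, D, lam=0.1, n_iter=300):
    """Minimal ISTA for min_a 0.5*||x - D a||^2 + lam*||a||_1."""
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)               # gradient of the smooth data term
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft thresholding
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)                 # unit-norm dictionary atoms
a_true = np.zeros(50)
a_true[[3, 17]] = [1.5, -2.0]                  # 2-sparse ground-truth code
x = D @ a_true
a_hat = ista_sparse_code(x, D, lam=0.05)
print(np.round(a_hat[[3, 17]], 2))             # dominant coefficients sit on the true support
```

Soft thresholding shrinks coefficients slightly, so the recovered values are a little smaller in magnitude than the ground truth; that bias is the price of the l1 relaxation.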
2. Chen Z, Wu XJ, Xu T, Kittler J. Discriminative Dictionary Pair Learning With Scale-Constrained Structured Representation for Image Classification. IEEE Transactions on Neural Networks and Learning Systems 2023; 34:10225-10239. [PMID: 37015383] [DOI: 10.1109/tnnls.2022.3165217]
Abstract
The dictionary pair learning (DPL) model designs a synthesis dictionary and an analysis dictionary to accomplish rapid sample encoding. In this article, we propose a novel structured representation learning algorithm based on DPL for image classification, referred to as discriminative DPL with scale-constrained structured representation (DPL-SCSR). DPL-SCSR uses the binary label matrix of the dictionary atoms to project the representation into the label space of the training samples. By imposing a non-negative constraint, the learned representation adaptively approximates a block-diagonal structure. This transformation also controls the scale of the block-diagonal representation by constraining the sum of each sample's within-class coefficients to 1, which means that the dictionary atoms of each class compete to represent samples from that class; similarity preservation is thus enforced through the constraint on the sum of the coefficients. More importantly, DPL-SCSR does not need a separate classifier in the representation space, as the label matrix of the dictionary also serves as an efficient linear classifier. Finally, DPL-SCSR imposes the l2,p-norm on the analysis dictionary to make the feature extraction process more interpretable. DPL-SCSR seamlessly incorporates the scale-constrained structured representation learning, within-class similarity preservation, and the linear classifier into one regularization term, which dramatically reduces the complexity of training and parameter tuning. Experimental results on several popular image classification datasets show that DPL-SCSR delivers superior performance compared with state-of-the-art (SOTA) dictionary learning methods. The MATLAB code of this article is available at https://github.com/chenzhe207/DPL-SCSR.
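The two ideas this abstract leans on, rapid encoding via an analysis dictionary and the label matrix doubling as a linear classifier, can be sketched in a few lines. This is a hedged toy, not the DPL-SCSR training procedure: here the analysis dictionary is simply the pseudo-inverse of a random class-grouped synthesis dictionary rather than a jointly learned pair, and all sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_classes, atoms_per_class, dim = 3, 4, 30
# Synthesis dictionary with class-grouped atoms (toy stand-in for a trained D).
D = rng.standard_normal((dim, n_classes * atoms_per_class))
D /= np.linalg.norm(D, axis=0)
P = np.linalg.pinv(D)                  # analysis dictionary (here: just the pseudo-inverse)
# Binary label matrix over atoms: H[c, j] = 1 iff atom j belongs to class c.
H = np.kron(np.eye(n_classes), np.ones((1, atoms_per_class)))

def classify(x):
    a = P @ x                          # rapid encoding: a single matrix multiply, no l1 solve
    return int(np.argmax(H @ a))       # label matrix sums within-class coefficients

# A sample synthesized from class-1 atoms only should be assigned class 1.
coeffs = np.zeros(n_classes * atoms_per_class)
coeffs[atoms_per_class:2 * atoms_per_class] = rng.uniform(0.5, 1.0, atoms_per_class)
x = D @ coeffs
print(classify(x))  # 1
```

The point of the sketch is the cost profile: encoding and classification are two matrix multiplies, which is why DPL-style models avoid the iterative sparse reconstruction of classical dictionary learning.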
3. Zhao H, Li Z, Chen W, Zheng Z, Xie S. Accelerated Partially Shared Dictionary Learning With Differentiable Scale-Invariant Sparsity for Multi-View Clustering. IEEE Transactions on Neural Networks and Learning Systems 2023; 34:8825-8839. [PMID: 35254997] [DOI: 10.1109/tnnls.2022.3153310]
Abstract
Multiview dictionary learning (DL) is attracting attention in multiview clustering due to its efficient feature learning ability. However, because of the gaps between views, most existing multiview DL algorithms struggle to simultaneously exploit the consistent and complementary information in multiview data and to learn the most precise representation for multiview clustering. This article proposes an efficient multiview DL algorithm for multiview clustering, which uses a partially shared DL model with a flexible ratio of shared sparse coefficients to exploit both consistency and complementarity in the multiview data. In particular, a differentiable scale-invariant function is used as the sparsity regularizer; like the l0-norm regularizer, it measures the absolute sparsity of the coefficients, but it is continuous and differentiable almost everywhere. The corresponding optimization problem is solved by the proximal splitting method with extrapolation, and the proximal operator of the differentiable scale-invariant regularizer is derived. Synthetic experiments demonstrate that the proposed algorithm recovers the underlying dictionary well with reasonable convergence time. Multiview clustering experiments on six real-world multiview datasets show that the proposed algorithm is less sensitive to the regularizer parameter than the other algorithms. Furthermore, an appropriate coefficient sharing ratio helps to exploit consistent information while keeping complementary information from the multiview data, thus enhancing clustering performance. In addition, the convergence results show that the proposed algorithm achieves the best multiview clustering performance among the compared algorithms and generally converges faster than the compared multiview algorithms.
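To see what "scale-invariant sparsity" means concretely: the exact regularizer used in this paper is not reproduced here, but a well-known measure with the properties the abstract describes (tracks absolute sparsity like the l0 norm, is differentiable away from the origin, and is unchanged by rescaling) is the l1/l2 ratio. A minimal illustration:

```python
import numpy as np

def l1_over_l2(a, eps=1e-12):
    """Scale-invariant sparsity measure: ||a||_1 / ||a||_2.
    Equals 1 for a 1-sparse vector and sqrt(n) for a fully dense uniform one."""
    return np.abs(a).sum() / (np.linalg.norm(a) + eps)

sparse = np.array([0.0, 3.0, 0.0, 0.0])
dense = np.array([1.0, 1.0, 1.0, 1.0])
print(l1_over_l2(sparse))         # 1.0: maximally sparse
print(l1_over_l2(dense))          # 2.0 = sqrt(4): maximally dense
print(l1_over_l2(10.0 * sparse))  # still 1.0: rescaling changes nothing, unlike the plain l1 norm
```

The scale invariance is the practical payoff: a plain l1 penalty can be "cheated" by shrinking the coefficients and growing the dictionary atoms, whereas a ratio measure cannot.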
4. Li J, Wei X, Li Q, Zhang Y, Li Z, Li J, Wang J. Proximal gradient nonconvex optimization algorithm for the slice-based ℓ0-constrained convolutional dictionary learning. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.110185]
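The title above combines two standard ingredients: a hard ℓ0 sparsity constraint and a proximal gradient solver. The proximal (projection) step for the constraint ||a||_0 <= k is simply keeping the k largest-magnitude entries, and iterating it with gradient steps gives iterative hard thresholding (IHT). This is a hedged generic sketch of that building block, not the paper's slice-based convolutional algorithm; the dictionary and signal below are synthetic assumptions.

```python
import numpy as np

def project_k_sparse(a, k):
    """Projection onto {a : ||a||_0 <= k}: keep the k largest-magnitude entries."""
    out = np.zeros_like(a)
    idx = np.argsort(np.abs(a))[-k:]           # indices of the k largest magnitudes
    out[idx] = a[idx]
    return out

def iht(x, D, k, n_iter=200):
    """Proximal gradient for min 0.5*||x - D a||^2 s.t. ||a||_0 <= k (a.k.a. IHT)."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2     # 1/L, with L the gradient's Lipschitz constant
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = project_k_sparse(a - step * (D.T @ (D @ a - x)), k)
    return a

rng = np.random.default_rng(2)
D = rng.standard_normal((32, 64))
D /= np.linalg.norm(D, axis=0)
a_true = np.zeros(64)
a_true[[5, 40]] = [2.0, -1.5]
x = D @ a_true
a_hat = iht(x, D, k=2)
print(sorted(np.flatnonzero(a_hat).tolist()))  # support of the recovered 2-sparse code
```

Unlike the soft-thresholding prox of the l1 norm, this projection is nonconvex, which is why such formulations need the nonconvex convergence analysis the title advertises.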
5. Li Z, Xie Y, Zeng K, Xie S, Kumara BT. Adaptive sparsity-regularized deep dictionary learning based on lifted proximal operator machine. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.110123]
6. Distributed Robust Dictionary Pair Learning and Its Application to Aluminum Electrolysis Industrial Process. Processes (Basel) 2022. [DOI: 10.3390/pr10091850]
Abstract
In modern industrial systems, high-dimensional process data provide rich information for process monitoring. To make full use of the local information of industrial processes, a distributed robust dictionary pair learning (DRDPL) method is proposed for refined process monitoring. First, the global system is divided into several sub-blocks based on reliable prior knowledge of the industrial process, which achieves dimensionality reduction and reduces process complexity. Second, a robust dictionary pair learning (RDPL) method is developed to build a local monitoring model for each sub-block: a sparsity constraint with the l2,1 norm is added to the analysis dictionary, and a low-rank constraint is applied to the synthesis dictionary, yielding robust dictionary pairs. A Bayesian inference method is then introduced to fuse the local monitoring information into global anomaly detection, and the block contribution index and variable contribution index are used to isolate anomalies. Finally, the effectiveness of the proposed method is verified by a numerical simulation and the Tennessee Eastman benchmark, and the method is successfully applied to a real-world aluminum electrolysis process.
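The l2,1-norm constraint mentioned above is the standard row-sparsity penalty: the sum of the l2 norms of the rows, whose proximal operator shrinks each row toward zero as a unit and zeroes out weak rows entirely. A minimal sketch of that operator (the matrix values here are illustrative, not from the paper):

```python
import numpy as np

def l21_norm(E):
    """l2,1 norm: sum of the l2 norms of the rows."""
    return np.linalg.norm(E, axis=1).sum()

def prox_l21(E, tau):
    """Proximal operator of tau*||.||_{2,1}: row-wise shrinkage toward zero.
    Rows with l2 norm below tau are set to zero entirely."""
    norms = np.linalg.norm(E, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return scale * E

E = np.array([[3.0, 4.0],    # row norm 5 -> shrunk to norm 4
              [0.3, 0.4]])   # row norm 0.5 < tau -> zeroed out
out = prox_l21(E, tau=1.0)
print(out)                   # [[2.4, 3.2], [0., 0.]]
```

Zeroing whole rows is what gives the robustness claimed in the abstract: a grossly corrupted variable contributes one row of error, and the penalty discards that row rather than spreading the damage across the model.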
7. Labeled projective dictionary pair learning: application to handwritten numbers recognition. Inf Sci (N Y) 2022. [DOI: 10.1016/j.ins.2022.07.070]
8. Chen Z, Wu XJ, Kittler J. Relaxed Block-Diagonal Dictionary Pair Learning With Locality Constraint for Image Recognition. IEEE Transactions on Neural Networks and Learning Systems 2022; 33:3645-3659. [PMID: 33764879] [DOI: 10.1109/tnnls.2021.3053941]
Abstract
We propose a novel structured analysis-synthesis dictionary pair learning method for efficient representation and image classification, referred to as relaxed block-diagonal dictionary pair learning with a locality constraint (RBD-DPL). RBD-DPL learns relaxed block-diagonal representations of the input data to enhance the discriminability of both the analysis and synthesis dictionaries by dynamically optimizing the block-diagonal components of the representation while setting the off-block-diagonal counterparts to zero. In this way, the learned synthesis subdictionary is more flexible in reconstructing samples from the same class, and the analysis dictionary transforms the original samples into a relaxed coefficient subspace that is closely associated with the label information. In addition, we incorporate a locality-constraint term as a complement to the relaxation learning to enhance the locality of the analytical encoding, so that the learned representation exhibits high intraclass similarity. A linear classifier is trained in the learned relaxed representation space for consistent classification. RBD-DPL is computationally efficient because it avoids both the use of class-specific complementary data matrices for learning a discriminative analysis dictionary and the time-consuming l1/l0-norm sparse reconstruction process. The experimental results demonstrate that RBD-DPL achieves recognition performance comparable to or better than the state-of-the-art algorithms, while both training and testing time are significantly reduced, which verifies the efficiency of our method. The MATLAB code of the proposed RBD-DPL is available at https://github.com/chenzhe207/RBD-DPL.
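The block-diagonal structure that this abstract (and several others on this page) keeps invoking is easy to picture: order the dictionary atoms and samples by class, and the coefficient matrix should be nonzero only where an atom's class matches a sample's class. A minimal sketch of building that mask from labels and zeroing the off-block entries (the labels and coefficient values are illustrative assumptions, not the RBD-DPL optimization):

```python
import numpy as np

def block_diagonal_mask(atom_labels, sample_labels):
    """Mask M with M[i, j] = 1 iff atom i and sample j share a class label."""
    atom_labels = np.asarray(atom_labels)
    sample_labels = np.asarray(sample_labels)
    return (atom_labels[:, None] == sample_labels[None, :]).astype(float)

# Toy setup: 4 atoms (2 per class), 3 samples with classes [0, 1, 0].
atom_labels = [0, 0, 1, 1]
sample_labels = [0, 1, 0]
A = np.arange(1.0, 13.0).reshape(4, 3)       # dense coefficient matrix
A_bd = block_diagonal_mask(atom_labels, sample_labels) * A
print(A_bd)                                  # off-block-diagonal entries are zeroed
```

RBD-DPL enforces this structure softly during learning rather than by hard masking, but the mask makes the target structure (and why it ties codes to labels) explicit.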
9. Salient double reconstruction-based discriminative projective dictionary pair learning for crowd counting. Appl Intell 2022. [DOI: 10.1007/s10489-022-03607-z]
10. Yan C, Zhang Y. Inverse Representation Inspired Multi-Resolution Dictionary Learning Method for Face Recognition. Int J Pattern Recogn 2022. [DOI: 10.1142/s0218001422560122]
Abstract
Face recognition is widely used and is one of the most challenging tasks in computer vision. In recent years, many face recognition methods based on dictionary learning have been proposed. However, most methods focus only on the resolution of the original image, and changes of resolution may affect recognition results in practical applications. To address these problems, a multi-resolution dictionary learning method combined with inverse sample representation is proposed and applied to face recognition. First, the dictionaries associated with images at multiple resolutions are learned to obtain the first representation error. Then, different auxiliary samples are generated for each test sample, and a dictionary consisting of the test sample, the auxiliary samples, and the training samples of the other classes is established to represent all training samples at the given resolution in turn, yielding the second representation error. Finally, a weighted fusion scheme is used to obtain the final classification result. Experimental results on four widely used face datasets show that the proposed method achieves better performance and is robust to resolution changes.
Affiliation(s)
- Chunman Yan, School of Physics and Electronic Engineering, Northwest Normal University, Lanzhou, P. R. China
- Yuyao Zhang, School of Physics and Electronic Engineering, Northwest Normal University, Lanzhou, P. R. China
11. Fan Z, Zhang H, Zhang Z, Lu G, Zhang Y, Wang Y. A survey of crowd counting and density estimation based on convolutional neural network. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2021.02.103]
12. Structured classifier-based dictionary pair learning for pattern classification. Pattern Anal Appl 2022. [DOI: 10.1007/s10044-021-01046-z]
13. Jin X, Wu Y, Xu Y, Sun C. Research on image sentiment analysis technology based on sparse representation. CAAI Transactions on Intelligence Technology 2022. [DOI: 10.1049/cit2.12074]
Affiliation(s)
- Xiaofang Jin, College of Information and Communication Engineering, Communication University of China, Beijing, China
- Yinan Wu, College of Information and Communication Engineering, Communication University of China, Beijing, China
- Ying Xu, College of Information and Communication Engineering, Communication University of China, Beijing, China; Academy of Broadcasting Science, Beijing, China
- Chang Sun, College of Information and Communication Engineering, Communication University of China, Beijing, China
14. Multi-level dictionary learning for fine-grained images categorization with attention model. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.07.147]
15. Zhang Z, Sun Y, Wang Y, Zhang Z, Zhang H, Liu G, Wang M. Twin-Incoherent Self-Expressive Locality-Adaptive Latent Dictionary Pair Learning for Classification. IEEE Transactions on Neural Networks and Learning Systems 2021; 32:947-961. [PMID: 32310782] [DOI: 10.1109/tnnls.2020.2979748]
Abstract
The projective dictionary pair learning (DPL) model jointly seeks a synthesis dictionary and an analysis dictionary by extracting block-diagonal coefficients with an incoherence-constrained analysis dictionary. However, DPL fails to discover the underlying subspaces and salient features at the same time, and it cannot encode the neighborhood information of the embedded coding coefficients, let alone do so adaptively. In addition, although the data can be well reconstructed by minimizing the reconstruction error, useful distinguishing salient feature information may be lost and absorbed into the noise term. In this article, we propose a novel self-expressive adaptive locality-preserving framework: twin-incoherent self-expressive latent DPL (SLatDPL). To capture the salient features of the samples, SLatDPL minimizes a latent reconstruction error by integrating coefficient learning and salient feature extraction into a unified model, which can simultaneously discover the underlying subspaces and salient features. To make the coefficients block-diagonal and to ensure that the salient features are discriminative, SLatDPL regularizes them by imposing a twin-incoherence constraint. Moreover, SLatDPL uses a self-expressive adaptive weighting strategy based on normalized block-diagonal coefficients to preserve the locality of the codes and salient features. SLatDPL can handle new data directly using the class-specific reconstruction residual. Extensive simulations on several public databases demonstrate the satisfactory performance of SLatDPL compared with related methods.
16. Sun Y, Ren Z, Yang C, Sun Q, Chen L, Ou Y. Face image set classification with self-weighted latent sparse discriminative learning. Neural Comput Appl 2020. [DOI: 10.1007/s00521-020-05479-1]
17. Ren J, Zhang Z, Li S, Wang Y, Liu G, Yan S, Wang M. Learning Hybrid Representation by Robust Dictionary Learning in Factorized Compressed Space. IEEE Transactions on Image Processing 2020; 29:3941-3956. [PMID: 31944974] [DOI: 10.1109/tip.2020.2965289]
Abstract
In this paper, we investigate robust dictionary learning (DL) to discover a hybrid salient low-rank and sparse representation in a factorized compressed space. A Joint Robust Factorization and Projective Dictionary Learning (J-RFDL) model is presented. J-RFDL aims to improve data representations by enhancing robustness to outliers and noise, encoding the reconstruction error more accurately, and obtaining hybrid salient coefficients with accurate reconstruction ability. Specifically, J-RFDL performs robust representation by DL in a factorized compressed space to eliminate the negative effects of noise and outliers on the results, which also makes the DL process efficient. To make the encoding process robust to noise, J-RFDL employs the sparse L2,1-norm, which can minimize the factorization and reconstruction errors jointly by forcing rows of the reconstruction errors to be zero. To deliver salient coefficients with good structure that reconstruct the given data well, J-RFDL imposes joint low-rank and sparse constraints on the embedded coefficients with a synthesis dictionary. Based on the hybrid salient coefficients, we also extend J-RFDL to joint classification and propose a discriminative J-RFDL model, which improves the discriminating ability of the learned coefficients by jointly minimizing the classification error. Extensive experiments on public datasets demonstrate that our formulations deliver superior performance over other state-of-the-art methods.
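The low-rank half of the "joint low-rank and sparse" constraints above is typically handled through the nuclear norm, whose proximal operator is singular value thresholding (SVT): take the SVD, soft-threshold the singular values, and reconstruct. This is a hedged sketch of that generic building block, not the J-RFDL algorithm itself; the diagonal test matrix is an illustrative assumption chosen so the effect is easy to read off.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau*||.||_* (nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt   # shrink singular values, then reconstruct

# Diagonal example: singular values 5, 3, 0.4, 0.2 become 4, 2, 0, 0.
M = np.diag([5.0, 3.0, 0.4, 0.2])
M_lr = svt(M, tau=1.0)
print(np.round(M_lr, 6))
print(np.linalg.matrix_rank(M_lr))  # 2: the two weak directions are removed
```

Thresholding at tau=1.0 kills every singular value below the threshold, so the rank drops from 4 to 2; applied inside an iterative solver, this is what pulls the coefficient matrix toward a low-rank structure while a separate sparse prox handles the sparsity term.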