1. Designing a supervised feature selection technique for mixed attribute data analysis. Machine Learning with Applications, 2022. DOI: 10.1016/j.mlwa.2022.100431. Open access.
2. Wang J, Xie F, Nie F, Li X. Unsupervised Adaptive Embedding for Dimensionality Reduction. IEEE Transactions on Neural Networks and Learning Systems, 2022; 33:6844-6855. PMID: 34101602. DOI: 10.1109/tnnls.2021.3083695.
Abstract: High-dimensional data are highly correlated and redundant, which makes them difficult to explore and analyze. Many unsupervised dimensionality reduction (DR) methods have been proposed, and constructing a neighborhood graph is the primary step of most of them. However, two problems remain: 1) the construction of the graph is usually separate from the selection of the projection direction, and 2) the original data are inevitably noisy. In this article, we propose an unsupervised adaptive embedding (UAE) method for DR, a linear graph-embedding method, to address these challenges. First, an adaptive neighbor-allocation method is proposed to construct the affinity graph. Second, the construction of the affinity graph and the calculation of the projection matrix are integrated. The method considers both the local relationships between samples and the global characteristics of high-dimensional data, and a cleaned data matrix is introduced to remove noise in the subspace. The relationship between our method and locality preserving projections (LPP) is also explored. Finally, an alternating iterative optimization algorithm is derived to solve our model, and its convergence and computational complexity are analyzed. Comprehensive experiments on synthetic and benchmark datasets illustrate the superiority of our method.
3. Guehairia O, Dornaika F, Ouamane A, Taleb-Ahmed A. Facial Age Estimation Using Tensor Based Subspace Learning and Deep Random Forests. Information Sciences, 2022. DOI: 10.1016/j.ins.2022.07.135.
4. Ran R, Feng J, Zhang S, Fang B. A General Matrix Function Dimensionality Reduction Framework and Extension for Manifold Learning. IEEE Transactions on Cybernetics, 2022; 52:2137-2148. PMID: 32697725. DOI: 10.1109/tcyb.2020.3003620.
Abstract: Many dimensionality reduction methods in manifold learning suffer from the so-called small-sample-size (SSS) problem. Starting from the SSS problem, we first summarize the existing dimensionality reduction methods and construct a unified criterion function for them. Then, combining the unified criterion with the matrix function, we propose a general matrix-function dimensionality reduction framework. This framework is configurable: one can select suitable functions to construct such a matrix transformation framework, from which a series of new dimensionality reduction methods can be derived. In this article, we discuss how to choose suitable functions from two aspects: 1) solving the SSS problem and 2) improving pattern classification ability. As an extension, using the inverse hyperbolic tangent function and the linear function, we propose a new matrix-function dimensionality reduction framework. Compared with existing methods for the SSS problem, these new methods achieve better pattern classification ability at lower computational complexity. Experimental results on handwritten digit and letter databases and on two face databases show the superiority of the new methods.
5. Simple and Robust Locality Preserving Projections Based on Maximum Difference Criterion. Neural Processing Letters, 2022. DOI: 10.1007/s11063-021-10706-4.
6. Khozaei B, Eftekhari M. Unsupervised Feature Selection Based on Spectral Clustering with Maximum Relevancy and Minimum Redundancy Approach. International Journal of Pattern Recognition and Artificial Intelligence, 2021. DOI: 10.1142/s0218001421500312.
Abstract: In this paper, two novel approaches for unsupervised feature selection are proposed based on spectral clustering. In the first proposed method, spectral clustering is applied to the features, and the cluster centers together with their nearest neighbors are selected. These features have minimum similarity (redundancy) between themselves, since they belong to different clusters. Next, the samples of the data sets are clustered by spectral clustering, and a specific pseudo-label is assigned to the samples of each cluster. According to the obtained pseudo-labels, the information gain of each feature is computed, which secures maximum relevancy. Finally, the intersection of the features selected in the two previous steps is determined, which simultaneously guarantees both maximum relevancy and minimum redundancy. The second proposed approach is very similar to the first; its only, but significant, difference is that it selects one feature from each cluster and sorts all the features by their relevancy. By appending the selected features to a sorted list and ignoring them in the next step, the algorithm continues with the remaining features until all features have been appended to the sorted list. Both proposed methods are compared with state-of-the-art methods, and the obtained results confirm the performance of the proposed approaches, especially the second one.
Affiliations:
- Bahareh Khozaei: Department of Mathematics, Kerman Branch, Islamic Azad University, Kerman, Iran
- Mahdi Eftekhari: Department of Computer Engineering, Shahid Bahonar University of Kerman, Kerman, Iran
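The two-step recipe in entry 6's abstract (cluster the features to bound redundancy, pseudo-label the samples to score relevancy) can be sketched in a few lines. This is only an illustrative approximation under assumed details: scikit-learn's `SpectralClustering`, a correlation-based feature affinity, and mutual information standing in for information gain. The function name and all parameters are hypothetical, not from the paper.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.feature_selection import mutual_info_classif

def select_features(X, n_feature_clusters=4, n_sample_clusters=3, top_k=3, seed=0):
    """Sketch of spectral-clustering feature selection (min redundancy + max relevancy)."""
    # Redundancy step: cluster the *features* (columns), so that picks come
    # from different clusters and are therefore mutually dissimilar.
    affinity = np.abs(np.corrcoef(X, rowvar=False))   # feature-feature similarity
    f_labels = SpectralClustering(n_clusters=n_feature_clusters,
                                  affinity="precomputed",
                                  random_state=seed).fit_predict(affinity)
    reps = []
    for c in range(n_feature_clusters):
        members = np.flatnonzero(f_labels == c)
        # representative = the member most similar to the rest of its cluster
        within = affinity[np.ix_(members, members)].mean(axis=1)
        reps.append(int(members[np.argmax(within)]))

    # Relevancy step: pseudo-label the samples by clustering them, then score
    # each representative by mutual information with the pseudo-labels.
    pseudo_y = SpectralClustering(n_clusters=n_sample_clusters,
                                  random_state=seed).fit_predict(X)
    relevance = mutual_info_classif(X[:, reps], pseudo_y, random_state=seed)
    order = np.argsort(relevance)[::-1]
    return [reps[i] for i in order[:top_k]]
```

Because each representative comes from a different feature cluster, the returned indices are distinct by construction; the final ranking keeps only the most relevant among them.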
7. Laiadi O, Ouamane A, Benakcha A, Taleb-Ahmed A, Hadid A. Tensor cross-view quadratic discriminant analysis for kinship verification in the wild. Neurocomputing, 2020. DOI: 10.1016/j.neucom.2019.10.055.
8. Hosseini ES, Moattar MH. Evolutionary feature subsets selection based on interaction information for high dimensional imbalanced data classification. Applied Soft Computing, 2019. DOI: 10.1016/j.asoc.2019.105581.
9. Multilinear Side-Information based Discriminant Analysis for face and kinship verification in the wild. Neurocomputing, 2019. DOI: 10.1016/j.neucom.2018.09.051.
10. Ramachandra Murthy K, Ghosh A. Norm Discriminant Eigenspace Transform for Pattern Classification. IEEE Transactions on Cybernetics, 2019; 49:273-286. PMID: 29990212. DOI: 10.1109/tcyb.2017.2771530.
Abstract: Most supervised dimensionality reduction (DR) methods define interclass scatter as the separability between class means, which implicitly assumes unimodal Gaussian likelihoods and biases the projection space toward the class means. This paper presents a novel DR approach, the norm discriminant eigenspace transform (NDET), in which the average l2 norms of the classes characterize interclass separability and the within-class distance characterizes intraclass compactness. NDET is intended to accommodate data distributions that may be multimodal and non-Gaussian. We derive an upper bound for NDET and a specific solution space that attains this bound. Because this specific solution rarely exists, we consider the solution space of the upper bound to achieve better dimensionality reduction and class discrimination. A nonlinear version of NDET (kernel NDET) is also developed to model nonlinear relationships between features. We show experimentally, on synthetic data, that NDET effectively overcomes the limitations arising from the unimodality and distribution assumptions of traditional algorithms. Extensive empirical studies compare the proposed method with closely related state-of-the-art schemes on UCI machine learning repository and face recognition data sets to establish its novelty.
11. Yu YF, Ren CX, Dai DQ, Huang KK. Kernel Embedding Multiorientation Local Pattern for Image Representation. IEEE Transactions on Cybernetics, 2018; 48:1124-1135. PMID: 28368841. DOI: 10.1109/tcyb.2017.2682272.
Abstract: Local feature descriptors play a key role in many image classification applications. Methods such as the local binary pattern and image gradient orientations have proven effective to some extent. However, such traditional descriptors, which utilize only a single type of feature, are insufficient to capture the edge and orientation information and the intrinsic structure of images. In this paper, we propose a kernel embedding multiorientation local pattern (MOLP) to address this problem. A given image is first transformed by gradient operators in local regions, generating multiorientation gradient images that contain edge and orientation information in different directions. A histogram feature that accounts for both the sign component and the magnitude component is then extracted from each orientation gradient image to form a refined feature. The refined feature captures more of the intrinsic structure and is effective for image representation and classification. Finally, the multiorientation refined features are automatically fused in a kernel embedding discriminant subspace learning model. Extensive experiments on various image classification tasks, including face recognition, texture classification, object categorization, and palmprint recognition, show that MOLP achieves performance competitive with state-of-the-art methods.
12. Wang SH, Phillips P, Dong ZC, Zhang YD. Intelligent facial emotion recognition based on stationary wavelet entropy and Jaya algorithm. Neurocomputing, 2018. DOI: 10.1016/j.neucom.2017.08.015.
13.
14. Lu GF, Wang Y, Zou J, Wang Z. Matrix exponential based discriminant locality preserving projections for feature extraction. Neural Networks, 2018; 97:127-136. DOI: 10.1016/j.neunet.2017.09.014. Received 10/18/2016; revised 07/21/2017; accepted 09/28/2017.
15. Wang R, Nie F, Hong R, Chang X, Yang X, Yu W. Fast and Orthogonal Locality Preserving Projections for Dimensionality Reduction. IEEE Transactions on Image Processing, 2017; 26:5019-5030. PMID: 28708560. DOI: 10.1109/tip.2017.2726188.
Abstract: The locality preserving projections (LPP) algorithm is a linear dimensionality reduction algorithm that has been used frequently in face recognition and other applications. However, the projection matrix in LPP is not orthogonal, which creates difficulties for both reconstruction and other applications. As orthogonality is desirable, orthogonal LPP (OLPP) was proposed to obtain an orthogonal projection matrix via a step-by-step procedure; this, however, makes the algorithm computationally more expensive. In this paper, we therefore propose a fast and orthogonal version of LPP, called FOLPP, which simultaneously minimizes locality and maximizes globality under an orthogonality constraint. As a result, the computational burden of the proposed algorithm is effectively alleviated compared with OLPP. Experimental results on two face recognition data sets and two hyperspectral data sets demonstrate the effectiveness of the proposed algorithm.
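For context on what FOLPP accelerates: plain LPP reduces to a generalized eigenvalue problem on graph-Laplacian quadratic forms, minimizing w'X L X'w subject to w'X D X'w = 1. A minimal NumPy/SciPy sketch of vanilla LPP (not the orthogonal or fast variants described above); the heat-kernel affinity, neighbor count, and ridge regularizer are illustrative choices:

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def lpp(X, n_components=2, n_neighbors=5, t=1.0):
    """Locality preserving projections: returns a (d, n_components) projection."""
    # kNN graph with heat-kernel weights as the affinity matrix W
    D2 = kneighbors_graph(X, n_neighbors, mode="distance").toarray() ** 2
    W = np.where(D2 > 0, np.exp(-D2 / t), 0.0)
    W = np.maximum(W, W.T)                        # symmetrize the graph
    D = np.diag(W.sum(axis=1))                    # degree matrix
    L = D - W                                     # graph Laplacian
    A = X.T @ L @ X                               # locality term
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])   # ridge keeps B positive definite
    vals, vecs = eigh(A, B)                       # generalized eigenproblem
    return vecs[:, :n_components]                 # smallest eigenvalues -> projection
```

Usage: `Y = X @ lpp(X, 2)` embeds the samples in two dimensions. The step-by-step deflation that OLPP adds (and FOLPP avoids) would sit on top of this eigenproblem.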
16.
17. Zhou Y, Sun S. Manifold Partition Discriminant Analysis. IEEE Transactions on Cybernetics, 2017; 47:830-840. PMID: 28113879. DOI: 10.1109/tcyb.2016.2529299.
Abstract: We propose a novel supervised dimensionality reduction algorithm named manifold partition discriminant analysis (MPDA). It aims to find a linear embedding space where within-class similarity is achieved along directions consistent with the local variation of the data manifold, while nearby data belonging to different classes are well separated. By partitioning the data manifold into a number of linear subspaces and using the first-order Taylor expansion, MPDA explicitly parameterizes the connections between tangent spaces and represents the data manifold in a piecewise manner. Whereas graph Laplacian methods capture only pairwise interactions between data points, our method captures both pairwise and higher-order interactions (via regional consistency). This manifold representation improves the measure of within-class similarity, which in turn improves dimensionality reduction performance. Experimental results on multiple real-world data sets demonstrate the effectiveness of the proposed method.
18.
19. Zeng X, Bian W, Liu W, Shen J, Tao D. Dictionary Pair Learning on Grassmann Manifolds for Image Denoising. IEEE Transactions on Image Processing, 2015; 24:4556-4569. PMID: 26285148. DOI: 10.1109/tip.2015.2468172.
Abstract: Image denoising is a fundamental problem in computer vision and image processing that holds considerable practical importance for real-world applications. Traditional patch-based and sparse-coding-driven image denoising methods convert 2D image patches into 1D vectors for further processing and thus inevitably break the inherent 2D geometric structure of natural images. To overcome this limitation, we propose a 2D image denoising model, the dictionary pair learning (DPL) model, and design a corresponding algorithm, DPL on the Grassmann manifold (DPLG). The DPLG algorithm first learns an initial dictionary pair (i.e., left and right dictionaries) by employing a subspace partition technique on the Grassmann manifold, and the refined dictionary pair is obtained through sub-dictionary-pair merging. DPLG obtains a sparse representation by encoding each image patch with only the selected sub-dictionary pair. The nonzero elements of the sparse representation are further smoothed by the graph Laplacian operator to remove the noise. Consequently, the DPLG algorithm not only preserves the inherent 2D geometric structure of natural images but also performs manifold smoothing in the 2D sparse coding space. Experimental evaluations on benchmark images and the Berkeley segmentation data sets demonstrate that DPLG improves the structural similarity (SSIM) values of perceptual visual quality for denoised images and produces peak signal-to-noise ratio values competitive with popular image denoising algorithms.
20. Fang X, Xu Y, Li X, Lai Z, Wong WK. Learning a Nonnegative Sparse Graph for Linear Regression. IEEE Transactions on Image Processing, 2015; 24:2760-2771. PMID: 25910093. DOI: 10.1109/tip.2015.2425545.
Abstract: Previous graph-based semisupervised learning (G-SSL) methods have two drawbacks: 1) they usually predefine the graph structure and then use it for label prediction, which cannot guarantee an overall optimum, and 2) they focus only on label prediction or on graph construction and are not competent at handling new samples. To this end, a novel nonnegative sparse graph (NNSG) learning method is first proposed. Then, both label prediction and projection learning are integrated into linear regression. Finally, linear regression and graph structure learning are unified within the same framework to overcome these two drawbacks. The result is a novel method, learning an NNSG for linear regression, in which linear regression and graph learning are performed simultaneously to guarantee an overall optimum. During learning, label information is accurately propagated via the graph structure, so the linear regression can learn a discriminative projection that better fits the sample labels and accurately classifies new samples. An effective algorithm with fast convergence is designed to solve the corresponding optimization problem. Furthermore, NNSG provides a unified perspective on a number of graph-based learning methods and linear regression methods. The experimental results show that NNSG achieves very high classification accuracy and greatly outperforms conventional G-SSL methods, especially some conventional graph construction methods.
21. Lai ZR, Dai DQ, Ren CX, Huang KK. Discriminative and Compact Coding for Robust Face Recognition. IEEE Transactions on Cybernetics, 2015; 45:1900-1912. PMID: 25343776. DOI: 10.1109/tcyb.2014.2361770.
Abstract: In this paper, we propose a novel discriminative and compact coding (DCC) method for robust face recognition. It introduces multiple error measurements into the regression model; these collaborate to tune regression codes with different properties (sparsity, compactness, high discriminating ability, etc.) to further improve the robustness and adaptivity of the regression model. We propose two types of coding models: 1) multiscale error measurements, which produce sparse and highly discriminative codes, and 2) within-class collaborative representation, which produces sparse and compact codes. The update of the codes and the combination of different errors are processed automatically. DCC is also robust to the choice of parameters, producing stable regression residuals, which is crucial for classification. Extensive experiments on benchmark datasets show that DCC performs promisingly and outperforms other state-of-the-art regression models.
22. Liu J, Dong J, Cai X, Qi L, Chantler M. Visual perception of procedural textures: identifying perceptual dimensions and predicting generation models. PLoS One, 2015; 10:e0130335. PMID: 26106895. PMCID: PMC4481328. DOI: 10.1371/journal.pone.0130335. Open access. Received 07/23/2014; accepted 05/18/2015.
Abstract: Procedural models are widely used in computer graphics for generating realistic, natural-looking textures. However, these mathematical models are not perceptually meaningful, whereas users such as artists and designers prefer descriptions in intuitive, perceptual terms like "repetitive," "directional," and "structured." To bridge this gap, we investigated the perceptual dimensions of textures generated by a collection of procedural models. Two psychophysical experiments were conducted: free grouping and rating. We applied hierarchical cluster analysis (HCA) and singular value decomposition (SVD) to discover the perceptual features the observers used in grouping similar textures. The results suggested that the dimensions in the existing literature cannot accommodate random textures. We therefore used isometric feature mapping (Isomap) to establish a three-dimensional perceptual texture space that better explains the features humans use in texture similarity judgments. Finally, we proposed computational models that map perceptual features to this perceptual texture space and can suggest a procedural model for producing textures according to user-defined perceptual scales.
Affiliations:
- Jun Liu: Department of Computer Science and Technology, Ocean University of China, 238 Songling Road, Qingdao, Shandong, China; Science and Information College, Qingdao Agricultural University, 700 Changcheng Road, Qingdao, Shandong, China
- Junyu Dong: Department of Computer Science and Technology, Ocean University of China, 238 Songling Road, Qingdao, Shandong, China
- Xiaoxu Cai: Department of Computer Science and Technology, Ocean University of China, 238 Songling Road, Qingdao, Shandong, China
- Lin Qi: Department of Computer Science and Technology, Ocean University of China, 238 Songling Road, Qingdao, Shandong, China
- Mike Chantler: Computer Science Department, Heriot-Watt University, Edinburgh, Scotland
23. Lai ZR, Dai DQ, Ren CX, Huang KK. Multiscale logarithm difference edgemaps for face recognition against varying lighting conditions. IEEE Transactions on Image Processing, 2015; 24:1735-1747. PMID: 25751866. DOI: 10.1109/tip.2015.2409988.
Abstract: The Lambertian model is a classical illumination model consisting of a surface albedo component and a light intensity component. Some previous studies assume that the light intensity component lies mainly in the large-scale features and adopt holistic image decompositions to separate it out, but it is difficult to decide the separating point between large-scale and small-scale features. In this paper, we propose taking a logarithm transform, which turns the product of surface albedo and light intensity into an additive model. A difference (subtraction) between two pixels in a neighborhood then eliminates most of the light intensity component. By dividing a neighborhood into subregions, edgemaps at multiple scales can be obtained. Each edgemap is multiplied by a weight determined by an independent training scheme, and all the weighted edgemaps are combined into a robust holistic feature map. Extensive experiments on four benchmark data sets under controlled and uncontrolled lighting conditions show that the proposed method achieves promising results, especially under uncontrolled lighting, even when mixed with other complicated variations.
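The log-then-subtract idea in entry 23 is easy to demonstrate: in the Lambertian model I = albedo x light, a logarithm turns the product into a sum, so differencing nearby pixels cancels a smoothly varying (or globally scaled) illumination term. A toy single-scale sketch; the paper's multiscale subregions, learned weights, and edgemap combination are omitted, and `shift` and `eps` are illustrative parameters of this sketch, not the paper's:

```python
import numpy as np

def log_difference_edgemap(img, shift=(0, 1), eps=1e-8):
    """One log-difference edge map: log(I) minus log of a shifted copy of I."""
    logI = np.log(img.astype(np.float64) + eps)   # product -> sum of logs
    neighbor = np.roll(logI, shift, axis=(0, 1))  # a neighboring pixel per site
    return logI - neighbor                        # illumination term largely cancels
```

Because log(c*I) - log(c*I_shifted) = log I - log I_shifted, a global brightness change leaves the map essentially unchanged, which is the illumination invariance the abstract describes.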
24.
25. Zhu X, Zhang L, Huang Z. A sparse embedding and least variance encoding approach to hashing. IEEE Transactions on Image Processing, 2014; 23:3737-3750. PMID: 24968174. DOI: 10.1109/tip.2014.2332764.
Abstract: Hashing is becoming increasingly important in large-scale image retrieval for fast approximate similarity search and efficient data storage. Many popular hashing methods aim to preserve the kNN graph of high-dimensional data points in the low-dimensional manifold space, which is, however, difficult to achieve when the number of samples is large. In this paper, we propose an effective and efficient hashing approach that sparsely embeds each sample in the training-sample space and encodes the sparse embedding vector over a learned dictionary. To this end, we partition the sample space into clusters via a linear spectral clustering method and then represent each sample as a sparse vector of normalized probabilities of falling into its several closest clusters. This embeds each sample sparsely in the sample space, and the sparse embedding vector is used as the feature of each sample for hashing. We then propose a least variance encoding model, which learns a dictionary to encode the sparse embedding feature and binarizes the coding coefficients as the hash codes. The dictionary and the binarization threshold are jointly optimized in our model. Experimental results on benchmark data sets demonstrate the effectiveness of the proposed approach in comparison with state-of-the-art methods.
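The sparse-embedding step in entry 25 (represent each sample by normalized probabilities over its few closest clusters) can be sketched directly. This is an assumption-laden illustration, not the paper's method: KMeans stands in for the linear spectral clustering, a softmax-style weight stands in for the probability model, and the least variance encoding dictionary with its jointly learned threshold is replaced by a crude per-dimension mean threshold.

```python
import numpy as np
from sklearn.cluster import KMeans

def sparse_embedding(X, n_clusters=6, r=2, seed=0):
    """Embed each sample as normalized similarities to its r closest clusters."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)
    # distance of every sample to every cluster centre
    d = np.linalg.norm(X[:, None, :] - km.cluster_centers_[None, :, :], axis=2)
    emb = np.zeros_like(d)
    closest = np.argsort(d, axis=1)[:, :r]          # r nearest clusters per sample
    for i, cols in enumerate(closest):
        p = np.exp(-(d[i, cols] - d[i, cols].min()))  # stable similarity weights
        emb[i, cols] = p / p.sum()                    # sparse row, sums to 1
    return emb

def hash_codes(emb):
    # stand-in for the learned dictionary + threshold: binarize at column means
    return (emb > emb.mean(axis=0)).astype(np.uint8)
```

Each row of the embedding has exactly r nonzeros and sums to one; hashing then only has to encode these short sparse vectors rather than the raw high-dimensional points, which is the efficiency argument the abstract makes.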