Pang Y, Zhou B, Nie F. Simultaneously Learning Neighborship and Projection Matrix for Supervised Dimensionality Reduction. IEEE Transactions on Neural Networks and Learning Systems 2019;30:2779-2793. [PMID: 30640633 DOI: 10.1109/tnnls.2018.2886317]
Abstract
Explicitly or implicitly, most dimensionality reduction methods need to determine which samples are neighbors and how similar those neighbors are in the original high-dimensional space. The projection matrix is then learned on the assumption that this neighborhood information, e.g., the similarities, is known and fixed before learning. However, the intrinsic similarities of samples are difficult to measure precisely in high-dimensional space because of the curse of dimensionality. Consequently, the neighbors selected according to such similarities, and the projection matrix derived from those similarities and neighbors, may not be optimal in terms of classification and generalization. To overcome this drawback, in this paper we propose treating the similarities and neighbors as variables and modeling them in the low-dimensional space. The optimal similarities and projection matrix are obtained jointly by minimizing a unified objective function, with nonnegativity and sum-to-one constraints imposed on the similarities. Instead of setting the regularization parameter empirically, we treat it as a variable to be optimized as well. Interestingly, the optimal regularization parameter adapts to the neighbors in the low-dimensional space and has an intuitive meaning. Experimental results on the YALE B, COIL-100, and MNIST data sets demonstrate the effectiveness of the proposed method.
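The abstract describes a joint optimization over row-wise simplex-constrained (nonnegative, sum-to-one) similarities and an orthonormal projection matrix. As an illustration only, the sketch below follows the general adaptive-neighbors pattern that such joint formulations typically use, alternating between a simplex projection for each similarity row and an eigendecomposition for the projection matrix. The function names (`simplex_projection`, `alternating_fit`), the fixed regularization parameter `gamma`, and the unsupervised update scheme are assumptions for illustration, not the paper's exact algorithm, which also optimizes the regularization parameter and exploits class labels.

```python
import numpy as np

def simplex_projection(v):
    """Euclidean projection of v onto the probability simplex
    (nonnegative entries summing to one), per Duchi et al. (2008)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(v + theta, 0.0)

def alternating_fit(X, d, gamma=1.0, n_iter=20):
    """Hypothetical alternating scheme. X is (D, n) data, d is the target
    dimension; returns a projection W (D, d) and similarities S (n, n).
    gamma is held fixed here; the paper instead optimizes it adaptively."""
    D, n = X.shape
    rng = np.random.default_rng(0)
    W = np.linalg.qr(rng.standard_normal((D, d)))[0]  # random orthonormal start
    S = np.full((n, n), 1.0 / n)
    for _ in range(n_iter):
        # Step 1: with W fixed, update each row of S. Minimizing
        # sum_j dist_ij * s_ij + gamma * s_ij^2 over the simplex reduces to
        # projecting -dist_i / (2 * gamma) onto the simplex.
        Y = W.T @ X                                   # projected samples (d, n)
        sq = ((Y[:, :, None] - Y[:, None, :]) ** 2).sum(axis=0)
        np.fill_diagonal(sq, 1e12)                    # exclude self-similarity
        for i in range(n):
            S[i] = simplex_projection(-sq[i] / (2.0 * gamma))
        # Step 2: with S fixed, update W by minimizing tr(W^T X L X^T W)
        # subject to W^T W = I, i.e., the smallest eigenvectors of X L X^T.
        A = (S + S.T) / 2.0
        L = np.diag(A.sum(axis=1)) - A                # graph Laplacian of S
        vals, vecs = np.linalg.eigh(X @ L @ X.T)
        W = vecs[:, :d]
    return W, S
```

The two subproblems each have closed-form solutions, which is the usual appeal of this alternating design: the simplex projection keeps every similarity row nonnegative and summing to one, and the eigendecomposition enforces orthonormality of the projection without a separate constraint-handling step.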