Jiang Y, Liu Y, Lei Y, Wang Q. Supervised preserving projection for learning scene information based on time-of-flight imaging sensor. Applied Optics 2013;52:5279-5288. [PMID: 23872777] [DOI: 10.1364/ao.52.005279]
[Received: 02/22/2013] [Accepted: 06/24/2013]
Abstract
In this paper, we propose a new supervised manifold learning approach, supervised preserving projection (SPP), for the depth images of a 3D imaging sensor based on the time-of-flight (TOF) principle. We present a novel manifold perspective for learning the scene information that the TOF camera produces along with depth images. First, we use local surface patches to approximate the underlying manifold structure represented by the scene information. The fundamental idea is that, because TOF data suffer from nonstatic noise and distance ambiguity, surface patches approximate the local neighborhood structure of the underlying manifold more efficiently than individual TOF data points, and they are robust to the nonstatic noise of TOF data. Second, we propose SPP to preserve the pairwise similarity between locally neighboring patches in TOF depth images. Moreover, SPP accomplishes the low-dimensional embedding by incorporating the scene-region class labels that accompany the training samples, and it obtains a predictive mapping by exploiting the local geometrical properties of the dataset. The proposed approach combines the advantages of classical linear and nonlinear manifold learning, and the low-dimensional embedding together with the predictive mapping yields real-time estimates for test samples. Experiments on three scenes show that our approach extracts scene information effectively and is robust to the nonstatic noise of 3D imaging sensor data.
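The abstract does not give the exact SPP objective, but the ingredients it names (pairwise similarity between neighboring patches, class-label supervision, and a linear predictive mapping) match the locality-preserving-projection family. The following Python sketch illustrates that family under stated assumptions: the heat-kernel weights, the k-nearest-neighbor graph restricted to same-label pairs, and the generalized eigenproblem are standard LPP machinery, not the authors' confirmed formulation, and every name and parameter below is hypothetical.

```python
import numpy as np

def supervised_preserving_projection(X, y, n_components=2, n_neighbors=5):
    """Sketch of a supervised locality-preserving linear projection.

    X : (n_samples, n_features) vectorized local surface patches,
        e.g., extracted from TOF depth images (assumed representation).
    y : (n_samples,) scene-region class labels.
    Returns a projection matrix W of shape (n_features, n_components).
    """
    n = X.shape[0]

    # Pairwise squared distances and heat-kernel similarities
    # (assumed similarity measure, not the paper's).
    D2 = np.square(X[:, None, :] - X[None, :, :]).sum(-1)
    sigma = np.median(D2) + 1e-12
    S = np.exp(-D2 / sigma)

    # k-nearest-neighbor adjacency, kept only for pairs that share a
    # class label: this is where the supervision enters the graph.
    knn = np.argsort(D2, axis=1)[:, 1:n_neighbors + 1]
    A = np.zeros((n, n))
    for i in range(n):
        for j in knn[i]:
            if y[i] == y[j]:
                A[i, j] = A[j, i] = S[i, j]

    # Graph Laplacian: minimizing tr(W^T X^T L X W) subject to
    # W^T X^T D X W = I preserves local neighborhood similarity.
    Dg = np.diag(A.sum(axis=1))
    L = Dg - A

    # Generalized eigenproblem (X^T L X) w = lam (X^T D X) w;
    # directions with the smallest eigenvalues span the embedding.
    M1 = X.T @ L @ X
    M2 = X.T @ Dg @ X + 1e-6 * np.eye(X.shape[1])  # regularized
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(M2, M1))
    order = np.argsort(eigvals.real)
    return eigvecs[:, order[:n_components]].real

# Usage: because the mapping W is linear, new TOF patches can be
# projected in real time, matching the abstract's claim:
#   Z_test = X_test @ W
```

The linearity of W is what makes real-time estimation on test samples plausible: unlike purely nonlinear embeddings, no per-sample optimization is needed at inference time.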