Gaussian processes autoencoder for dimensionality reduction


Author(s): Jiang, X.; Gao, J.; Hong, X.; Cai, Z.
Date(s)

2014

Abstract

Learning a low-dimensional manifold from highly nonlinear, high-dimensional data has become increasingly important for discovering intrinsic representations that can be used for data visualization and preprocessing. The autoencoder is a powerful dimensionality reduction technique based on minimizing reconstruction error, and it has regained popularity because it can be used efficiently for greedy pretraining of deep neural networks. Compared with Neural Networks (NNs), the superiority of Gaussian Processes (GPs) has been shown in model inference, optimization and performance. GPs have been successfully applied in nonlinear Dimensionality Reduction (DR) algorithms such as the Gaussian Process Latent Variable Model (GPLVM). In this paper we propose the Gaussian Processes Autoencoder Model (GPAM) for dimensionality reduction by extending the classic NN-based autoencoder to a GP-based autoencoder. Interestingly, the novel model can also be viewed as a back-constrained GPLVM (BC-GPLVM) in which the back-constraint smooth function is represented by a GP. Experiments verify the performance of the newly proposed model.
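
To make the encoder/decoder idea from the abstract concrete, below is a minimal, illustrative sketch of a GP-based autoencoder structure in Python with scikit-learn. It is not the paper's GPAM algorithm: the latent coordinates are simply initialised with PCA rather than learned, and the dataset, kernel choice and 2-D latent dimensionality are assumptions made only for this example.

```python
# Illustrative sketch only (assumptions: digits data, RBF kernel, 2-D latent space,
# PCA-initialised latents). The paper's GPAM learns the latent space itself; here
# we only wire a GP encoder (data -> latent) and GP decoder (latent -> data)
# around fixed latents to show the structure and report reconstruction error.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

X = load_digits().data[:300] / 16.0            # 300 samples, 64 dimensions
Z = PCA(n_components=2).fit_transform(X)       # fixed 2-D latent initialisation

kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)
encoder = GaussianProcessRegressor(kernel=kernel).fit(X, Z)  # GP back-constraint: X -> Z
decoder = GaussianProcessRegressor(kernel=kernel).fit(Z, X)  # GP mapping: Z -> X

X_hat = decoder.predict(encoder.predict(X))    # encode then decode
print("mean squared reconstruction error:", np.mean((X - X_hat) ** 2))
```

In the paper's formulation the GP encoder corresponds to the back-constraint smooth function of BC-GPLVM; this sketch merely fixes the latents up front instead of learning them.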

Format

text

Identifier

http://centaur.reading.ac.uk/39730/1/GP%20Autoencoder_V1.3.pdf

Jiang, X., Gao, J., Hong, X. and Cai, Z. (2014) Gaussian processes autoencoder for dimensionality reduction. In: Proceedings of the 18th Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD 2014), Part II, May 13-16, 2014, Tainan, Taiwan.

Language(s)

en

Relation

http://centaur.reading.ac.uk/39730/

http://dx.doi.org/10.1007/978-3-319-06605-9_6

Type

Conference or Workshop Item

PeerReviewed