Using Recurrent Networks for Dimensionality Reduction


Author(s): Jones, Michael J.
Date(s)

20/10/2004

01/09/1992

Abstract

This report explores how recurrent neural networks can be exploited for learning high-dimensional mappings. Since recurrent networks are as powerful as Turing machines, an interesting question is how recurrent networks can be used to simplify the problem of learning from examples. The main obstacle to learning high-dimensional functions is the curse of dimensionality, which roughly states that the number of examples needed to learn a function grows exponentially with the input dimension. This report proposes a way around this problem: using a recurrent network to decompose a high-dimensional function into many lower-dimensional functions connected in a feedback loop.
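The decomposition idea in the abstract can be sketched in code. The following is a minimal illustration, not the report's actual architecture: a recurrent cell with a small state is fed a high-dimensional input a few coordinates at a time, so each update of the feedback loop only ever handles a low-dimensional mapping. All dimensions, weight matrices, and function names here are hypothetical, and the weights are random stand-ins for a trained cell.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 12-dimensional input processed 3 coordinates at a
# time, so the recurrent cell only ever sees low-dimensional arguments.
INPUT_DIM, CHUNK, STATE_DIM = 12, 3, 4
STEPS = INPUT_DIM // CHUNK

# Random weights standing in for a trained low-dimensional cell.
W_in = rng.normal(scale=0.5, size=(STATE_DIM, CHUNK))
W_rec = rng.normal(scale=0.5, size=(STATE_DIM, STATE_DIM))
w_out = rng.normal(scale=0.5, size=STATE_DIM)

def recurrent_eval(x):
    """Evaluate a scalar function of a high-dimensional input by feeding
    it through the feedback loop CHUNK coordinates per step."""
    h = np.zeros(STATE_DIM)
    for t in range(STEPS):
        chunk = x[t * CHUNK:(t + 1) * CHUNK]
        # Each step is a low-dimensional map: (CHUNK + STATE_DIM) -> STATE_DIM.
        h = np.tanh(W_in @ chunk + W_rec @ h)
    return float(w_out @ h)  # read out a scalar after the loop

y = recurrent_eval(rng.normal(size=INPUT_DIM))
```

Each learned piece (`W_in`, `W_rec`) only maps a (CHUNK + STATE_DIM)-dimensional space, so under this kind of factoring the sample complexity is governed by the small per-step dimension rather than by INPUT_DIM, which is the motivation the abstract describes.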

Format

2167097 bytes

1325986 bytes

application/postscript

application/pdf

Identifier

AITR-1396

http://hdl.handle.net/1721.1/7045

Language(s)

en_US

Relation

AITR-1396