Learning with Deictic Representation


Author(s): Finney, Sarah; Gardiol, Natalia H.; Kaelbling, Leslie Pack; Oates, Tim
Date(s)

08/10/2004

08/10/2004

10/04/2002

Abstract

Most reinforcement learning methods operate on propositional representations of the world state. Such representations are often intractably large and generalize poorly. Deictic representations are believed to be a viable alternative: they promise generalization while allowing the use of existing reinforcement-learning methods. Yet few experiments on learning with deictic representations have been reported in the literature. In this paper we explore the effectiveness of two forms of deictic representation and a naive propositional representation in a simple blocks-world domain. We find, empirically, that the deictic representations actually worsen performance. We conclude with a discussion of possible causes of these results and strategies for more effective learning in domains with objects.
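To make the contrast in the abstract concrete, here is a minimal sketch (not taken from the paper) of the two kinds of state encoding in a blocks world. The block names, predicates, and the single "focus" marker scheme below are illustrative assumptions, not the authors' exact formulation.

# Sketch: propositional vs. deictic state features for a toy blocks world.
from itertools import permutations

BLOCKS = ["a", "b", "c"]          # hypothetical blocks-world objects
state = {"on": {("a", "b")},      # block a sits on block b
         "clear": {"a", "c"}}     # blocks with nothing on top of them

def propositional_features(state):
    """One boolean per grounded predicate: grows combinatorially with objects."""
    feats = {}
    for x, y in permutations(BLOCKS, 2):
        feats[f"on({x},{y})"] = (x, y) in state["on"]
    for x in BLOCKS:
        feats[f"clear({x})"] = x in state["clear"]
    return feats

def deictic_features(state, focus):
    """Features relative to a deictic 'focus' marker: the feature count is
    independent of how many blocks exist, but only the marked object and its
    immediate neighbors are observable (a source of partial observability)."""
    above = next((x for x, y in state["on"] if y == focus), None)
    below = next((y for x, y in state["on"] if x == focus), None)
    return {"focus-is-clear": focus in state["clear"],
            "something-on-focus": above is not None,
            "focus-on-something": below is not None}

print(len(propositional_features(state)))   # 9 features for 3 blocks
print(deictic_features(state, focus="b"))   # 3 features regardless of block count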

Format

41 p.

5712208 bytes

1294450 bytes

application/postscript

application/pdf

Identifier

AIM-2002-006

http://hdl.handle.net/1721.1/6685

Language(s)

en_US

Relation

AIM-2002-006

Keywords #AI #Reinforcement Learning #Partial Observability #Representations