Representation and learning of visual information for pose recognition


Author(s): Prasser, David; Wyeth, Gordon
Contributor(s)

Roberts, Jonathan

Wyeth, Gordon

Date(s)

2003

Abstract

Recovering position from sensor information is an important problem in mobile robotics, known as localisation. Localisation requires a map or some other description of the environment to give the robot a context in which to interpret sensor data. The mobile robot system under discussion uses an artificial neural representation of position. Building a geometrical map of the environment with a single camera and artificial neural networks is difficult; it is simpler to learn position as a function of the visual input. When learning from images, an intermediate representation is usually employed. An appropriate starting point for a biologically plausible image representation is the complex cells of the visual cortex, whose invariance properties appear useful for localisation. The effectiveness for localisation of two different complex cell models is evaluated. Finally, the ability of a simple neural network with single-shot learning to recognise these representations and localise a robot is examined.
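The abstract does not specify which two complex cell models the paper evaluates, so the sketch below is only a rough illustration of the general approach it describes: a phase-invariant complex-cell response built from a quadrature pair of Gabor filters (the classical energy model), pooled into a coarse signature, plus a minimal single-shot recogniser that stores one signature per visited pose and localises by best match. All names here (gabor_pair, pose_signature, PoseMemory, the grid size and filter parameters) are hypothetical choices for illustration, not the authors' implementation or network architecture.

import numpy as np
from scipy.signal import convolve2d

def gabor_pair(size, wavelength, theta, sigma):
    """Quadrature pair of Gabor filters (even/odd phase) at one orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    even = envelope * np.cos(2.0 * np.pi * xr / wavelength)
    odd = envelope * np.sin(2.0 * np.pi * xr / wavelength)
    return even, odd

def complex_cell_response(image, wavelength=8.0, theta=0.0, sigma=4.0, size=15):
    """Energy-model complex cell: combine squared quadrature responses.
    Summing even and odd filter energies gives invariance to stimulus phase,
    the kind of local invariance the abstract says is useful for localisation."""
    even, odd = gabor_pair(size, wavelength, theta, sigma)
    re = convolve2d(image, even, mode='same')
    ro = convolve2d(image, odd, mode='same')
    return np.sqrt(re**2 + ro**2)

def pose_signature(image, n_orientations=4):
    """Pooled complex-cell energies over an 8x8 grid of patches at several
    orientations, concatenated into one unit-length feature vector."""
    feats = []
    for k in range(n_orientations):
        energy = complex_cell_response(image, theta=k * np.pi / n_orientations)
        h, w = energy.shape  # assumes the image is at least 8x8 pixels
        pooled = energy[:h - h % 8, :w - w % 8]
        pooled = pooled.reshape(8, h // 8, 8, w // 8).mean(axis=(1, 3))
        feats.append(pooled.ravel())
    v = np.concatenate(feats)
    return v / (np.linalg.norm(v) + 1e-9)

class PoseMemory:
    """Single-shot learning, sketched here as nearest-template matching:
    one exposure stores one signature; recognition is the best cosine match."""
    def __init__(self):
        self.poses, self.signatures = [], []

    def learn(self, pose, image):
        self.poses.append(pose)
        self.signatures.append(pose_signature(image))

    def localise(self, image):
        s = pose_signature(image)
        scores = [float(s @ t) for t in self.signatures]
        return self.poses[int(np.argmax(scores))]

Because the signatures are pooled over coarse grid cells, small translations of the camera image change them only slightly, which is the property that makes a one-exposure-per-pose memory plausible at all; the paper itself evaluates which complex cell model delivers this invariance best.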

Format

application/pdf

Identifier

http://eprints.qut.edu.au/32820/

Publisher

Australian Robotics and Automation Association Inc

Relation

http://eprints.qut.edu.au/32820/1/c32820.pdf

http://www.araa.asn.au/acra/acra2003/papers/49.pdf

Prasser, David & Wyeth, Gordon (2003) Representation and learning of visual information for pose recognition. In Roberts, Jonathan & Wyeth, Gordon (Eds.) Proceedings of the Australasian Conference on Robotics and Automation, 2003, Australian Robotics and Automation Association Inc, Brisbane, Queensland.

Rights

Copyright 2003 [please consult the authors]

Keywords

#080101 Adaptive Agents and Intelligent Robotics

Type

Conference Paper