Representation and learning of visual information for pose recognition


Author(s)

Prasser, D. P.; Wyeth, G. F.

Contributor(s)

J. Roberts

G. Wyeth

Date(s)

01/01/2003

Abstract

Recovering position from sensor information is an important problem in mobile robotics, known as localisation. Localisation requires a map or some other description of the environment to give the robot a context in which to interpret sensor data. The mobile robot system under discussion uses an artificial neural representation of position. Building a geometrical map of the environment with a single camera and artificial neural networks is difficult; instead, it is simpler to learn position as a function of the visual input. When learning images, an intermediate representation is usually employed. An appropriate starting point for a biologically plausible image representation is the complex cells of the visual cortex, whose invariance properties appear useful for localisation. The effectiveness of two different complex cell models for localisation is evaluated. Finally, the ability of a simple neural network with single-shot learning to recognise these representations and localise a robot is examined.
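The abstract does not specify which complex cell models were evaluated, but a standard choice in the literature is the energy model: squaring and summing the outputs of a quadrature pair of Gabor filters, which yields a response invariant to the spatial phase of the stimulus. The sketch below (NumPy; all function names and parameters are illustrative assumptions, not the paper's implementation) shows that phase invariance directly:

```python
import numpy as np

def gabor_pair(size, wavelength, theta, sigma):
    """Even- and odd-phase Gabor filters forming a quadrature pair."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)          # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))  # Gaussian envelope
    even = envelope * np.cos(2 * np.pi * xr / wavelength)
    odd = envelope * np.sin(2 * np.pi * xr / wavelength)
    return even, odd

def complex_cell_response(patch, even, odd):
    """Energy model: sum of squared quadrature simple-cell outputs."""
    return np.sum(patch * even) ** 2 + np.sum(patch * odd) ** 2

# Probe with a grating matched in frequency and orientation but shifted
# in phase: the energy response is approximately phase-invariant,
# unlike the output of either simple cell alone.
even, odd = gabor_pair(size=21, wavelength=8.0, theta=0.0, sigma=4.0)
y, x = np.mgrid[-10:11, -10:11]
responses = [complex_cell_response(np.cos(2 * np.pi * x / 8.0 + p), even, odd)
             for p in np.linspace(0.0, np.pi, 5)]
print(np.allclose(responses, responses[0], rtol=0.05))
```

This invariance to small image shifts is the property the abstract identifies as useful for localisation: nearby viewpoints produce similar complex-cell responses, so the learned position function degrades gracefully.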

Identifier

http://espace.library.uq.edu.au/view/UQ:99145

Language(s)

eng

Publisher

Australian Robotics and Automation Association (ARAA)

Keywords

#E1 #280209 Intelligent Robotics #780199 Other

Type

Conference Paper