Improving robot vision models for object detection through interaction


Author(s): Leitner, J.; Forster, A.; Schmidhuber, J.
Date(s)

2014

Abstract

We propose a method for learning specific object representations that can be applied (and reused) in visual detection and identification tasks. A machine learning technique called Cartesian Genetic Programming (CGP) is used to create these models from a series of images. Our research investigates how manipulation actions can lead to better visual models and therefore better robot vision. This paper describes how visual object representations can be learned and improved by performing object manipulation actions, such as poking, pushing, and picking up, with a humanoid robot. The improvement can be measured, allowing the robot to select and perform the 'right' action, i.e. the action expected to yield the greatest improvement of the detector.
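As a rough illustration of the action-selection loop described in the abstract, the following Python sketch picks the manipulation action with the highest expected detector improvement, performs it, and re-learns the model on the enlarged image set. All names here (train_cgp_detector, detector_fitness, expected_gain, the random fitness stub) are hypothetical placeholders, not the paper's CGP implementation or robot interface.

```python
# Minimal sketch: choose the action with the best expected improvement
# of the detector, gather new views, re-learn, and record the gain.
# Everything below is an illustrative stand-in, not the paper's method.
import random

ACTIONS = ["poke", "push", "pick-up"]

def train_cgp_detector(images):
    """Placeholder for Cartesian Genetic Programming: evolve an object
    detector from the given images. Here it just returns a random score."""
    return {"fitness": random.random(), "trained_on": len(images)}

def detector_fitness(detector):
    return detector["fitness"]

def expected_gain(action, history):
    """Estimate each action's expected detector improvement from the
    improvements it produced in the past (zero prior if none)."""
    past = history.get(action, [0.0])
    return sum(past) / len(past)

def select_and_perform(images, detector, history):
    baseline = detector_fitness(detector)
    # The 'right' action is the one with the best expected improvement.
    action = max(ACTIONS, key=lambda a: expected_gain(a, history))
    new_images = images + [f"view_after_{action}"]  # stand-in for new views
    new_detector = train_cgp_detector(new_images)
    improvement = detector_fitness(new_detector) - baseline
    history.setdefault(action, []).append(improvement)
    return new_detector, action, improvement

if __name__ == "__main__":
    detector, history = train_cgp_detector(["seed_image"]), {}
    for _ in range(3):
        detector, action, gain = select_and_perform(
            ["seed_image"], detector, history)
        print(f"performed {action}, detector improvement {gain:+.3f}")
```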

Format

application/pdf

Identifier

http://eprints.qut.edu.au/82592/

Publisher

IEEE

Relation

http://eprints.qut.edu.au/82592/7/82592.pdf

DOI: 10.1109/IJCNN.2014.6889556

Leitner, J., Forster, A., & Schmidhuber, J. (2014) Improving robot vision models for object detection through interaction. In Proceedings of the 2014 International Joint Conference on Neural Networks (IJCNN 2014), IEEE, Beijing, China, pp. 3355-3362.

Rights

Copyright 2014 by IEEE

Source

ARC Centre of Excellence for Robotic Vision; School of Electrical Engineering & Computer Science; Science & Engineering Faculty

Keywords #CGP #Cartesian genetic programming #Humanoid robot #Machine learning technique #Object detection #Object manipulation actions #Robot vision model #Visual detection tasks #Visual identification tasks #Visual object representations
Type

Conference Paper