Robust vision-based underwater homing using self-similar landmarks


Author(s): Negre, Amaury; Pradalier, Cedric; Dunbabin, Matthew
Date(s)

01/06/2008

Abstract

Next-generation autonomous underwater vehicles (AUVs) will be required to robustly identify underwater targets for tasks such as inspection, localization, and docking. Given their often unstructured operating environments, vision offers enormous potential for underwater navigation compared with more traditional methods; however, unreliable target segmentation often plagues these systems. This paper addresses robust vision-based target recognition by presenting a novel scale- and rotation-invariant target design and recognition routine based on self-similar landmarks, which enables robust target pose estimation with respect to a single camera. These algorithms are applied to an AUV with controllers developed for vision-based docking with the target. Experimental results show that the system performs exceptionally well on limited processing power and demonstrate how the combined vision and controller system enables robust target identification and docking in a variety of operating conditions.
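The core idea behind self-similar landmarks is a pattern whose intensity profile is invariant under a fixed scaling, p(x) = p(αx), so it can be detected at any distance with a single camera. The sketch below is only a generic 1-D illustration of that property, not the paper's actual pattern or matched filter; the scale factor ALPHA and all function names are assumptions for the example.

```python
import numpy as np

ALPHA = 0.5  # assumed scale factor; the paper's actual value is not reproduced here

def self_similar_profile(x):
    """Illustrative 1-D intensity profile satisfying p(x) = p(ALPHA * x):
    a binary square wave that is periodic in log(x)."""
    return (np.sin(2.0 * np.pi * np.log(x) / np.log(ALPHA)) >= 0).astype(float)

def self_similarity_score(profile, x):
    """Mean absolute difference between p(x) and p(ALPHA * x).
    Near zero for a self-similar profile, substantially larger otherwise."""
    return float(np.mean(np.abs(profile(x) - profile(ALPHA * x))))

# Sample points (strictly positive, since the profile is defined on log(x)).
x = np.linspace(0.01, 1.0, 2000)
score_pattern = self_similarity_score(self_similar_profile, x)

# Contrast with a random, non-self-similar profile.
rng = np.random.default_rng(0)
noise = rng.random(4000)

def noise_profile(x):
    # Piecewise-constant random intensity profile over [0, 1].
    idx = np.clip((x * (len(noise) - 1)).astype(int), 0, len(noise) - 1)
    return noise[idx]

score_noise = self_similarity_score(noise_profile, x)
```

A detector built on this idea scans the image for locations where the self-similarity score is close to zero, which is what makes the landmark recognizable regardless of viewing distance.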

Identifier

http://eprints.qut.edu.au/63608/

Publisher

Wiley Periodicals, Inc

Relation

DOI:10.1002/rob.20246

Negre, Amaury, Pradalier, Cedric, & Dunbabin, Matthew (2008) Robust vision-based underwater homing using self-similar landmarks. Journal of Field Robotics, 25(6/7), pp. 360-377.

Source

School of Electrical Engineering & Computer Science; Institute for Future Environments; Science & Engineering Faculty

Keywords #080104 Computer Vision #Self-similar landmarks #Autonomous underwater vehicles #Target tracking
Type

Journal Article