vSpeak: Edge detection based feature extraction for sign to text conversion


Author(s): Afeal, A.H.; Tariq, A.; Nasir, C.S.
Date

2009

Abstract

This paper presents 'vSpeak', the first initiative in Pakistan for ICT-enabled conversion of dynamic Sign Urdu gestures into natural-language sentences. To realize this, vSpeak adopts a novel feature-extraction approach using edge detection and image compression, whose output feeds an Artificial Neural Network that recognizes the gesture. The technique also caters for blurred images. Training and testing are currently being performed on a dataset of 200 patterns of 20 Sign Urdu words, with a target accuracy of 90% or above.
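The abstract's pipeline (edge detection, then image compression, yielding a feature vector for the neural network) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Sobel operator, the block-averaging compression step, and all parameter values (`block=4`, the 32x32 frame size) are assumptions chosen for clarity.

```python
import numpy as np

def sobel_edges(img):
    """Approximate gradient magnitude with 3x3 Sobel kernels (assumed edge detector)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)

def compress(edges, block=4):
    """Stand-in for the compression step: block-average to shrink the edge map."""
    h, w = edges.shape
    h, w = h - h % block, w - w % block
    return edges[:h, :w].reshape(h // block, block, w // block, block).mean(axis=(1, 3))

def extract_features(img, block=4):
    """Edge detection followed by compression gives the vector fed to the ANN."""
    return compress(sobel_edges(img), block).ravel()

# A synthetic 32x32 "gesture frame": a bright square whose border produces edges.
frame = np.zeros((32, 32))
frame[8:24, 8:24] = 1.0
features = extract_features(frame, block=4)
print(features.shape)  # (64,) -- a compact input vector for a small neural network
```

Compressing the edge map before classification keeps the network's input layer small and adds some tolerance to blur, since block averaging discards fine pixel-level detail.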

Identifier

http://eprints.qut.edu.au/93913/

Relation

Afeal, A.H., Tariq, A., & Nasir, C.S. (2009) vSpeak: Edge detection based feature extraction for sign to text conversion. In 2009 International Conference on Image Processing, Computer Vision, and Pattern Recognition (IPCV 2009), 13-16 July 2009, Las Vegas, NV.

Source

Faculty of Health; Institute of Health and Biomedical Innovation; School of Public Health & Social Work

Keywords #Feature extraction #Gesture recognition #Image compression
Type

Conference Paper