9 results for Geographical computer applications

at Universidade do Minho


Relevance:

80.00%

Publisher:

Abstract:

Professional activity report for the Master's degree in Judiciary Law

Relevance:

80.00%

Publisher:

Abstract:

Internship report for the Master's degree in Informatics Teaching

Relevance:

40.00%

Publisher:

Abstract:

Hand gestures are a powerful means of human communication, with many potential applications in the area of human-computer interaction. Vision-based hand gesture recognition techniques have many proven advantages over traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture based on computer vision and machine learning, able to be used with any interface for human-computer interaction. The proposed solution is mainly composed of three modules: a pre-processing and hand segmentation module, a static gesture interface module, and a dynamic gesture interface module. The experiments showed that the core of vision-based interaction systems can be the same for all applications, which simplifies implementation. For hand posture recognition, an SVM (Support Vector Machine) model was trained and used, achieving a final accuracy of 99.4%. For dynamic gestures, an HMM (Hidden Markov Model) was trained for each gesture that the system could recognize, with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic, with the trained models able to work in real time, allowing its application in a wide range of human-machine applications. To validate the proposed framework, two applications were implemented. The first is a real-time system able to interpret Portuguese Sign Language. The second is an online system able to help a robotic soccer referee judge a game in real time.
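The abstract does not describe how the static-gesture module is implemented, so the following is only a minimal sketch of the general idea (an SVM trained on fixed-length hand-feature vectors), assuming scikit-learn; the file names, feature representation and hyper-parameters are illustrative, not the paper's actual setup.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# X: one row per segmented hand image, already reduced to a fixed-length
# feature vector; y: the corresponding posture labels (hypothetical files).
X = np.load("hand_posture_features.npy")
y = np.load("hand_posture_labels.npy")

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Assumed kernel and regularisation; the abstract only says "SVM".
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(X_train, y_train)

print("posture accuracy:", accuracy_score(y_test, clf.predict(X_test)))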

Relevance:

40.00%

Publisher:

Abstract:

PhD thesis in Electronics and Computer Engineering

Relevance:

30.00%

Publisher:

Abstract:

Hand gesture recognition, being a natural way of human-computer interaction, is an area of active research in computer vision and machine learning. It has many possible applications, giving users a simpler and more natural way to communicate with robot and system interfaces without the need for extra devices. The primary goal of gesture recognition research is therefore to create systems that can identify specific human gestures and use them to convey information or to control devices. For that, vision-based hand gesture interfaces require fast and extremely robust hand detection and gesture recognition in real time. In this study we try to identify the hand features that, in isolation, respond best in various human-computer interaction situations. The extracted features are used to train a set of classifiers with the help of RapidMiner in order to find the best learner. A dataset with our own gesture vocabulary, consisting of 10 gestures recorded from 20 users, was created for later processing. Experimental results show that the radial signature and the centroid distance are the features that obtain the best results when used separately, with accuracies of 91% and 90.1% respectively, obtained with a Neural Network classifier. These two methods also have the advantage of low computational complexity, which makes them good candidates for real-time hand gesture recognition.
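As an illustration only (the abstract does not detail the preprocessing), the centroid-distance feature mentioned above can be sketched as the distances from the hand contour's centroid to a fixed number of equally spaced contour points; the sample count, normalisation and OpenCV 4 API used below are assumptions.

import cv2
import numpy as np

def centroid_distance(binary_hand: np.ndarray, n_points: int = 64) -> np.ndarray:
    # Largest external contour of the binarised hand image (OpenCV 4 signature).
    contours, _ = cv2.findContours(binary_hand, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea).squeeze(1)  # shape (M, 2)

    # Contour centroid from image moments.
    m = cv2.moments(contour)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]

    # Distances from the centroid to n_points equally spaced contour points.
    idx = np.linspace(0, len(contour) - 1, n_points).astype(int)
    sampled = contour[idx].astype(float)
    dist = np.hypot(sampled[:, 0] - cx, sampled[:, 1] - cy)

    return dist / dist.max()  # scale-normalised feature vector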

Relevance:

30.00%

Publisher:

Abstract:

"Lecture notes in computational vision and biomechanics series, ISSN 2212-9391, vol. 19"

Relevance:

30.00%

Publisher:

Abstract:

Hand gestures are a powerful means of human communication, with many potential applications in the area of human-computer interaction. Vision-based hand gesture recognition techniques have many proven advantages over traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture based on computer vision and machine learning, able to be used with any interface for human-computer interaction. The proposed solution is mainly composed of three modules: a pre-processing and hand segmentation module, a static gesture interface module, and a dynamic gesture interface module. The experiments showed that the core of vision-based interaction systems can be the same for all applications, which simplifies implementation. In order to test the proposed solutions, three prototypes were implemented. For hand posture recognition, an SVM model was trained and used, achieving a final accuracy of 99.4%. For dynamic gestures, an HMM model was trained for each gesture that the system could recognize, with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic, with the trained models able to work in real time, allowing its application in a wide range of human-machine applications.
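For the dynamic-gesture module the abstract only states that one HMM is trained per gesture; a hedged sketch of that scheme, assuming Gaussian-emission HMMs from the hmmlearn library and feature sequences of shape (frames, dimensions), could look like this, with recognition picking the model that assigns the highest likelihood.

import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_gesture_models(sequences_by_gesture, n_states=5):
    # sequences_by_gesture: dict mapping gesture name -> list of (T_i, D) arrays.
    models = {}
    for name, seqs in sequences_by_gesture.items():
        X = np.concatenate(seqs)          # stack all frames of this gesture
        lengths = [len(s) for s in seqs]  # remember the sequence boundaries
        model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        model.fit(X, lengths)
        models[name] = model
    return models

def recognize(models, sequence):
    # Return the gesture whose HMM assigns the highest log-likelihood.
    return max(models, key=lambda name: models[name].score(sequence))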

Relevance:

30.00%

Publisher:

Abstract:

This paper discusses how object-oriented inheritance can be re-interpreted when statecharts are used to model the dynamic behaviour of an object. Support for inheritance of statecharts improves systems development by easing the reuse of parts of already developed, successful systems, and by promoting the iterative and continuous refinement of models advocated by the operational approach. The statechart is the formalism used within UML to specify reactive, state-based behaviours. This paper covers the use of statecharts in the modelling of embedded systems for industrial control applications, where performance and memory usage are the main concerns.
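As a rough, language-level illustration of the idea (not the paper's statechart formalism or its embedded-systems implementation), reusing state-based behaviour through inheritance can be sketched as a subclass that keeps the parent's transition table and only adds or overrides the transitions it needs:

class Controller:
    # Base behaviour: state -> {event: next_state}.
    transitions = {
        "idle":    {"start": "running"},
        "running": {"stop": "idle"},
    }

    def __init__(self):
        self.state = "idle"

    def handle(self, event):
        # Stay in the current state if the event is not handled there.
        self.state = self.transitions.get(self.state, {}).get(event, self.state)

class PausableController(Controller):
    # Refined behaviour: inherits idle/running, adds a paused state.
    transitions = {
        **Controller.transitions,
        "running": {**Controller.transitions["running"], "pause": "paused"},
        "paused":  {"resume": "running", "stop": "idle"},
    }

c = PausableController()
c.handle("start")
c.handle("pause")
print(c.state)  # -> paused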