3 results for Tomasi, Dominic
at Universidade do Minho
Abstract:
Hand gesture recognition, being a natural way of human-computer interaction, is an area of active research in computer vision and machine learning. It has many possible applications, giving users a simpler and more natural way to communicate with robots and system interfaces without the need for extra devices. The primary goal of gesture recognition research is therefore to create systems that can identify specific human gestures and use them to convey information or to control devices. To this end, vision-based hand gesture interfaces require fast and extremely robust hand detection, as well as gesture recognition in real time. In this study we try to identify hand features that, in isolation, respond best in various human-computer interaction situations. The extracted features are used to train a set of classifiers with the help of RapidMiner in order to find the best learner. A dataset with our own gesture vocabulary, consisting of 10 gestures recorded from 20 users, was created for later processing. Experimental results show that the radial signature and the centroid distance are the features that obtain the best results when used separately, with accuracies of 91% and 90.1% respectively, both obtained with a Neural Network classifier. These two methods also have the advantage of low computational complexity, which makes them good candidates for real-time hand gesture recognition.
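As a minimal sketch of the centroid distance signature mentioned above (the abstract does not give the paper's exact pipeline, so the function name, uniform resampling, and max-normalization are illustrative assumptions):

    import numpy as np

    def centroid_distance_feature(contour, n_samples=64):
        """Centroid distance signature of a closed hand contour.

        contour: (N, 2) array of (x, y) boundary points.
        Returns n_samples distances from the centroid to boundary
        points sampled uniformly in index, normalized by the maximum
        distance for scale invariance.
        """
        contour = np.asarray(contour, dtype=float)
        centroid = contour.mean(axis=0)
        # Fixed-length signature: pick n_samples boundary points.
        idx = np.linspace(0, len(contour) - 1, n_samples).astype(int)
        dists = np.linalg.norm(contour[idx] - centroid, axis=1)
        return dists / dists.max()  # assumes a non-degenerate contour

The resulting fixed-length vector can then be fed to any of the classifiers compared in the study (e.g., a Neural Network).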
Abstract:
When interacting with each other, people often spontaneously synchronize their movements, e.g. during pendulum swinging, chair rocking [5], walking [4][7], and when executing periodic forearm movements [3]. Although the spatiotemporal information that establishes the coupling leading to synchronization might be provided by several perceptual systems, the systematic study of the contributions of different sensory modalities has been widely neglected. Considering a) differences in sensory dominance on the spatial and temporal dimensions [5], b) different cue combination and integration strategies [1][2], and c) that sensory information might provide different aspects of the same event, synchronization should be moderated by the type of sensory modality. Here, 9 naïve participants placed a bottle periodically between two target zones, 40 times, in 12 conditions, while sitting in front of a confederate executing the same task. The participant could a) see and hear, b) see, or c) hear the confederate, or d) audiovisual information about the movements of the confederate was absent. Each couple started in 3 different relative positions (i.e., in-phase, anti-phase, out of phase). A retro-reflective marker was attached to the top of each bottle, and bottle displacement was captured by a motion capture system. We analyzed the variability of the continuous relative phase, which reflects the degree of synchronization. Results indicate the emergence of spontaneous synchronization, an increase with bimodal information, and an influence of the initial phase relation on the particular synchronization pattern. The results have theoretical implications for studying cue combination in interpersonal coordination and are consistent with coupled oscillator models.
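A minimal sketch of this kind of analysis, assuming the phase of each movement series is taken from its analytic (Hilbert) signal and variability is summarized by the circular standard deviation (the abstract does not specify the exact estimator used):

    import numpy as np
    from scipy.signal import hilbert

    def relative_phase_variability(x1, x2):
        """Variability of the continuous relative phase between two
        periodic movement time series (e.g., bottle displacements)."""
        phi1 = np.angle(hilbert(x1 - np.mean(x1)))
        phi2 = np.angle(hilbert(x2 - np.mean(x2)))
        rel = phi1 - phi2
        # Circular statistics: mean resultant length R, circular SD.
        R = np.abs(np.mean(np.exp(1j * rel)))
        return np.sqrt(-2.0 * np.log(R))

Lower values indicate stronger synchronization; a value near zero means the relative phase stays essentially constant over the trial.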
Abstract:
[Excerpt] Synchronization of periodic movements such as side-by-side walking [7] is frequently modeled by coupled oscillators [5], and the coupling strength is defined quantitatively [3]. In contrast, in most studies on sensorimotor synchronization (SMS), simple movements like finger taps are synchronized with simple stimuli like metronomes [4]. While the latter paradigm simplifies matters and allows the relative weights of sensory modalities to be assessed through systematic variation of the stimuli [1], it might lack ecological validity. Conversely, using more complex movements and stimuli might complicate the specification of the mechanisms underlying coupling. We merged the positive aspects of both approaches to study the contribution of auditory and visual information to synchronization during side-by-side walking. As stimuli, we used Point Light Walkers (PLWs) and auralized step sounds; both were constructed from previously captured walking individuals [2][6]. PLWs were retro-projected on a screen and matched according to gender, hip height, and velocity. Each participant walked 7.20 m side by side with 1) a PLW, 2) step sounds, or 3) both displayed in temporal congruence. Participants were instructed to synchronize with the available stimuli. [...]
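The coupled-oscillator modeling mentioned in the excerpt can be illustrated with the simplest stand-in, two Kuramoto-style phase oscillators with coupling strength k (the specific model of [5] is not given in this excerpt, so this sketch and its parameter names are assumptions):

    import numpy as np

    def simulate_coupled_phases(omega1, omega2, k, dt=0.01, steps=5000):
        """Two phase oscillators, dphi_i/dt = omega_i + k*sin(phi_j - phi_i).

        Returns the wrapped relative phase over time; when k is large
        relative to the frequency detuning |omega1 - omega2|, the
        relative phase locks to a constant value (synchronization)."""
        phi1, phi2 = 0.0, np.pi / 2  # arbitrary initial phases
        rel = np.empty(steps)
        for t in range(steps):
            d1 = omega1 + k * np.sin(phi2 - phi1)
            d2 = omega2 + k * np.sin(phi1 - phi2)
            phi1 += d1 * dt  # forward Euler integration
            phi2 += d2 * dt
            rel[t] = (phi1 - phi2 + np.pi) % (2 * np.pi) - np.pi
        return rel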