6 results for Gestures
in Acceda, the institutional repository of the Universidad de Las Palmas de Gran Canaria, Spain
Abstract:
The behavior of the dog (Canis lupus familiaris) was studied in response to a set of gestural movements performed by different humans (the owner, a person known to the dog, and a stranger). The behavioral pattern was found to be determined by the person performing the gestures. The behavioral response is associated with the recognition, or not, of an interspecific hierarchical structure in which the animal's owner occupies the highest rank. With strangers the dog does not recognize this hierarchy, and its response to the same gestural movements changes markedly.
Abstract:
[ES] Mediation is a process that constructs meaning through the disputants' discourse, in which verbal signals alone are scarce indicators that must be probed for full communication. If nonverbal language were not taken into account, approximately 93% of the total information a person conveys would be lost; it is therefore necessary to know the most recurrent gestures in order to strengthen mediators' communication skills and substantially improve their discourse and understanding. This paper argues why nonverbal language is crucial to the mediation process, drawing on an analysis of the theoretical framework of the subject and its implications for mediation.
Abstract:
[ES] This paper describes some simple but useful computer vision techniques for human-robot interaction. First, an omnidirectional camera setup is described that can detect people in the robot's surroundings, giving their angular positions and a rough estimate of their distance; the device can be easily built from inexpensive components. Second, we comment on a color-based face detection technique that can alleviate skin-color false positives. Third, a simple head nod and shake detector is described, suitable for detecting affirmative/negative, approval/disapproval, and understanding/disbelief head gestures.
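The abstract gives no implementation details for the nod/shake detector, so the following is only a minimal sketch of one way such a detector could work with OpenCV: it uses a shipped Haar face cascade as a stand-in for the paper's color-based face detector and classifies the dominant axis of the face center's recent motion. The window size and amplitude threshold are assumptions, not the authors' values.

```python
import cv2
import numpy as np

# Assumption: OpenCV's bundled frontal-face Haar cascade stands in for the
# paper's color-based face detector.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_head_gesture(centers, min_amplitude=10):
    """Label a sequence of face-center positions as 'nod', 'shake' or None."""
    pts = np.asarray(centers, dtype=float)
    if len(pts) < 2:
        return None
    dx = np.ptp(pts[:, 0])   # horizontal range of motion (pixels)
    dy = np.ptp(pts[:, 1])   # vertical range of motion (pixels)
    if max(dx, dy) < min_amplitude:
        return None          # head essentially still
    return "nod" if dy > dx else "shake"

cap = cv2.VideoCapture(0)
centers = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.2, 5)
    if len(faces) > 0:
        x, y, w, h = faces[0]
        centers.append((x + w / 2, y + h / 2))
        centers = centers[-15:]          # sliding window of recent positions
    gesture = classify_head_gesture(centers)
    if gesture:
        print(gesture)
    if cv2.waitKey(1) == 27:             # Esc quits
        break
cap.release()
```

Comparing the horizontal and vertical extents of recent motion is the simplest possible discriminator; a real detector would likely also require oscillation (repeated direction changes) before reporting a gesture.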
Abstract:
[EN] Vision-based applications designed for human-machine interaction require fast and accurate hand detection. However, previous work in this field assumes different constraints, such as a limit on the number of detectable gestures, because hands are highly complex objects to locate. This paper presents an approach that changes the detection target without limiting the number of detected gestures: using a cascade classifier, we detect hands based on their wrists. This approach makes two main contributions: (1) reliable segmentation, independent of the gesture being made, and (2) a faster training phase than previous cascade classifier based methods. The paper includes experimental evaluations on different video streams that illustrate the approach's efficiency and suitability for perceptual interfaces.
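To make the wrist-based idea concrete, here is a minimal sketch using OpenCV's cascade classifier API. 'wrist_cascade.xml' is a hypothetical model trained on wrist patterns as the paper proposes (no such file ships with OpenCV), and the hand-region offset is an assumption for illustration only.

```python
import cv2

# Hypothetical cascade trained on wrists; not a shipped OpenCV model.
wrist_cascade = cv2.CascadeClassifier("wrist_cascade.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # The wrist varies far less across gestures than the open hand, so one
    # cascade covers any gesture being performed.
    wrists = wrist_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                            minNeighbors=4)
    for (x, y, w, h) in wrists:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        # Assumption: the hand lies roughly two wrist-heights above the
        # detected wrist box.
        top = max(0, y - 2 * h)
        cv2.rectangle(frame, (x, top), (x + w, y), (255, 0, 0), 2)
    cv2.imshow("wrist-based hand detection", frame)
    if cv2.waitKey(1) == 27:             # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```

Training a cascade on the near-rigid wrist rather than the articulated hand is what makes both the segmentation gesture-independent and the training phase cheaper.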
Abstract:
The physical appearance and behavior of a robot are important assets for human-computer interaction. Multimodality is also fundamental, as humans usually expect to interact naturally through voice, gestures, etc. People approach complex interaction devices with stances similar to those they use when interacting with other people. In this paper we describe a robot head, currently under development, that aims to be a multimodal (vision, voice, gestures, ...) perceptual user interface.
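The abstract outlines the head's multimodal goal without architectural detail; the sketch below shows one generic way independent perception modalities might feed a single fusion loop. The modality names, worker functions, and queue-based design are illustrative assumptions, not the authors' implementation.

```python
import queue
import threading

# Each perception modality runs independently and posts events to a shared
# queue; the head's control loop fuses them to decide a response.
events = queue.Queue()

def vision_worker():
    # Placeholder: a real system would run a camera loop here.
    events.put(("vision", "face_detected"))

def speech_worker():
    # Placeholder: a real system would run a speech recognizer here.
    events.put(("voice", "hello"))

for worker in (vision_worker, speech_worker):
    threading.Thread(target=worker, daemon=True).start()

# Fuse modalities in one loop so responses (gaze, speech, expression)
# can depend on combined vision + voice input.
for _ in range(2):
    modality, payload = events.get(timeout=1.0)
    print(f"{modality}: {payload}")
```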
Abstract:
[EN] Enabling natural human-robot interaction through computer vision based applications requires fast and accurate hand detection. However, previous work in this field assumes different constraints, such as a limit on the number of detectable gestures, because hands are highly complex objects that are difficult to locate. This paper presents an approach that integrates temporal coherence cues with wrist-based hand detection using a cascade classifier. This approach makes three main contributions: (1) a transparent initialization mechanism that segments hands without user participation, independently of their gesture, (2) a larger number of detectable gestures and a faster training phase than previous cascade classifier based methods, and (3) near real-time hand pose detection in video streams.
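The abstract names temporal coherence cues plus a wrist cascade but gives no algorithm, so the sketch below shows one plausible form of such gating: per-frame cascade detections are accepted only when consistent with the hand's previous position, and the first detection serves as the user-free initialization. 'wrist_cascade.xml', the MAX_JUMP threshold, and the acceptance rule are all assumptions.

```python
import cv2
import numpy as np

# Hypothetical wrist cascade, as above; not a shipped OpenCV model.
wrist_cascade = cv2.CascadeClassifier("wrist_cascade.xml")
MAX_JUMP = 60    # assumed max plausible inter-frame displacement (pixels)

prev_center = None
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    detections = wrist_cascade.detectMultiScale(gray, 1.1, 4)

    best = None
    for (x, y, w, h) in detections:
        center = np.array([x + w / 2, y + h / 2])
        if prev_center is None:
            best = center    # transparent initialization: first detection
            break
        # Temporal coherence: reject detections that jump implausibly far
        # from the previous frame's position.
        if np.linalg.norm(center - prev_center) < MAX_JUMP:
            best = center
            break
    if best is not None:
        prev_center = best
    if cv2.waitKey(1) == 27:             # Esc quits
        break
cap.release()
```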