6 results for "Interaction human robot"
in Acceda, the institutional repository of the Universidad de Las Palmas de Gran Canaria, Spain
Abstract:
[EN] This paper describes some simple but useful computer vision techniques for human-robot interaction. First, an omnidirectional camera setup is described that can detect people in the surroundings of the robot, giving their angular positions and a rough estimate of the distance. The device can be easily built with inexpensive components. Second, we comment on a color-based face detection technique that can alleviate skin-color false positives. Third, a simple head nod and shake detector is described, suitable for detecting affirmative/negative, approval/disapproval, and understanding/disbelief head gestures.
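The abstract gives no implementation details, but the nod/shake idea can be illustrated by tracking the face center over a short window of frames and comparing accumulated vertical versus horizontal motion. The sketch below is a minimal, assumption-laden illustration: it uses OpenCV's stock Haar face detector rather than the authors' color-based detector, and the window length and motion threshold are invented values.

```python
# Minimal nod/shake sketch: track the face-center trajectory and compare
# accumulated vertical vs. horizontal motion. The Haar cascade, threshold
# and window length are illustrative assumptions, not the paper's method.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_gesture(centers, min_motion=15):
    """Classify a list of (x, y) face centers as 'nod', 'shake' or None."""
    dx = sum(abs(centers[i + 1][0] - centers[i][0]) for i in range(len(centers) - 1))
    dy = sum(abs(centers[i + 1][1] - centers[i][1]) for i in range(len(centers) - 1))
    if max(dx, dy) < min_motion:          # too little motion: no gesture
        return None
    return "nod" if dy > dx else "shake"  # dominant motion axis decides

cap = cv2.VideoCapture(0)
centers = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces) > 0:
        x, y, w, h = faces[0]
        centers.append((x + w // 2, y + h // 2))
    if len(centers) >= 10:                # sliding window of 10 face positions
        gesture = classify_gesture(centers)
        if gesture:
            print(gesture)
        centers = []
cap.release()
```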
Abstract:
[EN] Enabling natural human-robot interaction with computer-vision-based applications requires fast and accurate hand detection. However, previous work in this field assumes various constraints, such as limiting the number of detected gestures, because hands are highly complex objects that are difficult to locate. This paper presents an approach that integrates temporal coherence cues and wrist-based hand detection using a cascade classifier. With this approach, we introduce three main contributions: (1) a transparent initialization mechanism, requiring no user participation, for segmenting hands independently of their gesture; (2) a larger number of detected gestures and a faster training phase than previous cascade-classifier-based methods; and (3) near real-time performance for hand pose detection in video streams.
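The authors' trained wrist classifier is not published with the abstract, so the sketch below only illustrates the temporal-coherence idea in general terms: once a detection succeeds, the next frame is scanned only in a region around the previous hit, with a full-frame scan as fallback. The file name wrist_cascade.xml, the search margin, and the cascade parameters are all hypothetical.

```python
# Illustrative sketch of a cascade detector plus a temporal coherence cue:
# detection is restricted to a region around the previous hit, falling back
# to a full-frame scan when that fails. "wrist_cascade.xml" is a hypothetical
# stand-in for the authors' trained wrist classifier.
import cv2

cascade = cv2.CascadeClassifier("wrist_cascade.xml")   # hypothetical model file

def detect(gray, roi=None, margin=40):
    """Run the cascade on a ROI around the last hit, or on the whole frame."""
    if roi is not None:
        x, y, w, h = roi
        x0, y0 = max(x - margin, 0), max(y - margin, 0)
        sub = gray[y0:y0 + h + 2 * margin, x0:x0 + w + 2 * margin]
        hits = cascade.detectMultiScale(sub, 1.1, 3)
        if len(hits) > 0:                              # found near previous hit
            hx, hy, hw, hh = hits[0]
            return (x0 + hx, y0 + hy, hw, hh)
    hits = cascade.detectMultiScale(gray, 1.1, 3)      # full-frame fallback
    return tuple(hits[0]) if len(hits) > 0 else None

cap = cv2.VideoCapture(0)
last = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    last = detect(gray, roi=last)                      # temporal coherence cue
    if last is not None:
        x, y, w, h = last
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("hand", frame)
    if cv2.waitKey(1) == 27:                           # Esc quits
        break
cap.release()
```

Restricting the scan to a small ROI is what buys the near real-time performance the abstract claims: the cascade evaluates far fewer windows per frame than a full-image sweep.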
Abstract:
The physical appearance and behavior of a robot are important assets in terms of Human-Computer Interaction. Multimodality is also fundamental, as we humans usually expect to interact in a natural way with voice, gestures, etc. People approach complex interaction devices with stances similar to those they use when interacting with other people. In this paper we describe a robot head, currently under development, that aims to be a multimodal (vision, voice, gestures, etc.) perceptual user interface.
Abstract:
[EN] Iron is essential for oxygen transport because it is incorporated in the heme of the oxygen-binding proteins hemoglobin and myoglobin. An interaction between iron homeostasis and oxygen regulation is further suggested during hypoxia, in which hemoglobin and myoglobin syntheses have been reported to increase. This study gives new insights into the changes in iron content and iron-oxygen interactions during enhanced erythropoiesis by simultaneously analyzing blood and muscle samples in humans exposed to 7 to 9 days of high-altitude hypoxia (HA). HA up-regulates iron acquisition by erythroid cells, mobilizes body iron, and increases hemoglobin concentration. However, contrary to our hypothesis that muscle iron proteins and myoglobin would also be up-regulated during HA, this study shows that HA lowers myoglobin expression by 35% and down-regulates iron-related proteins in skeletal muscle, as evidenced by decreases in L-ferritin (43%), transferrin receptor (TfR; 50%), and total iron content (37%). This parallel decrease in L-ferritin and TfR in HA occurs independently of increased hypoxia-inducible factor 1 (HIF-1) mRNA levels and unchanged binding activity of iron regulatory proteins, but concurrently with increased ferroportin mRNA levels, suggesting enhanced iron export. Thus, in HA, the elevated iron requirement associated with enhanced erythropoiesis presumably elicits iron mobilization and myoglobin down-modulation, suggesting an altered muscle oxygen homeostasis.
Abstract:
[EN] Social robots are receiving much interest in the robotics community. The most important goal for such robots lies in their interaction capabilities. An attention system is crucial, both as a filter to center the robot’s perceptual resources and as a means of letting the observer know that the robot has intentionality. In this paper a simple but flexible and functional attentional model is described. The model, which has been implemented in an interactive robot currently under development, fuses visual and auditory information extracted from the robot’s environment, and can incorporate knowledge-based influences on attention.
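The paper's actual model is not detailed in the abstract; the sketch below only illustrates the general fusion idea: normalized visual and auditory cue maps, plus an optional knowledge-based bias map, are combined with weights, and the maximum of the fused map selects the focus of attention. All map shapes, weights, and the toy inputs are illustrative assumptions.

```python
# Minimal sketch of multimodal attention fusion: visual saliency, auditory
# localization and a top-down (knowledge-based) bias are merged into one
# attention map whose peak selects the focus of attention. Weights and map
# sizes are invented for illustration.
import numpy as np

def normalize(m):
    """Scale a cue map to [0, 1] so differently-ranged cues are comparable."""
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def fuse_attention(visual, auditory, bias=None, w_v=0.5, w_a=0.3, w_b=0.2):
    """Weighted fusion of cue maps; returns the fused map and its peak."""
    fused = w_v * normalize(visual) + w_a * normalize(auditory)
    if bias is not None:                       # knowledge-based influence
        fused += w_b * normalize(bias)
    focus = np.unravel_index(np.argmax(fused), fused.shape)
    return fused, focus

# Toy usage: a bright spot in the visual map, a sound source on the left.
visual = np.zeros((48, 64)); visual[10, 50] = 1.0
auditory = np.zeros((48, 64)); auditory[:, :10] = 0.6
fused, focus = fuse_attention(visual, auditory)
print("focus of attention at (row, col):", focus)
```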