5 results for ROBOTS

in SAPIENTIA - Universidade do Algarve - Portugal


Relevance:

20.00%

Abstract:

The growing number of robotic solutions geared to interact socially with humans, social robots, urges the study of the factors that will facilitate or hinder future human-robot collaboration. Hence the research question: what are the factors that predict the intention to work with a social robot in the near future? To answer this question, the following socio-cognitive models were studied: the theory of reasoned action, the theory of planned behavior and the model of goal-directed behavior. These models purport that all other variables have only an indirect effect on behavior, that is, an effect mediated by the variables of the model. Based on the research on robotics and social perception/cognition, social robot appearance, belief in human nature uniqueness, perceived warmth, perceived competence, anthropomorphism, negative attitude towards robots with human traits and negative attitude towards interactions with robots were studied for their effects on attitude towards working with a social robot, perceived behavioral control, positive anticipated emotions and negative anticipated emotions. Study 1 identified the social representation of the robot. Studies 2 to 5 investigated the psychometric properties of the Portuguese version of the negative attitude towards robots scale. Study 6 investigated the psychometric properties of the belief in human nature uniqueness scale. Study 7 tested the theory of reasoned action and the theory of planned behavior. Study 8 tested the model of goal-directed behavior. Studies 7 and 8 also tested the role of the external variables. Study 9 tested and compared the predictive power of the three socio-cognitive models. Finally, conclusions are drawn from the research results, and suggestions for future research are offered.
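
As a purely illustrative aside (not the analysis reported in these studies), the sketch below shows how the theory of planned behavior is commonly tested with linear regression, and how one would check that an external variable such as anthropomorphism acts only indirectly, through the model's own variables. The survey file and all column names are hypothetical.

    # Minimal sketch of a theory-of-planned-behavior test with hypothetical
    # variable names; not the scales or data used in the studies above.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("survey.csv")  # hypothetical file, one row per respondent

    # Core TPB model: intention predicted by attitude, subjective norm and
    # perceived behavioral control.
    tpb = smf.ols("intention ~ attitude + subjective_norm + pbc", data=df).fit()
    print(tpb.summary())

    # An external variable (e.g. anthropomorphism) should act only indirectly:
    # it predicts attitude, but adds little once the TPB variables are included.
    mediator = smf.ols("attitude ~ anthropomorphism", data=df).fit()
    full = smf.ols("intention ~ attitude + subjective_norm + pbc + anthropomorphism",
                   data=df).fit()
    print(mediator.params["anthropomorphism"], full.params["anthropomorphism"])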

Relevance:

10.00%

Abstract:

Face detection and recognition should be complemented by recognition of facial expression, for example in social robots that must react to human emotions. Our framework is based on two multi-scale representations in cortical area V1: keypoints at the eyes, nose and mouth are grouped for face detection [1]; lines and edges provide information for face recognition [2].
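
As a rough illustration of the grouping idea only (the paper's keypoints come from a cortical V1 model, not from the off-the-shelf detector used here), the sketch below finds multi-scale blob keypoints with a difference-of-Gaussians operator and checks whether they form an eyes-plus-nose/mouth layout. The input file and all thresholds are hypothetical.

    # Minimal sketch, not the cortical V1 keypoints from the paper: a
    # difference-of-Gaussians detector stands in for multi-scale keypoints,
    # with a crude grouping step that looks for an eyes-nose-mouth layout.
    import numpy as np
    from skimage import io, color
    from skimage.feature import blob_dog

    img = color.rgb2gray(io.imread("face.jpg"))      # hypothetical input image
    blobs = blob_dog(img, min_sigma=2, max_sigma=16, threshold=0.05)
    # blobs is an (n, 3) array of (row, col, sigma): one keypoint candidate
    # per facial landmark at its characteristic scale.

    def plausible_face(kps, max_eye_tilt=0.2):
        """Very rough grouping: two roughly level keypoints (eyes) with at
        least one keypoint below and between them (nose or mouth)."""
        for i in range(len(kps)):
            for j in range(i + 1, len(kps)):
                (r1, c1, _), (r2, c2, _) = kps[i], kps[j]
                if abs(r1 - r2) < max_eye_tilt * abs(c1 - c2):   # roughly level
                    mid_c, eye_r = (c1 + c2) / 2, (r1 + r2) / 2
                    below = [k for k in kps
                             if k[0] > eye_r and abs(k[1] - mid_c) < abs(c1 - c2)]
                    if below:
                        return True
        return False

    print("face-like keypoint layout:", plausible_face(blobs))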

Relevance:

10.00%

Abstract:

Most simultaneous localisation and mapping (SLAM) solutions were developed for the navigation of non-cognitive robots. Using a variety of sensors, the distances to walls and other objects are determined and then used to generate a map of the environment and to update the robot's position. When developing a cognitive robot, such a solution is not appropriate: it requires accurate sensors and precise odometry, and it lacks fundamental features of cognition such as time and memory. In this paper we present a SLAM solution in which such features are taken into account and integrated. Moreover, the method requires neither precise odometry nor accurate ranging sensors.
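
A minimal sketch of the general idea only, not the method proposed in the paper: a coarse occupancy grid whose evidence decays over time (a simple form of memory) is updated from rough odometry and imprecise obstacle detections. Grid size, decay rate and update values are hypothetical.

    # Coarse sensing plus a memory that fades with time; all constants are
    # illustrative guesses, not the paper's parameters.
    import numpy as np

    GRID = 50                      # cells per side of a coarse world map
    DECAY = 0.99                   # per-step forgetting factor ("memory")

    occupancy = np.zeros((GRID, GRID))   # belief that a cell is occupied
    pose = np.array([GRID // 2, GRID // 2], dtype=float)

    def step(move, hits):
        """Integrate one time step.

        move -- rough (dx, dy) estimate from imprecise odometry
        hits -- list of rough (x, y) obstacle detections in world coordinates
        """
        global pose, occupancy
        pose = np.clip(pose + move, 0, GRID - 1)     # noisy dead reckoning
        occupancy *= DECAY                           # old evidence fades with time
        for x, y in hits:
            occupancy[int(y), int(x)] = min(1.0, occupancy[int(y), int(x)] + 0.5)

    # Example: drift right while repeatedly observing the same obstacle.
    for _ in range(10):
        step(move=(1.0, 0.0), hits=[(30, 25)])
    print(pose, occupancy[25, 30])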

Relevance:

10.00%

Abstract:

Human-robot interaction is an interdisciplinary research area which aims at integrating human factors, cognitive psychology and robot technology. The ultimate goal is the development of social robots. These robots are expected to work in human environments and to understand the behavior of persons through gestures and body movements. In this paper we present a biological and real-time framework for detecting and tracking hands. This framework is based on keypoints extracted from cortical V1 end-stopped cells. Detected keypoints and the cells' responses are used to classify the junction type. By combining annotated keypoints in a hierarchical, multi-scale tree structure, moving and deformable hands can be segregated, their movements can be obtained, and they can be tracked over time. By using hand templates with keypoints at only two scales, a hand's gestures can be recognized.
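
To illustrate only the hierarchical grouping and two-scale template matching in generic terms (not the V1-based framework itself), the sketch below links fine-scale keypoints to the nearest coarse-scale keypoint and compares their layout to a hand template. Coordinates, scales and thresholds are hypothetical.

    # Minimal sketch: keypoints are faked as (x, y, scale) tuples, linked into
    # a two-level tree, and matched against a template of fingertip offsets
    # relative to the palm keypoint.
    import numpy as np

    def build_tree(keypoints, coarse_scale, radius=20.0):
        """Attach each fine-scale keypoint to the nearest coarse-scale one."""
        coarse = [k for k in keypoints if k[2] == coarse_scale]
        fine = [k for k in keypoints if k[2] != coarse_scale]
        tree = {tuple(c): [] for c in coarse}
        for f in fine:
            dists = [np.hypot(f[0] - c[0], f[1] - c[1]) for c in coarse]
            if dists and min(dists) < radius:
                tree[tuple(coarse[int(np.argmin(dists))])].append(f)
        return tree

    def matches_template(children, template, tol=5.0):
        """True if every template offset has a nearby child keypoint."""
        return all(any(np.hypot(c[0] - t[0], c[1] - t[1]) < tol for c in children)
                   for t in template)

    # Palm keypoint at the coarse scale, fingertip keypoints at the fine scale.
    kps = [(50, 50, 8), (45, 30, 2), (50, 28, 2), (55, 30, 2)]
    tree = build_tree(kps, coarse_scale=8)
    open_hand = [(-5, -20), (0, -22), (5, -20)]   # offsets from the palm
    for palm, tips in tree.items():
        rel = [(t[0] - palm[0], t[1] - palm[1]) for t in tips]
        print("open hand:", matches_template(rel, open_hand))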

Relevance:

10.00%

Abstract:

Ultrasonic, infrared, laser and other sensors are being applied in robotics. Although combinations of these sensors have allowed robots to navigate, they are only suited to specific scenarios, depending on their limitations. Recent advances in computer vision are turning cameras into useful low-cost sensors that can operate in most types of environments. Cameras enable robots to detect obstacles, recognize objects, obtain visual odometry, and detect and recognize people and gestures, among other possibilities. In this paper we present a completely biologically inspired vision system for robot navigation. It comprises stereo vision for obstacle detection and object recognition for landmark-based navigation. We employ a novel keypoint descriptor which codes the responses of cortical complex cells. We also present a biologically inspired saliency component based on disparity and colour.
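
As a loose illustration using standard OpenCV tools rather than the cortical descriptor and saliency model described above, the sketch below computes block-matching disparity to flag nearby obstacles and combines normalised disparity with colour contrast into a crude saliency map. File names and thresholds are hypothetical.

    # Minimal sketch with off-the-shelf stereo matching; not the paper's
    # biologically inspired descriptor or saliency component.
    import cv2
    import numpy as np

    left = cv2.imread("left.png")      # hypothetical rectified stereo pair
    right = cv2.imread("right.png")
    gl = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
    gr = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)

    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(gl, gr).astype(np.float32) / 16.0   # in pixels

    # Obstacle mask: large disparity means the surface is close to the camera.
    obstacles = disparity > 20.0                     # threshold is a guess

    # Saliency: normalised disparity plus colour contrast against the mean colour.
    disp_norm = cv2.normalize(disparity, None, 0, 1, cv2.NORM_MINMAX)
    colour_contrast = np.linalg.norm(
        left.astype(np.float32) - left.reshape(-1, 3).mean(axis=0), axis=2)
    colour_norm = cv2.normalize(colour_contrast, None, 0, 1, cv2.NORM_MINMAX)
    saliency = 0.5 * disp_norm + 0.5 * colour_norm

    print("obstacle pixels:", int(obstacles.sum()),
          "peak saliency:", float(saliency.max()))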