844 results for human behavior recognition
Abstract:
An evolutionary model of human behavior should privilege emotions: essential, phylogenetically ancient behaviors that learning and decision making only subserve. Infants and non-mammals lack advanced cognitive powers but still survive. Decision making is only a means to emotional ends, which organize and prioritize behavior. The emotion of pride/shame, or dominance striving, bridges the social and biological sciences via internalization of cultural norms.
Abstract:
Human Activity Recognition (HAR) is an emerging research field that aims to identify the actions carried out by a person from a set of observations and the surrounding environment. The rapid growth of this field within the scientific community is largely explained by the many applications that have emerged in recent years. A large share of the most promising applications are in healthcare, where it is possible to track the mobility of patients with motor dysfunction as well as the physical activity of patients at cardiovascular risk. Until a few years ago, patient follow-up required dedicated sensors, which was far from a long-term solution. With the arrival of the smartphone, that monitoring can now be achieved non-invasively using the sensors embedded in the device. For these reasons, this Final Degree Project aims to evaluate new feature extraction techniques for activity recognition, user recognition, and activity segmentation. Recognition is based on the integration of inertial signals from two sensors present in most smartphones: the accelerometer and the gyroscope. In particular, six activities are evaluated: walking, walking upstairs, walking downstairs, sitting, standing and lying. Furthermore, a segmentation task is carried out over the activities performed by thirty users, using Hidden Markov Models together with a toolset well proven in speech recognition: HTK (the Hidden Markov Model Toolkit).
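As a rough illustration of the pipeline this abstract describes (windowed features from accelerometer/gyroscope signals, one Hidden Markov Model per activity), here is a minimal Python sketch. The window size, the statistics used as features, and the hmmlearn library are assumptions standing in for the project's actual HTK-based setup:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumption: hmmlearn stands in for HTK

def window_features(signal, win=128, step=64):
    """Split a (T, channels) inertial signal (accelerometer + gyroscope)
    into overlapping windows and compute per-window mean and std."""
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        feats.append(np.hstack([w.mean(axis=0), w.std(axis=0)]))
    return np.array(feats)

def train_models(train_data, n_states=4):
    """train_data: {activity_name: [recordings]}; fit one GaussianHMM per activity."""
    models = {}
    for activity, recordings in train_data.items():
        feats = [window_features(r) for r in recordings]
        X = np.vstack(feats)
        lengths = [len(f) for f in feats]
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=20)
        m.fit(X, lengths)
        models[activity] = m
    return models

def classify(models, recording):
    """Label an unseen recording with the activity whose model scores it highest."""
    X = window_features(recording)
    return max(models, key=lambda a: models[a].score(X))
```

Segmentation of a long recording would score sliding windows against each activity model in the same way; the figures above are illustrative defaults, not the project's parameters.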
Abstract:
Human Activity Recognition is an emerging research area whose main objective is to identify the actions performed by a subject by analysing the signals obtained from a set of sensors. The rapid growth of this research area within the scientific community is explained, in part, by the large number of applications that have emerged in recent years. Many of the most promising applications are in the health field, where the mobility of patients with motor disorders can be tracked and the level of physical activity of patients at cardiovascular risk can be monitored. Until a few years ago, patient follow-up was possible using different types of sensors. However, far from being a long-term solution, and thanks to the arrival of the smartphone, this follow-up can now be done in a less invasive way using the wide variety of sensors integrated in these devices. In this context, this Final Degree Project arises, whose main objective is to evaluate new feature extraction techniques for activity and user recognition as well as activity segmentation. Recognition is made possible by integrating inertial signals obtained from two sensors present in the vast majority of smartphones: the accelerometer and the gyroscope. Specifically, six types of activities performed by thirty users are evaluated: walking, walking upstairs, walking downstairs, sitting, standing and lying down. In addition, and in parallel, a temporal segmentation of the different types of activities performed by these users is carried out. All of this is done using Hidden Markov Models, as well as a set of tools well proven in speech recognition: HTK (the Hidden Markov Model Toolkit).
Abstract:
Combinatorial IgG Fab phage display libraries prepared from a systemic lupus erythematosus (SLE) donor and a healthy donor were affinity selected against human placental DNA. Human monoclonal antibody Fab fragments specific for DNA were isolated from both libraries, although Fabs of the highest affinity were isolated only from the lupus library. Generally, apparent affinities of the Fabs for human placental DNA, purified double-stranded DNA, and denatured DNA were approximately equivalent. Surface plasmon resonance indicated Fab binding constants for a double-stranded oligodeoxynucleotide of 0.2–1.3 × 10^8 M^-1. The higher-affinity Fabs, as ranked by binding to human placental DNA or to the oligonucleotide probe, tested positive in the Crithidia luciliae assay commonly used in the diagnosis of SLE, and interestingly the genes encoding the heavy-chain variable regions of these antibodies displayed evidence of only minimal somatic hypermutation. The heavy chains of the SLE Fabs were characterized by a predominance of basic residues toward the N terminus of complementarity-determining region 3 (CDR3). The crucial role of heavy-chain CDR3 (HCDR3) in high-affinity DNA recognition was suggested by the creation of DNA binding in an unrelated antibody by HCDR3 transplantation from SLE antibodies. We propose that high-affinity DNA-binding antibodies can arise in SLE without extensive somatic hypermutation in the variable-region genes because of the expression of inappropriate HCDR3s.
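For reference, the reported association constants can be converted to dissociation constants with the standard relation (this conversion is not part of the abstract):

```latex
K_d = \frac{1}{K_a}, \qquad
K_a = 0.2\text{--}1.3 \times 10^{8}\,\mathrm{M^{-1}}
\;\Rightarrow\;
K_d \approx 8\text{--}50\ \mathrm{nM}
```

That is, the higher-affinity Fabs bind the oligonucleotide in the low-nanomolar range.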
Abstract:
For many years, humans and machines have shared the same physical space. To facilitate robots' interaction with humans, their social integration, and more rational behavior, it has been sought that robots display human-like behavior. This requires understanding how human behavior is generated, identifying which tasks are performed and how they relate to one another, so that they can subsequently be implemented in robots. In this paper, we propose a competency model based on the human neuroregulator system for analysing and decomposing behavior into functional modules. Using this model makes it possible to separate and locate the tasks to be implemented in a robot that displays human-like behavior. As an example, we show the application of the model to autonomous movement in unfamiliar environments and its implementation in several simulated and real robots with different physical configurations and devices of different natures. The main result of this work is a competency model that is being used to build robotic systems capable of displaying human-like behaviors while taking into account the specific characteristics of robots.
Abstract:
Humans and machines have shared the same physical space for many years. To share that space, we want robots to behave like human beings: this facilitates their social integration and their interaction with humans, and produces intelligent behavior. To achieve this goal, we need to understand how human behavior is generated and to analyse the tasks our nervous system carries out and how they relate to one another; only then can we implement these mechanisms in robots. In this study, we propose a competency model based on the human neuroregulator system for analysing and decomposing behavior into functional modules. Using this model makes it possible to separate and locate the tasks to be implemented in a robot that displays human-like behavior. As an example, we show the application of the model to autonomous movement in unfamiliar environments and its implementation in several simulated and real robots with different physical configurations and devices of different natures. The main result of this study is a competency model that is being used to build robotic systems capable of displaying human-like behaviors while taking into account the specific characteristics of robots.
Abstract:
New low-cost sensors and free, open libraries for 3D image processing are enabling important advances in robot vision applications such as three-dimensional object recognition, semantic mapping, robot navigation and localization, human detection, and gesture recognition for human-machine interaction. In this paper, a method to recognize the human hand and to track the fingers is proposed. The method is based on point clouds from range (RGBD) images and requires no visual markers, camera calibration, environment knowledge, or complex, expensive acquisition systems. Furthermore, the method has been used to build a human interface for moving a robot hand: the human hand is recognized, the movement of the fingers is analyzed, and it is then imitated by a Barret hand using communication events programmed with ROS.
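The abstract gives no implementation detail; as a minimal sketch of how a hand might be isolated from an RGBD point cloud by depth thresholding, assuming the cloud is already an (N, 3) NumPy array in metres, with a threshold and fingertip heuristic that are purely illustrative:

```python
import numpy as np

def segment_hand(points, max_depth=0.8):
    """points: (N, 3) array of x, y, z in metres from the RGBD sensor.
    Keeping only points closer than max_depth crudely isolates a hand
    held out towards the camera."""
    return points[points[:, 2] < max_depth]

def fingertip_candidates(hand_points, k=5):
    """Rough fingertip proposal: the k points farthest from the hand
    centroid. A real system would cluster and filter these further."""
    centroid = hand_points.mean(axis=0)
    dists = np.linalg.norm(hand_points - centroid, axis=1)
    return hand_points[np.argsort(dists)[-k:]]
```

A full pipeline along the lines of the paper would additionally track these candidates over time and map finger motion to robot hand commands.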
Abstract:
Automated human behaviour analysis has been, and still remains, a challenging problem. It has been approached from different points of view, from primitive actions to human interaction recognition. This paper focuses on trajectory analysis, which allows a simple, high-level understanding of complex human behaviour. A novel representation of trajectory data, called the Activity Description Vector (ADV), is proposed, based on the number of times a person occupies a specific point of the scenario and on the local movements performed there. The ADV is calculated for each cell of a spatially sampled grid over the scenario, providing a cue for different clustering methods. The ADV representation has been tested as the input of several classic classifiers and compared to other approaches using the CAVIAR dataset sequences, obtaining high accuracy in recognizing the behaviour of people in a shopping centre.
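The ADV construction can be sketched compactly; the grid size, coordinate bounds, and the five per-cell channels below are assumptions made for illustration rather than the paper's exact definition:

```python
import numpy as np

def activity_description_vector(trajectory, grid=(10, 10), bounds=(100.0, 100.0)):
    """trajectory: sequence of (x, y) positions, assumed within [0, bounds).
    For every grid cell, accumulate an occupancy count plus a histogram of
    local moves (+x, -x, +y, -y), then flatten into one descriptor."""
    traj = np.asarray(trajectory, dtype=float)
    gx, gy = grid
    adv = np.zeros((gx, gy, 5))  # channels: [occupancy, +x, -x, +y, -y]
    for t in range(len(traj) - 1):
        x, y = traj[t]
        dx, dy = traj[t + 1] - traj[t]
        i = min(int(x / bounds[0] * gx), gx - 1)
        j = min(int(y / bounds[1] * gy), gy - 1)
        adv[i, j, 0] += 1                          # person observed in this cell
        if dx == 0 and dy == 0:
            continue                               # stationary: occupancy only
        if abs(dx) >= abs(dy):
            adv[i, j, 1 if dx > 0 else 2] += 1     # dominant horizontal move
        else:
            adv[i, j, 3 if dy > 0 else 4] += 1     # dominant vertical move
    return adv.reshape(-1)
```

The flattened vectors for many trajectories can then be fed to clustering methods or classic classifiers, as the abstract describes.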
Abstract:
Federal Aviation Administration, Washington, D.C.
Abstract:
"Partial bibliography": p. 425-436.
Abstract:
"NAVTRA 10058-A"
Abstract:
"A technical account of this study appears as part 2 of a report entitled 'Motive patterns of managers and specialists!'"
Abstract:
This paper presents a novel theory for performing multi-agent activity recognition without requiring large training corpora. The reduced need for data means that robust probabilistic recognition can be performed within domains where annotated datasets are traditionally unavailable. Complex human activities are composed of sequences of underlying primitive activities. We do not assume that the exact temporal ordering of primitives is necessary, so a complex activity can be represented as an unordered bag. Our three-tier architecture comprises low-level video tracking, event analysis and high-level inference. High-level inference is performed using a new, cascading extension of the Rao–Blackwellised Particle Filter. Simulated annealing is used to identify pairs of agents involved in multi-agent activity. We validate our framework using the benchmarked PETS 2006 video surveillance dataset and our own sequences, and achieve a mean recognition F-Score of 0.82. Our approach achieves a mean improvement of 17% over a Hidden Markov Model baseline.
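The unordered-bag representation can be illustrated with a small sketch; the primitive names and the simple count-coverage score below are hypothetical and stand in for the paper's cascading Rao–Blackwellised Particle Filter inference:

```python
from collections import Counter

# Hypothetical activity models: each complex activity is described only by the
# multiset (bag) of primitive events it is expected to contain, with no ordering.
ACTIVITY_BAGS = {
    "luggage_abandonment": Counter({"enter": 1, "put_down_object": 1, "walk_away": 1}),
    "meeting":             Counter({"enter": 2, "stop_near_person": 2}),
}

def bag_similarity(observed, model):
    """Fraction of the model's primitive counts covered by the observations."""
    covered = sum(min(observed[p], n) for p, n in model.items())
    return covered / sum(model.values())

def recognise(observed_events):
    """Score every activity bag against the observed primitives, in any order."""
    observed = Counter(observed_events)
    scores = {a: bag_similarity(observed, bag) for a, bag in ACTIVITY_BAGS.items()}
    return max(scores, key=scores.get), scores

# Example: primitives reported by the low-level tracker, order irrelevant.
label, scores = recognise(["walk_away", "enter", "put_down_object"])
```

Because only counts are compared, the representation is insensitive to the temporal ordering of primitives, which is exactly the assumption the abstract relaxes.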