872 results for facial expressions


Relevance:

60.00%

Publisher:

Abstract:

Playground is intended to open a window into a world that, however familiar it may first appear, resonates with one of our universal passions and the icons of contemporary life: the act and ritual of taking photographs. The project aims to magnify the extraordinary in the ordinary, revealing the facial expressions, gestures and body language of subjects behind their own visual recording devices. It is about the drama of people in their private moments, when face or body language reveals the most hidden parts of their inner world in public places. The interplay between private and public, individual and social, is the central subject of this photographic project.

Relevance:

60.00%

Publisher:

Abstract:

Grigorij Kreidlin (Russia). A Comparative Study of Two Semantic Systems: Body Russian and Russian Phraseology. Mr. Kreidlin teaches in the Department of Theoretical and Applied Linguistics of the State University of Humanities in Moscow and worked on this project from August 1996 to July 1998. The classical approach to non-verbal and verbal oral communication is based on a traditional separation of body and mind. Linguists studied words and phrasemes, the products of mental activity, while gestures, facial expressions, postures and other forms of body language were left to anthropologists, psychologists, physiologists, and indeed to anyone but linguists. Only recently have linguists begun to turn their attention to gestures, and semiotic and cognitive paradigms are now appearing that raise the question of designing an integral model for the unified description of non-verbal and verbal communicative behaviour. This project attempted to elaborate lexical and semantic fragments of such a model, producing a co-ordinated semantic description of the main Russian gestures (including gestures proper, postures and facial expressions) and their natural-language analogues. The concepts of emblematic gestures and gestural phrasemes, and of their semantic links, permitted an appropriate description of the transformation of the body as a purely physical substance into the body as a carrier of essential attributes of Russian culture, the semiotic process called the culturalisation of the human body. Here the human body embodies a system of cultural values and displays them in text within the area of phraseology and some other important language domains. The goal of this research was to develop a theory that would account for the fundamental peculiarities of this process. The model proposed is based on the unified lexicographic representation of verbal and non-verbal units in the Dictionary of Russian Gestures, which Mr. Kreidlin had earlier compiled in collaboration with a group of his students. The Dictionary was originally oriented only towards reflecting how the lexical competence of Russian body language is represented in the Russian mind. Now a special type of phraseological zone has been designed to reflect explicitly the semantic relationships between the gestures in the entries and phrasemes, and to provide the necessary information for a detailed description of these. All the definitions, rules of usage and established correlations are written in a semantic meta-language.

Several classes of Russian gestural phrasemes were identified, including phrasemes and idioms with semantic definitions close to those of the corresponding gestures; phraseological units that have lost touch with the related gestures (although etymologically they derive from gestures that have gone out of use); and phrasemes and idioms that carry semantic traces or reflexes inherited from the meaning of the related gestures. The basic assumptions and practical considerations underlying the work were as follows. (1) To compare meanings, one has to be able to state them. To state the meaning of a gesture or a phraseological expression, one needs a formal semantic meta-language of propositional character that represents the cognitive and mental aspects of the codes. (2) The semantic contrastive analysis of any semiotic codes used in person-to-person communication also requires a single semantic meta-language, i.e. a formal semantic language of description. This language must be as linguistically and culturally independent as possible, and yet must be open to interpretation through any culture and code. Another possible method of conducting comparative verbal/non-verbal semantic research is to work with different semantic meta-languages and semantic nets and to learn how to combine them, translate from one to another, etc.
in order to reach a common basis for the subsequent comparison of units. (3) The practical work of defining phraseological units and organising the phraseological zone in the Dictionary of Russian Gestures unexpectedly showed that the semantic links between gestures and gestural phrasemes are reflected not only in common semantic elements and in the syntactic structure of semantic propositions, but also in general and partial cognitive operations performed over semantic definitions. (4) In comparative semantic analysis, one should take into account the different values and roles of inner-form and image components in the semantic representation of non-verbal and verbal units. (5) For the most part, gestural phrasemes are direct semantic derivatives of gestures. The cognitive and formal techniques can be regarded as typological features for a future functional-semantic classification of gestural phrasemes: two phrasemes whose meanings can be obtained by the same cognitive or purely syntactic operations (or types of operations) over the meanings of the corresponding gestures belong, by definition, to one and the same class. The nature of many cognitive operations has not yet been studied well, but the first steps towards their comprehension and description have been taken. The research identified 25 logically possible classes of relationship between a gesture and a gestural phraseme. The calculation is based on the theoretically possible formal (set-theoretic) correlations between the signifiers and signifieds of the non-verbal and verbal units. However, in order to examine which of them are realised in practice, a complete semantic and lexicographic description of all (not only central) everyday emblems and gestural phrasemes is required, and this unfortunately does not yet exist. Mr. Kreidlin suggests that the results of the comparative analysis of verbal and non-verbal units could also be used in other research areas, such as the lexicography of emotions.
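One way to arrive at exactly 25 logically possible classes (an assumption for illustration, since the abstract does not spell out the enumeration) is to allow five elementary set-theoretic relations between the signifiers of a gesture/phraseme pair, and the same five between their signifieds, giving 5 × 5 combinations:

```python
from itertools import product

# Five elementary set-theoretic relations between two non-empty sets.
# These five names are an assumption; the abstract only says the
# calculation rests on "set-theory correlations".
RELATIONS = ["identical", "proper subset", "proper superset",
             "overlapping", "disjoint"]

# A class of gesture/phraseme relationship is then a pair:
# (relation between the signifiers, relation between the signifieds).
classes = list(product(RELATIONS, RELATIONS))
print(len(classes))  # 25 logically possible classes
```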

Relevance:

60.00%

Publisher:

Abstract:

To date, little is known about the self-directed perception and processing of subtle gaze cues in social anxiety, although such processing might contribute to excessive feelings of being looked at by others. Using a web-based approach, participants (n = 174) were asked whether or not briefly (300 ms) presented facial expressions, modulated in gaze direction (0°, 2°, 4°, 6°, 8°) and valence (angry, fearful, happy, neutral), were directed at them. The results demonstrate a positive, linear relationship between self-reported social anxiety and stronger self-directed perception of others' gaze directions, particularly for negative (angry, fearful) and neutral expressions. Furthermore, socially anxious individuals responded faster to gaze more clearly directed at them (0°, 2°, and 4°), suggesting a tendency to avoid direct gaze. In sum, the results illustrate an altered self-directed perception of subtle gaze cues. The possibly amplifying effect of social stress on this biased self-directed perception of eye gaze is discussed.
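The reported positive, linear relationship can be sketched as an ordinary least-squares fit of the proportion of "directed at me" judgements on social-anxiety scores. The data below are invented for illustration and are not the study's:

```python
def ols_slope(x, y):
    """Ordinary least-squares slope and intercept of y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    b = cov / var
    return b, my - b * mx

# Invented example: social-anxiety scores vs. proportion of trials on
# which a briefly presented face (gaze 0°-8°) was judged "directed at me".
anxiety = [10, 20, 30, 40, 50, 60]
p_self  = [0.42, 0.45, 0.50, 0.55, 0.58, 0.63]
slope, intercept = ols_slope(anxiety, p_self)
print(round(slope, 4))  # positive slope: stronger self-directed perception
```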

Relevance:

60.00%

Publisher:

Abstract:

Many mental disorders disrupt social skills, yet few studies have examined how the brain processes social information. Functional neuroimaging, neuroconnectivity and electrophysiological studies suggest that orbital frontal cortex plays important roles in social cognition, including the analysis of information from faces, which are important cues in social interactions. Studies in humans and non-human primates show that damage to orbital frontal cortex produces social behavior impairments, including abnormal aggression, but these studies have failed to determine whether damage to this area impairs face processing. In addition, it is not known whether damage early in life is more detrimental than damage in adulthood. This study examined whether orbital frontal cortex is necessary for the discrimination of face identity and facial expressions, and for appropriate behavioral responses to aggressive (threatening) facial expressions. Rhesus monkeys (Macaca mulatta) received selective lesions of orbital frontal cortex as newborns or adults. As adults, these animals were compared with sham-operated controls on their ability to discriminate between faces of individual monkeys and between different facial expressions of emotion. A passive visual paired-comparison task with standardized rhesus monkey face stimuli was designed and used to assess discrimination. In addition, looking behavior toward aggressive expressions was assessed and compared with that of normal control animals. The results showed that lesion of orbital frontal cortex (1) may impair discrimination between faces of individual monkeys, (2) does not impair facial expression discrimination, and (3) changes the amount of time spent looking at aggressive (threatening) facial expressions depending on the context. The effects of early and late lesions did not differ. 
Thus, orbital frontal cortex appears to be part of the neural circuitry for recognizing individuals and for modulating the response to aggression in faces, and the plasticity of the immature brain does not allow for recovery of these functions when the damage occurs early in life. This study opens new avenues for the assessment of rhesus monkey face processing and the neural basis of social cognition, and allows a better understanding of the nature of the neuropathology in patients with mental disorders that disrupt social behavior, such as autism.
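Discrimination in a passive visual paired-comparison task is typically inferred from looking times toward the paired stimuli. A minimal scoring sketch follows; the score below is a standard convention for such tasks, not necessarily the study's exact measure:

```python
def novelty_preference(look_novel_ms, look_familiar_ms):
    """Proportion of total looking time spent on the novel face.
    0.5 means no discrimination; values reliably above 0.5 indicate
    the two faces (or expressions) are being told apart."""
    total = look_novel_ms + look_familiar_ms
    if total == 0:
        raise ValueError("no looking time recorded")
    return look_novel_ms / total

# Invented trial: 3.2 s on the novel monkey face, 1.8 s on the familiar one.
print(novelty_preference(3200, 1800))  # 0.64
```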

Relevance:

60.00%

Publisher:

Abstract:

The use of human motion monitoring techniques usually lets researchers analyse kinematics, and especially motor strategies, in goal-oriented activities of daily living such as the preparation of drinks and food, and even grooming tasks. Additionally, the evaluation of human movement and behaviour in the field of cognitive rehabilitation is essential for understanding the difficulties some people encounter in common activities after stroke. These difficulties are mainly associated with sequencing actions and with recognising the use of tools and objects. The interpretation of data on these patients' performance, in order to recognise and determine how successfully actions are executed and to broaden knowledge of brain diseases, their consequences and severity, depends entirely on the devices used to capture that data and on its quality. Moreover, there is a real need to improve current cognitive rehabilitation techniques by contributing to the design of automatic systems that act as a kind of virtual therapist, supporting a more independent life for stroke patients and reducing the workload of the occupational therapists in charge of them. For this purpose, the use of sensors and devices to obtain real-time data on the execution and state of the rehabilitation task is essential; such data also contribute to the design and training of future algorithms that may recognise errors automatically and provide multimodal feedback through different types of cues, such as still images, auditory messages or even videos.

The technology and solutions currently adopted in this field do not offer a fully robust and effective way of obtaining real-time data: on the one hand, marker-based platforms, which need sensors attached to the skin, may influence the patient's own movement; on the other hand, the complexity or high cost of implementation makes it difficult to install a system at the hospital or even in the patient's home. This thesis presents research in the field of patient monitoring that provides a step forward in the detection, tracking and recognition of hand movements, gestures and the face in a non-invasive way, which could improve current cognitive rehabilitation techniques through real-time acquisition of data on the patient's behaviour and task execution.

To frame the scope of the thesis, a summary of the main cognitive diseases requiring rehabilitation is first presented, together with their consequences for the execution of daily tasks, and current cognitive rehabilitation methodologies are reviewed. Since the hands are the main body parts involved in completing manual daily tasks, existing technologies for capturing hand movement are also surveyed. One of the main contributions of this thesis is the design and evaluation of a non-invasive approach to detect and track the user's hands during the execution of manual activities of daily living that involve the manipulation of objects. This approach needs no additional markers, is based only on a low-cost depth camera, and is robust, accurate and easy to install.

Another contribution focuses on hand gesture recognition for detecting object grasping, based on a brand-new infrared sensor complemented with a depth camera. This new, likewise non-invasive, solution synchronises both sensors to track specific tools and to recognise specific events related to grooming tasks. Moreover, a preliminary assessment of facial expression recognition is carried out to analyse whether it is adequate for recognising mood during task execution. All the corresponding hardware and software are integrated into a simple prototype to be used as a platform for monitoring the execution of the rehabilitation task. A technical evaluation of each device's performance is carried out to analyse its suitability for acquiring real-time data during the execution of real daily tasks. Finally, a usability evaluation with real users and stroke patients provides feedback on the proposed system; this feedback is essential when considering home-based cognitive rehabilitation as well as a possible hospital installation of the prototype.
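As a toy illustration of the grasp-event idea (a hand tracker synchronised with a depth camera), grasping can be caricatured as the tracked hand being both closed and within a small distance of a tracked object. Everything below, including the threshold, the coordinate format and the closed-hand flag, is an assumption for illustration, not the thesis implementation:

```python
import math

GRASP_DIST_M = 0.05  # assumed threshold: hand within 5 cm of the object

def detect_grasp(hand_xyz, object_xyz, hand_closed):
    """Naive grasp event: hand close to the object AND fingers closed.
    Coordinates are (x, y, z) tuples in metres, as a depth camera
    might report them."""
    return hand_closed and math.dist(hand_xyz, object_xyz) < GRASP_DIST_M

# One synthetic frame: hand 3 cm from a comb, fingers closed -> grasp.
print(detect_grasp((0.10, 0.20, 0.50), (0.10, 0.23, 0.50), True))  # True
```

A real system would smooth this decision over several frames to suppress tracking jitter.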

Relevance:

60.00%

Publisher:

Abstract:

The human face surely provides much more information than we think. Without our consent, the face transmits nonverbal cues, arising from facial interactions, that reveal our affective state, cognitive activity, personality and diseases. Recent studies [OFT14, TODMS15] show that many of our social and interpersonal decisions derive from a prior facial analysis that leads us to judge whether a person is trustworthy, hardworking, intelligent, and so on. This error-prone interpretation derives from the innate ability of human beings to find and interpret these signals. This capability is itself an object of study, with special interest in developing methods that can automatically estimate these signals, or attributes, associated with the face.

Interest in facial attribute estimation has thus grown rapidly in recent years because of the many applications in which such methods can be used: targeted marketing, security systems, human-computer interaction, etc. However, these methods are far from perfect and robust across problem domains. The main difficulty is the high intra-class variability caused by changes in imaging conditions (lighting changes, occlusions, facial expressions, age, gender, ethnicity, etc.), frequently found in images acquired in uncontrolled environments. This research work studies image analysis techniques to estimate facial attributes such as gender, age and pose, using linear methods and exploiting the statistical dependencies between these attributes. In addition, the proposal focuses on building estimators with a good balance between performance and computational cost.

Regarding this last point, a set of strategies for gender classification is studied and compared with a proposal based on a Bayesian classifier and a suitable feature extraction based on Linear Discriminant Analysis. An in-depth analysis shows why linear techniques have so far failed to provide competitive results, and how to obtain performance similar to the best non-linear techniques. A second algorithm is proposed for age estimation, based on a K-NN regressor and a feature selection analogous to that proposed for gender classification. The experiments show that classifier performance drops significantly when classifiers are trained and tested on different databases. One cause is the existence of dependencies between facial attributes that have not been considered in the construction of the classifiers. The results demonstrate that intra-class variability can be reduced by considering the statistical dependencies between the facial attributes of gender, age and pose, improving the performance of the facial attribute classifiers at a small computational cost.
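The age-estimation component described above is a K-NN regressor over selected features. A minimal stdlib sketch of that idea follows; the 2-D "face features" and ages below are invented, and a real system would use the thesis's selected feature vectors:

```python
import math

def knn_regress(query, train_X, train_y, k=3):
    """Predict by averaging the targets of the k nearest neighbours
    (Euclidean distance in feature space)."""
    order = sorted(range(len(train_X)),
                   key=lambda i: math.dist(query, train_X[i]))
    nearest = order[:k]
    return sum(train_y[i] for i in nearest) / k

# Invented 2-D face features with known ages.
X = [(0.10, 0.20), (0.15, 0.22), (0.80, 0.90), (0.82, 0.88), (0.50, 0.50)]
ages = [25, 27, 60, 63, 40]
print(knn_regress((0.12, 0.21), X, ages, k=2))  # 26.0
```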

Relevance:

60.00%

Publisher:

Abstract:

Augmented Reality has been part of many research projects for several years; combining real-world and digital information offers a whole new set of possibilities. The best-known are game-oriented, but the same technology also makes it possible to implement Natural Interfaces, that is, to let the user control an electronic device through their own actions: body movements, facial expressions, and so on. This project presents the development of the system layer of Mokey, a Natural Interface that simulates a keyboard through the user's body movements. With it, any computer application that requires a keyboard can be operated with body movements, even if it was not designed for that at the time of its creation. Mokey's user layer is covered in the project by Carlos Lázaro Basanta.

The main objective of Mokey is to ease access to a technology as present in everyday life as the computer for people with motor disabilities or reduced mobility. Since our society is so thoroughly digitalised, genuine social inclusion requires giving this part of the population access to existing technology, rather than creating separate tools exclusively for them, which would create a situation of discrimination even when unintentional. For this reason, Mokey's design must be simple and intuitive, and at the same time versatile enough that as many disabled users as possible can find an optimal configuration for themselves.

After stating the motivations of the project, this document provides a detailed state of the art of both the directly involved technology and other similar projects, paying special attention to the Microsoft Kinect camera, the hardware that allows Mokey to capture motion. It then describes the developed Natural Interface in detail, with emphasis on the algorithms implemented for movement detection and for keyboard simulation. Finally, an exhaustive analysis of Mokey's operation with other applications is presented: a broad battery of tests determines its performance in the most common situations, and another battery defines its compatibility with the different types of programs on the market. For greater precision in analysing the data, Mokey is compared with a similar tool, FAAST, showing the advantages of an application designed specifically for disabled people over one that was not.
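At its core, the system layer of a keyboard-simulating interface like Mokey reduces to a configurable map from recognised gestures to simulated key events. The schematic sketch below uses invented gesture names and key identifiers; real key injection would go through an OS-specific API rather than returning strings:

```python
# Configurable gesture -> key binding. The names are invented for
# illustration; Mokey's real configuration format is not described here.
BINDINGS = {
    "raise_left_hand": "LEFT",
    "raise_right_hand": "RIGHT",
    "lean_forward": "UP",
    "lean_back": "DOWN",
}

def gesture_to_keys(gesture_stream):
    """Translate a stream of recognised gestures into simulated key
    presses, silently dropping gestures that have no binding."""
    return [BINDINGS[g] for g in gesture_stream if g in BINDINGS]

print(gesture_to_keys(["raise_left_hand", "wave", "lean_forward"]))
# ['LEFT', 'UP']
```

Keeping the bindings in a plain table is what makes the interface configurable per user, which matters for users whose mobility constrains which gestures they can perform.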

Relevance:

60.00%

Publisher:

Abstract:

In recent times, efforts have focused on improving the methods of interaction between humans and robots. The aim is to make this relationship simple and as natural as possible. To that end, research is being carried out on methods for recognising and interpreting body language, gestures, facial expressions and the sounds humans emit, so that the machine becomes aware of human intentions and desires without receiving very specific orders. It is also of interest how these techniques could be applied to communication among robots themselves, for example in groups of robots working in teams on assigned tasks. These machines have to communicate in order to understand situations, detect specific needs (e.g. if a machine fails and needs support, or when unexpected events happen) and react to them. Executing such tasks and carrying out the communication they require is especially hard in hostile environments, e.g. under water. The objective of this final-year project is therefore to investigate possible applications of human-machine communication techniques to groups of robots, as reinforcement of or substitution for classical communication methods.

Relevance:

60.00%

Publisher:

Abstract:

Introduction: The aim of this study was to investigate whether there is an association between deficits in facial emotion recognition and deficits in mental flexibility and social adjustment in euthymic patients with Bipolar I Disorder, compared with control subjects without mental disorder. Methods: 65 euthymic Bipolar I patients and 95 controls without mental disorder were assessed for facial emotion recognition, mental flexibility, and social adjustment through clinical and neuropsychological evaluations. Affective symptoms were assessed with the Hamilton Depression Scale and the Young Mania Scale, facial emotion recognition with the Facial Expressions of Emotion: Stimuli and Tests, mental flexibility with the Wisconsin Card Sorting Test, and social adjustment with the Social Adjustment Self-Report Scale. Results: Euthymic Bipolar I patients showed a stronger association than controls between facial emotion recognition and mental flexibility, indicating that the more preserved the mental flexibility, the better the ability to recognize facial emotions. In this group, the correlations of all emotions are positive with total correct responses and categories completed, and negative with perseverative responses, total errors, perseverative errors, and non-perseverative errors. There was no correlation between facial emotion recognition and social adjustment, although euthymic Bipolar I patients showed poorer social adjustment, suggesting that the poorer social adjustment does not seem to be due to difficulty in recognizing and correctly interpreting facial expressions. Euthymic Bipolar I patients showed no significant differences in facial emotion recognition compared with controls; however, in the surprise subtest (p = 0.080) the difference bordered on statistical significance, indicating that euthymic Bipolar I patients tend to perform worse than controls at recognizing surprise. Conclusion: Our results reinforce the hypothesis that there is an association between facial emotion recognition and preserved executive functioning, more precisely mental flexibility, indicating that the greater the mental flexibility, the better the ability to recognize facial emotions and the better the patient's functional performance. Euthymic Bipolar I patients show poorer social adjustment than controls, which may be a consequence of Bipolar Disorder and confirms the need for rapid and effective therapeutic intervention in these patients.
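A minimal sketch of the correlation computation behind the result above: Pearson's r between facial-emotion-recognition accuracy and a WCST mental-flexibility measure. The data and variable names are fabricated for illustration; the study's actual scores are not reproduced here.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Fabricated example data: more WCST categories completed (better mental
# flexibility) accompanying more correct emotion recognitions.
emotion_hits = [40, 45, 50, 55, 60]   # correct facial-emotion recognitions
wcst_categories = [2, 3, 4, 4, 6]     # WCST categories completed
r = pearson_r(emotion_hits, wcst_categories)  # strongly positive, as reported
```

In the study's pattern, the same coefficient would come out negative when computed against error counts such as perseverative responses.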

Relevance:

60.00%

Publisher:

Abstract:

This work evaluates the influence of human emotions, expressed through facial movements, on the decision making of computer systems, with the goal of improving the user experience. Three modules were developed. The first is an assistive computing system: a digital augmentative and alternative communication board. The second, here called the Affective Module, is an affective computing system that uses Computer Vision to capture the user's facial movements and classify their emotional state. This module was implemented in two stages, both inspired by the Facial Action Coding System (FACS), which identifies facial expressions based on the human cognitive system. In the first stage, the Affective Module infers the basic emotional states: happiness, surprise, anger, fear, sadness, disgust, and also the neutral state. According to most researchers in the field, the basic emotions are innate and universal, which makes the Affective Module generalizable to any population. Tests with the proposed model produced results 10.9% above those of similar methodologies. Spontaneous emotions were also analyzed, with the computational results approaching human accuracy. In the second stage, the goal was to identify facial expressions that reflect a person's dissatisfaction or difficulty while using computer systems, and the first model of the Affective Module was adjusted to this end. Finally, a Decision-Making Module was developed that receives information from the Affective Module and intervenes in the computer system. Parameters such as icon size, drag converted into click, and scanning speed are changed in real time by the Decision-Making Module in the assistive system, according to the information produced by the Affective Module. Since the Affective Module has no training stage for inferring the emotional state, a neutral-face algorithm was proposed to solve the problem of initialization with faces already expressing emotion. This work also proposes dividing rapid facial signals into baseline signals (tics and other facial-movement noise that are not emotional signals) and emotional signals. The results of case studies conducted with students of the APAE in Presidente Prudente showed that it is possible to improve the user experience by configuring a computer system with emotional information expressed through facial movements.
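The real-time adaptation described above can be sketched roughly as follows. This is an illustrative assumption of how such a decision module might map inferred emotions to board parameters; the class, function names, and thresholds are hypothetical, not the thesis's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class BoardSettings:
    icon_size_px: int = 64        # size of communication-board icons
    drag_as_click: bool = False   # convert drag gestures into clicks
    scan_interval_s: float = 1.0  # automatic scanning speed

def adjust_settings(settings: BoardSettings, emotion: str) -> BoardSettings:
    """Adapt the board when the face signals dissatisfaction or difficulty."""
    if emotion in ("anger", "sadness"):  # taken as signs of frustration
        settings.icon_size_px = min(settings.icon_size_px + 16, 128)
        settings.drag_as_click = True
        settings.scan_interval_s = min(settings.scan_interval_s + 0.25, 2.0)
    elif emotion == "happiness":         # user coping well: scan faster
        settings.scan_interval_s = max(settings.scan_interval_s - 0.25, 0.5)
    return settings

s = adjust_settings(BoardSettings(), "anger")  # larger icons, slower scan
```

The point of the design is that the affective module only emits labels; all intervention policy lives in one place and can be tuned per user.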

Relevance:

60.00%

Publisher:

Abstract:

The investigation of biologically initiated pathways to psychological disorder is critical to advance our understanding of mental illness. Research has suggested that attention bias to emotion may be an intermediate trait for depression associated with biologically plausible candidate genes, such as the serotonin transporter (5-HTTLPR) and catechol-O-methyltransferase (COMT) genes, yet there have been mixed findings with regard to the precise direction of effects. The experience of recent stressful life events (SLEs) may be an important, yet currently unstudied, moderator of the relationship between genes and attention bias, as SLEs have been associated with both gene expression and attention to emotion. Additionally, although attention biases to emotion have been studied as a possible intermediate trait associated with depression, no study has examined whether attention biases within the context of measured genetic risk lead to increased risk for clinical depressive episodes over time. Therefore, this research investigated both whether SLEs moderate the link between genetic risk (5-HTTLPR and COMT) and attention bias to emotion, and whether 5-HTTLPR and COMT moderate the relationship between attention biases to emotional faces and clinical depression onset prospectively across 18 months, within a large community sample of youth (n = 467). Analyses revealed a differential effect of genotype. Youth who were homozygous for the low expressing allele of 5-HTTLPR (S/S) and had experienced more recent SLEs within the last three months demonstrated preferential attention toward negative emotional faces (angry and sad). However, youth who were homozygous for the high expressing COMT genotype (Val/Val) and had experienced more recent SLEs showed attentional avoidance of positive facial expressions (happy). Additionally, youth who avoided negative emotion (i.e., anger) and were homozygous for the S allele of the 5-HTTLPR gene were at greater risk for prospective depressive episode onset. Increased risk for depression onset was specific to the 5-HTTLPR gene and was not found when examining moderation by COMT. These findings highlight the importance of examining risk for depression across multiple levels of analysis, such as combined genetic, environmental, and cognitive risk; this is the first study to demonstrate clear evidence of attention biases to emotion functioning as an intermediate trait predicting depression.
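As an illustration of the moderation analyses described (not the study's actual statistical model), a gene-by-bias interaction is typically coded by dummy-coding genotype and multiplying it with the continuous bias score; the function and field names below are hypothetical.

```python
def moderation_row(ss_genotype: int, bias_score: float) -> dict:
    """Predictors for one participant: genotype and bias main effects
    plus their interaction (the moderation term)."""
    return {
        "ss": ss_genotype,                      # 1 if 5-HTTLPR S/S, else 0
        "bias": bias_score,                     # e.g. avoidance of angry faces
        "ss_x_bias": ss_genotype * bias_score,  # interaction term
    }

# Fabricated participant: an S/S carrier who avoids angry faces
# (negative bias score) contributes a nonzero interaction term.
row = moderation_row(1, -25.0)
```

In a regression predicting depression onset, a significant coefficient on the interaction term is what "moderation by 5-HTTLPR" amounts to.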

Relevance:

60.00%

Publisher:

Abstract:

Research investigating anxiety-related attentional bias for emotional information in anxious and nonanxious children has been equivocal with regard to whether a bias for fear-related stimuli is unique to anxious children or is common to children in general. Moreover, recent cognitive theories have proposed that an attentional bias for objectively threatening stimuli may be common to all individuals, with this effect enhanced in anxious individuals. The current study investigated whether an attentional bias toward fear-related pictures could be found in nonselected children (n = 105) and adults (n = 47) and whether a sample of clinically anxious children (n = 23) displayed an attentional bias for fear-related pictures over and above that expected for nonselected children. Participants completed a dot-probe task that employed fear-related, neutral, and pleasant pictures. As expected, both adults and children showed a stronger attentional bias toward fear-related pictures than toward pleasant pictures. Consistent with some findings in the childhood domain, the extent of the attentional bias toward fear-related pictures did not differ significantly between anxious children and nonselected children. However, compared with nonselected children, anxious children showed a stronger attentional bias overall toward affective picture stimuli. (C) 2004 Elsevier Inc. All rights reserved.
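A minimal sketch of the bias index such dot-probe studies conventionally compute (a common scoring convention, not necessarily this study's exact method): a positive index means probes replacing emotional pictures were detected faster than probes replacing neutral ones, i.e. attention was drawn toward the emotional stimulus. All reaction times below are fabricated.

```python
from statistics import mean

def bias_index(rt_probe_at_neutral, rt_probe_at_emotional):
    """Mean RT difference (ms): neutral-probe trials minus emotional-probe
    trials. Positive values indicate vigilance toward the emotional picture."""
    return mean(rt_probe_at_neutral) - mean(rt_probe_at_emotional)

# Fabricated reaction times (ms) for illustration only.
fear_bias = bias_index([520, 540, 510], [480, 470, 490])      # ~43 ms
pleasant_bias = bias_index([505, 515, 500], [500, 510, 495])  # ~5 ms
stronger_fear_bias = fear_bias > pleasant_bias
```

Comparing the fear-related index against the pleasant index, as above, is how "a stronger attentional bias toward fear-related pictures than toward pleasant pictures" is operationalized.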

Relevance:

60.00%

Publisher:

Abstract:

The observation that snakes and spiders are found faster among flowers and mushrooms than vice versa, and that this search advantage is independent of set size, supports the notion that fear-relevant stimuli are processed preferentially in a dedicated fear module. Experiment 1 replicated the faster identification of snakes and spiders but also found a set size effect in a blocked, but not in a mixed-trial, sequence. Experiment 2 failed to find faster identification of snake and spider deviants relative to other animals among flowers and mushrooms, and provided evidence for a search advantage for pictures of animals, irrespective of their fear relevance. These findings suggest that results from the present visual search task cannot support the notion of preferential processing of fear relevance.
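The set-size effect at issue is conventionally quantified as a search slope: the least-squares regression of response time on display size, where a near-zero slope is read as parallel "pop-out" search and a steep slope as serial search. A sketch with fabricated reaction times:

```python
def search_slope(set_sizes, rts):
    """Ordinary least-squares slope of RT (ms) against set size (items)."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(rts) / n
    num = sum((x - mx) * (y - my) for x, y in zip(set_sizes, rts))
    den = sum((x - mx) ** 2 for x in set_sizes)
    return num / den

# Fabricated data: RTs barely change with display size (pop-out) versus
# RTs growing steadily with each added distractor (serial search).
flat = search_slope([4, 9, 16], [610, 612, 615])   # ~0.4 ms/item
steep = search_slope([4, 9, 16], [600, 700, 840])  # 20.0 ms/item
```

A "set size effect", as reported for the blocked sequence, simply means the slope is reliably above zero.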

Relevance:

60.00%

Publisher:

Abstract:

We assessed teacher responses to the communicative attempts of children with autism. Teachers were first interviewed using the Inventory of Potential Communicative Acts (IPCA) to identify behaviors in each child's repertoire that the teachers considered to be communicative. Interview results suggested that the teachers interpreted many of the children's prelinguistic gestures, body movements, and facial expressions as forms of communication. Naturalistic observations were then conducted in the children's classrooms to determine how teachers responded to the identified forms of prelinguistic behavior. These observations suggested that the teachers often did not respond to the children's prelinguistic behaviors in ways that acknowledged their communicative intent. Implications of the results for the children's communication development and for intervention efforts are discussed.

Relevance:

60.00%

Publisher:

Abstract:

Attentional bias to fear-relevant animals was assessed in 69 participants not preselected on self-reported anxiety with the use of a dot probe task showing pictures of snakes, spiders, mushrooms, and flowers. Probes that replaced the fear-relevant stimuli (snakes and spiders) were found faster than probes that replaced the non-fear-relevant stimuli, indicating an attentional bias in the entire sample. The bias was not correlated with self-reported state or trait anxiety or with general fearfulness. Participants reporting higher levels of spider fear showed an enhanced bias to spiders, but the bias remained significant in low scorers. The bias to snake pictures was not related to snake fear and was significant in high and low scorers. These results indicate preferential processing of fear-relevant stimuli in an unselected sample.