834 results for Recognizing emotional facial expressions
Abstract:
The human face undoubtedly provides much more information than we think. Without our consent, the face conveys nonverbal cues that reveal our emotional state, cognitive activity, personality and illnesses. Recent studies [OFT14, TODMS15] show that many of our social and interpersonal decisions derive from a prior analysis of the face that allows us to establish whether a person is trustworthy, hardworking, intelligent, etc. This error-prone interpretation derives from the innate ability of human beings to find and interpret these signals. This ability is the subject of ongoing study, with particular interest in developing methods that can automatically estimate these signals, or attributes, associated with the face. Interest in facial attribute estimation has therefore grown rapidly in recent years, driven by the many applications in which such methods can be used: targeted marketing, security systems, human-computer interaction, etc. However, these methods are far from perfect and are not robust across problem domains. The main difficulty is the high intra-class variability caused by changes in image conditions (lighting changes, occlusions, facial expressions, age, gender, ethnicity, etc.), frequently found in images acquired in uncontrolled environments. This research studies image analysis techniques for estimating facial attributes such as gender, age and pose, using linear methods and exploiting the statistical dependencies between these attributes. In addition, our proposal focuses on building estimators with a good balance between performance and computational cost. With respect to this last point, we study a set of strategies for gender classification and compare them with a proposal based on a Bayesian classifier and a suitable feature extraction based on Linear Discriminant Analysis. We analyse in depth why linear techniques have failed to deliver competitive results to date and show how to obtain performance comparable to the best non-linear techniques. A second algorithm is proposed for age estimation, based on a K-NN regressor and a feature selection analogous to the one proposed for gender classification. From our experiments we observe that classifier performance drops significantly when classifiers are trained and tested on different databases. We have found that one of the causes is the existence of dependencies between facial attributes that have not been considered in the construction of the classifiers. Our results demonstrate that intra-class variability can be reduced by considering the statistical dependencies between the facial attributes gender, age and pose, thus improving the performance of our facial attribute classifiers at a small computational cost.
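The abstract names the main building blocks (LDA-based feature extraction, a Bayesian gender classifier, a K-NN age regressor) without implementation detail. A minimal sketch of that kind of pipeline, using scikit-learn and randomly generated stand-in data, might look as follows; the feature dimensions, neighbour count and other parameters are illustrative assumptions, not the authors' actual method.

```python
# Illustrative sketch only: LDA features feeding a Gaussian (Bayesian) gender
# classifier, plus a K-NN age regressor, roughly in the spirit of the pipeline
# the abstract describes. All parameters and data here are stand-ins.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Stand-in data: rows are vectorized face descriptors (e.g. pixel or LBP features).
X = rng.normal(size=(400, 128))           # 400 faces, 128-dim descriptors
gender = rng.integers(0, 2, size=400)     # 0 = female, 1 = male (labels)
age = rng.uniform(10, 70, size=400)       # age in years (targets)

# Gender: project onto a discriminative direction with LDA, then apply a
# Gaussian Bayesian classifier to the projected feature.
gender_clf = make_pipeline(LinearDiscriminantAnalysis(n_components=1), GaussianNB())
gender_clf.fit(X, gender)

# Age: K-NN regression on the same (or a separately selected) feature space.
age_reg = KNeighborsRegressor(n_neighbors=15, weights="distance")
age_reg.fit(X, age)

print(gender_clf.predict(X[:5]))
print(age_reg.predict(X[:5]))
```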
Abstract:
Introduction: The aim of this study was to investigate whether deficits in the ability to recognize facial emotions are associated with deficits in mental flexibility and in social adjustment in euthymic patients with Bipolar Disorder type I, compared with control subjects without mental disorder. Methods: 65 euthymic patients with Bipolar Disorder type I and 95 controls without mental disorder were assessed for facial emotion recognition, mental flexibility and social adjustment through clinical and neuropsychological evaluations. Affective symptoms were assessed with the Hamilton Depression Rating Scale and the Young Mania Rating Scale, facial emotion recognition with the Facial Expressions of Emotion: Stimuli and Tests, mental flexibility with the Wisconsin Card Sorting Test, and social adjustment with the Social Adjustment Self-Report Scale. Results: Euthymic patients with Bipolar Disorder type I showed a stronger association between facial emotion recognition and mental flexibility than controls, indicating that the more preserved the mental flexibility, the better the ability to recognize facial emotions. In this group, the correlations of all emotions were positive with the total number of correct responses and with the number of categories, and negative with perseverative responses, total errors, perseverative errors and non-perseverative errors. There was no correlation between facial emotion recognition and social adjustment, although euthymic patients with Bipolar Disorder type I showed poorer social adjustment, suggesting that the poorer social adjustment does not seem to be due to a difficulty in recognizing and correctly interpreting facial expressions. Euthymic patients with Bipolar Disorder type I did not differ significantly from controls in facial emotion recognition; however, in the surprise subtest (p = 0.080) the difference was at the limit of statistical significance, indicating that euthymic patients with Bipolar Disorder type I tend to perform worse than controls in recognizing the emotion of surprise. Conclusion: Our results reinforce the hypothesis that there is an association between facial emotion recognition and preserved executive functioning, more precisely mental flexibility, indicating that the greater the mental flexibility, the better the ability to recognize facial emotions and the better the patient's functional performance. Euthymic Bipolar type I patients show poorer social adjustment than controls, which may be a consequence of Bipolar Disorder and underscores the need for rapid and effective therapeutic intervention in these patients.
Abstract:
This work evaluates the influence of human emotions expressed through facial mimicry on the decision-making of computational systems, with the goal of improving the user experience. To this end, three modules were developed: the first is an assistive computing system, a digital augmentative and alternative communication board. The second module, here called the Affective Module, is an affective computing system that uses Computer Vision to capture the user's facial expressions and classify their emotional state. This second module was implemented in two stages, both inspired by the Facial Action Coding System (FACS), which identifies facial expressions based on the human cognitive system. In the first stage, the Affective Module infers the basic emotional states: happiness, surprise, anger, fear, sadness, disgust and also the neutral state. According to most researchers in the field, the basic emotions are innate and universal, which makes the Affective Module generalizable to any population. Tests with the proposed model produced results 10.9% better than those obtained with similar methodologies. Spontaneous emotions were also analyzed, and the computational results approach the accuracy rate of human observers. In the second stage of development of the Affective Module, the goal was to identify facial expressions that reflect a person's dissatisfaction or difficulty while using computational systems, and the first model of the Affective Module was adjusted for this purpose. Finally, a Decision-Making Module was developed that receives information from the Affective Module and intervenes in the computational system. Parameters such as icon size, drag converted into click, and scanning speed are changed in real time by the Decision-Making Module in the assistive computational system, according to the information generated by the Affective Module. Since the Affective Module does not have a training stage for inferring the emotional state, a neutral-face algorithm was proposed to solve the problem of initialization with faces already displaying emotions. This work also proposes dividing the rapid facial signals into baseline signals (tics and other facial-movement noise that are not emotional signals) and emotional signals. The results of the case studies carried out with students of APAE of Presidente Prudente demonstrated that it is possible to improve the user experience by configuring a computational system with emotional information expressed through facial mimicry.
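The abstract describes a Decision-Making Module that adjusts interface parameters (icon size, drag converted into click, scanning speed) in real time from the state reported by the Affective Module, without giving the rules. A minimal sketch of such an adaptation step is shown below; the emotion labels, thresholds and adjustment values are assumptions for illustration, not the thesis' implementation.

```python
# Illustrative sketch only: a decision-making step that adapts interface
# parameters from a detected emotional state, in the spirit of the abstract.
# Labels, limits and adjustment rules are invented for illustration.
from dataclasses import dataclass

@dataclass
class UIConfig:
    icon_size_px: int = 64
    drag_as_click: bool = False
    scan_interval_s: float = 1.0   # scanning speed of the communication board

NEGATIVE_STATES = {"anger", "sadness", "fear", "disgust"}

def adapt_interface(config: UIConfig, emotional_state: str) -> UIConfig:
    """Return an adjusted configuration given the state reported by the
    affective module (one of the basic emotions or 'neutral')."""
    if emotional_state in NEGATIVE_STATES:
        # Signs of difficulty or dissatisfaction: make interaction easier.
        config.icon_size_px = min(config.icon_size_px + 16, 128)
        config.drag_as_click = True
        config.scan_interval_s = min(config.scan_interval_s + 0.5, 3.0)
    elif emotional_state == "happiness":
        # User seems comfortable: gradually restore defaults.
        config.icon_size_px = max(config.icon_size_px - 16, 64)
        config.scan_interval_s = max(config.scan_interval_s - 0.5, 1.0)
    return config

print(adapt_interface(UIConfig(), "anger"))
```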
Abstract:
The investigation of biologically initiated pathways to psychological disorder is critical to advance our understanding of mental illness. Research has suggested that attention bias to emotion may be an intermediate trait for depression associated with biologically plausible candidate genes, such as the serotonin transporter (5-HTTLPR) and catechol-O-methyltransferase (COMT) genes, yet there have been mixed findings with regard to the precise direction of effects. The experience of recent stressful life events (SLEs) may be an important, yet currently unstudied, moderator of the relationship between genes and attention bias, as SLEs have been associated with both gene expression and attention to emotion. Additionally, although attention biases to emotion have been studied as a possible intermediate trait associated with depression, no study has examined whether attention biases within the context of measured genetic risk lead to increased risk for clinical depressive episodes over time. Therefore, this research investigated both whether SLEs moderate the link between genetic risk (5-HTTLPR and COMT) and attention bias to emotion and whether 5-HTTLPR and COMT moderate the relationship between attention biases to emotional faces and clinical depression onset prospectively across 18 months within a large community sample of youth (n = 467). Analyses revealed differential effects by gene. Youth who were homozygous for the low-expressing allele of 5-HTTLPR (S/S) and had experienced more recent SLEs within the last three months demonstrated preferential attention toward negative emotional faces (angry and sad). However, youth who were homozygous for the high-expressing COMT genotype (Val/Val) and had experienced more recent SLEs showed attentional avoidance of positive facial expressions (happy). Additionally, youth who avoided negative emotion (i.e., anger) and were homozygous for the S allele of the 5-HTTLPR gene were at greater risk for prospective depressive episode onset. Increased risk for depression onset was specific to the 5-HTTLPR gene and was not found when examining moderation by COMT. These findings highlight the importance of examining risk for depression across multiple levels of analysis, such as combined genetic, environmental, and cognitive risk; this is the first study to demonstrate clear evidence of attention biases to emotion functioning as an intermediate trait predicting depression.
Abstract:
Research investigating anxiety-related attentional bias for emotional information in anxious and nonanxious children has been equivocal with regard to whether a bias for fear-related stimuli is unique to anxious children or is common to children in general. Moreover, recent cognitive theories have proposed that an attentional bias for objectively threatening stimuli may be common to all individuals, with this effect enhanced in anxious individuals. The current study investigated whether an attentional bias toward fear-related pictures could be found in nonselected children (n = 105) and adults (n = 47) and whether a sample of clinically anxious children (n = 23) displayed an attentional bias for fear-related pictures over and above that expected for nonselected children. Participants completed a dot-probe task that employed fear-related, neutral, and pleasant pictures. As expected, both adults and children showed a stronger attentional bias toward fear-related pictures than toward pleasant pictures. Consistent with some findings in the childhood domain, the extent of the attentional bias toward fear-related pictures did not differ significantly between anxious children and nonselected children. However, compared with nonselected children, anxious children showed a stronger attentional bias overall toward affective picture stimuli.
Abstract:
Anxiety and fear are often confounded in discussions of human emotions. However, studies of rodent defensive reactions under naturalistic conditions suggest anxiety is functionally distinct from fear. Unambiguous threats, such as predators, elicit flight from rodents (if an escape-route is available), whereas ambiguous threats (e.g., the odor of a predator) elicit risk assessment behavior, which is associated with anxiety as it is preferentially modulated by anti-anxiety drugs. However, without human evidence, it would be premature to assume that rodent-based psychological models are valid for humans. We tested the human validity of the risk assessment explanation for anxiety by presenting 8 volunteers with emotive scenarios and asking them to pose facial expressions. Photographs and videos of these expressions were shown to 40 participants who matched them to the scenarios and labeled each expression. Scenarios describing ambiguous threats were preferentially matched to the facial expression posed in response to the same scenario type. This expression consisted of two plausible environmental-scanning behaviors (eye darts and head swivels) and was labeled as anxiety, not fear. The facial expression elicited by unambiguous threat scenarios was labeled as fear. The emotion labels generated were then presented to another 18 participants who matched them back to photographs of the facial expressions. This back-matching of labels to faces also linked anxiety to the environmental-scanning face rather than fear face. Results therefore suggest that anxiety produces a distinct facial expression and that it has adaptive value in situations that are ambiguously threatening, supporting a functional, risk-assessing explanation for human anxiety.
Abstract:
Various neuroimaging investigations have revealed that perception of emotional pictures is associated with greater visual cortex activity than their neutral counterparts. It has further been proposed that threat-related information is rapidly processed, suggesting that the modulation of visual cortex activity should occur at an early stage. Additional studies have demonstrated that oscillatory activity in the gamma band range (40-100 Hz) is associated with threat processing. Magnetoencephalography (MEG) was used to investigate such activity during perception of task-irrelevant, threat-related versus neutral facial expressions. Our results demonstrated a bilateral reduction in gamma band activity for expressions of threat, specifically anger, compared with neutral faces in extrastriate visual cortex (BA 18) within 50-250 ms of stimulus onset. These results suggest that gamma activity in visual cortex may play a role in affective modulation of visual processing, in particular with the perception of threat cues.
Abstract:
Patients with depersonalization disorder have shown attenuated responses to emotional unpleasant stimuli, hence supporting the view that depersonalization is characterised by a selective inhibition on the processing of unpleasant emotions. It was the purpose of this study to establish if autonomic responses to facial emotional expressions also show the same blunting effect. The skin conductance responses (SCRs) of 16 patients with chronic DSM-IV depersonalization disorder, 15 normal controls and 15 clinical controls with DSM-IV anxiety disorders were recorded in response to facial expressions of happiness and disgust. Patients with anxiety disorders were found to have greater autonomic responses than patients with depersonalization, in spite of the fact that both groups had similarly high levels of subjective anxiety as measured by anxiety scales. SCR to happy faces did not vary across groups. The findings of this study provide further support to the idea that patients with depersonalization have a selective impairment in the processing of threatening or unpleasant emotional stimuli.
Abstract:
Motion is an important aspect of face perception that has been largely neglected to date. Many of the established findings are based on studies that use static facial images, which do not reflect the unique temporal dynamics available from seeing a moving face. In the present thesis, a set of naturalistic dynamic facial emotional expressions was purposely created and used to investigate the neural structures involved in the perception of dynamic facial expressions of emotion, with both functional Magnetic Resonance Imaging (fMRI) and Magnetoencephalography (MEG). Through fMRI and connectivity analysis, a dynamic face perception network was identified, which is demonstrated to extend the distributed neural system for face perception (Haxby et al., 2000). Measures of effective connectivity between these regions revealed that dynamic facial stimuli were associated with specific increases in connectivity between early visual regions, such as inferior occipital gyri and superior temporal sulci, along with coupling between superior temporal sulci and amygdalae, as well as with inferior frontal gyri. MEG and Synthetic Aperture Magnetometry (SAM) were used to examine the spatiotemporal profile of neurophysiological activity within this dynamic face perception network. SAM analysis revealed a number of regions showing differential activation to dynamic versus static faces in the distributed face network, characterised by decreases in cortical oscillatory power in the beta band, which were spatially coincident with those regions that were previously identified with fMRI. These findings support the presence of a distributed network of cortical regions that mediate the perception of dynamic facial expressions, with the fMRI data providing information on the spatial co-ordinates paralleled by the MEG data, which indicate the temporal dynamics within this network. This integrated multimodal approach offers both excellent spatial and temporal resolution, thereby providing an opportunity to explore dynamic brain activity and connectivity during face processing.
Abstract:
Emotion-based analysis has raised a lot of interest, particularly in areas such as forensics, medicine, music, psychology, and human-machine interfaces. Following this trend, the use of facial analysis (either automatic or human-based) is the most commonly investigated approach, since this type of data can easily be collected and is well accepted in the literature as a metric for inference of emotional states. Despite this popularity, due to several constraints found in real-world scenarios (e.g. lighting, complex backgrounds, facial hair and so on), accurately and automatically obtaining affective information from the face is very challenging. This work presents a framework which aims to analyse emotional experiences through naturally generated facial expressions. Our main contribution is a new 4-dimensional model to describe emotional experiences in terms of appraisal, facial expressions, mood, and subjective experiences. In addition, we present an experiment using a new protocol proposed to obtain spontaneous emotional reactions. The results suggest that the initial emotional state described by the participants of the experiment was different from that described after the exposure to the eliciting stimulus, showing that the stimuli used were capable of inducing the expected emotional states in most individuals. Moreover, our results indicate that spontaneous facial reactions to emotions are very different from those in prototypic expressions due to the lack of expressiveness in the latter.
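The 4-dimensional model is described only conceptually (appraisal, facial expressions, mood, subjective experiences). Purely as an illustration of how one observation under such a model might be recorded, a minimal sketch follows; the field names, value ranges and use of FACS action units are assumptions, not the framework actually defined in the work.

```python
# Purely illustrative: one way to represent a single observation under a
# four-dimensional description of an emotional experience. Field names and
# value ranges are assumptions, not the model proposed in the work.
from dataclasses import dataclass, field

@dataclass
class EmotionalExperience:
    appraisal: str                                          # e.g. "unexpected", "goal-conducive"
    facial_expression: dict = field(default_factory=dict)   # action unit -> intensity (0..5)
    mood: float = 0.0                                       # background affect, -1 (negative) .. +1 (positive)
    subjective_experience: str = ""                         # participant's self-reported label

sample = EmotionalExperience(
    appraisal="unexpected",
    facial_expression={"AU12": 2.0, "AU06": 1.5},           # smile-related action units
    mood=0.3,
    subjective_experience="amused",
)
print(sample)
```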
Abstract:
Ageing of the population is a worldwide phenomenon. Numerous ICT-based solutions have been developed for elderly care, but they are mainly connected to the physiological and nursing aspects of services for the elderly. Social work is a profession that should pay attention to the comprehensive wellbeing and social needs of the elderly. Many people experience loneliness and depression in their old age, either as a result of living alone or due to a lack of close family ties and reduced connections with their culture of origin, which results in an inability to participate actively in community activities (Singh & Misra, 2009). Participation in society would enhance their quality of life. With the development of information technology, the use of technology in social work practice has risen dramatically. The aim of this literature review is to map out the state of the art of knowledge about the usage of ICT in elderly care and to identify research-based knowledge about the usability of ICT for preventing loneliness and social isolation among elderly people. The data for the current research come from the core collection of the Web of Science, and the search was performed using Boolean operators. The search returned 216 published English-language articles. After screening the topics and abstracts, 34 articles were selected for a data analysis based on a multi-approach framework. The analysis of the research approaches is categorized according to aspects of ICT use by older adults, from the adoption of ICT to the impact of its usage, and the social services provided to them. This literature review focused on the function of communication, excluding applications that mainly relate to physical nursing. The results show that the so-called ‘digital divide’ still exists, but older adults are willing to learn and use ICT in daily life, especially for communication. The data show that the usage of ICT can prevent loneliness and social isolation among older adults, and that they are eager for technical support in using ICT. The analysis of the theoretical frames and concepts shows that this research field applies different theoretical frames from various scientific fields, while a social work approach is lacking. A synergistic frame of applied theories is therefore suggested from the perspective of social work.
Abstract:
The large upfront investments required for game development pose a severe barrier for the wider uptake of serious games in education and training. Also, there is a lack of well-established methods and tools that support game developers at preserving and enhancing the games’ pedagogical effectiveness. The RAGE project, which is a Horizon 2020 funded research project on serious games, addresses these issues by making available reusable software components that aim to support the pedagogical qualities of serious games. In order to easily deploy and integrate these game components in a multitude of game engines, platforms and programming languages, RAGE has developed and validated a hybrid component-based software architecture that preserves component portability and interoperability. While a first set of software components is being developed, this paper presents selected examples to explain the overall system’s concept and its practical benefits. First, the Emotion Detection component uses the learners’ webcams for capturing their emotional states from facial expressions. Second, the Performance Statistics component is an add-on for learning analytics data processing, which allows instructors to track and inspect learners’ progress without bothering about the required statistics computations. Third, a set of language processing components accommodate the analysis of textual inputs of learners, facilitating comprehension assessment and prediction. Fourth, the Shared Data Storage component provides a technical solution for data storage - e.g. for player data or game world data - across multiple software components. The presented components are exemplary for the anticipated RAGE library, which will include up to forty reusable software components for serious gaming, addressing diverse pedagogical dimensions.
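The abstract does not specify the component interface itself. As a rough, hypothetical illustration of how a client-side emotion-detection component could be wired into a host game under a component-based architecture of this kind, consider the sketch below; the class and method names are invented for illustration and are not the RAGE component API.

```python
# Hypothetical sketch of wiring an emotion-detection component into a game
# loop under a component-based architecture like the one described. Class and
# method names are invented; they are not the actual RAGE component API.
from abc import ABC, abstractmethod

class GameComponent(ABC):
    """Minimal portable component contract: the engine pushes frames or events
    in, and the component publishes results through a callback."""
    @abstractmethod
    def update(self, payload: dict) -> None: ...

class EmotionDetectionComponent(GameComponent):
    def __init__(self, on_emotion):
        self.on_emotion = on_emotion     # callback(label: str, confidence: float)

    def update(self, payload: dict) -> None:
        frame = payload.get("webcam_frame")
        if frame is None:
            return
        # A real component would run a facial-expression classifier here.
        label, confidence = self._classify(frame)
        self.on_emotion(label, confidence)

    def _classify(self, frame):
        return "neutral", 0.5            # placeholder result

# Engine side: adapt game feedback from the reported emotion.
def handle_emotion(label: str, confidence: float) -> None:
    if label in {"frustration", "anger"} and confidence > 0.7:
        print("Lowering difficulty / offering a hint")
    else:
        print(f"Emotion '{label}' ({confidence:.2f}): no intervention")

component = EmotionDetectionComponent(on_emotion=handle_emotion)
component.update({"webcam_frame": object()})   # stand-in for a camera frame
```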
Abstract:
Despite efforts to reduce the prevalence of childhood maltreatment, maltreatment is associated with considerable difficulties, including the manifestation of aggression. Emotional and behavioural reactivity, including anger, fear and avoidance, has been proposed as a mechanism explaining the relationship between maltreatment and aggression. Four objectives are pursued to this end: to examine the relationship between (1) maltreatment and aggression, (2) maltreatment and anger, fear and avoidance, (3) anger, fear and avoidance and aggression, and (4) to formally test the mediating and moderating role of anger, fear and avoidance in this relationship. Data from 160 men aged 18 to 35 years, exposed or not to maltreatment, were collected through questionnaires and a social provocation task measuring facial expressions of anger and fear as well as avoidance behaviours. The results suggest that maltreatment and avoidance behaviours are associated with aggression. Maltreatment, however, does not appear to be related to anger, fear or avoidance. While the results suggest that these indices do not play a mediating role in the relationship between maltreatment and aggression, reactivity in terms of anger and avoidance appears to magnify this relationship. The results therefore call for taking maltreatment experiences and the intensity of emotional and behavioural reactivity into account in interventions, in order to target individuals at greater risk of resorting to aggression.
Abstract:
Some decades of research on emotional development have underlined the contribution of several domains to emotional understanding in childhood. Based on this research, Pons and colleagues (Pons & Harris, 2002; Pons, Harris & Rosnay, 2004) proposed the Test of Emotion Comprehension (TEC), which assesses nine domains of emotional understanding: recognition of emotions based on facial expressions; comprehension of external emotional causes; the impact of desire on emotions; emotions based on beliefs; the influence of memory on emotions; the possibility of emotional regulation; the possibility of hiding an emotional state; mixed emotions; and the contribution of morality to emotional experiences. This instrument was administered individually to 182 Portuguese children aged between 8 and 11 years, attending the 3rd and 4th grades of public schools. Additionally, we used the Socially in Action-Peers (SAp) instrument (Rocha, Candeias & Lopes da Silva, 2012) to assess the TEC's criterion-related validity. Mean differences in TEC results by gender and by socio-economic status (SES) were analyzed. The TEC's psychometric analysis was performed in terms of item sensitivity and reliability (test-retest stability). Finally, in order to explore the theoretical structure underlying the TEC, a Confirmatory Factor Analysis and a Similarity Structure Analysis were computed. Implications of these findings for the assessment of emotional understanding and for intervention in childhood are discussed.