885 results for "Asimetría facial"


Relevance: 20.00%

Abstract:

Facial activity is strikingly visible in infants reacting to noxious events. Two measures that reduce this activity to composite events, the Neonatal Facial Coding System (NFCS) and the Facial Action Coding System (FACS), were used to examine facial expressions of 56 neonates responding to routine heel lancing for blood sampling purposes. The NFCS focuses upon a limited subset of all possible facial actions that had been identified previously as responsive to painful events, whereas the FACS is a comprehensive system that is inclusive of all facial actions. Descriptions of the facial expressions obtained from the two measurement systems were very similar, supporting the convergent validity of the shorter, more readily applied system. As well, the cluster of facial activity associated with pain in this sample, using either measure, was similar to the cluster of facial activity associated with pain in adults and other newborns, both full-term and preterm, providing construct validity for the position that the face encodes painful distress in infants and adults.

Relevance: 20.00%

Abstract:

Evaluation of pain in neonates is difficult due to their limited means of communication. The aim was to determine whether behavioural reactions of cry and facial activity provoked by an invasive procedure could be discriminated from responses to non-invasive tactile events. Thirty-six healthy full-term infants (mean age 2.2 h) received 3 procedures in counterbalanced order: intramuscular injection, application of triple dye to the umbilical stub, and rubbing thigh with alcohol. Significant effects of procedure were found for total face activity and latency to face movement. A cluster of facial actions comprised of brow bulging, eyes squeezed shut, deepening of the naso-labial furrow and open mouth was associated most frequently with the invasive procedure. Comparisons between the 2 non-invasive procedures showed more facial activity to thigh swabbing and least to application of triple dye to the umbilical cord. Acoustic analysis of cry showed statistically significant differences across procedures only for latency to cry and cry duration for the group as a whole. However, babies who cried to two procedures showed higher pitch and greater intensity to the injection. There were no significant differences in melody, dysphonation, or jitter. Methodological difficulties for investigators in this area were examined, including criteria for the selection of cries for analysis, and the logical and statistical challenges of contrasting cries induced by different conditions when some babies do not always cry. It was concluded that facial expression, in combination with short latency to onset of cry and long duration of first cry cycle typifies reaction to acute invasive procedures.

Relevance: 20.00%

Abstract:

Pain expression in neonates instigated by heel-lance for blood sampling purposes was systematically described using measures of facial expression and cry and compared across sleep/waking states and sex. From gate-control theory it was hypothesized that pain behavior would vary with the ongoing functional state of the infant, rather than solely reflecting tissue insult. Awake-alert but inactive infants responded with the most facial activity, consistent with current views that infants in this state are most receptive to environmental stimulation. Infants in quiet sleep showed the least facial reaction and the longest latency to cry. Fundamental frequency of cry was not related to sleep/waking state. This suggested that findings from the cry literature on qualities of pain cry as a reflection of nervous system 'stress', in unwell newborns, do not generalize directly to healthy infants as a function of state. Sex differences were apparent in speed of response, with boys showing shorter time to cry and to display facial action following heel-lance. The findings of facial action variation across sleep/waking state were interpreted as indicating that the biological and behavioral context of pain events affects behavioral expression, even at the earliest time developmentally, before the opportunity for learned response patterns occurs. Issues raised by the study include the importance of using measurement techniques which are independent of preconceived categories of affective response.

Relevance: 20.00%

Abstract:

Background This study aims to examine the relationship between how individuals with intellectual disabilities report their own levels of anger and their ability to recognize emotions. It was hypothesized that greater expressed anger would be linked to a lower ability to recognize facial emotional expressions and an increased tendency to interpret facial expressions in a hostile or negative manner. It was also hypothesized that increased levels of anger may lead to altered perception of particular emotions.

Method A cross-sectional survey design was used. Thirty participants completed a test of facial emotion recognition (FER), and a self-report anger inventory (Benson & Ivins 1992) as part of a structured interview.

Results Individuals with higher self-reported anger did not show significantly reduced FER performance, nor did they interpret facial expressions in a more hostile manner than individuals with lower self-reported anger. However, they were less accurate in recognizing neutral facial expressions.

Conclusions It is tentatively suggested that individuals with high levels of anger may be likely to perceive emotional content in a neutral facial expression because of their high levels of emotional arousal.

Relevance: 20.00%

Abstract:

This article addresses gender differences in laughter and smiling from an evolutionary perspective. Laughter and smiling can be responses to successful display behavior or signals of affiliation amongst conversational partners; differing social and evolutionary agendas mean there are different motivations when interpreting these signals. Two experiments assess perceptions of genuine and simulated male and female laughter and amusement social signals. Results show that male simulation can always be distinguished. Female simulation is more complicated: males seem to distinguish cues of simulation yet judge simulated signals to be genuine, while females judge other females' genuine signals to have higher levels of simulation. The results highlight the importance of laughter and smiling in human interactions, of dynamic stimuli, and of using multiple methodologies to assess perception.

Relevance: 20.00%

Abstract:

Although visual surveillance has emerged as an effective technology for public security, privacy has become an issue of great concern in the transmission and distribution of surveillance videos. For example, personal facial images should not be browsed without permission. To cope with this issue, face image scrambling has emerged as a simple solution for privacy-related applications. Consequently, online facial biometric verification needs to be carried out in the scrambled domain, bringing a new challenge to face classification. In this paper, we investigate face verification in the scrambled domain and propose a novel scheme to handle this challenge. In our proposed method, to make feature extraction from scrambled face images robust, a biased random subspace sampling scheme is applied to construct fuzzy decision trees from randomly selected features, and a fuzzy forest decision is then obtained by combining all fuzzy tree decisions using fuzzy memberships. In our experiments, we first estimated the optimal parameters for constructing the random forest and then applied the optimized model to benchmark tests using three publicly available face datasets. The experimental results validated that our proposed scheme copes robustly with the challenging tests in the scrambled domain and achieved improved accuracy across all tests, making our method a promising candidate for emerging privacy-related facial biometric applications.
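The pipeline this abstract describes (random subspace sampling, one tree per subspace, soft fusion of tree outputs) can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the paper's method: one-level threshold stumps stand in for the fuzzy decision trees, uniform subspace sampling stands in for the biased sampling, and averaging the stump votes stands in for fuzzy-membership fusion. All names are hypothetical.

```python
import random

def train_subspace_stumps(X, y, n_models=25, subspace=3, seed=0):
    """Train one threshold stump per randomly sampled feature subspace."""
    rng = random.Random(seed)
    d = len(X[0])
    models = []
    for _ in range(n_models):
        # uniform sampling here; the paper biases this step toward robust features
        feats = rng.sample(range(d), subspace)
        best = None
        for f in feats:
            thr = sum(x[f] for x in X) / len(X)          # split at the feature mean
            preds = [int(x[f] > thr) for x in X]
            acc = sum(p == t for p, t in zip(preds, y)) / len(y)
            if best is None or acc > best[2]:
                best = (f, thr, acc)
        models.append((best[0], best[1]))
    return models

def forest_decision(models, x):
    """Average the stumps' votes into a soft score, then threshold at 0.5."""
    score = sum(float(x[f] > thr) for f, thr in models) / len(models)
    return int(score >= 0.5), score

# Toy separable data: class 0 near all-zeros, class 1 near all-ones.
X = [[0] * 5 for _ in range(5)] + [[1] * 5 for _ in range(5)]
y = [0] * 5 + [1] * 5
models = train_subspace_stumps(X, y)
label, score = forest_decision(models, [1] * 5)
```

On this toy problem every stump finds the same split, so the soft score saturates at 0 or 1; with noisier features the score stays fractional, which is what makes the soft fusion useful.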

Relevance: 20.00%

Abstract:

Trachoma is the leading infectious cause of blindness worldwide, and epidemiologic studies of factors that may increase the transmission of ocular Chlamydia trachomatis are needed. In two villages in a hyperendemic area of Central Tanzania, 472 (90%) of 527 preschool-aged children were examined for specific signs of unclean faces and presence of trachoma. The odds of trachoma were 70% higher in children with flies and nasal discharge on their faces. Other facial signs were not important. In large families, the odds of trachoma increased 4.8-fold if a sibling had trachoma and 6.8-fold if a sibling had trachoma and an unclean face. Health education strategies aimed at improving face washing need to target cleaning nasal discharge and keeping flies off children's faces.
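For readers unfamiliar with the odds-ratio language used in this abstract ("70% higher odds" means an odds ratio of 1.7), a quick worked example; the counts below are hypothetical and chosen only to make the arithmetic land on 1.7, they are not the study's data:

```python
def odds_ratio(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    """Odds ratio from a 2x2 table: (a/b) / (c/d)."""
    return (exposed_cases / exposed_controls) / (unexposed_cases / unexposed_controls)

# Hypothetical counts: 85 of 135 exposed children (flies/nasal discharge) had
# trachoma vs 100 of 200 unexposed children.
or_flies = odds_ratio(85, 50, 100, 100)   # (85/50) / (100/100) = 1.7
```

An odds ratio of 1.7 is what the abstract reports as "70% higher" odds of trachoma among children with flies and nasal discharge on their faces.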

Relevance: 20.00%

Abstract:

Dr. Rose-Marie Mariaca Fellmann, Psychoanalyst

Relevance: 20.00%

Abstract:

Face detection and recognition should be complemented by recognition of facial expression, for example in social robots, which must react to human emotions. Our framework is based on two multi-scale representations in cortical area V1: keypoints at the eyes, nose and mouth are grouped for face detection [1]; lines and edges provide information for face recognition [2].

Relevance: 20.00%

Abstract:

Empirical studies concerning face recognition suggest that faces may be stored in memory by a few canonical representations. Models of visual perception are based on image representations in cortical area V1 and beyond, which contain many cell layers for feature extraction. Simple, complex and end-stopped cells provide input for line, edge and keypoint detection. Detected events provide a rich, multi-scale object representation, and this representation can be stored in memory in order to identify objects. In this paper, the above context is applied to face recognition. The multi-scale line/edge representation is explored in conjunction with keypoint-based saliency maps for Focus-of-Attention. Recognition rates of up to 96% were achieved by combining frontal and 3/4 views, and recognition was quite robust against partial occlusions.
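As a loose illustration of the multi-scale line/edge idea behind this abstract (not the authors' V1 model: box smoothing stands in for a Gaussian scale-space, and a first difference for simple-cell responses), a 1-D sketch of how the same step edge appears at a fine and a coarse scale:

```python
def smooth(signal, radius):
    """Box smoothing as a stand-in for Gaussian filtering at one scale."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def edge_map(signal, radius):
    """Magnitude of the first difference of the smoothed signal."""
    s = smooth(signal, radius)
    return [abs(s[i + 1] - s[i]) for i in range(len(s) - 1)]

profile = [0.0] * 8 + [1.0] * 8          # a step edge, e.g. a face contour
fine = edge_map(profile, 1)              # fine scale: sharp, localized response
coarse = edge_map(profile, 3)            # coarse scale: broader, blurred response
```

The edge is localized precisely at the fine scale and smeared over more positions at the coarse scale; a multi-scale representation keeps both, which is what makes it robust to partial occlusion while still precise.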

Relevance: 20.00%

Abstract:

We live in an era of ever-growing technological advances in many areas. What a few years ago was considered practically impossible has, in many cases, already become reality. We all use technologies such as the Internet, smartphones and GPS in a natural way. This proliferation of technology has allowed both ordinary citizens and organizations to use it in increasingly creative and simple ways. In addition, new businesses and startups appear every day, demonstrating the dynamism this growth has brought to the industry. This dissertation focuses on two fast-growing areas, Facial Recognition and Business Intelligence (BI), and on their combination with the goal of creating a new module for an existing product. As these are two distinct areas, a study of each is presented first. Business Intelligence is aimed at organizations and deals with collecting information about a company's business, followed by subsequent analysis. Its main purpose is to support the decision-making process of analysts and managers in those organizations. Facial Recognition, in turn, is more visible in society. Having first appeared in science fiction, it has evolved over the years and is now implemented by a growing number of companies, even reaching the end consumer, for example in smartphones. Its applications are therefore quite diverse, ranging from security solutions to simple entertainment. For both areas, a study is presented based on a survey of publications by authors in the respective field.
From usage scenarios to more specific aspects of each area, this knowledge is conveyed to the reader, allowing a better understanding of the aspects related to the development of the solution. With the study of these two areas complete, the problem is then contextualized with respect to the company's field of operation and the possible approaches. The entire analysis and design process is also described, as well as the more technical development of the implemented solution. Finally, some examples of results obtained after the implementation of the solution are presented.

Relevance: 20.00%

Abstract:

Adults and children can discriminate various emotional expressions, although there is limited research on sensitivity to the differences between posed and genuine expressions. Adults have shown implicit sensitivity to the difference between posed and genuine happy smiles in that they evaluate T-shirts paired with genuine smiles more favorably than T-shirts paired with posed smiles or neutral expressions (Peace, Miles, & Johnston, 2006). Adults also have shown some explicit sensitivity to posed versus genuine expressions; they are more likely to say that a model is feeling happy if the expression is genuine than posed. Nonetheless they are duped by posed expressions about 50% of the time (Miles & Johnston, in press). There has been no published study to date in which researchers report whether children's evaluation of items varies with expression, and there is little research investigating children's sensitivity to the veracity of facial expressions. In the present study the same face stimuli were used as in two previous studies (Miles & Johnston, in press; Peace et al., 2006). The first question to be addressed was whether adults and 7-year-olds have a cognitive understanding of the differences between posed and genuine happiness (scenario task). They evaluated the feelings of children who expressed gratitude for a present that they did or did not want. Results indicated that all participants had a fundamental understanding of the difference between real and posed happiness. The second question involved adults' and children's implicit sensitivity to the veracity of posed and genuine smiles. Participants rated and ranked beach balls paired with faces showing posed smiles, genuine smiles, and neutral expressions. Adults ranked, but did not rate, beach balls paired with genuine smiles more favorably than beach balls paired with posed smiles.
Children did not demonstrate implicit sensitivity, as their ratings and rankings of beach balls did not vary with expression; they did not even rank beach balls paired with genuine expressions higher than beach balls paired with neutral expressions. In the explicit (show/feel) task, faces were presented without the beach balls and participants were first asked whether each face was showing happy and then whether each face was feeling happy. There were also two matching trials that presented two faces at once; participants had to indicate which person was actually feeling happy. In the show condition both adults and 7-year-olds were very accurate on genuine and neutral expressions but made some errors on posed smiles. Adults were fooled about 50% of the time by posed smiles in the feel condition (i.e., they were likely to say that a model posing happy was really feeling happy) and children were even less accurate, although they showed weak sensitivity to posed versus genuine expressions. Future research should test an older age group of children to determine when explicit sensitivity to posed versus genuine facial expressions becomes adult-like, and modify the ranking task to explore the influence of facial expressions on object evaluations.

Relevance: 20.00%

Abstract:

A large variety of social signals, such as facial expression and body language, are conveyed in everyday interactions and an accurate perception and interpretation of these social cues is necessary in order for reciprocal social interactions to take place successfully and efficiently. The present study was conducted to determine whether impairments in social functioning that are commonly observed following a closed head injury, could at least be partially attributable to disruption in the ability to appreciate social cues. More specifically, an attempt was made to determine whether face processing deficits following a closed head injury (CHI) coincide with changes in electrophysiological responsivity to the presentation of facial stimuli. A number of event-related potentials (ERPs) that have been linked specifically to various aspects of visual processing were examined. These included the N170, an index of structural encoding ability, the N400, an index of the ability to detect differences in serially presented stimuli, and the Late Positivity (LP), an index of the sensitivity to affective content in visually-presented stimuli. Electrophysiological responses were recorded while participants with and without a closed head injury were presented with pairs of faces delivered in a rapid sequence and asked to compare them on the basis of whether they matched with respect to identity or emotion. Other behavioural measures of identity and emotion recognition were also employed, along with a small battery of standard neuropsychological tests used to determine general levels of cognitive impairment. Participants in the CHI group were impaired in a number of cognitive domains that are commonly affected following a brain injury. 
These impairments included reduced efficiency in various aspects of encoding verbal information into memory, a generally slower rate of information processing, decreased sensitivity to smell, greater difficulty in the regulation of emotion, and limited awareness of this impairment. Impairments in face and emotion processing were clearly evident in the CHI group. However, despite these impairments in face processing, there were no significant differences between groups in the electrophysiological components examined. The only exception was a trend toward delayed N170 peak latencies in the CHI group (p = .09), which may reflect inefficient structural encoding processes. In addition, group differences were noted in the region of the N100, thought to reflect very early selective attention. It is possible, then, that facial expression and identity processing deficits following CHI are secondary to (or exacerbated by) an underlying disruption of very early attentional processes. Alternatively, the difficulty may arise in the later cognitive stages involved in interpreting the relevant visual information. However, the present data do not allow these alternatives to be distinguished. Nonetheless, it was clearly evident that individuals with CHI are more likely than controls to make face processing errors, particularly for the harder-to-discriminate negative emotions. Those working with individuals who have sustained a head injury should be alerted to this potential source of social monitoring difficulties, which is often observed as part of the sequelae following a CHI.