885 results for Facial asymmetry
Abstract:
Purpose - The aim of this study was to investigate whether the presence of a whole-face context during facial composite production facilitates construction of facial composite images. Design/Methodology - In Experiment 1, constructors viewed a celebrity face and then developed a facial composite using PRO-fit in one of two conditions: either the full face was visible while facial features were selected, or only the feature currently being selected was visible. The composites were named by different participants. We then replicated the study using a more forensically valid procedure: in Experiment 2, non-football fans viewed an image of a premiership footballer and, 24 hours later, constructed a composite of the face with a trained software operator. The resulting composites were named by football fans. Findings - In both studies we found that the presence of the facial context promoted more identifiable facial composite images. Research limitations/implications - Though this study uses current software in an unconventional way, this was necessary to avoid error arising from between-system differences. Practical implications - The results confirm that composite software should keep the whole-face context visible to witnesses throughout construction. Though some software systems do this, others still present features in isolation, and these findings show that such systems are unlikely to be optimal. Originality/value - This is the first study to demonstrate the importance of a full-face context for the construction of facial composite images. The results are valuable to police forces and to developers of composite software.
Abstract:
Facial features play an important role in expressing grammatical information in signed languages, including American Sign Language (ASL). Gestures such as raising or furrowing the eyebrows are key indicators of constructions such as yes-no questions. Periodic head movements (nods and shakes) are also an essential part of the expression of syntactic information, such as negation (associated with a side-to-side headshake). Identification of these facial gestures is therefore essential to sign language recognition. One problem with the detection of such grammatical indicators is occlusion recovery: if the signer's hand blocks his or her eyebrows during production of a sign, it becomes difficult to track the eyebrows. We have developed a system to detect such grammatical markers in ASL that recovers promptly from occlusion. Our system detects and tracks evolving templates of facial features, which are based on an anthropometric face model, and interprets the geometric relationships of these templates to identify grammatical markers. It was tested on a variety of ASL sentences signed by several Deaf native signers and detected facial gestures used to express grammatical information, such as raised and furrowed eyebrows as well as headshakes.
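The geometric-relationship idea described above can be illustrated with a short sketch. This is a minimal, hypothetical example, not the paper's implementation: the landmark inputs, the 1.15 brow-raise factor, and the 1-4 Hz headshake band are all illustrative assumptions. It flags a raised eyebrow when the brow-to-eye distance, normalized by an anthropometric reference (the inter-ocular distance), exceeds a neutral baseline, and it detects a headshake from periodic side-to-side motion of a nose-tip track.

```python
# Hypothetical sketch of landmark-geometry markers; thresholds are assumptions.
import numpy as np

def eyebrow_raised(brow_y, eye_y, left_eye_x, right_eye_x,
                   baseline_ratio, factor=1.15):
    """True if the normalized brow-eye distance exceeds the neutral baseline."""
    interocular = abs(right_eye_x - left_eye_x)   # anthropometric scale reference
    ratio = abs(eye_y - brow_y) / interocular     # scale-invariant brow height
    return ratio > factor * baseline_ratio

def headshake(nose_x, fps=30.0, min_hz=1.0, max_hz=4.0):
    """Detect periodic side-to-side head motion from a nose-tip x-coordinate track."""
    x = np.asarray(nose_x, dtype=float)
    x = x - x.mean()                              # remove DC offset before FFT
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    peak = freqs[spectrum.argmax()]               # dominant oscillation frequency
    return min_hz <= peak <= max_hz
```

A real system would of course track these landmarks over time and re-initialize the templates after occlusion; the sketch only shows the geometric tests applied once tracks are available.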
Abstract:
Confronting the rapidly increasing, worldwide reliance on biometric technologies to surveil, manage, and police human beings, my dissertation
Abstract:
Gemstone Team FACE
Abstract:
In this paper we demonstrate a simple and novel illumination model that can be used for illumination-invariant facial recognition. The model requires no prior knowledge of the illumination conditions and can be used when there is only a single training image per person. The proposed illumination model separates the effects of illumination over a small area of the face into two components: an additive component modelling the mean illumination, and a multiplicative component modelling the variance within the facial area. Illumination-invariant facial recognition is performed in a piecewise manner by splitting the face image into blocks, then normalizing the illumination within each block based on the new lighting model. The assumptions underlying this novel lighting model have been verified on the YaleB face database. We show that magnitude 2D Fourier features can be used as robust facial descriptors within the new lighting model. Using only a single training image per person, our new method achieves high (in most cases 100%) identification accuracy on the YaleB, extended YaleB and CMU-PIE face databases.
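The block-wise normalization lends itself to a compact sketch. The following is a minimal illustration of the two-component idea under stated assumptions (the 16x16 block size and the small epsilon are our choices, not values from the paper): each block's mean intensity stands in for the additive illumination term and its standard deviation for the multiplicative term, after which magnitude 2D Fourier coefficients are taken per block as descriptors.

```python
# Minimal sketch of per-block illumination normalization plus magnitude FFT
# features; block size and epsilon are illustrative assumptions.
import numpy as np

def normalize_blocks(face, block=16, eps=1e-6):
    """Zero-mean, unit-variance normalization applied independently per block."""
    h, w = face.shape
    out = np.empty_like(face, dtype=float)
    for i in range(0, h, block):
        for j in range(0, w, block):
            patch = face[i:i + block, j:j + block].astype(float)
            # subtract the additive (mean) term, divide by the multiplicative
            # (variance) term for this small facial area
            out[i:i + block, j:j + block] = (patch - patch.mean()) / (patch.std() + eps)
    return out

def fourier_features(face, block=16):
    """Concatenate magnitude 2D FFT coefficients of each normalized block."""
    norm = normalize_blocks(face, block)
    h, w = norm.shape
    feats = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            feats.append(np.abs(np.fft.fft2(norm[i:i + block, j:j + block])).ravel())
    return np.concatenate(feats)
```

Recognition would then compare these feature vectors (e.g. by nearest neighbour) against the single enrolled image per person.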
Abstract:
The objectives of this study were to (1) evaluate the validity of the Neonatal Facial Coding System (NFCS) for the assessment of postoperative pain and (2) explore whether the number of NFCS facial actions could be reduced for assessing postoperative pain.
Abstract:
Assessment of infant pain is a pressing concern, especially within the context of neonatal intensive care, where infants may be exposed to prolonged and repeated pain during lengthy hospitalization. In the present study, the feasibility of carrying out the complete Neonatal Facial Coding System (NFCS) in real time at bedside, specifically its reliability, construct validity and concurrent validity, was evaluated in a tertiary-level Neonatal Intensive Care Unit (NICU). Heel lance was used as a model of procedural pain and observed in n = 40 infants at 32 weeks gestational age. Infant sleep/wake state, NFCS facial activity and specific hand movements were coded during baseline, unwrap, swab, heel lance, squeezing and recovery events. Heart rate was recorded continuously and digitally sampled using a custom-designed computer system. Repeated-measures analysis of variance (ANOVA) showed statistically significant differences across events for facial activity (P < 0.0001) and heart rate (P < 0.0001). Planned comparisons showed facial activity unchanged during baseline, swab and unwrap; it then increased significantly during heel lance (P < 0.0001), increased further during squeezing (P < 0.003), and decreased during recovery (P < 0.0001). Systematic shifts in sleep/wake state were apparent. The rise in facial activity was consistent with the increase in heart rate, except that facial activity more closely paralleled the initiation of the invasive event; facial display was thus more specific to tissue damage than heart rate. Inter-observer reliability was high. Construct validity of the NFCS at bedside was demonstrated, as invasive procedures were distinguished from tactile ones. While bedside coding of behavior does not permit raters to be blind to events, mechanical recording of heart rate allowed for an independent source of concurrent validation for bedside application of the NFCS scale.
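For readers wanting to reproduce this style of analysis, here is a hedged sketch of a repeated-measures ANOVA over the six procedural events using statsmodels. The data below are randomly generated placeholders, not the study's measurements, and the event-level means are arbitrary assumptions.

```python
# Sketch of a repeated-measures ANOVA on per-infant facial activity scores;
# all numbers are synthetic stand-ins for illustration only.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
events = ["baseline", "unwrap", "swab", "lance", "squeeze", "recovery"]
# assumed event means: elevated during lance/squeeze, partial recovery after
rows = [{"infant": i, "event": e, "facial_activity": rng.normal(loc)}
        for i in range(40)
        for e, loc in zip(events, [0, 0, 0, 3, 4, 1])]
df = pd.DataFrame(rows)

# One within-subject factor (event), one score per infant per event.
result = AnovaRM(df, depvar="facial_activity", subject="infant",
                 within=["event"]).fit()
print(result)
```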
Abstract:
Age-related changes in the facial expression of pain during the first 18 months of life have important implications for our understanding of pain and pain assessment. We examined facial reactions video-recorded during routine immunization injections in 75 infants stratified into 2-, 4-, 6-, 12- and 18-month age groups. Two facial coding systems differing in the amount of detail extracted were applied to the records. In addition, parents completed a brief questionnaire that assessed child temperament and provided background information. Parents' efforts to soothe their children were also described. While there were consistencies in facial displays across the age groups, there were also differences on both measures of facial activity, indicating systematic variation in the nature and severity of distress. The least pain was expressed by the 4-month age group. Temperament was not related to the degree of pain expressed. Systematic variations in parental soothing behaviour indicated accommodation to the age of the child. Reasons for the differing patterns of facial activity are examined, with attention paid to the development of inhibitory mechanisms and the role of negative emotions such as anger and anxiety.
Abstract:
We explored the facial and cry characteristics that adults use when judging an infant's pain. Sixteen women viewed videotaped reactions of 36 newborns subjected to noninvasive thigh rubs and to vitamin K injections in the course of routine care, and rated their discomfort. The group mean inter-rater reliability was high. Detailed descriptions of the infants' facial reactions and cry sounds permitted specification of the determinants of the distress judgments. Several facial variables (a constellation of brow bulge, eyes squeezed shut and deepened nasolabial fold, together with taut tongue) accounted for 49% of the variance in ratings of affective discomfort after controlling for ratings of discomfort during a noninvasive event. In a separate analysis excluding facial activity, several cry variables (formant frequency, latency to cry) also accounted for variance (38%) in the ratings. When the facial and cry variables were considered together, the cry variables added little to the prediction of ratings in comparison with the facial variables. Cry would seem to command attention, but facial activity, rather than cry, accounts for the major variations in adults' judgments of neonatal pain.
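The incremental-variance logic here (facial variables explaining variance beyond a baseline control) can be sketched as a hierarchical regression. All data below are synthetic stand-ins and the coefficients are arbitrary; only the modelling steps mirror the description.

```python
# Sketch of incremental R-squared: baseline-only model vs. baseline + facial
# variables; data and effect sizes are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 36                                     # 36 newborns, as in the study
baseline_rating = rng.normal(size=n)       # discomfort during noninvasive event
facial = rng.normal(size=(n, 4))           # brow bulge, eye squeeze,
                                           # nasolabial fold, taut tongue
rating = (0.3 * baseline_rating
          + facial @ np.array([0.8, 0.6, 0.5, 0.4])
          + rng.normal(size=n))

m0 = sm.OLS(rating, sm.add_constant(baseline_rating)).fit()
X1 = sm.add_constant(np.column_stack([baseline_rating, facial]))
m1 = sm.OLS(rating, X1).fit()
print(f"R2, baseline only: {m0.rsquared:.2f}")
print(f"R2, with facial variables: {m1.rsquared:.2f} "
      f"(increment: {m1.rsquared - m0.rsquared:.2f})")
```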