885 results for "Asimetría facial"
Abstract:
The case presented poses a situation of functional disorder (limitation of joint function) in opening and laterality, associated with intense orofacial pain. Unilateral condylar hypoplasia manifests as skeletal asymmetry with marked displacement of the chin toward the right side, and an anterior open bite. Basic concepts of condylar growth and condylar hypoplasia are reviewed first.
Abstract:
Condylar hyperplasia is a rare acquired developmental anomaly characterized by excessive, progressive growth affecting the condylar head and neck and the body and ramus of the mandible, producing marked facial asymmetry. In this article we present the case of a 22-year-old male patient complaining of painful symptoms in the temporomandibular joint region and severe facial asymmetry. A combined orthodontic-surgical treatment was instituted, consisting of orthognathic surgery and high condylectomy. After six years of post-surgical follow-up, the patient is doing well, with no recurrence of the facial asymmetry or of the condylar hyperplasia.
Abstract:
Facial asymmetry is a common human characteristic and can occur on many levels; it can originate from genetic factors or be caused by trauma, crossbite and/or muscular dysfunction. The aim of this study was to evaluate the relation between posterior crossbite, midline deviation and facial asymmetry. For this study, 70 children aged 3 to 10 years were examined and photographed. Using Microsoft Office PowerPoint 2007, horizontal lines and one vertical line on the midline were drawn to subjectively analyze facial discrepancies. Regarding overjet, the majority of children (78.6%) showed a normal relation, followed by increased overjet (17.1%) and anterior crossbite (4.3%). Regarding overbite, the majority of children (60%) showed a normal relation, 27.1% anterior open bite (negative overbite), and 12.9% increased overbite. Posterior crossbite was present in 27.1% of children; among them, 68.4% showed unilateral crossbite on the right side, 21.1% bilateral crossbite and 10.5% unilateral crossbite on the left side. According to Fisher's exact test, the association between posterior crossbite and facial asymmetry was not statistically significant (p=0.0970). Midline deviation, however, was significantly associated with posterior crossbite (p=0.0109) and with facial asymmetry (p=0.0310). In conclusion, there was an association between posterior crossbite and midline deviation, but no association between posterior crossbite and facial asymmetry.
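The Fisher's exact test used in the study can be reproduced with SciPy; the contingency counts below are purely illustrative placeholders, not the study's raw data:

```python
import numpy as np
from scipy.stats import fisher_exact

# Hypothetical 2x2 contingency table (illustrative counts only, NOT the
# study's raw data): rows = posterior crossbite present / absent,
# columns = midline deviation present / absent.
table = np.array([[12, 7],
                  [14, 37]])

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```

Fisher's exact test is preferred over chi-square here because several expected cell counts in a small pediatric sample can fall below 5.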
Abstract:
Objective: To present the clinical case of a patient with Goldenhar syndrome in which the Materialise CMF® software (Leuven, Belgium) was used to plan the correction of her facial asymmetry. Clinical case: A 27-year-old female patient with Goldenhar syndrome underwent orthognathic surgery to correct her facial asymmetry. The 3D software Materialise CMF® (Leuven, Belgium) was used for the diagnosis, planning and surgical simulation of the case. Conclusion: The traditional planning method is not precise, and these imprecisions accumulate errors throughout the process. This becomes highly relevant in cases of asymmetric deformities, where two-dimensional methods are insufficient, leaving a wide margin for intuition, for the surgeon's skill and, therefore, for error. The 3D planning method facilitates study, planning and the transfer of the plan to surgery, minimizing errors and achieving maximum precision.
Abstract:
Faces are complex patterns that often differ in only subtle ways. Face recognition algorithms have difficulty coping with differences in lighting, cameras, pose, expression, etc. We propose a novel approach for facial recognition based on a new feature extraction method called fractal image-set encoding. This feature extraction method is a specialized fractal image coding technique that makes fractal codes more suitable for object and face recognition. A fractal code of a gray-scale image can be divided into two parts – geometrical parameters and luminance parameters. We show that fractal codes for an image are not unique and that we can change the set of fractal parameters without significant change in the quality of the reconstructed image. Fractal image-set coding keeps the geometrical parameters the same for all images in the database. Differences between images are captured in the non-geometrical, or luminance, parameters – which are faster to compute. Results on a subset of the XM2VTS database are presented.
Abstract:
This paper describes a novel framework for facial expression recognition from still images by selecting, optimizing and fusing 'salient' Gabor feature layers to recognize six universal facial expressions using the K-nearest-neighbor classifier. Recognition comparisons with the all-layer approach on the JAFFE and Cohn-Kanade (CK) databases confirm that using 'salient' Gabor feature layers with optimized sizes can achieve better recognition performance and dramatically reduce computational time. Moreover, comparisons with state-of-the-art performances demonstrate the effectiveness of our approach.
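A minimal sketch of the Gabor-feature-plus-KNN pipeline the abstract describes, using synthetic stripe images in place of the JAFFE/CK face data (kernel sizes, wavelengths and the two-orientation "layers" are illustrative assumptions, not the paper's settings):

```python
import numpy as np
from scipy.ndimage import convolve
from sklearn.neighbors import KNeighborsClassifier

def gabor_kernel(theta, lam=4.0, sigma=2.0, size=9):
    """Real part of a Gabor kernel at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(img, thetas=(0.0, np.pi / 2)):
    """Mean absolute response per orientation 'layer'."""
    return np.array([np.abs(convolve(img, gabor_kernel(t))).mean() for t in thetas])

rng = np.random.default_rng(0)
def stripes(vertical):
    """Toy stand-in for a face image: noisy vertical or horizontal stripes."""
    img = np.zeros((16, 16))
    if vertical:
        img[:, ::4] = 1.0
    else:
        img[::4, :] = 1.0
    return img + rng.normal(0, 0.05, img.shape)

X = np.array([gabor_features(stripes(v)) for v in [True, False] * 10])
y = np.array([1, 0] * 10)
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
```

The 'salient'-layer idea in the paper amounts to keeping only the orientations/scales whose responses best separate the classes, which shrinks the feature vector before the KNN step.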
Abstract:
Acoustically, vehicles are extremely noisy environments, and as a consequence audio-only in-car voice recognition systems perform very poorly. Because the visual modality is immune to acoustic noise, using the visual lip information from the driver is a viable strategy for circumventing this problem. However, implementing such an approach requires a system that can accurately locate and track the driver's face and facial features in real time. In this paper we present such an approach using the Viola-Jones algorithm. Our results show that the Viola-Jones approach is a suitable method for locating and tracking the driver's lips despite variability in illumination and head pose.
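The Viola-Jones detector rests on integral images and cheap Haar-like rectangle features evaluated in constant time; a minimal sketch of those building blocks (not the full boosted cascade):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column pad, so rectangle
    sums need no boundary checks."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) from the integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

def haar_two_rect(ii, r, c, h, w):
    """Two-rectangle Haar-like feature: left half minus right half,
    the kind of cheap contrast feature the cascade thresholds."""
    left = rect_sum(ii, r, c, r + h, c + w // 2)
    right = rect_sum(ii, r, c + w // 2, r + h, c + w)
    return left - right

# Toy "image" whose left half is bright: the feature fires strongly.
img = np.zeros((4, 4))
img[:, :2] = 1.0
ii = integral_image(img)
print(haar_two_rect(ii, 0, 0, 4, 4))  # 8.0 - 0.0 = 8.0
```

In practice one would use an off-the-shelf implementation such as OpenCV's `cv2.CascadeClassifier` with a pretrained face/mouth cascade rather than hand-rolling the cascade itself.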
Abstract:
Gabor representations have been widely used in facial analysis (face recognition, face detection and facial expression detection) due to their biological relevance and computational properties. Two popular Gabor representations in the literature are: 1) Log-Gabor filters and 2) Gabor energy filters. Even though these representations are somewhat similar, they have distinct differences: the Log-Gabor filters mimic the simple cells in the visual cortex, while the Gabor energy filters emulate the complex cells, which causes subtle differences in the responses. In this paper, we analyze the difference between these two Gabor representations and quantify it on the task of facial action unit (AU) detection. In experiments conducted on the Cohn-Kanade dataset, we report an average area under the ROC curve (A′) of 92.60% across 17 AUs for the Gabor energy filters, while the Log-Gabor representation achieved an average A′ of 96.11%. This result suggests that the small spatial differences the Log-Gabor filters pick up are more useful for AU detection than the differences in contours and edges that the Gabor energy filters extract.
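The radial transfer function of a Log-Gabor filter can be sketched directly; it is defined in the frequency domain, is Gaussian on a log-frequency axis, and has no DC component (the centre frequency and bandwidth ratio below are typical illustrative values, not the paper's):

```python
import numpy as np

def log_gabor_radial(f, f0=0.1, sigma_ratio=0.55):
    """Radial Log-Gabor transfer function:
    G(f) = exp(-(ln(f/f0))^2 / (2 * ln(sigma_ratio)^2)),
    with G(0) defined as 0 (no DC response)."""
    out = np.zeros_like(f, dtype=float)
    nz = f > 0
    out[nz] = np.exp(-(np.log(f[nz] / f0) ** 2)
                     / (2 * np.log(sigma_ratio) ** 2))
    return out

freqs = np.linspace(0, 0.5, 501)   # normalised frequency axis
g = log_gabor_radial(freqs)
print(freqs[np.argmax(g)])          # peaks at the centre frequency f0
```

The zero DC response is the practical advantage over the classical Gabor filter, whose even-symmetric component always leaks some DC.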
Abstract:
When classifying a signal, ideally we want our classifier to trigger a large response when it encounters a positive example and have little to no response for all other examples. Unfortunately, in practice this does not occur: responses fluctuate, often causing false alarms. There is a myriad of reasons why this is the case, most notably the failure to incorporate the dynamics of the signal into the classification. In facial expression recognition, this has been highlighted as one major research question. In this paper we present a novel technique which incorporates the dynamics of the signal, producing a strong response when the peak expression is found while suppressing all other responses as much as possible. We conducted preliminary experiments on the extended Cohn-Kanade (CK+) database which show its benefits. The ability to automatically and accurately recognize drivers' facial expressions is highly relevant to the automobile: for example, the early recognition of "surprise" could indicate that an accident is about to occur, and various safeguards could immediately be deployed to avoid or minimize injury and damage.
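A much-simplified stand-in for the idea of keeping only the peak-expression response: smooth the per-frame classifier scores, then suppress everything except the maximum. This is an illustrative sketch of the goal, not the paper's actual technique:

```python
import numpy as np

def peak_response(scores, window=5):
    """Smooth noisy per-frame scores with a moving average, then keep
    only the peak-frame response and zero out all other frames."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(scores, kernel, mode="same")
    out = np.zeros_like(smoothed)
    peak = int(np.argmax(smoothed))
    out[peak] = smoothed[peak]
    return out

# Synthetic score track: ramps up to a peak expression near frame 35,
# with small oscillatory noise that could otherwise cause false alarms.
t = np.arange(60)
scores = np.exp(-0.5 * ((t - 35) / 6.0) ** 2) + 0.05 * np.sin(t)
resp = peak_response(scores)
print(int(np.argmax(resp)))
```

The smoothing step is what "incorporates dynamics" in this toy version: an isolated one-frame spike is averaged away and cannot become the peak.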
Abstract:
In automatic facial expression detection, very accurate registration is desired, which can be achieved via a deformable model approach where a dense mesh of 60-70 points on the face is used, such as an active appearance model (AAM). However, for applications where manually labeling frames is prohibitive, AAMs do not work well, as they do not generalize to unseen subjects. As such, a coarser approach is taken for person-independent facial expression detection, where just a few key features (such as the face and eyes) are tracked using a Viola-Jones type approach. The tracked image is normally post-processed to encode shift and illumination invariance using a linear bank of filters. Recently, it was shown that this preprocessing step is of no benefit once close-to-ideal registration has been obtained. In this paper, we present a system based on the Constrained Local Model (CLM), a generic, person-independent face alignment algorithm which achieves high accuracy. We compare these results against LBP feature extraction on the CK+ and GEMEP datasets.
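The LBP features used as the comparison baseline can be sketched in a few lines of NumPy (the basic 3x3 operator with a raw 256-bin histogram; no uniform-pattern mapping or block subdivision, which real pipelines usually add):

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 Local Binary Pattern: encode each interior pixel by
    thresholding its 8 neighbours against the centre value."""
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= centre).astype(np.uint8) << bit
    return codes

def lbp_histogram(img, bins=256):
    """Normalised histogram of LBP codes, the usual texture descriptor."""
    hist, _ = np.histogram(lbp_image(img), bins=bins, range=(0, 256))
    return hist / hist.sum()
```

Because each code depends only on the sign of local intensity differences, the descriptor is invariant to monotonic illumination changes, which is why it pairs well with coarse Viola-Jones style registration.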
Abstract:
Occlusion is a major challenge for facial expression recognition (FER) in real-world situations. Previous FER efforts to address occlusion suffer from loss of appearance features and are largely limited to a few occlusion types and a single testing strategy. This paper presents a robust approach for FER in occluded images that addresses these issues. A set of Gabor-based templates is extracted from images in the gallery using a Monte Carlo algorithm. These templates are converted into distance features using template matching, and the resulting feature vectors are robust to occlusion. Occluded eye and mouth regions and randomly placed occlusion patches are used for testing. Two testing strategies analyze the effects of these occlusions on overall recognition performance as well as on each facial expression. Experimental results on the Cohn-Kanade database confirm the high robustness of our approach and provide useful insights into the effects of occlusion on FER. Performance is also compared with previous approaches.
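Template matching via zero-mean normalised cross-correlation is the generic mechanism behind converting templates into match/distance features; the brute-force sketch below illustrates that step only (not the paper's Monte Carlo template selection or Gabor preprocessing):

```python
import numpy as np

def match_score(img, tpl):
    """Best zero-mean normalised cross-correlation of tpl over img
    (1.0 = perfect match). A distance-like feature can then be
    derived per template, e.g. 1 - score."""
    th, tw = tpl.shape
    tz = (tpl - tpl.mean()) / (tpl.std() + 1e-9)
    best = -np.inf
    for r in range(img.shape[0] - th + 1):
        for c in range(img.shape[1] - tw + 1):
            win = img[r:r + th, c:c + tw]
            wz = (win - win.mean()) / (win.std() + 1e-9)
            best = max(best, float((tz * wz).mean()))
    return best

rng = np.random.default_rng(0)
tpl = rng.normal(size=(4, 4))
scene = np.zeros((10, 10))
scene[3:7, 2:6] = tpl           # embed the template in a larger "face"
print(round(match_score(scene, tpl), 3))  # → 1.0
```

The occlusion robustness in the paper comes from using many small templates: an occluder ruins only the distance features of templates covering that region, leaving the rest of the feature vector intact.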
Abstract:
In a clinical setting, pain is reported either through patient self-report or via an observer. Such measures are problematic as they are: 1) subjective, and 2) give no specific timing information. Coding pain as a series of facial action units (AUs) can avoid these issues, as it provides an objective measure of pain on a frame-by-frame basis. Using video data from patients with shoulder injuries, in this paper we describe an active appearance model (AAM)-based system that can automatically detect the frames of video in which a patient is in pain. This pain data set highlights the many challenges associated with spontaneous emotion detection, particularly the expression and head movement caused by the patient's reaction to pain. We show that the AAM can deal with these movements and achieves significant improvements in both AU and pain detection performance compared to current state-of-the-art approaches, which utilize similarity-normalized appearance features only.
Abstract:
Eigen-based techniques and other monolithic approaches to face recognition have long been a cornerstone in the face recognition community due to the high dimensionality of face images. Eigen-face techniques provide minimal reconstruction error and limit high-frequency content while linear discriminant-based techniques (fisher-faces) allow the construction of subspaces which preserve discriminatory information. This paper presents a frequency decomposition approach for improved face recognition performance utilising three well-known techniques: Wavelets; Gabor / Log-Gabor; and the Discrete Cosine Transform. Experimentation illustrates that frequency domain partitioning prior to dimensionality reduction increases the information available for classification and greatly increases face recognition performance for both eigen-face and fisher-face approaches.
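Frequency-domain partitioning before dimensionality reduction can be sketched with a 2-D DCT split into a low- and a high-frequency band followed by per-band PCA (the band boundary, image sizes and component counts below are illustrative assumptions, not the paper's configuration):

```python
import numpy as np
from scipy.fft import dctn
from sklearn.decomposition import PCA

def band_features(images, split=4, n_components=5):
    """Partition each image's 2-D DCT into a low-frequency band (the
    top-left split x split coefficients) and a high-frequency band
    (everything else), run PCA per band, and concatenate."""
    coeffs = np.array([dctn(im, norm="ortho") for im in images])
    mask = np.zeros(images[0].shape, dtype=bool)
    mask[:split, :split] = True
    feats = []
    for band in (coeffs[:, mask], coeffs[:, ~mask]):
        k = min(n_components, band.shape[1], len(images) - 1)
        feats.append(PCA(n_components=k).fit_transform(band))
    return np.hstack(feats)

rng = np.random.default_rng(1)
faces = rng.normal(size=(12, 8, 8))  # random stand-in "face" images
X = band_features(faces)
print(X.shape)  # → (12, 10)
```

The point of the split is that PCA on the whole image lets high-variance low-frequency content dominate; fitting each band separately preserves discriminative high-frequency detail that would otherwise be discarded.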
Abstract:
Facial expression is an important channel for human communication and can be applied in many real applications. One critical step for facial expression recognition (FER) is to accurately extract emotional features. Current approaches to FER in static images have not fully considered and utilized the features of facial element and muscle movements, which represent the static and dynamic, as well as geometric and appearance, characteristics of facial expressions. This paper proposes an approach that addresses this limitation using 'salient' distance features, which are obtained by extracting patch-based 3D Gabor features, selecting the 'salient' patches, and performing patch matching operations. The experimental results demonstrate a high correct recognition rate (CRR), significant performance improvements due to the consideration of facial element and muscle movements, promising results under face registration errors, and fast processing time. The comparison with state-of-the-art performance confirms that the proposed approach achieves the highest CRR on the JAFFE database and is among the top performers on the Cohn-Kanade (CK) database.