991 results for facial images
Abstract:
OBJECTIVES To determine the relationship between nasolabial symmetry and esthetics in subjects with orofacial clefts. MATERIAL AND METHODS Eighty-four subjects (mean age 10 years, standard deviation 1.5) with various types of nonsyndromic clefts were included: 11 had unilateral cleft lip (UCL); 30 had unilateral cleft lip and alveolus (UCLA); and 43 had unilateral cleft lip, alveolus, and palate (UCLAP). A 3D stereophotogrammetric image of the face was taken for each subject. Symmetry and esthetics were evaluated on cropped 3D facial images. The degree of asymmetry of the nasolabial area was calculated based on all 3D data points using a surface registration algorithm. Esthetic ratings of various elements of nasal morphology were performed by eight lay raters on a 100 mm visual analog scale. Statistical analysis included ANOVA tests and regression models. RESULTS Nasolabial asymmetry increased with growing severity of the cleft (p = 0.029). Overall, nasolabial appearance was affected by nasolabial asymmetry; subjects with more nasolabial asymmetry were judged as having a less esthetically pleasing nasolabial area (p < 0.001). However, the relationship between nasolabial symmetry and esthetics was relatively weak in subjects with UCLAP, in whom only vermilion border esthetics was associated with asymmetry. CONCLUSIONS Nasolabial symmetry assessed with 3D facial imaging can be used as an objective measure of treatment outcome in subjects with less severe cleft deformity. In subjects with more severe cleft types, other factors may play a decisive role. CLINICAL SIGNIFICANCE Assessment of nasolabial symmetry is a useful measure of treatment success in less severe cleft types.
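The asymmetry measure described above is computed from all 3D data points with a surface registration algorithm; as a much-reduced illustration of the underlying idea, the sketch below scores asymmetry as the mean distance between a 3D landmark cloud and its mirror image, assuming (unlike the study, which registers the surfaces) that the midsagittal plane is already aligned at x = 0. The function name is hypothetical.

```python
import numpy as np

def nasolabial_asymmetry(points):
    """Mean nearest-neighbour distance between a 3D point cloud and its
    mirror image across an assumed midsagittal plane at x = 0.

    points: (n, 3) array of landmark coordinates; 0 means perfect symmetry."""
    mirrored = points * np.array([-1.0, 1.0, 1.0])  # reflect the x coordinate
    # For each original point, distance to the closest mirrored point.
    d = np.linalg.norm(points[:, None, :] - mirrored[None, :, :], axis=2)
    return d.min(axis=1).mean()
```

A perfectly symmetric cloud scores 0; any deviation from mirror symmetry raises the score.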
Abstract:
The relationship between facial shape and attractiveness has been extensively studied, yet few studies have investigated the underlying biological factors of an attractive face. Many researchers have proposed a link between female attractiveness and sex hormones, but there is little empirical evidence in support of this assumption. In the present study we investigated the relationship between circulating sex hormones and attractiveness. We created prototypes by separately averaging photographs of 15 women with high and low levels of testosterone, estradiol, and testosterone-to-estradiol ratio, respectively. An independent set of facial images was then shape transformed toward these prototypes. We paired the resulting images in such a way that one face depicted a female with a high hormone level and the other a low hormone level. Fifty participants were asked to choose the more attractive face of each pair. We found that a low testosterone-to-estradiol ratio and low testosterone were positively associated with female facial attractiveness. There was no preference for faces with high estradiol levels. In an additional experiment with 36 participants we confirmed that a low testosterone-to-estradiol ratio plays a larger role than low testosterone alone. These results provide empirical evidence that an attractive female face is shaped by interacting effects of testosterone and estradiol.
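The prototyping-and-transform step can be pictured, in a much-reduced form, on landmark coordinates alone. The study averaged and warped whole photographs, so the helper names below are hypothetical and this sketch covers only the geometric part of the procedure.

```python
import numpy as np

def prototype(shapes):
    """Average landmark configuration of one group.

    shapes: (n_faces, n_landmarks, 2) array of landmark coordinates."""
    return np.mean(shapes, axis=0)

def shape_transform(face, proto_low, proto_high, strength=1.0):
    """Shift a face's landmarks along the low-to-high hormone prototype axis."""
    return face + strength * (proto_high - proto_low)
```

With `strength` between 0 and 1 the face is moved part of the way toward the high-level prototype, which is how paired stimuli differing only along the hormone axis can be produced.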
Abstract:
OBJECTIVES Assess facial asymmetry in subjects with unilateral cleft lip (UCL), unilateral cleft lip and alveolus (UCLA), and unilateral cleft lip, alveolus, and palate (UCLP), and to evaluate which area of the face is most asymmetrical. METHODS Standardized three-dimensional facial images of 58 patients (9 UCL, 21 UCLA, and 28 UCLP; age range: 8.6-12.3 years) and 121 controls (age range 9-12 years) were mirrored and distance maps were created. Absolute mean asymmetry values were calculated for the whole face, cheek, nose, lips, and chin. One-way analysis of variance, Kruskal-Wallis, and t-test were used to assess the differences between clefts and controls for the whole face and separate areas. RESULTS Clefts and controls differ significantly for the whole face as well as in all areas. Asymmetry is distributed differently over the face for all groups. In UCLA, the nose was significantly more asymmetric compared with chin and cheek (P = 0.038 and 0.024, respectively). For UCL, significant differences in asymmetry between nose and chin and chin and cheek were present (P = 0.038 and 0.046, respectively). In the control group, the chin was the most asymmetric area compared to lip and nose (P = 0.002 and P = 0.001, respectively) followed by the nose (P = 0.004). In UCLP, the nose, followed by the lips, was the most asymmetric area compared to chin, cheek (P < 0.001 and P = 0.016, respectively). LIMITATIONS Despite division into regional areas, the method may still exclude or underrate smaller local areas in the face, which are better visualized in a facial colour coded distance map than quantified by distance numbers. The UCL subsample is small. CONCLUSION Each type of cleft has its own distinct asymmetry pattern. Children with unilateral clefts show more facial asymmetry than children without clefts.
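A minimal sketch of the distance-map summary step described above: given per-vertex distances between a face and its mirrored copy, the absolute mean asymmetry is computed per region. The region masks and names are assumptions for illustration; the study derives the maps from registered 3D facial images.

```python
import numpy as np

def regional_asymmetry(distance_map, region_masks):
    """Absolute mean asymmetry per facial region.

    distance_map: signed per-vertex distances between the original and the
    mirrored face; region_masks: dict region name -> boolean vertex mask."""
    return {name: float(np.abs(distance_map[mask]).mean())
            for name, mask in region_masks.items()}
```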
Abstract:
This project addresses one of the most challenging areas of artificial intelligence: facial recognition. Something as simple for a person as recognizing a familiar face translates into complex algorithms and thousands of data points processed in a matter of seconds. The project begins with a survey of the state of the art in facial recognition techniques, from the most widely used and tested, such as PCA and LDA, to experimental techniques that use thermal images instead of classic visible-light photographs. An application was then implemented in C++ that recognizes people stored in its database by reading images directly from a webcam. The application was built with OpenCV, one of the most widely used libraries for image processing and computer vision, and Visual Studio 2010, which offers a free student edition, was chosen as the IDE. PCA was selected as the recognition technique because it is a fundamental method in facial recognition and serves as the basis for much more complex solutions. The mathematical foundations of the technique were studied to understand how it processes information and which data it relies on for recognition. Finally, a testing algorithm was implemented to measure the reliability of the application against several databases of facial images, revealing the strengths and weaknesses of PCA.
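The PCA (eigenface) pipeline the project implements in C++/OpenCV can be sketched in a few lines. This is a generic illustration in numpy, not the project's code, using nearest-neighbour matching in eigenface space.

```python
import numpy as np

def train_pca(faces, k):
    """faces: (n_samples, n_pixels) flattened, grey-level training images."""
    mean = faces.mean(axis=0)
    # Rows of Vt are the principal axes ("eigenfaces") of the centred data.
    _, _, Vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, Vt[:k]

def project(face, mean, eigenfaces):
    """Coordinates of a face in the k-dimensional eigenface space."""
    return eigenfaces @ (face - mean)

def recognize(face, mean, eigenfaces, gallery, labels):
    """Label of the gallery image closest to `face` in eigenface space."""
    q = project(face, mean, eigenfaces)
    dists = [np.linalg.norm(q - project(g, mean, eigenfaces)) for g in gallery]
    return labels[int(np.argmin(dists))]
```

The testing step mentioned in the abstract amounts to running `recognize` over a held-out set and counting correct labels.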
Abstract:
This dissertation develops an image processing framework with unique feature extraction and similarity measurements for human face recognition in the thermal mid-wave infrared portion of the electromagnetic spectrum. The goal of this research is to design specialized algorithms that extract facial vasculature information, create a thermal facial signature, and identify the individual. The objective is to use such findings in support of a biometric system for human identification with a high degree of accuracy and reliability, owing to the minimal risk of alteration of the intrinsic physiological characteristics seen through thermal infrared imaging. The proposed thermal facial signature recognition is fully integrated and consolidates the main and critical steps of feature extraction, registration, matching through similarity measures, and validation through testing the algorithm on a database, referred to as C-X1, provided by the Computer Vision Research Laboratory at the University of Notre Dame. Feature extraction was accomplished by first registering the infrared images to a reference image using the Functional MRI of the Brain (FMRIB) Linear Image Registration Tool (FLIRT), modified to suit thermal infrared images. This was followed by segmentation of the facial region using an advanced localized contouring algorithm applied to anisotropically diffused thermal images. Thermal feature extraction from facial images was attained by performing morphological operations, such as opening and top-hat segmentation, to yield thermal signatures for each subject. Four thermal images taken over a period of six months were used to generate thermal signatures and a thermal template for each subject; the thermal template contains only the most prevalent and consistent features.
Finally, a similarity measure technique was used to match signatures to templates, and Principal Component Analysis (PCA) was used to validate the results of the matching process. Thirteen subjects were used to test the developed technique on an in-house thermal imaging system. Matching using a Euclidean-based similarity measure showed 88% accuracy for skeletonized signatures and templates, and 90% accuracy for anisotropically diffused signatures and templates. We also employed a Manhattan-based similarity measure and obtained an accuracy of 90.39% for skeletonized and diffused templates and signatures. An average 18.9% improvement in the similarity measure was obtained when using diffused templates. The Euclidean- and Manhattan-based similarity measures were also applied to skeletonized signatures and templates of 25 subjects in the C-X1 database. The highly accurate results obtained in the matching process, along with the generalized design process, clearly demonstrate the ability of the thermal infrared system to be used with other thermal-imaging-based systems and related databases. A novel user-initialized registration of thermal facial images has been successfully implemented. Furthermore, the novel approach of developing a thermal signature template from four images taken at various times ensured that unforeseen changes in the vasculature did not affect the biometric matching process, as it relied on consistent thermal features.
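The matching step can be sketched generically: given a query signature and one template per subject (as feature vectors), pick the nearest template under each of the two distances named above. The names are hypothetical; the dissertation's actual measures operate on skeletonized or anisotropically diffused signature images.

```python
import numpy as np

def match(signature, templates):
    """Best-matching template label under Euclidean and Manhattan distance.

    templates: dict subject label -> template feature vector."""
    best = {}
    for metric, dist in (("euclidean", lambda a, b: float(np.linalg.norm(a - b))),
                         ("manhattan", lambda a, b: float(np.abs(a - b).sum()))):
        scores = {label: dist(signature, t) for label, t in templates.items()}
        best[metric] = min(scores, key=scores.get)
    return best
```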
Abstract:
Introduction: Prediction of soft tissue changes following orthognathic surgery has been frequently attempted over the past decades. It has gradually progressed from the classic "cut and paste" of photographs to computer-assisted 2D surgical prediction planning; finally, comprehensive 3D surgical planning was introduced to help surgeons and patients decide on the magnitude and direction of surgical movements, as well as the type of surgery to be considered for the correction of facial dysmorphology. A wealth of experience has been gained, and a large body of literature has been published, which has augmented our knowledge of facial soft tissue behaviour and helped improve the ability to closely simulate facial changes following orthognathic surgery. This was particularly noticeable following the introduction of three-dimensional imaging into medical research and clinical applications. Several approaches have been considered to mathematically predict soft tissue changes in three dimensions following orthognathic surgery; the most common are the finite element model and the mass tensor model. These were developed into software packages which are currently used in clinical practice. In general, these methods produce an acceptable level of prediction accuracy of soft tissue changes following orthognathic surgery. Studies, however, have shown limited prediction accuracy at specific regions of the face, in particular the areas around the lips. Aims: The aim of this project is to conduct a comprehensive assessment of hard and soft tissue changes following orthognathic surgery and to introduce a new method for the prediction of facial soft tissue changes. Methodology: The study was carried out on the pre- and post-operative CBCT images of 100 patients who received orthognathic surgery treatment at Glasgow Dental Hospital and School, Glasgow, UK.
Three groups of patients were included in the analysis: patients who underwent Le Fort I maxillary advancement surgery, bilateral sagittal split mandibular advancement surgery, or bimaxillary advancement surgery. A generic facial mesh was used to standardise the information obtained from each patient's facial image, and principal component analysis (PCA) was applied to interpolate the correlations between the skeletal surgical displacement and the resultant soft tissue changes. The identified relationship between hard tissue and soft tissue was then applied to a new set of preoperative 3D facial images, and the predicted results were compared to the actual surgical changes measured from the post-operative 3D facial images. A set of validation studies was conducted:
• Comparison between voxel-based registration and surface registration for analysing changes following orthognathic surgery. The results showed no statistically significant difference between the two methods. Voxel-based registration, however, proved more reliable, as it preserved the link between the soft tissue and the skeletal structures of the face during the image registration process. Accordingly, voxel-based registration was chosen for superimposition of the pre- and post-operative images. The result of this study was published in a refereed journal.
• Direct DICOM slice landmarking, a novel technique to quantify the direction and magnitude of skeletal surgical movements. This method represents a new approach to quantifying maxillary and mandibular surgical displacement in three dimensions: the distances of corresponding landmarks, digitised directly on DICOM image slices, are measured in relation to three-dimensional reference planes. The accuracy of the measurements was assessed against a set of "gold standard" measurements extracted from simulated model surgery and was confirmed to be within 0.34 mm, so the method was applied in this study. The results of this validation were published in a peer-refereed journal.
• The use of a generic mesh to assess soft tissue changes using stereophotogrammetry. The generic facial mesh played a major role in the dense correspondence analysis of the soft tissue. The conformed generic mesh represented the geometrical information of the individual facial mesh onto which it was conformed (elastically deformed); the accuracy of generic mesh conformation is therefore essential to guarantee an accurate replica of the individual facial characteristics. The results showed an acceptable overall mean conformation error of 1 mm. The results of this study were accepted for publication in a peer-refereed scientific journal.
Skeletal tissue analysis was performed using the validated direct DICOM slice landmarking method, while soft tissue analysis was performed using dense correspondence analysis. The soft tissue analysis was novel and produced a comprehensive description of facial changes in response to orthognathic surgery; the results were accepted for publication in a refereed scientific journal. The main soft tissue changes associated with Le Fort I surgery were advancement of the midface region combined with widening of the paranasal region, upper lip, and nostrils; minor changes were noticed at the tip of the nose and the oral commissures. The main soft tissue changes associated with mandibular advancement surgery were advancement and downward displacement of the chin and lower lip regions, limited widening of the lower lip, and slight eversion of the lower lip vermilion, combined with minimal backward displacement of the upper lip; minimal changes were observed at the oral commissures. The main soft tissue changes associated with bimaxillary advancement surgery were generalised advancement of the middle and lower thirds of the face combined with widening of the paranasal, upper lip, and nostril regions.
In Le Fort I cases, the correlation between the facial soft tissue changes and the skeletal surgical movements was assessed using PCA. A statistical method known as leave-one-out cross-validation was applied to the 30 cases that had undergone a Le Fort I osteotomy, to make effective use of the data for the prediction algorithm. The prediction error of the soft tissue changes ranged from 0.0006 mm (±0.582) at the nose region to -0.0316 mm (±2.1996) across the various facial regions.
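The leave-one-out protocol itself is straightforward to sketch. The fit/predict pair below is a stand-in (here ordinary least squares) for the PCA-based coupling of skeletal displacement and soft tissue change used in the thesis; only the cross-validation loop is the point of the example.

```python
import numpy as np

def leave_one_out_errors(X, Y, fit, predict):
    """Per-case prediction errors: each case is predicted by a model
    trained on all the other cases.

    X: skeletal displacements, Y: soft tissue changes (one row per case)."""
    errors = []
    for i in range(len(X)):
        keep = np.arange(len(X)) != i          # hold out case i
        model = fit(X[keep], Y[keep])          # train on the rest
        errors.append(predict(model, X[i]) - Y[i])
    return np.array(errors)
```

Summarising `errors` per facial region (mean ± SD) yields accuracy figures of the kind quoted above.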
Abstract:
Introduction: Human experience takes place along the line of mental time (MT), created through imagining oneself at different time points in the past or future (self-projection in time). Here we manipulated self-projection in MT not only with respect to one's life events but also with respect to images of one's own face from different past and future time points. Methods: We compared mental time travel with respect to one's facial images from different time points in the past and future (study 1: MT-faces) as well as with respect to different past and future life events (study 2: MT-events). Participants were asked to make judgments about past and future face images and past and future events from three different time points: the present (Now), eight years earlier (Past), or eight years later (Future). In addition, as a control task, participants were asked to make recognition judgments with respect to faces and memory-related judgments with respect to events without changing their habitual self-location in time. Behavioural measures and functional magnetic resonance imaging (fMRI) activity, after subtraction of recognition- and memory-related activity, show both absolute and relative MT effects for faces and events, signifying a fundamental brain mechanism of MT, disentangled from episodic memory functions. Results: Behavioural and event-related fMRI activity showed three independent effects, characterized by (1) similarity between past recollection and future imagination, (2) facilitation of judgments related to the future as compared to the past, and (3) facilitation of judgments related to time points distant from the present. These effects were found with respect to both faces and events, suggesting that the brain mechanisms of MT are independent of whether actual life episodes have to be re-/pre-experienced; they recruited a common cerebral network including the medial temporal, precuneus, inferior frontal, temporo-parietal, and insular cortices.
Conclusions: These behavioural and neural data suggest that self-projection in time is a crucial aspect of MT, relying on neural structures encoding memory, mental imagery, and the self. Furthermore, our results emphasize the idea that mental temporal processing is more strongly directed toward future prediction than past recollection.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Motion is an important aspect of face perception that has been largely neglected to date. Many of the established findings are based on studies that use static facial images, which do not reflect the unique temporal dynamics available from seeing a moving face. In the present thesis a set of naturalistic dynamic facial emotional expressions was purposely created and used to investigate the neural structures involved in the perception of dynamic facial expressions of emotion, with both functional Magnetic Resonance Imaging (fMRI) and Magnetoencephalography (MEG). Through fMRI and connectivity analysis, a dynamic face perception network was identified, which is demonstrated to extend the distributed neural system for face perception (Haxby et al., 2000). Measures of effective connectivity between these regions revealed that dynamic facial stimuli were associated with specific increases in connectivity between early visual regions, such as inferior occipital gyri and superior temporal sulci, along with coupling between superior temporal sulci and amygdalae, as well as with inferior frontal gyri. MEG and Synthetic Aperture Magnetometry (SAM) were used to examine the spatiotemporal profile of neurophysiological activity within this dynamic face perception network. SAM analysis revealed a number of regions showing differential activation to dynamic versus static faces in the distributed face network, characterised by decreases in cortical oscillatory power in the beta band, which were spatially coincident with those regions that were previously identified with fMRI. These findings support the presence of a distributed network of cortical regions that mediate the perception of dynamic facial expressions, with the fMRI data providing information on the spatial co-ordinates paralleled by the MEG data, which indicate the temporal dynamics within this network.
This integrated multimodal approach offers both excellent spatial and temporal resolution, thereby providing an opportunity to explore dynamic brain activity and connectivity during face processing.
Abstract:
[EN]We investigate mechanisms which can endow the computer with the ability to describe a human face by means of computer vision techniques. This is a necessary requirement for developing HCI approaches that make users feel perceived. This paper describes our experiences considering gender, race, and the presence of a moustache and glasses. This is accomplished by comparing, on a set of 6000 facial images, two different face representation approaches: Principal Component Analysis (PCA) and Gabor filters. The results achieved using a Support Vector Machine (SVM) based classifier are promising, and particularly better for the second representation approach.
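As a toy illustration of the second representation, the sketch below builds the real part of a Gabor kernel and takes one global response per orientation. An actual system such as the one described convolves a bank of filters over the image and feeds the responses to the SVM; all parameter values and names here are illustrative assumptions.

```python
import numpy as np

def gabor_features(image, thetas, wavelength=4.0, sigma=2.0):
    """One global Gabor response per orientation for a square, odd-sized image.

    Each kernel is a Gaussian-windowed cosine wave oriented at `theta`;
    the feature is the magnitude of its inner product with the image."""
    half = image.shape[0] // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    feats = []
    for theta in thetas:
        xr = x * np.cos(theta) + y * np.sin(theta)  # rotated coordinate
        kernel = (np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
                  * np.cos(2.0 * np.pi * xr / wavelength))
        feats.append(abs(float((image * kernel).sum())))
    return np.array(feats)
```

A pattern of vertical stripes responds most strongly to the kernel whose wave direction matches the stripe direction, which is the property the filter bank exploits.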
Abstract:
[EN]In face recognition, where high-dimensional representation spaces are generally used, it is very important to take advantage of all the available information. In particular, many labelled facial images will be accumulated while the recognition system is functioning, and due to practical reasons some of them are often discarded. In this paper, we propose an algorithm for using this information. The algorithm has the fundamental characteristic of being incremental. On the other hand, the algorithm makes use of a combination of classification results for the images in the input sequence. Experiments with sequences obtained with a real person detection and tracking system allow us to analyze the performance of the algorithm, as well as its potential improvements.
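The incremental idea can be pictured with the simplest possible learner: a running per-class mean updated one labelled image at a time, so newly accumulated images are absorbed without retraining from scratch. This is a stand-in for the paper's algorithm, which additionally combines classification results over the input sequence.

```python
import numpy as np

class IncrementalMeans:
    """Nearest-mean classifier whose class means are updated incrementally."""

    def __init__(self):
        self.sums, self.counts = {}, {}

    def update(self, label, features):
        """Fold one newly labelled feature vector into its class statistics."""
        self.sums[label] = self.sums.get(label, 0.0) + np.asarray(features, float)
        self.counts[label] = self.counts.get(label, 0) + 1

    def classify(self, features):
        """Label of the class whose running mean is closest."""
        f = np.asarray(features, float)
        return min(self.sums,
                   key=lambda c: np.linalg.norm(f - self.sums[c] / self.counts[c]))
```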
Abstract:
[EN]Most face recognition systems are based on some form of batch learning. Online face recognition is not only more practical, it is also much more biologically plausible. Typical batch learners aim at minimizing both training error and (a measure of) hypothesis complexity. We show that the same minimization can be done incrementally as long as some form of "scaffolding" is applied throughout the learning process. Scaffolding means: make the system learn from samples that are neither too easy nor too difficult at each step. We note that such learning behavior is also biologically plausible. Experiments using large sequences of facial images support the theoretical claims. The proposed method compares well with other, numerical-calculus-based online learners.
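A minimal sketch of the scaffolding rule on a linear online learner: a sample triggers an update only when its current margin is neither comfortably positive (too easy) nor extremely negative (too difficult). The thresholds and the perceptron-style step are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def scaffolded_step(w, x, y, too_easy=1.0, too_hard=-1.0, lr=0.5):
    """One scaffolded online update for a linear classifier sign(w.x).

    y is +1/-1; samples whose margin y*(w.x) is above `too_easy` or below
    `too_hard` are skipped, so learning stays in the useful middle band."""
    margin = y * float(np.dot(w, x))
    if too_hard <= margin <= too_easy:
        w = w + lr * y * x  # perceptron-style correction
    return w
```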
Abstract:
[ES]This paper describes an analysis performed for facial description in static images and video streams. The still image context is first analyzed in order to decide the optimal classifier configuration for each problem: gender recognition, race classification, and glasses and moustache presence. These results are later applied to significant samples which are automatically extracted in real-time from video streams achieving promising results in the facial description of 70 individuals by means of gender, race and the presence of glasses and moustache.
Abstract:
Computational power is increasing day by day. Despite that, some tasks remain difficult or even impossible for a computer to perform. For example, while identifying a facial expression is easy for a human, for a computer it is still an area under development. To tackle this and similar issues, crowdsourcing has grown as a way to use human computation at a large scale. Crowdsourcing is a novel approach to collecting labels in a fast and cheap manner by sourcing them from the crowd. However, these labels lack reliability, since annotators are not guaranteed to have any expertise in the field. This fact has led to a new research area in which annotation models must be created or adapted to handle such weakly labelled data. Current techniques explore the annotators' expertise and the task difficulty as variables that influence label correctness; other specific aspects are also considered by noisy-label analysis techniques. The main contribution of this thesis is a process for collecting reliable crowdsourced labels for a facial expression dataset. This process consists of two steps: first, we design our crowdsourcing tasks to collect annotators' labels; next, we infer the true label from the collected labels by applying state-of-the-art crowdsourcing algorithms. At the same time, a facial expression dataset is created, containing 40,000 images and their respective labels. At the end, we publish the resulting dataset.
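The simplest baseline for the label-inference step is a per-image majority vote. The thesis applies state-of-the-art aggregation models, which also weigh annotator expertise and task difficulty, so this sketch only conveys the shape of the problem.

```python
from collections import Counter

def majority_vote(annotations):
    """Infer one label per image from several annotators' labels.

    annotations: dict image id -> list of labels; ties go to the label
    that was seen first."""
    return {image: Counter(labels).most_common(1)[0][0]
            for image, labels in annotations.items()}
```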
Abstract:
This paper is divided into three main parts. The first describes what this study sets out to show, namely the application of current facial recognition systems to a database of works of art, explaining which methods will be used and why this study is of interest. The second presents the details of the data obtained in the experiment, with images and graphs to aid understanding. The last part contains the discussion of the results obtained in the analysis and the subsequent conclusions.