800 results for Face recognition from video
Abstract:
While researchers in computer vision and pattern recognition have worked on automatic techniques for recognizing faces for the last 20 years, most systems specialize in frontal views of the face. We present a face recognizer that works under varying pose; the difficult part is handling face rotations in depth. Building on successful template-based systems, our basic approach is to represent faces with templates from multiple model views that cover different poses on the viewing sphere. Our system has achieved a recognition rate of 98% on a database of 62 people containing 10 testing and 15 modelling views per person.
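For illustration only, a minimal Python sketch of the view-based idea described above: each person is stored as several templates at different poses, and a probe image is matched against all of them. The normalized-correlation matcher and the function names are assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def normalized_correlation(a, b):
    """Zero-mean normalized correlation between two equally sized image patches."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def recognize(probe, gallery):
    """gallery: dict mapping person id -> list of template images at different poses.
    Returns the person whose best-matching pose template correlates highest with the probe."""
    best_id, best_score = None, -np.inf
    for person, templates in gallery.items():
        score = max(normalized_correlation(probe, t) for t in templates)
        if score > best_score:
            best_id, best_score = person, score
    return best_id, best_score
```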
Abstract:
The problem of automatic face recognition is to visually identify a person in an input image. This task is performed by matching the input face against the faces of known people in a database of faces. Most existing work in face recognition has limited the scope of the problem, however, by dealing primarily with frontal views, neutral expressions, and fixed lighting conditions. To help generalize existing face recognition systems, we look at the problem of recognizing faces under a range of viewpoints. In particular, we consider two cases of this problem: (i) many example views are available of each person, and (ii) only one view is available per person, perhaps a driver's license or passport photograph. Ideally, we would like to address these two cases using a simple view-based approach, where a person is represented in the database by using a number of views on the viewing sphere. While the view-based approach is consistent with case (i), for case (ii) we need to augment the single real view of each person with synthetic views from other viewpoints, views we call 'virtual views'. Virtual views are generated using prior knowledge of face rotation, knowledge that is 'learned' from images of prototype faces. This prior knowledge is used to effectively rotate in depth the single real view available of each person. In this thesis, I present the view-based face recognizer, techniques for synthesizing virtual views, and experimental results using real and virtual views in the recognizer.
Abstract:
This study investigates face recognition with partial occlusion, illumination variation and their combination, assuming no prior information about the mismatch and limited training data for each person. The authors extend their previous posterior union model (PUM) to give a new method capable of dealing with all these problems. PUM is an approach for selecting the optimal local image features for recognition to improve robustness to partial occlusion. The extension is in two stages. First, the authors extend PUM from a probability-based formulation to a similarity-based formulation, so that it operates with as little as a single training sample while offering robustness to partial occlusion. Second, they extend this new formulation to make it robust to illumination variation, and to combined illumination variation and partial occlusion, through a novel combination of multicondition relighting and optimal feature selection. To evaluate the new methods, a number of databases with various simulated and realistic occlusion/illumination mismatches have been used. The results demonstrate the improved robustness of the new methods.
Abstract:
Ear recognition, as a biometric, has several advantages. In particular, ears can be measured remotely and are also relatively static in size and structure for each individual. Unfortunately, at present, good recognition rates require controlled conditions. For commercial use, these systems need to be much more robust. In particular, ears have to be recognized from different angles (poses), under different lighting conditions, and with different cameras. It must also be possible to distinguish ears from background clutter and identify them when partly occluded by hair, hats, or other objects. The purpose of this paper is to suggest how progress toward such robustness might be achieved through a technique that improves ear registration. The approach focuses on 2-D images, treating the ear as a planar surface that is registered to a gallery using a homography transform calculated from scale-invariant feature transform (SIFT) feature matches. The feature matches reduce the gallery size and enable a precise ranking using a simple 2-D distance algorithm. Analysis on a range of data sets demonstrates the technique to be robust to background clutter, viewing angles up to +/- 13 degrees, and up to 18% occlusion. In addition, recognition remains accurate with masked ear images as small as 20 x 35 pixels.
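As a rough sketch of the registration step described above, the following Python/OpenCV code estimates a homography from SIFT matches and warps the probe onto a gallery image. The ratio-test threshold, RANSAC tolerance and function names are illustrative assumptions, not the paper's code.

```python
import cv2
import numpy as np

def register_ear(probe, gallery_img, ratio=0.75):
    """Estimate a homography from SIFT matches and warp the probe onto the gallery image.
    Returns the warped probe and the number of RANSAC inliers (None if too few matches)."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(probe, None)
    kp2, des2 = sift.detectAndCompute(gallery_img, None)
    if des1 is None or des2 is None:
        return None, 0
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]  # Lowe's ratio test
    if len(good) < 4:                     # a homography needs at least 4 correspondences
        return None, len(good)
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None, 0
    warped = cv2.warpPerspective(probe, H, gallery_img.shape[1::-1])
    return warped, int(mask.sum())
```

The inlier count can then act as the match-pruning signal mentioned in the abstract, with the warped image passed to a simple 2-D distance ranking.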
Abstract:
Adults' expert face recognition is limited to the kinds of faces they encounter on a daily basis (typically upright human faces of the same race). Adults process own-race faces holistically (i.e., as a gestalt) and are exquisitely sensitive to small differences among faces in the spacing of features, the shape of individual features and the outline or contour of the face (Maurer, Le Grand, & Mondloch, 2002); however, this expertise does not seem to extend to faces from other races. The goal of the current study was to investigate the extent to which the mechanisms that underlie expert face processing of own-race faces extend to other-race faces. Participants from rural Pennsylvania who had minimal exposure to other-race faces were tested on a battery of tasks: a memory task, two measures of holistic processing (the composite task and the part/whole task), two measures of spatial and featural processing (the Jane/Ling task and the scrambled/blurred faces task) and a test of contour processing (Jane/Ling task), for both own- and other-race faces. No study to date has tested the same participants on all of these tasks. Participants had minimal experience with other-race faces; they had no Chinese family members or friends and had never traveled to an Asian country. Results from the memory task did not reveal an other-race effect. In the present study, participants also demonstrated holistic processing of both own- and other-race faces on both the composite task and the part/whole task. These findings contradict previous findings that Caucasian adults process own-race faces more holistically than other-race faces. However, participants did demonstrate an own-race advantage for processing the spacing among features, consistent with two recent studies that used different manipulations of spacing cues (Hayward et al., 2007; Rhodes et al., 2006). They also demonstrated an other-race effect for the processing of individual features on the Jane/Ling task (a direct measure of featural processing), consistent with previous findings (Rhodes, Hayward, & Winkler, 2006), but not on the scrambled faces task (an indirect measure of featural processing). There was no own-race advantage for contour processing. Thus, these results lead to the conclusion that individuals may show less sensitivity to the appearance of individual features and the spacing among them in other-race faces, despite processing other-race faces holistically.
Abstract:
Most face recognition approaches require prior training, in which a given distribution of faces is assumed in order to predict the identity of test faces. Such an approach may have difficulty identifying faces drawn from distributions different from the one provided during training. A face recognition technique that performs well regardless of training is therefore an interesting basis for more sophisticated methods. In this work, the Census Transform is applied to describe the faces. Based on a scanning window that extracts local histograms of Census features, we present a method that directly matches face samples. With this simple technique, 97.2% of the faces in the FERET fa/fb test were correctly recognized. Although this is an easy test set, we have found no other approaches in the literature that directly compare faces with such performance. Room for further improvement is also identified: among other techniques, we demonstrate how the use of SVMs over the Census Histogram representation can increase recognition performance.
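A minimal Python sketch of the Census-histogram idea: compute an 8-bit census code per pixel, build local histograms over a grid of windows, and compare descriptors directly. Window size, bit ordering and the histogram-intersection score are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np

def census_transform(img):
    """8-bit census transform: each pixel encodes which of its 8 neighbours are at least
    as bright as the centre pixel. Border pixels are dropped for simplicity."""
    c = img[1:-1, 1:-1]
    codes = np.zeros(c.shape, dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    for bit, (dy, dx) in enumerate(shifts):
        neigh = img[1 + dy: img.shape[0] - 1 + dy, 1 + dx: img.shape[1] - 1 + dx]
        codes |= ((neigh >= c).astype(np.uint8) << bit).astype(np.uint8)
    return codes

def census_histogram_descriptor(img, win=16):
    """Concatenate 256-bin histograms of census codes over non-overlapping windows."""
    codes = census_transform(img)
    hists = []
    for y in range(0, codes.shape[0] - win + 1, win):
        for x in range(0, codes.shape[1] - win + 1, win):
            h, _ = np.histogram(codes[y:y + win, x:x + win], bins=256, range=(0, 256))
            hists.append(h / (h.sum() + 1e-8))
    return np.concatenate(hists)

def match_score(desc_a, desc_b):
    """Histogram intersection: higher means more similar."""
    return float(np.minimum(desc_a, desc_b).sum())
```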
Abstract:
A depth-based face recognition algorithm specially adapted to the high range resolution data acquired by the new Microsoft Kinect 2 sensor is presented. A novel descriptor, called the Depth Local Quantized Pattern, has been designed to make use of the extended range resolution of the new sensor. This descriptor is a substantial modification of the popular Local Binary Pattern algorithm; one of the main contributions is the introduction of a quantization step, which increases its capacity to distinguish different depth patterns. The proposed descriptor has been used to train and test a Support Vector Machine classifier, which has proven able to accurately recognize different people's faces across a wide range of poses. In addition, a new depth-based face database acquired with the Kinect 2 sensor has been created and made public to evaluate the proposed face recognition system.
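The exact descriptor formulation is not given in the abstract; below is a hedged Python sketch of the general idea of replacing LBP's binary test with a multi-level quantization of depth differences. The thresholds, number of levels and base-N coding are illustrative assumptions only.

```python
import numpy as np

def depth_lqp(depth, thresholds=(5, 15)):
    """LBP-style code over a depth map, but each neighbour difference is quantised into
    2*len(thresholds)+1 levels instead of a single binary bit (thresholds in depth units,
    e.g. millimetres; the values here are illustrative)."""
    levels = 2 * len(thresholds) + 1            # e.g. 5 levels: <<, <, ~=, >, >>
    c = depth[1:-1, 1:-1].astype(np.int32)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=np.int64)
    for (dy, dx) in shifts:
        neigh = depth[1 + dy: depth.shape[0] - 1 + dy,
                      1 + dx: depth.shape[1] - 1 + dx].astype(np.int32)
        diff = neigh - c
        q = np.zeros(diff.shape, dtype=np.int64)    # quantise the signed difference
        for t in thresholds:
            q += (diff > t).astype(np.int64)
            q -= (diff < -t).astype(np.int64)
        q += len(thresholds)                        # map to 0..levels-1
        codes = codes * levels + q                  # base-`levels` positional code
    return codes
```

Histograms of these codes over face regions could then serve as feature vectors for the SVM classifier mentioned in the abstract.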
Abstract:
It has been suggested that the deleterious effect of contrast reversal on visual recognition is unique to faces, not objects. Here we show from priming, supervised category learning, and generalization that there is no such thing as general invariance of recognition of non-face objects against contrast reversal and, likewise, changes in direction of illumination. However, when recognition varies with rendering conditions, invariance may be restored, and effects of continuous learning may be reduced, by providing prior object knowledge from active sensation. Our findings suggest that the degree of contrast invariance achieved reflects functional characteristics of object representations learned in a task-dependent fashion.
Abstract:
This paper presents a new method for human face recognition by utilizing Gabor-based region covariance matrices as face descriptors. Both pixel locations and Gabor coefficients are employed to form the covariance matrices. Experimental results demonstrate the advantages of this proposed method.
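Region covariance descriptors of this kind are typically built by giving each pixel a feature vector of its coordinates and Gabor magnitudes, then summarising the region by the covariance of these vectors. The Python sketch below follows that standard recipe; the Gabor kernel parameters, orientations and the generalized-eigenvalue dissimilarity are common choices assumed here, not details taken from the paper.

```python
import cv2
import numpy as np

def gabor_region_covariance(img, region, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Covariance descriptor of a face region: each pixel contributes a feature vector
    [x, y, |Gabor response| per orientation]; the region is summarised by the covariance
    of these vectors."""
    y0, y1, x0, x1 = region
    ys, xs = np.mgrid[y0:y1, x0:x1]
    feats = [xs.ravel().astype(np.float64), ys.ravel().astype(np.float64)]
    for theta in thetas:
        # ksize, sigma, theta, wavelength, aspect ratio (illustrative values)
        kern = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5)
        resp = cv2.filter2D(img.astype(np.float32), cv2.CV_32F, kern)
        feats.append(np.abs(resp[y0:y1, x0:x1]).ravel().astype(np.float64))
    F = np.stack(feats, axis=0)                 # (d, n_pixels)
    return np.cov(F)                            # (d, d) region covariance matrix

def covariance_distance(c1, c2):
    """Dissimilarity between covariance matrices via generalized eigenvalues,
    the metric commonly used with region covariance descriptors."""
    eigvals = np.linalg.eigvals(np.linalg.solve(c1, c2)).real
    eigvals = np.clip(eigvals, 1e-12, None)
    return float(np.sqrt((np.log(eigvals) ** 2).sum()))
```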
Abstract:
In the visual perception literature, the recognition of faces has often been contrasted with that of non-face objects, in terms of differences with regard to the role of parts, part relations and holistic processing. However, recent evidence from developmental studies has begun to blur this sharp distinction. We review evidence for a protracted development of object recognition that is reminiscent of the well-documented slow maturation observed for faces. The prolonged development manifests itself in a retarded processing of metric part relations as opposed to that of individual parts and offers surprising parallels to developmental accounts of face recognition, even though the interpretation of the data is less clear with regard to holistic processing. We conclude that such results might indicate functional commonalities between the mechanisms underlying the recognition of faces and non-face objects, which are modulated by different task requirements in the two stimulus domains.
Abstract:
Whereas previous research has demonstrated that trait ratings of faces at encoding lead to enhanced recognition accuracy compared to feature ratings, this set of experiments examines whether ratings given after encoding and just prior to recognition influence face recognition accuracy. In Experiment 1, subjects who made feature ratings just prior to recognition were significantly less accurate than subjects who made no ratings or trait ratings. In Experiment 2, ratings were manipulated at both encoding and retrieval. The retrieval effect was smaller and nonsignificant, but a combined probability analysis showed that it was significant when results from both experiments were considered jointly. In a third experiment, exposure duration at retrieval, a potentially confounding factor in Experiments 1 and 2, had a nonsignificant effect on recognition accuracy, suggesting that it probably does not explain the results from Experiments 1 and 2. These experiments demonstrate that face recognition accuracy can be influenced by processing instructions at retrieval.
Abstract:
We address the problem of 3D-assisted 2D face recognition in scenarios when the input image is subject to degradations or exhibits intra-personal variations not captured by the 3D model. The proposed solution involves a novel approach to learn a subspace spanned by perturbations caused by the missing modes of variation and image degradations, using 3D face data reconstructed from 2D images rather than 3D capture. This is accomplished by modelling the difference in the texture map of the 3D aligned input and reference images. A training set of these texture maps then defines a perturbation space which can be represented using PCA bases. Assuming that the image perturbation subspace is orthogonal to the 3D face model space, these additive components can be recovered from an unseen input image, resulting in an improved fit of the 3D face model. The linearity of the model leads to efficient fitting. Experiments show that our method achieves very competitive face recognition performance on Multi-PIE and AR databases. We also present baseline face recognition results on a new data set exhibiting combined pose and illumination variations as well as occlusion.
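A compact sketch, under stated assumptions, of how such a PCA perturbation subspace could be built from texture-map differences. The 3D alignment and texture extraction are assumed to happen elsewhere, and the scikit-learn PCA usage and function names are illustrative rather than the authors' pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA

def build_perturbation_subspace(aligned_inputs, references, n_components=50):
    """Fit a PCA basis on differences between 3D-aligned input texture maps and their
    clean reference texture maps (both given as flattened arrays of equal length)."""
    diffs = (np.asarray(aligned_inputs, dtype=np.float64)
             - np.asarray(references, dtype=np.float64))
    pca = PCA(n_components=n_components)
    pca.fit(diffs)
    return pca

def estimate_perturbation(texture, pca):
    """Reconstruct the perturbation component of a new, flattened texture map
    from the learned basis."""
    coeffs = pca.components_ @ (texture - pca.mean_)
    return pca.mean_ + pca.components_.T @ coeffs
```

Subtracting the estimated perturbation from the input texture then approximates the clean texture used when fitting the 3D face model.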
Abstract:
This paper presents a new method of eye localisation and face segmentation for use in a face recognition system. By using two near infrared light sources, we have shown that the face can be coarsely segmented, and the eyes can be accurately located, increasing the accuracy of the face localisation and improving the overall speed of the system. The system is able to locate both eyes within 25% of the eye-to-eye distance in over 96% of test cases.
Abstract:
In the field of face recognition, Sparse Representation (SR) has received considerable attention during the past few years. Most of the relevant literature focuses on holistic descriptors in closed-set identification applications. The underlying assumption in SR-based methods is that each class in the gallery has sufficient samples and the query lies on the subspace spanned by the gallery of the same class. Unfortunately, such assumption is easily violated in the more challenging face verification scenario, where an algorithm is required to determine if two faces (where one or both have not been seen before) belong to the same person. In this paper, we first discuss why previous attempts with SR might not be applicable to verification problems. We then propose an alternative approach to face verification via SR. Specifically, we propose to use explicit SR encoding on local image patches rather than the entire face. The obtained sparse signals are pooled via averaging to form multiple region descriptors, which are then concatenated to form an overall face descriptor. Due to the deliberate loss of spatial relations within each region (caused by averaging), the resulting descriptor is robust to misalignment and various image deformations. Within the proposed framework, we evaluate several SR encoding techniques: l1-minimisation, Sparse Autoencoder Neural Network (SANN), and an implicit probabilistic technique based on Gaussian Mixture Models. Thorough experiments on AR, FERET, exYaleB, BANCA and ChokePoint datasets show that the proposed local SR approach obtains considerably better and more robust performance than several previous state-of-the-art holistic SR methods, in both verification and closed-set identification problems. The experiments also show that l1-minimisation based encoding has a considerably higher computational cost than the other techniques, but leads to higher recognition rates.
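A minimal Python sketch of the local SR encoding and average pooling described above, using the l1 variant via scikit-learn's SparseCoder. The dictionary is assumed to be learned beforehand (e.g. with MiniBatchDictionaryLearning) with rows of length patch*patch; the patch size, step, grid and regularisation value are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import SparseCoder

def local_sr_descriptor(img, dictionary, patch=8, step=4, grid=(4, 4)):
    """Encode densely sampled patches with l1 sparse coding against `dictionary`
    (shape: n_atoms x patch*patch), average-pool the absolute sparse codes within each
    cell of a `grid` over the face, and concatenate the pooled region descriptors."""
    coder = SparseCoder(dictionary=dictionary, transform_algorithm='lasso_lars',
                        transform_alpha=0.1)
    h, w = img.shape
    cell_h, cell_w = h // grid[0], w // grid[1]
    pooled = np.zeros((grid[0], grid[1], dictionary.shape[0]))
    counts = np.zeros((grid[0], grid[1]))
    for y in range(0, h - patch + 1, step):
        for x in range(0, w - patch + 1, step):
            p = img[y:y + patch, x:x + patch].astype(np.float64).ravel()
            p -= p.mean()                                  # remove local DC offset
            code = coder.transform(p.reshape(1, -1))[0]
            gy = min((y + patch // 2) // cell_h, grid[0] - 1)
            gx = min((x + patch // 2) // cell_w, grid[1] - 1)
            pooled[gy, gx] += np.abs(code)                 # average pooling of sparse codes
            counts[gy, gx] += 1
    pooled /= np.maximum(counts, 1)[..., None]
    return pooled.reshape(-1)                              # concatenated face descriptor
```

Two such descriptors can then be compared with a simple distance (e.g. cosine or Euclidean) for verification, reflecting the robustness-through-pooling argument in the abstract.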