61 results for Face biometrics
Abstract:
This paper argues that biometric verification evaluations can obscure vulnerabilities that increase the chances of an attacker being falsely accepted. This can occur because existing evaluations implicitly assume that an impostor claiming a false identity would claim a random identity rather than consciously selecting a target to impersonate. This paper shows how an attacker can select a target with a similar biometric signature in order to increase their chances of false acceptance. It demonstrates this effect using a publicly available iris recognition algorithm. The evaluation shows that the system can be vulnerable to attackers targeting subjects who are enrolled with a smaller section of iris due to occlusion, and that traditional DET curve analysis conceals this vulnerability. As a result, traditional analysis underestimates the importance of an existing score normalisation method for addressing occlusion. The paper concludes by evaluating how the targeted false acceptance rate increases with the number of available targets. Consistent with a previous investigation of targeted face verification performance, the experiment shows that the false acceptance rate can be modelled using the traditional FAR measure with an additional term that is proportional to the logarithm of the number of available targets.
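A minimal sketch of the targeted-impersonation effect described above, under the assumption of a generic dissimilarity-score matrix rather than the paper's iris matcher; the score distribution, threshold and dataset sizes below are illustrative placeholders, not values from the evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dissimilarity scores between 200 attacker probes and 50 enrolled targets
# (lower score = better match); in practice these would come from the biometric matcher.
scores = rng.normal(loc=0.45, scale=0.05, size=(200, 50))
threshold = 0.32  # hypothetical operating point of the verifier

# Zero-effort FAR: every attacker claims one arbitrary (effectively random) identity.
random_far = np.mean(scores[:, 0] < threshold)

# Targeted FAR: each attacker first selects the enrolled identity they match best.
best_scores = scores.min(axis=1)
targeted_far = np.mean(best_scores < threshold)

print(f"random-claim FAR: {random_far:.4f}")
print(f"targeted FAR:     {targeted_far:.4f}")
```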
Abstract:
When applying biometric algorithms to forensic verification, false acceptance and false rejection can mean a failure to identify a criminal or, worse, lead to the prosecution of individuals for crimes they did not commit. It is therefore critical that biometric evaluations be performed as accurately as possible to determine their legitimacy as a forensic tool. This paper argues that, for forensic verification scenarios, traditional performance measures are insufficiently accurate. This inaccuracy occurs because existing verification evaluations implicitly assume that an impostor claiming a false identity would claim a random identity rather than consciously selecting a target to impersonate. In addition to describing this new vulnerability, the paper describes a novel Targeted FAR metric that combines the traditional False Acceptance Rate (FAR) measure with a term that indicates how performance degrades with the number of potential targets. The paper includes an evaluation of the effects of targeted impersonation on an existing academic face verification system. This evaluation reveals that, even with a relatively small number of targets, false acceptance rates can increase significantly, making the analysed biometric system unreliable.
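A short illustration of the kind of Targeted FAR model the abstract describes, assuming the targeted rate grows with the logarithm of the number of available targets; the base rate and slope used here are hypothetical, not figures reported in the paper.

```python
import numpy as np

def targeted_far(base_far, n_targets, k):
    """Targeted FAR modelled as the traditional FAR plus a term
    proportional to log(number of available targets).
    `k` is a system-dependent slope that would be fitted empirically."""
    return base_far + k * np.log(n_targets)

# Placeholder values purely for illustration (not reported results).
base_far = 0.001   # traditional (random-claim) FAR at the chosen threshold
k = 0.002          # assumed fitted slope

for n in (1, 10, 100, 1000):
    print(f"{n:5d} targets -> modelled targeted FAR ~ {targeted_far(base_far, n, k):.4f}")
```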
Abstract:
Ear recognition, as a biometric, has several advantages. In particular, ears can be measured remotely and are also relatively static in size and structure for each individual. Unfortunately, at present, good recognition rates require controlled conditions. For commercial use, these systems need to be much more robust. In particular, ears have to be recognized from different angles (poses), under different lighting conditions, and with different cameras. It must also be possible to distinguish ears from background clutter and identify them when partly occluded by hair, hats, or other objects. The purpose of this paper is to suggest how progress toward such robustness might be achieved through a technique that improves ear registration. The approach focuses on 2-D images, treating the ear as a planar surface that is registered to a gallery using a homography transform calculated from scale-invariant feature transform (SIFT) feature matches. The feature matches reduce the gallery size and enable a precise ranking using a simple 2-D distance algorithm. Analysis on a range of data sets demonstrates the technique to be robust to background clutter, viewing angles of up to +/- 13 degrees, and up to 18% occlusion. In addition, recognition remains accurate with masked ear images as small as 20 x 35 pixels.
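A hedged sketch of homography-based ear registration from SIFT matches using OpenCV, in the spirit of the pipeline summarised above; it is not the authors' implementation, and the ratio-test threshold, RANSAC tolerance and file names are illustrative (SIFT requires OpenCV >= 4.4 or the contrib build).

```python
import cv2
import numpy as np

def register_ear(probe_path, gallery_path, min_matches=10):
    """Estimate a homography aligning a probe ear image to a gallery ear image
    from SIFT feature matches (illustrative sketch only)."""
    probe = cv2.imread(probe_path, cv2.IMREAD_GRAYSCALE)
    gallery = cv2.imread(gallery_path, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(probe, None)
    kp2, des2 = sift.detectAndCompute(gallery, None)

    # Ratio-test matching of SIFT descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = [m for m, n in matcher.knnMatch(des1, des2, k=2)
               if m.distance < 0.75 * n.distance]
    if len(matches) < min_matches:
        return None  # too few matches to attempt registration

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Treat the ear as approximately planar and fit a homography with RANSAC.
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

# H = register_ear("probe_ear.png", "gallery_ear.png")
```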
Abstract:
Recent work suggests that the human ear varies significantly between different subjects and can be used for identification. In principle, therefore, using ears in addition to the face within a recognition system could improve accuracy and robustness, particularly for non-frontal views. The paper describes work that investigates this hypothesis using an approach based on the construction of a 3D morphable model of the head and ear. One issue with creating a model that includes the ear is that existing training datasets contain noise and partial occlusion. Rather than exclude these regions manually, a classifier has been developed which automates this process. When combined with a robust registration algorithm, the resulting system enables full-head morphable models to be constructed efficiently using less constrained datasets. The algorithm has been evaluated using registration consistency, model coverage and minimalism metrics, which together demonstrate the accuracy of the approach. To make it easier to build on this work, the source code has been made available online.
Abstract:
This paper presents a novel method of audio-visual fusion for person identification where both the speech and facial modalities may be corrupted, and there is a lack of prior knowledge about the corruption. Furthermore, we assume there is a limited amount of training data for each modality (e.g., a short training speech segment and a single training facial image for each person). A new representation and a modified cosine similarity are introduced for combining and comparing bimodal features with limited training data as well as vastly differing data rates and feature sizes. Optimal feature selection and multicondition training are used to reduce the mismatch between training and testing, thereby making the system robust to unknown bimodal corruption. Experiments have been carried out on a bimodal data set created from the SPIDRE and AR databases with variable noise corruption of speech and occlusion in the face images. The new method has demonstrated improved recognition accuracy.
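The abstract does not spell out the new representation or the modified cosine similarity, so the following is only a generic sketch of combining modalities of very different sizes: each modality's feature vector is l2-normalised before weighted concatenation, and the fused vectors are compared with plain cosine similarity. All feature sizes and weights are assumptions.

```python
import numpy as np

def l2_normalise(v, eps=1e-10):
    return v / (np.linalg.norm(v) + eps)

def fuse_features(speech_feat, face_feat, alpha=0.5):
    """Normalise each modality separately, then concatenate with weight `alpha`.
    A generic sketch, not the paper's representation."""
    fused = np.concatenate([alpha * l2_normalise(speech_feat),
                            (1.0 - alpha) * l2_normalise(face_feat)])
    return l2_normalise(fused)

def compare(enrolled, probe):
    """Cosine similarity of two fused, unit-length vectors."""
    return float(np.dot(enrolled, probe))

# Illustrative feature sizes: a long speech feature and a shorter face feature.
rng = np.random.default_rng(1)
enrolled = fuse_features(rng.normal(size=400), rng.normal(size=128))
probe = fuse_features(rng.normal(size=400), rng.normal(size=128))
print(compare(enrolled, probe))
```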
Abstract:
The Muslim Brotherhood is the most significant and enduring Sunni Islamist organization of the contemporary era. Its roots lie in the Middle East, but it has grown into both a local and global movement, with its well-placed branches reacting effectively to seize the opportunities for power and electoral competition offered by the Arab Spring.
Regarded by some as a force of moderation among Islamists, and by others as a façade hiding a terrorist fundamentalist threat, the potential influence of the Muslim Brotherhood on Middle Eastern politics remains ambiguous. The Muslim Brotherhood: The Arab Spring and its Future Face provides an essential insight into the organisation, with chapters devoted to specific cases where the Brotherhood has important impacts on society, the state and politics. Key themes associated with the Brotherhood, such as democracy, equality, pan-Islamism, radicalism, reform, the Palestine issue and gender, are assessed to reveal an evolutionary trend within the movement from its founding in Egypt in 1928 to its manifestation as the largest Sunni Islamist movement in the Middle East in the 21st century. The book addresses the possible future of the Muslim Brotherhood: whether it can surprise sceptics and effectively accommodate democracy and secular trends, and how its ascension to power through the ballot box might influence Western policy debates on engagement with this manifestation of political Islam.
Drawing on a wide range of sources, this book presents a comprehensive study of a newly resurgent movement and is a valuable resource for students, scholars and policy makers focused on Middle Eastern Politics.
Abstract:
This piece of writing is an excerpt from a keynote talk given at the Symposium on Artistic Research in Borås, Sweden, on 28 November 2014.
Abstract:
In this paper, we introduce a novel approach to face recognition which simultaneously tackles three combined challenges: 1) uneven illumination; 2) partial occlusion; and 3) limited training data. The new approach performs lighting normalization, occlusion de-emphasis and finally face recognition, based on finding the largest matching area (LMA) at each point on the face, as opposed to traditional fixed-size local area-based approaches. Robustness is achieved with novel approaches for feature extraction, LMA-based face image comparison and unseen data modeling. On the Extended Yale B and AR face databases for face identification, our method, using only a single training image per person, outperforms other methods using a single training image, and matches or exceeds methods which require multiple training images. On the Labeled Faces in the Wild (LFW) face verification database, our method outperforms comparable unsupervised methods. We also show that the new method performs competitively even when the training images are corrupted.
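The LMA algorithm itself is not detailed in the abstract; the toy sketch below merely illustrates the idea of a variable-size local area, growing a patch around a point while the normalised correlation between probe and gallery patches stays above a threshold. The window limits and threshold are arbitrary.

```python
import numpy as np

def largest_matching_area(probe, gallery, point, max_radius=20, threshold=0.8):
    """Grow a square window around `point` while the probe and gallery patches
    remain well correlated; return the largest radius that still matches.
    Illustrative only, not the paper's LMA algorithm."""
    y, x = point
    best = 0
    for r in range(2, max_radius):
        y0, y1, x0, x1 = y - r, y + r + 1, x - r, x + r + 1
        if y0 < 0 or x0 < 0 or y1 > probe.shape[0] or x1 > probe.shape[1]:
            break
        p = probe[y0:y1, x0:x1].ravel().astype(float)
        g = gallery[y0:y1, x0:x1].ravel().astype(float)
        # Normalised correlation as a simple patch-similarity measure.
        p = (p - p.mean()) / (p.std() + 1e-8)
        g = (g - g.mean()) / (g.std() + 1e-8)
        if np.dot(p, g) / p.size < threshold:
            break
        best = r
    return best

# r = largest_matching_area(probe_img, gallery_img, (40, 60))
```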
Abstract:
In this paper, a novel and effective lip-based biometric identification approach with the Discrete Hidden Markov Model Kernel (DHMMK) is developed. Lips are described by shape features (both geometrical and sequential) on two different grid layouts: rectangular and polar. These features are then specifically modeled by a DHMMK and learnt by a support vector machine classifier. Our experiments are carried out in a ten-fold cross-validation fashion on three different datasets: the GPDS-ULPGC Face Dataset, the PIE Face Dataset and the RaFD Face Dataset. Results show that our approach has achieved average classification accuracies of 99.8%, 97.13%, and 98.10% on these three datasets, respectively, using only two training images per class. Our comparative studies further show that the DHMMK achieved a 53% improvement over the baseline HMM approach. The comparative ROC curves also confirm the efficacy of the proposed lip-contour-based biometrics learned by the DHMMK. We also show that the performance of linear and RBF SVMs is comparable under the DHMMK framework.
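The DHMM kernel is not reproducible from the abstract alone, so this sketch only shows the surrounding machinery: an SVM trained on a precomputed kernel matrix over lip-shape feature sequences, with a placeholder Gaussian sequence similarity standing in for the DHMMK. The data shapes and class counts are invented.

```python
import numpy as np
from sklearn.svm import SVC

def placeholder_sequence_kernel(a, b):
    """Stand-in similarity between two fixed-length shape-feature sequences;
    the paper uses a Discrete Hidden Markov Model Kernel here instead."""
    return float(np.exp(-np.linalg.norm(a - b) ** 2 / (2 * a.size)))

def kernel_matrix(X, Y):
    return np.array([[placeholder_sequence_kernel(x, y) for y in Y] for x in X])

# Illustrative data: 40 lip sequences of 20 frames x 12 shape features, 4 classes.
rng = np.random.default_rng(2)
X_train = rng.normal(size=(40, 20 * 12))
y_train = np.repeat(np.arange(4), 10)
X_test = rng.normal(size=(8, 20 * 12))

clf = SVC(kernel="precomputed")
clf.fit(kernel_matrix(X_train, X_train), y_train)
print(clf.predict(kernel_matrix(X_test, X_train)))
```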
Abstract:
This paper investigated the use of lip movements as a behavioural biometric for person authentication. The system was trained, evaluated and tested using the XM2VTS dataset, following the Lausanne Protocol configuration II. Features were selected from the DCT coefficients of the greyscale lip image. This paper investigated the number of DCT coefficients selected, the selection process, and static and dynamic feature combinations. Using a Gaussian Mixture Model-Universal Background Model (GMM-UBM) framework, an Equal Error Rate of 2.20% was achieved during evaluation, and on an unseen test set a False Acceptance Rate of 1.7% and a False Rejection Rate of 3.0% were achieved. This compares favourably with face authentication results on the same dataset whilst not being susceptible to spoofing attacks.
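A simplified sketch of GMM-UBM scoring over 2-D DCT lip features, assuming random data in place of XM2VTS and omitting the MAP adaptation normally used in GMM-UBM systems: a background GMM is fitted on pooled features, a client GMM on the claimed identity's features, and a probe is scored by their average log-likelihood ratio. Frame sizes, coefficient counts and mixture sizes are illustrative.

```python
import numpy as np
from scipy.fft import dctn
from sklearn.mixture import GaussianMixture

def lip_dct_features(frames, n_coeffs=15):
    """Keep the lowest-frequency 2-D DCT coefficients of each greyscale lip frame
    (zig-zag selection simplified here to a top-left block)."""
    feats = []
    for frame in frames:
        coeffs = dctn(frame, norm="ortho")
        feats.append(coeffs[:4, :4].ravel()[:n_coeffs])
    return np.array(feats)

def random_frames(n, rng, shape=(32, 48)):
    """Stand-in for greyscale lip-region frames."""
    return rng.normal(size=(n, *shape))

rng = np.random.default_rng(3)
background = lip_dct_features(random_frames(400, rng))        # pooled other-speaker data
client_train = lip_dct_features(random_frames(120, rng) + 0.3)
probe = lip_dct_features(random_frames(40, rng) + 0.3)        # one test sequence

ubm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0).fit(background)
client = GaussianMixture(n_components=8, covariance_type="diag", random_state=0).fit(client_train)

# Average log-likelihood ratio; accept the identity claim above a tuned threshold.
llr = client.score(probe) - ubm.score(probe)
print(f"log-likelihood ratio: {llr:.3f}")
```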
Abstract:
With the rapid development of the internet-of-things (IoT), face scrambling has been proposed for privacy protection during IoT-targeted image/video distribution. Consequently, in these IoT applications, biometric verification needs to be carried out in the scrambled domain, presenting significant challenges in face recognition. Since face models become chaotic signals after scrambling/encryption, a typical solution is to utilize traditional data-driven face recognition algorithms. While chaotic pattern recognition is still a challenging task, in this paper we propose a new ensemble approach, Many-Kernel Random Discriminant Analysis (MK-RDA), to discover discriminative patterns from chaotic signals. We also incorporate a salience-aware strategy into the proposed ensemble method to handle chaotic facial patterns in the scrambled domain, where random selections of features are made on semantic components via salience modelling. In our experiments, the proposed MK-RDA was tested rigorously on three human face datasets: the ORL face dataset, the PIE face dataset and the PUBFIG wild face dataset. The experimental results successfully demonstrate that the proposed scheme can effectively handle chaotic signals and significantly improve recognition accuracy, making our method a promising candidate for secure biometric verification in emerging IoT applications.
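MK-RDA and its salience-aware feature selection are not reproduced here; as a rough structural analogue, the sketch below trains a plain random-subspace ensemble of linear discriminant classifiers with majority voting over hypothetical scrambled-face feature vectors.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

class RandomSubspaceLDA:
    """Ensemble of LDA classifiers, each trained on a random subset of features.
    A generic random-subspace sketch, not the paper's salience-aware MK-RDA."""

    def __init__(self, n_members=20, subspace_size=100, seed=0):
        self.n_members = n_members
        self.subspace_size = subspace_size
        self.rng = np.random.default_rng(seed)
        self.members = []

    def fit(self, X, y):
        for _ in range(self.n_members):
            idx = self.rng.choice(X.shape[1], self.subspace_size, replace=False)
            clf = LinearDiscriminantAnalysis().fit(X[:, idx], y)
            self.members.append((idx, clf))
        return self

    def predict(self, X):
        votes = np.array([clf.predict(X[:, idx]) for idx, clf in self.members])
        # Majority vote across ensemble members.
        return np.array([np.bincount(col).argmax() for col in votes.T])

# Illustrative scrambled-face feature vectors (e.g. flattened scrambled pixels).
rng = np.random.default_rng(4)
X = rng.normal(size=(200, 1024))
y = np.repeat(np.arange(10), 20)
model = RandomSubspaceLDA().fit(X, y)
print(model.predict(X[:5]))
```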