800 results for Face recognition from video
Abstract:
Poster presented at the XXIV Congress of the Ordem dos Médicos Dentistas, 12-14 November 2015, Lisbon.
Abstract:
This study compares interpreter-mediated face-to-face Magistrates Court hearings with those conducted through prison video link, in which interpreters are located in court and non-English-speaking defendants in prison. It seeks to examine the impact that the presence of video link has on court actors in terms of interaction and behaviour. The data comprise 11 audio-recordings of face-to-face hearings, 10 recordings of prison video link hearings, semi-structured interviews with 27 court actors, and ethnographic observation of hearings as viewed by defendants in Wormwood Scrubs prison in London. The over-arching theme is the pervasive influence of the ecology of the courtroom upon all court actors in interpreter-mediated hearings and thus on the communication process. Close analysis of the court transcripts shows that the relative proximity of court actors to one another can be a determinant of status, interpreting role, mode and volume. The very few legal protocols which apply to interpreter-mediated cases (acknowledging and ratifying the interpreter, for example) are often forgotten or dispensed with. Court interpreters lack proper training in the specific challenges of court interpreting, whether or not they are co-present with the defendant. Other court actors often misunderstand the interpreter's role. This has probably come about because courts have adjusted their perceptions of what they think interpreters are supposed to do based on their own experiences of working with them, and have gradually come to accept poor practice (the inability to perform simultaneous interpreting, for example) as the norm. In video link courts, mismatches of sound and image due to court clerks' failure to adequately track the current speaker, poor image and sound quality, and the fact that non-English-speaking defendants in pre- and post-court consultations can see and hear interpreters but not their defence advocates add further layers of disadvantage and confusion for non-English-speaking defendants. These factors make it less likely that justice will be done.
Abstract:
Four patients who had received an anterior cingulotomy (ACING) and five patients who had received both an ACING and an anterior capsulotomy (ACAPS) as an intervention for chronic, treatment-refractory depression were presented with a series of dynamic emotional stimuli and invited to identify the emotion portrayed. Their performance was compared with that of a group of non-surgically treated patients with major depression (n = 17) and a group of matched, never-depressed controls (n = 22). At the time of testing, four of the nine neurosurgery patients had recovered from their depressive episode, whereas five remained depressed. Analysis of emotion recognition accuracy revealed no significant differences between depressed and non-depressed neurosurgically treated patients. Similarly, no significant differences were observed between the patients treated with ACING alone and those treated with both ACING and ACAPS. Comparison of the neurosurgically treated patients with the depressed and healthy control groups revealed that the surgically treated patients exhibited a general impairment in recognition accuracy relative to healthy controls. Regression analysis revealed that participants' emotion recognition accuracy was predicted by the number of errors they made on the Stroop colour-naming task. It is plausible that the observed deficit in emotion recognition accuracy was a consequence of impaired attentional control, which may in turn have resulted from the surgical lesions to the anterior cingulate cortex.
Abstract:
The article describes research on a method of person recognition from face images based on Gabor wavelets. The scales of the Gabor functions are determined at which the highest recognition rate is achieved when searching for a person in a database, and the lowest rate of false-alarm errors is achieved when solving an access control task. The research shows that the recognition system's performance in these two modes can be improved while reducing the volume of data used.
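As an aside on the underlying technique (not drawn from the article itself), the following minimal Python sketch shows how a bank of Gabor filters at several scales and orientations can be applied to a grayscale face image to produce a feature vector; the kernel size, scale values and summary statistics here are illustrative assumptions, not the parameters determined in the study.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size, sigma, theta, wavelength, gamma=0.5):
    """Real part of a 2-D Gabor kernel at one scale and orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate the coordinate frame to the filter orientation.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t ** 2 + (gamma * y_t) ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * x_t / wavelength)
    return envelope * carrier

def gabor_features(image, wavelengths=(4, 8, 16), n_orientations=8):
    """Mean/std of filter responses over a small Gabor bank (illustrative)."""
    feats = []
    for wavelength in wavelengths:          # the "scales" of the Gabor functions
        for k in range(n_orientations):
            kern = gabor_kernel(size=31, sigma=0.56 * wavelength,
                                theta=np.pi * k / n_orientations,
                                wavelength=wavelength)
            resp = fftconvolve(image, kern, mode='same')
            feats.extend([resp.mean(), resp.std()])
    return np.array(feats)
```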
Abstract:
In face recognition, where high-dimensional representation spaces are generally used, it is very important to take advantage of all the available information. In particular, many labelled facial images accumulate while the recognition system is operating, and for practical reasons some of them are often discarded. In this paper, we propose an algorithm for using this information. The algorithm has the fundamental characteristic of being incremental. In addition, the algorithm makes use of a combination of classification results for the images in the input sequence. Experiments with sequences obtained from a real person detection and tracking system allow us to analyze the performance of the algorithm, as well as its potential improvements.
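Combining per-frame classification results over an input sequence can take many forms; as a hedged illustration (the function name and the softmax-style fusion rule are assumptions, not the paper's algorithm), one simple option is to accumulate normalised scores across a tracked sequence:

```python
import numpy as np
from collections import Counter

def combine_sequence_scores(frame_scores):
    """
    frame_scores: list of dicts, one per frame of a tracked face sequence,
    mapping identity label -> raw classifier score (higher is better).
    Returns the identity with the best accumulated evidence over the sequence.
    """
    totals = Counter()
    for scores in frame_scores:
        labels = list(scores.keys())
        vals = np.array([scores[l] for l in labels], dtype=float)
        # Per-frame softmax so no single frame dominates the decision.
        p = np.exp(vals - vals.max())
        p /= p.sum()
        for label, prob in zip(labels, p):
            totals[label] += np.log(prob + 1e-12)
    return max(totals, key=totals.get)

# Example: three frames of the same tracked face (hypothetical scores).
frames = [{'anna': 2.1, 'bob': 0.3},
          {'anna': 1.7, 'bob': 1.5},
          {'anna': 0.9, 'bob': 0.8}]
print(combine_sequence_scores(frames))   # -> 'anna'
```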
Abstract:
The main objectives of this thesis are to validate an improved principal components analysis (IPCA) algorithm on images; to design and simulate a digital model for image compression, face recognition and image detection using a principal components analysis (PCA) algorithm and the IPCA algorithm; to design and simulate an optical model for face recognition and object detection using the joint transform correlator (JTC); to establish detection and recognition thresholds for each model; to compare the performance of the PCA algorithm with that of the IPCA algorithm in compression, recognition and detection; and to compare the performance of the digital model with that of the optical model in recognition and detection. The MATLAB software was used to simulate the models. PCA is a technique for identifying patterns in data and representing the data so as to highlight similarities and differences. Identifying patterns in high-dimensional data (more than three dimensions) is difficult because a graphical representation of the data is impossible, which makes PCA a powerful method for analyzing such data. IPCA is another statistical tool for identifying patterns in data; it uses information theory to improve on PCA. The joint transform correlator (JTC) is an optical correlator used to synthesize a frequency-plane filter for coherent optical systems. The IPCA algorithm generally behaves better than the PCA algorithm in most of the applications considered. It outperforms PCA in image compression, achieving higher compression, more accurate reconstruction, and faster processing with acceptable errors; it also outperforms PCA in real-time image detection, achieving the smallest error rate as well as remarkable speed. On the other hand, the PCA algorithm performs better than the IPCA algorithm in face recognition, offering an acceptable error rate, easy calculation, and reasonable speed. Finally, in detection and recognition, the digital model performs better than the optical model.
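For readers unfamiliar with the eigenface-style use of PCA that this thesis builds on, the sketch below (plain NumPy, with illustrative function names; it is not the thesis's MATLAB models, its IPCA variant, or the optical JTC model) shows how PCA supports both compression (projection and reconstruction) and recognition (nearest neighbour in the projected space):

```python
import numpy as np

def fit_pca(images, n_components):
    """images: (n_samples, n_pixels) matrix of flattened face images."""
    mean = images.mean(axis=0)
    centred = images - mean
    # Thin SVD: rows of Vt are the principal components ("eigenfaces").
    _, _, Vt = np.linalg.svd(centred, full_matrices=False)
    return mean, Vt[:n_components]

def project(images, mean, components):
    """Compression: represent each image by a few PCA coefficients."""
    return (images - mean) @ components.T

def reconstruct(codes, mean, components):
    """Approximate decompression from the PCA coefficients."""
    return codes @ components + mean

def identify(probe_code, gallery_codes, gallery_labels):
    """Recognition: nearest neighbour in the projected space."""
    d = np.linalg.norm(gallery_codes - probe_code, axis=1)
    return gallery_labels[int(np.argmin(d))]
```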
Abstract:
Identifying an individual from surveillance video is a difficult, time-consuming and labour-intensive process. The proposed system aims to streamline this process by filtering out unwanted scenes and enhancing an individual's face through super-resolution. An automatic face recognition system is then used to identify the subject or to present the human operator with likely matches from a database. A person tracker speeds up the subject detection and super-resolution process by tracking moving subjects and cropping a region of interest around the subject's face, reducing the number and size of the image frames to be super-resolved. In this paper, experiments demonstrate how the optical flow super-resolution method improves surveillance imagery both for visual inspection and for automatic face recognition on Eigenface and Elastic Bunch Graph Matching systems. The optical flow based method has also been benchmarked against the "hallucination" algorithm, interpolation methods and the original low-resolution images. Results show that both super-resolution algorithms improved recognition rates significantly. Although the hallucination method resulted in slightly higher recognition rates, the optical flow method produced fewer artifacts and more visually correct images suitable for human consumption.
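The optical flow super-resolution method itself is not reproduced here; as a much simpler stand-in for the general idea of fusing several registered low-resolution frames, the following sketch performs shift-and-add super-resolution under the assumption of purely translational motion (the function names, phase-correlation registration and upsampling factor are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import shift, zoom

def register_translation(ref, img):
    """Integer-pixel shift that aligns img to ref, via phase correlation."""
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap to signed offsets.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

def shift_and_add_sr(frames, factor=2):
    """Align each low-res frame to the first, upsample, and average."""
    ref = frames[0]
    acc = np.zeros((ref.shape[0] * factor, ref.shape[1] * factor))
    for f in frames:
        dy, dx = register_translation(ref, f)
        aligned = shift(f, (dy, dx), order=1, mode='nearest')
        acc += zoom(aligned, factor, order=3)   # bicubic upsampling
    return acc / len(frames)
```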
Abstract:
Clustering identities in a video is a useful task that aids video search, annotation and retrieval, and cast identification. However, reliably clustering faces across multiple videos is a challenging task due to variations in the appearance of the faces, as videos are captured in an uncontrolled environment. A person's appearance may vary due to session variations including lighting and background changes, occlusions, and changes in expression and make-up. In this paper we propose the novel Local Total Variability Modelling (Local TVM) approach to cluster faces across a news video corpus, and incorporate it into a novel two-stage video clustering system. We first cluster faces within a single video using colour, spatial and temporal cues; we then use face track modelling and hierarchical agglomerative clustering to cluster faces across the entire corpus. We compare different face recognition approaches within this framework. Experiments on a news video database show that the Local TVM technique is able to effectively model the session variation observed in the data, resulting in improved clustering performance with much greater computational efficiency than other methods.
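As a rough illustration of the second, corpus-level stage (not the Local TVM modelling itself; the cosine metric, average linkage and distance threshold are assumptions), hierarchical agglomerative clustering of per-track feature vectors can be written in a few lines with SciPy:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def cluster_face_tracks(track_embeddings, distance_threshold=0.6):
    """
    track_embeddings: (n_tracks, dim) array, one averaged feature vector per
    within-video face track (the output of the first, within-video stage).
    Returns an array of cluster labels, one per track.
    """
    # Cosine distance between every pair of track representations.
    d = pdist(track_embeddings, metric='cosine')
    # Average-linkage agglomerative clustering across the whole corpus.
    Z = linkage(d, method='average')
    return fcluster(Z, t=distance_threshold, criterion='distance')
```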
Abstract:
This dissertation establishes a novel system for human face learning and recognition based on incremental multilinear principal component analysis (PCA). Most existing face recognition systems need training data during the learning process. The system proposed in this dissertation uses an unsupervised or weakly supervised learning approach in which the learning phase requires a minimal amount of training data. It also overcomes the inability of traditional systems to adapt during the testing phase, where the decision process for newly acquired images continues to rely on the same old training data set. Consequently, when a new training set is to be used, the traditional approach requires that the entire eigensystem be generated again. To speed up this computation, the proposed method uses the eigensystem generated from the old training set together with the new images to generate the new eigensystem more efficiently, in a so-called incremental learning process. In the empirical evaluation, two key factors are essential in assessing the performance of the proposed method: (1) recognition accuracy and (2) computational complexity. To establish the most suitable algorithm for this research, a comparative analysis of the best performing methods was carried out first; its results advocated the initial use of multilinear PCA in this research. To address the computational complexity of the subspace update procedure, a novel incremental algorithm was established that combines the traditional sequential Karhunen-Loeve (SKL) algorithm with a newly developed incremental modified fast PCA algorithm. In order to use multilinear PCA in the incremental process, a new unfolding method was developed to affix the newly added data at the end of the previous data. The results of the incremental process based on these two methods bear out these theoretical improvements. Some object tracking results using video images are also provided as another challenging task to demonstrate the soundness of this incremental multilinear learning method.
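To make the incremental idea concrete, here is a hedged NumPy sketch of a sequential Karhunen-Loeve style update of an existing eigensystem with a batch of new images; it is a simplified, zero-mean, linear version and does not capture the dissertation's multilinear unfolding, mean updates, or its modified fast PCA step:

```python
import numpy as np

def incremental_svd_update(U, s, new_data, rank):
    """
    U: (d, k) current left singular vectors; s: (k,) singular values.
    new_data: (d, m) new, already mean-subtracted image columns.
    Returns an updated rank-`rank` basis without re-decomposing the old data.
    """
    proj = U.T @ new_data                 # part of new data inside the old subspace
    resid = new_data - U @ proj           # part orthogonal to the old subspace
    Q, R = np.linalg.qr(resid)
    k, m = s.shape[0], new_data.shape[1]
    # Small (k+m) x (k+m) core matrix whose SVD refreshes the factorisation.
    K = np.block([[np.diag(s), proj],
                  [np.zeros((m, k)), R]])
    Uk, sk, _ = np.linalg.svd(K, full_matrices=False)
    U_new = np.hstack([U, Q]) @ Uk[:, :rank]
    return U_new, sk[:rank]
```

The key point, matching the abstract, is that only the small core matrix is re-decomposed when new images arrive, rather than the full image matrix.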
Abstract:
In recent years face recognition systems have been applied in various useful applications, such as surveillance, access control, criminal investigations, law enforcement, and others. However, face biometric systems can be highly vulnerable to spoofing attacks, in which an impostor tries to bypass the face recognition system using a photo or video sequence. In this paper a novel liveness detection method, based on the 3D structure of the face, is proposed. By processing the 3D curvature of the acquired data, the proposed approach allows a biometric system to distinguish a real face from a photo, increasing the overall performance of the system and reducing its vulnerability. In order to test the real capability of the methodology, a 3D face database was collected simulating spoofing attacks, that is, using photographs instead of real faces. The experimental results show the effectiveness of the proposed approach.
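The paper's curvature analysis is not reproduced here; as a cruder illustration of the same liveness cue, the sketch below fits a plane to an acquired facial depth map and uses the residual relief as a live-versus-photo score (the function names, threshold and units are assumptions that would have to be tuned on a 3D face database):

```python
import numpy as np

def planarity_liveness_score(depth_map, valid_mask=None):
    """
    Fit a plane z = a*x + b*y + c to the facial depth map and return the
    RMS residual: a printed photo is nearly planar (small residual),
    while a genuine face has pronounced 3D relief (large residual).
    """
    h, w = depth_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    if valid_mask is None:
        valid_mask = np.isfinite(depth_map)
    A = np.column_stack([xs[valid_mask], ys[valid_mask],
                         np.ones(valid_mask.sum())])
    z = depth_map[valid_mask]
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    residual = z - A @ coeffs
    return float(np.sqrt(np.mean(residual ** 2)))

def is_live(depth_map, threshold=3.0):
    """Hypothetical threshold, e.g. in millimetres of depth relief."""
    return planarity_liveness_score(depth_map) > threshold
```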
Abstract:
Non-rigid face alignment is a very important task in a wide range of applications, but existing tracking-based non-rigid face alignment methods are either inaccurate or require a person-specific model. This dissertation develops simultaneous alignment algorithms that overcome these constraints and provide alignment with high accuracy, efficiency, and robustness to varying image conditions while requiring only a generic model.