15 results for Multi-Modal Biometrics, User Authentication, Fingerprint Recognition, Palm Print Recognition
at Universidade do Minho
Abstract:
Immune systems have inspired approaches to several computational problems in recent years. This paper focuses on improving the accuracy of behavioural biometric authentication algorithms by applying them more than once, with different thresholds: a first pass simulates the protection provided by the skin, and a second pass then looks for known outside entities, as lymphocytes do. The paper describes the principles that support applying this approach to Keystroke Dynamics, an authentication biometric technology that decides on the legitimacy of a user based on the typing pattern captured as the user enters the username and/or the password. As a proof of concept, the accuracy of one keystroke dynamics algorithm applied to five legitimate users of a system is calculated for both the traditional and the immune-inspired approaches, and the obtained results are compared.
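The two-pass idea described above can be sketched in a few lines. Everything here is an illustrative assumption — the distance measure, the threshold values and the function names are not taken from the paper; they only show the shape of running the same test twice with a loose "skin" threshold and a strict "lymphocyte" threshold.

```python
def timing_distance(sample, template):
    """Mean absolute difference between key-hold timing vectors (ms).
    A stand-in distance; the paper's actual measure may differ."""
    return sum(abs(s - t) for s, t in zip(sample, template)) / len(template)

def authenticate(sample, template, skin_threshold=80.0, lymphocyte_threshold=40.0):
    """Run the same distance test twice with different thresholds.

    Pass 1 ("skin"): a loose threshold rejects clearly foreign patterns.
    Pass 2 ("lymphocyte"): a strict threshold accepts only close matches.
    Threshold values are illustrative assumptions.
    """
    d = timing_distance(sample, template)
    if d > skin_threshold:            # outer barrier: obvious impostor
        return False
    return d <= lymphocyte_threshold  # inner check: known-self match

template = [105.0, 98.0, 130.0, 88.0]
print(authenticate([110.0, 95.0, 128.0, 90.0], template))  # close match -> True
print(authenticate([200.0, 40.0, 210.0, 30.0], template))  # far off    -> False
```

A sample that clears the skin threshold but not the lymphocyte threshold is still rejected, which is the accuracy-enhancing effect the abstract describes.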
Abstract:
Biometric systems are increasingly used as a means of authentication to provide system security in modern technologies. The performance of a biometric system depends on its accuracy, processing speed, template size, and the time necessary for enrollment. While much research has focused on the first three factors, enrollment time has received less attention. In this work, we present the findings of our research on users' behavior when enrolling in a biometric system. Specifically, we collected information about users' availability for enrollment with respect to hand recognition systems (e.g., hand geometry, palm geometry, or any other requiring the hand to be positioned on an optical scanner). A sample of 19 participants, chosen randomly regardless of age, gender, profession and nationality, served as test subjects in an experiment studying the patience of users enrolling in a biometric hand recognition system.
Abstract:
"Lecture notes in computational vision and biomechanics series, ISSN 2212-9391, vol. 19"
Abstract:
Vision-based hand gesture recognition is an area of active research in computer vision and machine learning. Being a natural way of human interaction, it is an area where many researchers are working, with the goal of making human-computer interaction (HCI) easier and more natural, without the need for any extra devices. The primary goal of gesture recognition research is therefore to create systems that can identify specific human gestures and use them, for example, to convey information. To that end, vision-based hand gesture interfaces require fast and extremely robust hand detection and gesture recognition in real time. Hand gestures are a powerful human communication modality with many potential applications, and in this context we have sign language recognition, the communication method of deaf people. Sign languages are not standard and universal, and their grammars differ from country to country. In this paper, a real-time system able to interpret Portuguese Sign Language is presented and described. Experiments showed that the system was able to reliably recognize the vowels in real time, with an accuracy of 99.4% with one dataset of features and an accuracy of 99.6% with a second dataset of features. Although the implemented solution was only trained to recognize the vowels, it is easily extended to recognize the rest of the alphabet, making it a solid foundation for the development of any vision-based sign language recognition user interface system.
Abstract:
Hand gesture recognition, being a natural way of human-computer interaction, is an area of active research in computer vision and machine learning. It is an area with many possible applications, giving users a simpler and more natural way to communicate with robot and system interfaces without the need for extra devices. The primary goal of gesture recognition research is therefore to create systems that can identify specific human gestures and use them to convey information or for device control. To that end, vision-based hand gesture interfaces require fast and extremely robust hand detection and gesture recognition in real time. In this study we try to identify hand features that, in isolation, respond best in various human-computer interaction situations. The extracted features are used to train a set of classifiers with the help of RapidMiner in order to find the best learner. A dataset with our own gesture vocabulary of 10 gestures, recorded from 20 users, was created for later processing. Experimental results show that the radial signature and the centroid distance are the features that, when used separately, obtain the best results, with accuracies of 91% and 90.1% respectively, obtained with a Neural Network classifier. These two methods also have the advantage of low computational complexity, which makes them good candidates for real-time hand gesture recognition.
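The centroid-distance feature named above can be sketched quite compactly: it is the distance from the contour's centroid to points sampled along the contour, usually normalised for scale. This is a minimal illustration assuming a contour given as a list of (x, y) points; the sampling rate and normalisation are choices of this sketch, not necessarily those of the study.

```python
import math

def centroid_distance_signature(contour, n_samples=16):
    """Distances from the contour's centroid to uniformly sampled contour
    points, scaled by the maximum -- a simple shape descriptor for a hand."""
    cx = sum(x for x, _ in contour) / len(contour)
    cy = sum(y for _, y in contour) / len(contour)
    step = len(contour) / n_samples
    sig = [math.hypot(contour[int(i * step)][0] - cx,
                      contour[int(i * step)][1] - cy)
           for i in range(n_samples)]
    peak = max(sig)                   # scale-normalise to [0, 1]
    return [d / peak for d in sig]

# A square contour: every corner is equidistant from the centroid,
# so the normalised signature is flat.
print(centroid_distance_signature([(0, 0), (2, 0), (2, 2), (0, 2)], n_samples=4))
```

The resulting fixed-length vector is what would be fed to a classifier such as the Neural Network mentioned in the abstract.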
Abstract:
Hand gestures are a powerful means of human communication, with many potential applications in the area of human-computer interaction. Vision-based hand gesture recognition techniques have many proven advantages compared with traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture based on computer vision and machine learning, able to be used with any interface for human-computer interaction. The proposed solution is composed mainly of three modules: a pre-processing and hand segmentation module, a static gesture interface module and a dynamic gesture interface module. The experiments showed that the core of vision-based interaction systems could be the same for all applications, thus facilitating implementation. For hand posture recognition, an SVM (Support Vector Machine) model was trained and used, achieving a final accuracy of 99.4%. For dynamic gestures, an HMM (Hidden Markov Model) was trained for each gesture the system could recognize, with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic, with the trained models able to work in real time, allowing its application in a wide range of human-machine applications. To validate the proposed framework, two applications were implemented: a real-time system able to interpret Portuguese Sign Language, and an online system able to help a robotic soccer referee judge a game in real time.
Abstract:
In this paper, we present an integrated system for real-time automatic detection of human actions from video. The proposed approach uses the boundary of humans as the main feature for recognizing actions. Background subtraction is performed using a Gaussian mixture model. Features are then extracted from the silhouettes, and Vector Quantization is used to map features into symbols (a bag-of-words approach). Finally, actions are detected using a Hidden Markov Model. The proposed system was validated using a newly collected real-world dataset. The obtained results show that the system is capable of robust human detection in both indoor and outdoor environments. Moreover, promising classification results were achieved when detecting two basic human actions: walking and sitting.
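The Vector Quantization step in the pipeline above — turning each silhouette feature vector into a discrete symbol for the HMM — can be sketched as a nearest-codeword lookup. The codebook values below are illustrative placeholders, not the learned codebook from the paper.

```python
def quantize(features, codebook):
    """Map each feature vector to the index of its nearest codeword
    (squared Euclidean distance) -- the symbol stream fed to the HMM."""
    def nearest(v):
        return min(range(len(codebook)),
                   key=lambda k: sum((a - b) ** 2
                                     for a, b in zip(v, codebook[k])))
    return [nearest(v) for v in features]

# Toy 2-D codebook with three codewords; real silhouette features
# would be higher-dimensional and the codebook learned (e.g. by k-means).
codebook = [[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]]
print(quantize([[0.1, 0.2], [0.9, 1.1], [0.1, 0.8]], codebook))  # -> [0, 1, 2]
```

The resulting symbol sequence (one symbol per frame) is exactly the kind of discrete observation sequence a standard HMM decoder consumes.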
Abstract:
Novel input modalities such as touch, tangibles or gestures try to exploit humans' innate skills rather than imposing new learning processes. However, despite the recent boom of different natural interaction paradigms, it has not been systematically evaluated how these interfaces influence a user's performance, or whether each interface could be more or less appropriate for: 1) different age groups; and 2) different basic operations, such as data selection, insertion or manipulation. This work presents the first step of an exploratory evaluation of whether or not users' performance is indeed influenced by the different interfaces. The key point is to understand how different interaction paradigms affect specific target audiences (children, adults and older adults) when dealing with a selection task. 60 participants took part in this study to assess how different interfaces may influence the interaction of specific groups of users with regard to their age. Four input modalities were used to perform a selection task, and the methodology was based on usability testing (speed, accuracy and user preference). The study suggests a statistically significant difference between mean selection times for each group of users, and also raises new issues regarding the "old" mouse input versus the "new" input modalities.
Abstract:
Palm oil (PO) is a very important commodity for many countries, especially Indonesia and Malaysia, which are the predominant producers. PO is used in ca. 30% of supermarket foods, in cosmetics, in cooking, and as biodiesel. The growth of oil palms in plantations is controversial, as the production methods contribute to climate change and cause environmental damage [1]. In these two countries the plant is subject to a devastating disease caused by the white rot fungus Ganoderma. There are no satisfactory methods to diagnose the disease in the plant, as existing ones are too slow and/or inaccurate. The lipid compound ergosterol is unique to fungi and is used to measure fungal growth, especially in solid substrates. We report here on the use of ergosterol to measure the growth of Ganoderma in oil palms using HPLC and TLC methods [2]. The method is rapid, correlates well with other methods and is capable of being used on-site, hence improving the speed of analysis and allowing remedial action. Climate change will affect the health of oil palms [1], and rapid detection methods will be increasingly required to control the disease. [1] Paterson, RRM, Kumar, L, Taylor, S, Lima, N. Future climate effects on suitability for growth of oil palms in Malaysia and Indonesia. Scientific Reports, 5, 2015, 14457. [2] Muniroh, MS, Sariah, M, Zainal Abidin, MA, Lima, N, Paterson, RRM. Rapid detection of Ganoderma-infected oil palms by microwave ergosterol extraction with HPLC and TLC. Journal of Microbiological Methods, 100, 2014, 143–147.
Abstract:
Open Display Networks have the potential to allow many content creators to publish their media to an open-ended set of screen displays. However, this raises the issue of how to match that content to the right displays. In this study, we aim to understand how the perceived utility of particular media sharing scenarios is affected by three independent variables: (a) the locativeness of the content being shared; (b) how personal that content is; and (c) the scope in which it is being shared. To assess these effects, we composed a set of 24 media sharing scenarios embedded with different treatments of our three independent variables. We then asked 100 participants to express their perception of the relevance of those scenarios. The results suggest a clear preference for scenarios where content is both local and directly related to the person publishing it. This is in stark contrast to the types of content commonly found on public displays, and confirms that open display networks may represent a new medium for self-expression. This novel understanding may inform the design of new publication paradigms that will enable people to share media across open display networks.
Abstract:
Integrated master's dissertation in Information Systems Engineering and Management (Engenharia e Gestão de Sistemas de Informação)
Abstract:
Integrated master's dissertation in Information Systems Engineering and Management (Engenharia e Gestão de Sistemas de Informação)
Abstract:
Doctoral thesis in Electronics and Computer Engineering (Engenharia de Eletrónica e de Computadores)
Abstract:
Still natural mineral waters, sparkling natural mineral waters and fruit-flavored aromatized waters (still or sparkling) are an emerging market. In this work, the capability of a potentiometric electronic tongue, comprising lipid polymeric membranes, to quantitatively estimate routine physicochemical quality parameters (pH and conductivity) and to qualitatively classify water samples according to the type of water was evaluated. The study showed that a linear discriminant model, based on 21 sensors selected by the simulated annealing algorithm, could correctly classify 100% of the water samples (leave-one-out cross-validation). This potential was further demonstrated by applying a repeated K-fold cross-validation (guaranteeing that at least 15% of independent samples were used only for internal validation), for which 96% of correct classifications were attained. The satisfactory recognition performance of the E-tongue could be attributed to the pH, conductivity, sugar and organic acid contents of the studied waters, which resulted in significant differences in sweetness perception indexes and total acid flavor. Moreover, the E-tongue combined with multivariate linear regression models, based on sub-sets of sensors selected by the simulated annealing algorithm, could accurately estimate the waters' pH (25 sensors: R² equal to 0.99 and 0.97 for leave-one-out and repeated K-fold cross-validation, respectively) and conductivity (23 sensors: R² equal to 0.997 and 0.99 for leave-one-out and repeated K-fold cross-validation, respectively). The overall satisfactory results achieved allow envisaging a potential future application of electronic tongue devices for bottled water analysis and classification.
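The leave-one-out validation scheme used above is easy to illustrate: each sample is held out once, the model is fit on the rest, and the hit rate over all hold-outs is reported. This sketch uses a toy nearest-centroid classifier as a deliberately simple stand-in for the linear discriminant model, and made-up two-sensor readings in place of the 21-sensor E-tongue data.

```python
def nearest_centroid_predict(train_X, train_y, x):
    """Toy stand-in for the LDA classifier: assign x to the class whose
    training centroid is closest (squared Euclidean distance)."""
    best, best_d = None, float("inf")
    for c in set(train_y):
        pts = [v for v, lab in zip(train_X, train_y) if lab == c]
        cen = [sum(col) / len(pts) for col in zip(*pts)]
        d = sum((a - b) ** 2 for a, b in zip(x, cen))
        if d < best_d:
            best, best_d = c, d
    return best

def leave_one_out_accuracy(X, y):
    """Hold each sample out once, train on the rest, report the hit rate."""
    hits = 0
    for i in range(len(X)):
        pred = nearest_centroid_predict(X[:i] + X[i + 1:], y[:i] + y[i + 1:], X[i])
        hits += pred == y[i]
    return hits / len(X)

# Illustrative two-"sensor" readings for two water types.
X = [[1.0, 0.1], [1.1, 0.0], [0.0, 1.0], [0.1, 1.1]]
y = ["still", "still", "sparkling", "sparkling"]
print(leave_one_out_accuracy(X, y))  # -> 1.0
```

The repeated K-fold variant mentioned in the abstract differs only in how the hold-out sets are formed (folds of at least 15% of the samples, repeated over shuffles) while the fit-then-score loop stays the same.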