29 results for Place recognition algorithm
at QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
We study the predictability of a theoretical model for earthquakes, using a pattern recognition algorithm similar to the CN and M8 algorithms known in seismology. The model, which is a stochastic spring-block model with both global correlation and local interaction, becomes more predictable as the strength of the global correlation or the local interaction is increased.
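The abstract does not give the model's equations; as a rough illustration only, here is a minimal stochastic spring-block cellular model combining a nearest-neighbour (local) stress transfer with a mean-field (global) coupling. The driving noise, failure rule, and the `local` and `global_c` fractions are all illustrative assumptions, not the paper's specification.

```python
import numpy as np

def spring_block_step(stress, threshold=1.0, local=0.2, global_c=0.1, rng=None):
    """One driving step of a toy stochastic spring-block model.

    Each block accumulates a small random load; any block whose stress
    reaches `threshold` fails, dropping to zero and passing a fraction
    `local` of its stress to each nearest neighbour (local interaction)
    plus a fraction `global_c` shared equally by all blocks (global
    correlation). Returns the relaxed stress field and the event count.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    stress = stress + rng.uniform(0.0, 0.05, size=stress.shape)  # slow driving
    events = 0
    while np.any(stress >= threshold):
        i = int(np.argmax(stress))
        s = stress[i]
        stress[i] = 0.0
        stress[(i - 1) % len(stress)] += local * s   # local neighbour transfer
        stress[(i + 1) % len(stress)] += local * s
        stress += global_c * s / len(stress)         # global mean-field coupling
        events += 1                                  # count cascade failures
    return stress, events
```

Because each failure dissipates half of the failed block's stress (1 − 2·0.2 − 0.1), cascades always terminate; raising `local` or `global_c` strengthens the correlations the paper links to predictability.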
Abstract:
This paper argues that biometric verification evaluations can obscure vulnerabilities that increase the chances that an attacker could be falsely accepted. This can occur because existing evaluations implicitly assume that an imposter claiming a false identity would claim a random identity rather than consciously selecting a target to impersonate. This paper shows how an attacker can select a target with a similar biometric signature in order to increase their chances of false acceptance. It demonstrates this effect using a publicly available iris recognition algorithm. The evaluation shows that the system can be vulnerable to attackers targeting subjects who are enrolled with a smaller section of iris due to occlusion. The evaluation shows how the traditional DET curve analysis conceals this vulnerability. As a result, traditional analysis underestimates the importance of an existing score normalisation method for addressing occlusion. The paper concludes by evaluating how the targeted false acceptance rate increases with the number of available targets. Consistent with a previous investigation of targeted face verification performance, the experiment shows that the false acceptance rate can be modelled using the traditional FAR measure with an additional term that is proportional to the logarithm of the number of available targets.
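The closing observation can be written as a one-line model: the targeted false-acceptance rate equals the traditional FAR plus a term proportional to the logarithm of the number of available targets. The coefficient `c` below is purely illustrative; the paper fits it empirically.

```python
import math

def targeted_far(base_far, n_targets, c=0.001):
    """Targeted false-acceptance rate as described in the abstract:
    traditional FAR plus a term proportional to log(number of targets).
    `c` is an illustrative coefficient, not a value from the paper."""
    return base_far + c * math.log(n_targets)
```

With a single available target the log term vanishes and the model reduces to the traditional FAR, which is why untargeted evaluations underestimate the risk.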
Abstract:
This paper investigates the problem of speaker identification and verification in noisy conditions, assuming that speech signals are corrupted by environmental noise but that knowledge about the noise characteristics is not available. This research is motivated in part by the potential application of speaker recognition technologies on handheld devices or the Internet. While the technologies promise an additional biometric layer of security to protect the user, the practical implementation of such systems faces many challenges. One of these is environmental noise. Due to the mobile nature of such systems, the noise sources can be highly time-varying and potentially unknown. This raises the requirement for noise robustness in the absence of information about the noise. This paper describes a method that combines multicondition model training and missing-feature theory to model noise with unknown temporal-spectral characteristics. Multicondition training is conducted using simulated noisy data with limited noise variation, providing a "coarse" compensation for the noise, and missing-feature theory is applied to refine the compensation by ignoring noise variation outside the given training conditions, thereby reducing the training and testing mismatch. This paper is focused on several issues relating to the implementation of the new model for real-world applications. These include the generation of multicondition training data to model noisy speech, the combination of different training data to optimize the recognition performance, and the reduction of the model's complexity. The new algorithm was tested using two databases with simulated and realistic noisy speech data. The first database is a redevelopment of the TIMIT database by rerecording the data in the presence of various noise types, used to test the model for speaker identification with a focus on the varieties of noise. The second database is a handheld-device database collected in realistic noisy conditions, used to further validate the model for real-world speaker verification. The new model is compared to baseline systems and is found to achieve lower error rates.
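A minimal sketch of the missing-feature side of such a method: score a diagonal-Gaussian speaker model using only the spectral components flagged reliable, for example by a local-SNR mask. The Gaussian model, the SNR threshold, and both function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def masked_loglik(x, mean, var, mask):
    """Marginal log-likelihood of a diagonal-Gaussian speaker model using
    only the feature components flagged reliable by `mask` (missing-feature
    marginalisation: noise-dominated components are simply ignored)."""
    m = np.asarray(mask, dtype=bool)
    d = x[m] - mean[m]
    return -0.5 * np.sum(np.log(2 * np.pi * var[m]) + d * d / var[m])

def snr_mask(noisy_power, noise_est, threshold_db=0.0):
    """Flag a spectral feature reliable when its estimated local SNR
    exceeds `threshold_db` (an illustrative reliability criterion)."""
    snr_db = 10 * np.log10(np.maximum(noisy_power - noise_est, 1e-12) / noise_est)
    return snr_db > threshold_db
```

In the paper's scheme the multicondition models provide the coarse match, and a mask of this kind then removes components whose noise falls outside the trained conditions.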
Abstract:
This paper introduces a recursive rule base adjustment to enhance the performance of fuzzy logic controllers. Here the fuzzy controller is constructed on the basis of a decision table (DT), relying on membership functions and fuzzy rules that incorporate heuristic knowledge and operator experience. If the controller performance is not satisfactory, it has previously been suggested that the rule base be altered by combined tuning of membership functions and controller scaling factors. The alternative approach proposed here entails alteration of the fuzzy rule base itself. The recursive rule base adjustment algorithm proposed in this paper has the benefit that it is computationally more efficient for the generation of a DT, an advantage for online realization. Simulation results are presented to support this thesis. (c) 2005 Elsevier B.V. All rights reserved.
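As a toy illustration of the setup, here is a decision-table fuzzy controller with triangular membership functions, plus a rule-table nudge in the spirit of, but not identical to, the paper's recursive rule base adjustment. The membership layout, centres, and update rate are invented for illustration.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def dt_output(dt, e, de, centres):
    """Decision-table fuzzy controller: fire rules from error e and
    error change de, then defuzzify by a weighted average of DT entries."""
    mu_e = np.array([tri(e, c - 1, c, c + 1) for c in centres])
    mu_de = np.array([tri(de, c - 1, c, c + 1) for c in centres])
    w = np.outer(mu_e, mu_de)                       # rule firing strengths
    return float(np.sum(w * dt) / np.maximum(np.sum(w), 1e-12))

def adjust_dt(dt, e, de, centres, observed_err, rate=0.1):
    """Illustrative (not the paper's) rule-base adjustment: nudge the most
    strongly fired DT cells in proportion to the observed tracking error,
    so the table itself, rather than scaling factors, is retuned."""
    mu_e = np.array([tri(e, c - 1, c, c + 1) for c in centres])
    mu_de = np.array([tri(de, c - 1, c, c + 1) for c in centres])
    return dt - rate * observed_err * np.outer(mu_e, mu_de)
```

Adjusting the DT entries directly, as sketched in `adjust_dt`, is cheaper per step than re-deriving the table from retuned membership functions, which is the efficiency argument the abstract makes for online use.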
Abstract:
T cell immune responses to central nervous system-derived and other self-antigens are commonly described in both healthy and autoimmune individuals. However, in the case of the human prion protein (PrP), it has been argued that immunologic tolerance is uncommonly robust. Although development of an effective vaccine for prion disease requires breaking of tolerance to PrP, the extent of immune tolerance to PrP and the identity of immunodominant regions of the protein have not previously been determined in humans. We analyzed PrP T cell epitopes both by using a predictive algorithm and by measuring functional immune responses from healthy donors. Interestingly, clusters of epitopes were focused around the area of the polymorphic residue 129, previously identified as an indicator of susceptibility to prion disease, and in the C-terminal region. Moreover, responses were seen to PrP peptide 121-134 containing methionine at position 129, whereas PrP 121-134 [129V] was not immunogenic. The residue 129 polymorphism was also associated with distinct patterns of cytokine response: PrP 128-141 [129M] induced IL-4 and IL-6 production, which was not seen in response to PrP 128-141 [129V]. Our data suggest that the immunogenic regions of human PrP lie between residue 107 and the C-terminus and that, as with many other central nervous system antigens, healthy individuals carry responses to PrP within the T cell repertoire and yet do not experience deleterious autoimmune reactions.
Abstract:
The least-mean-fourth (LMF) algorithm is known for its fast convergence and lower steady-state error, especially in sub-Gaussian noise environments. Recent work on normalised versions of the LMF algorithm has further enhanced its stability and performance in both Gaussian and sub-Gaussian noise environments. For example, the recently developed normalised LMF (XE-NLMF) algorithm is normalised by the mixed signal and error powers, and weighted by a fixed mixed-power parameter. Unfortunately, the performance of this algorithm depends on the selection of that mixing parameter. In this work, a time-varying mixed-power parameter technique is introduced to overcome this dependency. A convergence analysis, transient analysis, and steady-state behaviour of the proposed algorithm are derived and verified through simulations. An enhancement in performance is obtained through the use of this technique in two different scenarios. Moreover, the tracking analysis of the proposed algorithm is carried out in the presence of two sources of nonstationarity: (1) carrier frequency offset between transmitter and receiver and (2) random variations in the environment. Close agreement between analysis and simulation results is obtained. The results show that, unlike in the stationary case, the steady-state excess mean-square error is not a monotonically increasing function of the step size. (c) 2007 Elsevier B.V. All rights reserved.
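A sketch of the fixed-parameter XE-NLMF baseline that the paper starts from: the LMF update mu·e³·u, normalised by a convex mix of input power and error power with a fixed mixing parameter `alpha`. The proposed algorithm replaces the fixed `alpha` with a time-varying one (whose update rule the abstract does not give); the filter order, step size, and data layout here are illustrative.

```python
import numpy as np

def xe_nlmf(x, d, order=4, mu=0.1, alpha=0.5, eps=1e-6):
    """Fixed-parameter XE-NLMF sketch: least-mean-fourth update normalised
    by a convex mix of input power (u @ u) and error power (e * e).
    `alpha` is the fixed mixed-power parameter the paper makes adaptive."""
    w = np.zeros(order)
    errs = []
    for n in range(order, len(x)):
        u = x[n - order + 1:n + 1][::-1]    # regressor [x[n], ..., x[n-order+1]]
        e = d[n] - w @ u                    # a-priori estimation error
        norm = alpha * (u @ u) + (1 - alpha) * e * e + eps
        w = w + mu * (e ** 3) * u / norm    # normalised LMF step
        errs.append(e)
    return w, np.array(errs)
```

The error-power term in the denominator tames the cubic error factor when errors are large, which is what restores stability relative to the raw LMF recursion.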
Abstract:
For some time there has been considerable interest in variable step-size methods for adaptive filtering. Recently, several stochastic gradient algorithms have been proposed that are based on cost functions with an exponential dependence on the chosen error. However, we have found that the cost function based on the exponential of the squared error does not always converge satisfactorily. In this paper we modify this cost function in order to improve its convergence, obtaining the novel ECVSS (exponentiated convex variable step-size) stochastic gradient algorithm. The proposed technique has attractive properties in both stationary and abrupt-change situations. (C) 2010 Elsevier B.V. All rights reserved.
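The abstract does not give the modified cost function, so the following is a generic exponentiated variable step-size LMS in the same spirit, with step size mu(n) = mu_max·(1 − exp(−gamma·e²)). It is offered only to illustrate error-dependent step sizes, not the ECVSS rule itself; `mu_max` and `gamma` are invented tuning knobs.

```python
import numpy as np

def vss_lms_exp(x, d, order=4, mu_max=0.5, gamma=1.0, eps=1e-6):
    """Illustrative exponentiated variable step-size LMS (not the paper's
    ECVSS rule): the step grows toward mu_max with an exponentiated
    function of the squared error, so large errors after an abrupt change
    adapt fast while small steady-state errors get a small step."""
    w = np.zeros(order)
    errs = []
    for n in range(order, len(x)):
        u = x[n - order + 1:n + 1][::-1]          # regressor vector
        e = d[n] - w @ u                          # a-priori error
        mu = mu_max * (1.0 - np.exp(-gamma * e * e))
        w = w + mu * e * u / (u @ u + eps)        # NLMS-style normalised step
        errs.append(e)
    return w, np.array(errs)
```

This captures the trade-off the abstract targets: fast reconvergence after abrupt changes with low steady-state misadjustment.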
Abstract:
This paper introduces a new technique for palmprint recognition based on Fisher Linear Discriminant Analysis (FLDA) and a Gabor filter bank. This method involves convolving a palmprint image with a bank of Gabor filters at different scales and rotations for robust palmprint feature extraction. Once these features are extracted, FLDA is applied for dimensionality reduction and class separability. Since the palmprint features are derived from the principal lines, wrinkles and texture along the palm area, the palm region used for feature extraction must be selected carefully in order to enhance recognition accuracy. To address this problem, an improved region of interest (ROI) extraction algorithm is introduced. This algorithm allows for an efficient extraction of the whole palm area while ignoring all the undesirable parts, such as the fingers and background. Experiments have shown that the proposed method yields attractive performance, as evidenced by an Equal Error Rate (EER) of 0.03%.
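A minimal numpy sketch of the Gabor-filter-bank stage only (the FLDA step and the ROI extraction algorithm are omitted): build real Gabor kernels at a few scales and orientations, filter the ROI-cropped palm image via FFT (circular) convolution, and pool mean absolute responses into a feature vector. Kernel size, scales, and the pooling choice are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lam):
    """Real Gabor kernel: Gaussian envelope times a cosine carrier with
    wavelength `lam` (pixels), rotated by angle `theta`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(img, scales=(4.0, 8.0), n_orient=4):
    """Filter a palm image with a small Gabor bank (2 scales x 4
    orientations) via circular FFT convolution and pool mean absolute
    responses; FLDA would then be applied to vectors like this."""
    feats = []
    F = np.fft.rfft2(img)
    for lam in scales:
        for k in range(n_orient):
            ker = gabor_kernel(15, lam / 2, np.pi * k / n_orient, lam)
            pad = np.zeros_like(img)
            pad[:ker.shape[0], :ker.shape[1]] = ker   # zero-pad kernel to image size
            resp = np.fft.irfft2(F * np.fft.rfft2(pad), s=img.shape)
            feats.append(np.mean(np.abs(resp)))
    return np.array(feats)
```

In the paper the filter responses are computed over the carefully extracted ROI precisely so that principal lines and wrinkles, rather than fingers or background, dominate these features.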
Abstract:
Based on an algorithm for pattern matching in character strings, we implement a pattern matching machine that searches for occurrences of patterns in multidimensional time series. Before the search process takes place, time series are encoded in user-designed alphabets. The patterns, on the other hand, are formulated as regular expressions that are composed of letters from these alphabets and operators. Furthermore, we develop a genetic algorithm to breed patterns that maximize a user-defined fitness function. In an application to financial data, we show that patterns bred to predict high exchange rates volatility in training samples retain statistically significant predictive power in validation samples.
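A minimal sketch of the encoding-plus-regex idea (the alphabet, thresholds, and pattern below are invented for illustration, and the genetic algorithm that breeds patterns is omitted): discretise the series into letters, then search it with an ordinary regular expression.

```python
import re

def encode(series, thresholds=(-0.5, 0.5), alphabet="abc"):
    """Encode a numeric series into a user-designed alphabet:
    'a' below the lower threshold, 'b' between, 'c' above."""
    out = []
    for v in series:
        idx = sum(v > t for t in thresholds)  # count thresholds exceeded
        out.append(alphabet[idx])
    return "".join(out)

# A pattern is then an ordinary regular expression over that alphabet,
# e.g. "a run of two or more extreme moves in either direction":
pattern = re.compile(r"c{2,}|a{2,}")

s = encode([0.1, 0.9, 1.2, -0.8, -1.1, 0.0])
# s == "bccaab", which contains the run "cc", so the pattern fires
assert pattern.search(s) is not None
```

A genetic algorithm can then mutate and recombine such expressions, scoring each by a user-defined fitness such as out-of-sample volatility prediction.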
Abstract:
This paper presents a feature selection method for data classification, which combines a model-based variable selection technique and a fast two-stage subset selection algorithm. The relationship between a specified (and complete) set of candidate features and the class label is modelled using a non-linear full regression model which is linear-in-the-parameters. The performance of a sub-model, measured by the sum of squared errors (SSE), is used to score the informativeness of the subset of features involved in that sub-model. The two-stage subset selection algorithm approaches a solution sub-model at which the SSE is locally minimized. The features involved in the solution sub-model are then selected as inputs to support vector machines (SVMs) for classification. The memory requirement of this algorithm is independent of the number of training patterns, a property that makes the method suitable for applications executed on mobile devices where physical memory is very limited. An application was developed for activity recognition, which implements the proposed feature selection algorithm and an SVM training procedure. Experiments are carried out with the application running on a PDA for human activity recognition using accelerometer data. A comparison with an information-gain-based feature selection method demonstrates the effectiveness and efficiency of the proposed algorithm.
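A greedy sketch of SSE-scored subset selection on a linear-in-the-parameters model, showing only the scoring idea: at each stage, add the candidate feature whose inclusion most reduces the residual sum of squared errors. The paper's two-stage algorithm is more refined (it revisits earlier choices to escape purely greedy solutions), and this plain least-squares formulation is an illustrative stand-in.

```python
import numpy as np

def forward_select(X, y, k):
    """Greedy SSE-scored subset selection: fit a linear-in-the-parameters
    model by least squares for each candidate extension of the current
    subset and keep the feature giving the lowest SSE. Returns the chosen
    column indices and the final SSE."""
    chosen = []
    best_sse = np.inf
    for _ in range(k):
        best = None
        best_sse = np.inf
        for j in range(X.shape[1]):
            if j in chosen:
                continue
            cols = X[:, chosen + [j]]
            theta, *_ = np.linalg.lstsq(cols, y, rcond=None)  # sub-model fit
            sse = np.sum((y - cols @ theta) ** 2)             # sub-model score
            if sse < best_sse:
                best, best_sse = j, sse
        chosen.append(best)
    return chosen, best_sse
```

The selected columns would then feed the SVM classifier; only the model parameters and running sums need to be held in memory, which is the property the abstract highlights for mobile devices.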