258 results for CRITERION


Relevance:

10.00%

Publisher:

Abstract:

The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals).

Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean square algorithm, including a modular structure, easily guaranteed stability, lower sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of the filter coefficients (normally called reflection coefficients).

We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. The optimal lattice filter is then derived for frequency modulated signals by computing the optimal values of the residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals by computing the average tracking model of these coefficients for the stochastic gradient lattice algorithm. The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations.

Using these analytical results, we show a new property of adaptive lattice filters: the polynomial order reducing property, which may be used to reduce the order of the polynomial phase of input frequency modulated signals. Two examples show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that this technique yields a better probability of detection for the reduced-order phase signals than the traditional energy detector. It is also empirically shown that the distribution of the gradient noise in the first adaptive reflection coefficient approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that this technique achieves a lower mean square error for the estimated frequencies at high signal-to-noise ratios than the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signal, i.e., impulsive autoregressive processes with alpha-stable distributions.
The concept of alpha-stable distributions is first introduced. We discuss how the stochastic gradient algorithm, which performs well for finite-variance input signals (such as frequency modulated signals in noise), converges slowly for infinite-variance stable processes because it relies on the minimum mean-square error criterion. To deal with such problems, the minimum dispersion criterion, fractional lower-order moments, and recently developed algorithms for stable processes are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean p-norm lattice algorithm and its normalized version, are proposed for lattice filters based on fractional lower-order moments. Simulation results show that the proposed algorithms achieve faster convergence for parameter estimation of autoregressive stable processes with low to moderate degrees of impulsiveness than many other algorithms. We also discuss the effect of the impulsiveness of stable processes on the misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is investigated only through extensive computer simulations.
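The abstract does not reproduce the update equations, but the flavour of the proposed least-mean p-norm lattice algorithm can be sketched for a single real-valued first stage. The recursions below are the standard lattice equations; the p-norm gradient step, step size, and function name are assumptions based on generic fractional lower-order moment adaptive filtering, not the thesis's exact algorithm:

```python
import numpy as np

def lmp_lattice_first_stage(x, p=1.5, mu=0.01):
    """Hypothetical first-stage least-mean p-norm (LMP) lattice update.

    x  : real-valued input signal (1-D array); f0(t) = b0(t) = x(t)
    p  : fractional lower-order moment exponent (1 <= p < alpha)
    mu : stochastic gradient step size

    Returns the stage-1 forward/backward prediction errors and the
    trajectory of the reflection coefficient k.
    """
    n = len(x)
    f = np.zeros(n)
    b = np.zeros(n)
    k_track = np.zeros(n)
    k = 0.0
    b_prev = 0.0                      # b0(t-1): the delayed input
    for t in range(n):
        f_in, b_in = x[t], b_prev
        # standard lattice recursions for a single stage
        f[t] = f_in + k * b_in
        b[t] = b_in + k * f_in
        # gradient of |f|^p + |b|^p w.r.t. k replaces the LMS term 2*e
        # with p * |e|^(p-1) * sign(e), the fractional lower-order
        # moment analogue of the squared-error gradient
        grad = (np.abs(f[t]) ** (p - 1) * np.sign(f[t]) * b_in
                + np.abs(b[t]) ** (p - 1) * np.sign(b[t]) * f_in)
        k -= mu * p * grad
        k_track[t] = k
        b_prev = x[t]
    return f, b, k_track
```

Note that for p = 2 the gradient term reduces to the usual stochastic gradient lattice update, so the sketch degrades gracefully to the finite-variance case.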

Relevance:

10.00%

Publisher:

Abstract:

Speaker verification is the process of verifying the identity of a person by analysing their speech. There are several important applications for automatic speaker verification (ASV) technology, including suspect identification, tracking terrorists and detecting a person's presence at a remote location in the surveillance domain, as well as person authentication for phone banking and credit card transactions in the private sector. Telephones and telephony networks provide a natural medium for these applications.

The aim of this work is to improve the usefulness of ASV technology for practical applications in the presence of adverse conditions. In a telephony environment, background noise, handset mismatch, channel distortions, room acoustics and restrictions on the available testing and training data are common sources of errors for ASV systems. Two research themes were pursued to overcome these adverse conditions: modelling mismatch and modelling uncertainty.

To address the performance degradation incurred through mismatched conditions, it was proposed to model this mismatch directly. Feature mapping was evaluated for combating handset mismatch and was extended through the use of a blind clustering algorithm to remove the need for accurate handset labels in the training data. Mismatch modelling was then generalised by explicitly modelling the session conditions as a constrained offset of the speaker model means. This session variability modelling approach enabled the modelling of arbitrary sources of mismatch, including handset type, and halved the error rates in many cases.

Methods to model the uncertainty in speaker model estimates and verification scores were developed to address the difficulties of limited training and testing data. The Bayes factor was introduced to account for the uncertainty of the speaker model estimates in testing by applying Bayesian theory to the verification criterion, with improved performance in matched conditions. Modelling the uncertainty in the verification score itself met with significant success: estimating a confidence interval for the "true" verification score enabled an order-of-magnitude reduction in the average quantity of speech required to make a confident threshold-based verification decision. The confidence measures developed in this work may also have significant applications for forensic speaker verification tasks.
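The order-of-magnitude saving comes from stopping early: accumulate per-frame scores and decide as soon as a confidence interval for the mean score lies wholly on one side of the threshold. The sketch below is a generic sequential test under a normal approximation; the function name, per-frame score input, and 99% quantile are assumptions, not the thesis's actual formulation:

```python
import math

def early_verification_decision(frame_scores, threshold, z=2.576):
    """Sequential verification decision sketch.

    frame_scores : iterable of per-frame log-likelihood-ratio scores
    threshold    : decision threshold on the mean score
    z            : normal quantile for the confidence level (2.576 ~ 99%)

    Returns ('accept' | 'reject', frames_used) as soon as the confidence
    interval for the mean score lies entirely on one side of the
    threshold, or ('undecided', n) if the data run out first.
    """
    n, s, s2 = 0, 0.0, 0.0
    for x in frame_scores:
        n += 1
        s += x
        s2 += x * x
        if n < 2:
            continue
        mean = s / n
        var = max((s2 - n * mean * mean) / (n - 1), 0.0)
        half = z * math.sqrt(var / n)   # CI half-width for the mean
        if mean - half > threshold:
            return "accept", n
        if mean + half < threshold:
            return "reject", n
    return "undecided", n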

Relevance:

10.00%

Publisher:

Abstract:

This paper describes a process for evolving a stable humanoid walking gait based around parameterised loci of motion. The parameters of the loci are chosen by an evolutionary process using the criterion that the robot's ZMP (zero moment point) follows a desirable path. The paper illustrates the evolution of a straight-line walking gait. The gait has been tested on a 1.2 m tall humanoid robot (GuRoo). The results, apart from illustrating a successful walk, demonstrate the effectiveness of the ZMP path criterion in not only ensuring a stable walk, but also in achieving efficient use of the actuators.
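A minimal sketch of the kind of fitness function such an evolutionary process could use, assuming a gait simulator is available (simulate_gait and all names here are hypothetical; the paper's actual fitness formulation is not given in this abstract):

```python
import numpy as np

def zmp_fitness(params, desired_zmp, simulate_gait):
    """Hypothetical fitness for the evolutionary gait search.

    params        : candidate loci-of-motion parameters
    desired_zmp   : (N, 2) array of desired ZMP positions over a gait cycle
    simulate_gait : function mapping params -> (N, 2) array of actual ZMP

    Lower is better: mean squared deviation of the ZMP from its path.
    """
    actual_zmp = simulate_gait(params)
    return float(np.mean(np.sum((actual_zmp - desired_zmp) ** 2, axis=1)))
```

An evolutionary loop would then minimise this value over candidate parameter vectors, possibly alongside actuator-effort terms given the efficiency result reported above.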

Relevance:

10.00%

Publisher:

Abstract:

This study was designed to derive central and peripheral oxygen transmissibility (Dk/t) thresholds for soft contact lenses to avoid hypoxia-induced corneal swelling (increased corneal thickness) during open eye wear. Central and peripheral corneal thicknesses were measured in a masked and randomized fashion for the left eye of each of seven subjects before and after 3 h of afternoon wear of five conventional hydrogel and silicone hydrogel contact lens types offering a range of Dk/t from 2.4 units to 115.3 units. Curve fitting for plots of change in corneal thickness versus central and peripheral Dk/t found threshold values of 19.8 and 32.6 units to avoid corneal swelling during open eye contact lens wear for a typical wearer. Although some conventional hydrogel soft lenses are able to achieve this criterion for either central or peripheral lens areas (depending on lens power), in general, no conventional hydrogel soft lenses meet both the central and peripheral thresholds. Silicone hydrogel contact lenses typically meet both the central and peripheral thresholds, and use of these lenses therefore avoids swelling in all regions of the cornea. © 2009 Wiley Periodicals, Inc. J Biomed Mater Res Part B: Appl Biomater 92B: 361–365, 2010.
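The threshold derivation is a simple curve-fitting exercise: fit a decaying model of swelling against Dk/t and solve for the Dk/t at which predicted swelling vanishes. A minimal sketch follows, using an assumed exponential model and entirely invented data points (the paper specifies neither its fitting function nor the raw values):

```python
import numpy as np
from scipy.optimize import curve_fit

def swelling_model(dkt, a, b):
    # assumed form: swelling decays exponentially with increasing Dk/t
    return a * np.exp(-b * dkt)

# invented illustrative data: lens Dk/t (units) vs % corneal swelling
dkt = np.array([2.4, 15.0, 30.0, 60.0, 115.3])
swelling = np.array([3.1, 1.0, 0.3, 0.05, 0.0])

(a, b), _ = curve_fit(swelling_model, dkt, swelling, p0=(3.0, 0.05))

# an exponential never reaches zero exactly, so take the Dk/t at which
# predicted swelling falls below a small tolerance as the threshold
tol = 0.1
threshold = np.log(a / tol) / b
print(f"Dk/t threshold (illustrative only): {threshold:.1f} units")
```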

Relevance:

10.00%

Publisher:

Abstract:

Interactional competence has emerged as a focal point for language testing researchers in recent years. In spoken communication involving two or more interlocutors, the co-construction of discourse is central to successful interaction. The acknowledgement of co-construction has led to concern over the impact of the interlocutor and the separability of performances in speaking tests involving interaction. The purpose of this article is to review recent studies of direct relevance to the construct of interactional competence and its operationalisation by raters in the context of second language speaking tests. The review begins by tracing the emergence of interaction as a criterion in speaking tests from a theoretical perspective, and then focuses on research salient to interactional effectiveness that has been carried out in the context of language testing interviews and group and paired speaking tests.

Relevance:

10.00%

Publisher:

Abstract:

Background / context: The ALTC WIL Scoping Study identified a need to develop innovative assessment methods for work integrated learning (WIL) that encourage reflection and the integration of theory and practice, within the constraints that result from the level of engagement of workplace supervisors and the ability of academic supervisors to become involved in the workplace.

Aims: The aim of this paper is to examine how poster presentations can be used to authentically assess student learning during WIL.

Method / Approach: The paper uses a case study approach to evaluate the use of poster presentations for assessment in two internship units at the Queensland University of Technology. The first is a unit in the Faculty of Business where students majoring in advertising, marketing and public relations are placed in a variety of organisations. The second is a law unit where students complete placements in government legal offices.

Results / Discussion: While poster presentations are commonly used for assessment in the sciences, they are an innovative approach to assessment in the humanities. This paper argues that posters are one way that universities can overcome the substantial challenges of assessing work integrated learning. The two units involved in the case study adopt different approaches to the poster assessment: the Business unit is non-graded and its poster task requires students to reflect on their learning during the internship, while the Law unit is graded and requires students to present on a research topic related to their internship. In both units the posters were presented during a poster showcase attended by students, workplace supervisors and members of faculty. The paper evaluates the benefits of poster presentations for students, workplace supervisors and faculty, and proposes some criteria for poster assessment in WIL.

Conclusions / Implications: The paper concludes that posters can effectively and authentically assess various learning outcomes in WIL across different disciplines, while at the same time offering a means to engage workplace supervisors with academic staff and with the other students and supervisors participating in the unit. Posters can demonstrate reflection in learning and are an excellent vehicle for experiential learning and authentic assessment.

Keywords: work integrated learning, assessment, poster presentations, industry engagement.

Relevance:

10.00%

Publisher:

Abstract:

Design teams are confronted with the quandary of choosing apposite building control systems to suit the needs of particular intelligent building projects, due to the availability of innumerable ‘intelligent’ building products and a dearth of inclusive evaluation tools. This paper develops a model for facilitating the selection of intelligent HVAC control systems for commercial intelligent buildings. To achieve this objective, systematic research activities were conducted to first develop, test and refine a general conceptual model using consecutive surveys; then to convert the developed conceptual framework into a practical model; and finally to evaluate the effectiveness of the model by means of expert validation. The surveys found that ‘total energy use’ is perceived as the top selection criterion, followed by ‘system reliability and stability’, ‘operating and maintenance costs’, and ‘control of indoor humidity and temperature’. This research not only presents a systematic and structured approach to evaluating candidate intelligent HVAC control systems against the critical selection criteria (CSC), but also suggests a benchmark for comparing one control system candidate against another.
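As an illustration of how such a benchmark might operate, the sketch below scores a candidate system by a weighted sum over the critical selection criteria. The weights and ratings are invented for illustration; only the criterion ordering comes from the survey results above, and the paper's actual model may aggregate differently:

```python
def score_candidate(ratings, weights):
    """Hypothetical weighted-sum scoring of a control-system candidate
    against the critical selection criteria (CSC).

    ratings : dict mapping criterion -> performance rating (e.g. 1-5)
    weights : dict mapping criterion -> importance weight, summing to 1
    """
    return sum(weights[c] * ratings[c] for c in weights)

# illustrative weights reflecting the survey ranking reported above
# (the exact values are assumptions; only the ordering is from the text)
weights = {
    "total energy use": 0.35,
    "system reliability and stability": 0.30,
    "operating and maintenance costs": 0.20,
    "control of indoor humidity and temperature": 0.15,
}
candidate = {
    "total energy use": 4,
    "system reliability and stability": 5,
    "operating and maintenance costs": 3,
    "control of indoor humidity and temperature": 4,
}
print(score_candidate(candidate, weights))  # higher is better
```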

Relevance:

10.00%

Publisher:

Abstract:

Purpose: We compared subjective blur limits for defocus and the higher-order aberrations of coma, trefoil, and spherical aberration.

Methods: Spherical aberration was presented in both Zernike and Seidel forms. Black letter targets (0.1, 0.35, and 0.6 logMAR) on white backgrounds were blurred using an adaptive optics system for six subjects under cycloplegia with 5 mm artificial pupils. Three blur criteria were used: just noticeable, just troublesome, and just objectionable.

Results: When expressed as wave aberration coefficients, the just noticeable blur limits for coma and trefoil were similar to those for defocus, whereas the just noticeable limits for Zernike spherical aberration and Seidel spherical aberration (the latter given as an “rms equivalent”) were considerably smaller and larger, respectively, than the defocus limits.

Conclusions: Blur limits increased more quickly for the higher-order aberrations than for defocus as the criterion changed from just noticeable to just troublesome and then to just objectionable.

Relevance:

10.00%

Publisher:

Abstract:

International statistics indicate that occupational, or work-related, driving crashes are the most common cause of workplace injury, death, and absence from work. The majority of research examining unsafe driver behavior in the workplace has relied on general road safety questionnaires. However, past research has failed to consider the organizational context in the use of these questionnaires, and as such, there is ambiguity in the dimensions constituting occupational driving. Using a theoretical model developed by Hockey (1993, 1997), this article proposes and validates a new scale of occupational driver behavior. This scale incorporates four dimensions of driver behavior that are influenced by demanding workplace conditions: speeding, rule violation, inattention, and driving while tired. Following a content validation process, three samples of occupational drivers in Australia were used to assess the scale. Data from the first sample (n=145) were used to reduce the number of scale items and provide an assessment of the factorial validity of the scale. Data from the second sample (n=645) were then used to confirm the factor structure and psychometric properties of the scale, including reliability and construct validity. Finally, data from the third sample (n=248) were used to establish criterion validity. The results indicated that the scale is a reliable and valid measure of occupational driver behavior.
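The abstract reports a reliability assessment for the scale; a standard way to quantify this for a multi-item questionnaire is Cronbach's alpha, sketched below. This is the generic formula, not the authors' analysis code:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) response matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# e.g. for a hypothetical 'speeding' subscale, rows would be drivers
# and columns the individual speeding items.
```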

Relevance:

10.00%

Publisher:

Abstract:

A better understanding of the behaviour of prepared cane and bagasse during the crushing process is believed to be an essential prerequisite for further improvements to the crushing process. Improvements could be made, for example, in throughput, sugar extraction, and bagasse moisture. The ability to model the mechanical behaviour of bagasse as it is squeezed in a milling unit to extract juice would help identify how to improve the current process to reduce final bagasse moisture. However, an adequate mechanical model for bagasse is currently not available. Previous investigations have established that juice flow through bagasse obeys Darcy's permeability law, that the grip of the rough surface of the grooves on the bagasse can be represented by the Mohr-Coulomb failure criterion for soils, and that the internal mechanical behaviour of bagasse is critical state behaviour similar to that of sand and clay. Finite element models (FEM) available in current commercial software have adequate permeability models; however, the same software does not contain an adequate mechanical model for bagasse. Progress has been made in the last ten years towards implementing a mechanical model for bagasse in finite element code. This paper builds on that progress and takes a further step towards an adequate material model.
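For reference, the Mohr-Coulomb failure criterion mentioned above is simple to state: shear strength grows linearly with normal stress. A one-function sketch (the parameter values a caller would supply are illustrative, not from this paper):

```python
import math

def mohr_coulomb_shear_strength(sigma_n, cohesion, friction_angle_deg):
    """Mohr-Coulomb failure criterion: tau = c + sigma_n * tan(phi).

    sigma_n            : normal stress on the failure plane
    cohesion           : material cohesion c
    friction_angle_deg : internal friction angle phi in degrees
    """
    phi = math.radians(friction_angle_deg)
    return cohesion + sigma_n * math.tan(phi)

# e.g. the grip of the roller groove surface on bagasse is modelled by
# checking whether the applied shear stress exceeds this strength.
```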

Relevance:

10.00%

Publisher:

Abstract:

A good object representation, or object descriptor, is one of the key issues in object-based image analysis. To effectively fuse color and texture into a unified descriptor at the object level, this paper presents a novel method for feature fusion. A color histogram and uniform local binary patterns are extracted from arbitrary-shaped image-objects, and kernel principal component analysis (kernel PCA) is employed to find nonlinear relationships between the extracted color and texture features. The maximum likelihood approach is used to estimate the intrinsic dimensionality, which is then used as a criterion for automatic selection of the optimal feature set from the fused features. The proposed method is evaluated using SVM as the benchmark classifier and is applied to object-based vegetation species classification using high spatial resolution aerial imagery. Experimental results demonstrate that great improvement can be achieved by using the proposed feature fusion method.
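A rough sketch of the fusion pipeline described here, assuming an RBF kernel and the Levina-Bickel maximum-likelihood intrinsic dimensionality estimator (the paper names neither the kernel nor the exact MLE variant, so both are assumptions):

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.neighbors import NearestNeighbors

def mle_intrinsic_dim(X, k=10):
    """Levina-Bickel style MLE of intrinsic dimensionality.

    Assumes no duplicate points; averages the per-point estimates
    (one common variant of the estimator).
    """
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dists, _ = nn.kneighbors(X)
    dists = dists[:, 1:]                 # drop each point's self-distance
    # m(x) = [ (1/(k-1)) * sum_{j<k} log(T_k / T_j) ]^{-1}
    logs = np.log(dists[:, -1:] / dists[:, :-1])
    m = (k - 1) / logs.sum(axis=1)
    return float(m.mean())

def fuse_color_texture(color_hist, lbp_hist, gamma=1.0):
    """Concatenate per-object color histograms and uniform-LBP histograms,
    map through kernel PCA, and keep as many components as the estimated
    intrinsic dimensionality (the paper's selection criterion)."""
    X = np.hstack([color_hist, lbp_hist])
    d = max(1, round(mle_intrinsic_dim(X)))
    kpca = KernelPCA(n_components=d, kernel="rbf", gamma=gamma)
    return kpca.fit_transform(X)
```

The fused features returned by fuse_color_texture would then be fed to the SVM benchmark classifier.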

Relevance:

10.00%

Publisher:

Abstract:

The traditional searching method for model-order selection in linear regression is a nested full-parameter-set searching procedure over the desired orders, which we call full-model order selection. On the other hand, a method for model selection searches for the best sub-model within each order. In this paper, we propose using the model-selection searching method for model-order selection, which we call partial-model order selection. We show by simulations that the proposed searching method gives better accuracy than the traditional one, especially at low signal-to-noise ratios, over a wide range of model-order selection criteria (both information theoretic and bootstrap-based). We also show that for some models the performance of the bootstrap-based criterion improves significantly when the proposed partial-model selection searching method is used.

Index Terms— model order estimation, model selection, information theoretic criteria, bootstrap

1. INTRODUCTION

Several model-order selection criteria can be applied to find the optimal order. Some of the more commonly used information theoretic procedures include Akaike's information criterion (AIC) [1], corrected Akaike (AICc) [2], minimum description length (MDL) [3], normalized maximum likelihood (NML) [4], the Hannan-Quinn criterion (HQC) [5], conditional model-order estimation (CME) [6], and the efficient detection criterion (EDC) [7]. From a practical point of view, it is difficult to decide which model-order selection criterion to use. Many of them perform reasonably well when the signal-to-noise ratio (SNR) is high. The discrepancies in their performance, however, become more evident when the SNR is low. In those situations, the performance of a given technique is determined not only by the model structure (say, a polynomial trend versus a Fourier series) but, more importantly, by the relative values of the parameters within the model. This makes comparison between model-order selection algorithms difficult, as within the same model with a given order one can find an example for which one of the methods performs favourably or fails [6, 8]. Our aim is to improve the performance of model-order selection criteria in cases where the SNR is low by considering a model-selection searching procedure that takes into account not only the full-model order search but also a partial model-order search within each given model order. Understandably, the improvement in the performance of the model-order estimation comes at the expense of additional computational complexity.
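To make the distinction concrete, the sketch below chooses, at each order, a best sub-model and scores it with AIC, rather than always scoring the full nested model. Greedy forward selection stands in for the per-order sub-model search (an exhaustive search is also possible); the function names and the greedy shortcut are illustrative, not the paper's exact procedure:

```python
import numpy as np

def aic(rss, n, k):
    """AIC for a Gaussian linear model: n*ln(RSS/n) + 2k."""
    return n * np.log(rss / n) + 2 * k

def partial_model_order_select(X, y, max_order):
    """Partial-model order selection sketch: for each order k, pick a
    good k-regressor sub-model of X (greedy forward selection here)
    and return the order minimizing AIC."""
    n = len(y)

    def rss_of(cols):
        beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
        r = y - X[:, cols] @ beta
        return float(r @ r)

    chosen, remaining = [], list(range(X.shape[1]))
    best_score, best_order = np.inf, 0
    for k in range(1, max_order + 1):
        # greedy forward step: add the regressor that most reduces RSS
        j = min(remaining, key=lambda c: rss_of(chosen + [c]))
        chosen.append(j)
        remaining.remove(j)
        score = aic(rss_of(chosen), n, k)
        if score < best_score:
            best_score, best_order = score, k
    return best_order, chosen[:best_order]
```

The traditional full-model search corresponds to fixing the regressor order in advance and scoring only the nested prefixes; any of the criteria listed above could be substituted for AIC.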