992 results for CRITERION


Relevance: 10.00%

Abstract:

This paper presents the feasibility of using structural modal strain energy as a parameter in a correlation-based damage detection method for truss bridge structures. It extends the damage detection method based on the multiple damage location assurance criterion. The sensitivity of modal strain energy to damage, obtained from the analytical model, is incorporated into the correlation objective function. First, the sensitivity matrix of modal strain energy to damage is computed offline; for an arbitrary damage case, the correlation coefficient (objective function) is then calculated by multiplying the sensitivity matrix by the damage vector. A genetic algorithm is used to iteratively search for the damage vector maximising the correlation between the hypothesised modal strain energy change and its measured counterpart. The proposed method is simulated and compared with conventional methods, e.g. the frequency-error method, the coordinate modal assurance criterion and the multiple damage location assurance criterion using mode shapes, on a numerical truss bridge structure. The results demonstrate that the modal strain energy correlation method yields acceptable damage detection outcomes with less computational effort, even under noise-contaminated conditions.
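As an illustrative sketch only (not the authors' implementation), the correlation objective and genetic search described above can be caricatured as follows. The sensitivity matrix `S`, the 8-element damage case, and all GA settings are fabricated, and the "measurement" is noise-free:

```python
import numpy as np

rng = np.random.default_rng(0)

def correlation(S, d, measured):
    """MDLAC-style correlation between the hypothesised modal strain energy
    change S @ d and the measured change (1.0 = perfect match).
    The small epsilon guards the degenerate all-zero damage vector."""
    p = S @ d
    return (p @ measured) ** 2 / ((p @ p) * (measured @ measured) + 1e-12)

def ga_search(S, measured, n_elems, pop=60, gens=200, sigma=0.1):
    """Toy genetic algorithm maximising the correlation objective."""
    P = rng.random((pop, n_elems))
    for _ in range(gens):
        fit = np.array([correlation(S, d, measured) for d in P])
        elite = P[np.argsort(fit)[::-1]][: pop // 2]      # keep the best half
        mates = elite[rng.permutation(len(elite))]
        children = np.clip((elite + mates) / 2            # averaging crossover
                           + sigma * rng.standard_normal(elite.shape), 0, 1)
        P = np.vstack([elite, children])
    fit = np.array([correlation(S, d, measured) for d in P])
    return P[np.argmax(fit)]

# Fabricated 8-element example: hypothetical sensitivity matrix,
# 40% damage in element 3, noise-free "measurement".
S = rng.standard_normal((8, 8))
true_d = np.zeros(8)
true_d[3] = 0.4
measured = S @ true_d
best = ga_search(S, measured, 8)
```

Because the correlation is scale-invariant, the search recovers the damage pattern (direction of the damage vector) rather than its absolute magnitude, which matches the role of the objective in the paper.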


Abstract:

Research on efficient pairing implementation has focussed on reducing the loop length and on using high-degree twists. The existence of twists of degree larger than 2 is a very restrictive criterion, but fortunately constructions for pairing-friendly elliptic curves with such twists exist. In fact, Freeman, Scott and Teske showed in their overview paper that the best known methods for constructing pairing-friendly elliptic curves over fields of large prime characteristic often produce curves that admit twists of degree 3, 4 or 6. A few papers have presented explicit formulas for the doubling and addition steps in Miller's algorithm, but these optimizations were all done for the Tate pairing with degree-2 twists, so the main use of high-degree twists remained incompatible with the more efficient formulas. In this paper we present efficient formulas for curves with twists of degree 2, 3, 4 or 6. These formulas are significantly faster than their predecessors. We show how these faster formulas can be applied to Tate and ate pairing variants, thereby speeding up all practical suggestions for efficient pairing implementation over fields of large characteristic.


Abstract:

Uninhabited aerial vehicles (UAVs) are a cutting-edge technology at the forefront of aviation/aerospace research and development worldwide. Many consider their current military and defence applications as just a token of their enormous potential. Unlocking and fully exploiting this potential will see UAVs in a multitude of civilian applications and routinely operating alongside piloted aircraft. The key to realising the full potential of UAVs lies in addressing a host of regulatory, public relations, and technological challenges never encountered before. Aircraft collision avoidance is considered to be one of the most important issues to be addressed, given its safety-critical nature. The collision avoidance problem can be roughly organised into three areas: 1) sense; 2) detect; and 3) avoid. Sensing is concerned with obtaining accurate and reliable information about other aircraft in the air; detection involves identifying potential collision threats based on available information; avoidance deals with the formulation and execution of appropriate manoeuvres to maintain safe separation. This thesis tackles the detection aspect of collision avoidance, via the development of a target detection algorithm that is capable of real-time operation onboard a UAV platform. One of the key challenges of the detection problem is the need to provide early warning. This translates to detecting potential threats whilst they are still far away, when their presence is likely to be obscured and hidden by noise. Another important consideration is the choice of sensors to capture target information, which has implications for the design and practical implementation of the detection algorithm. 
The main contributions of the thesis are: 1) the proposal of a dim target detection algorithm combining image morphology and hidden Markov model (HMM) filtering approaches; 2) the novel use of relative entropy rate (RER) concepts for HMM filter design; 3) the characterisation of algorithm detection performance based on simulated data as well as real in-flight target image data; and 4) the demonstration of the proposed algorithm's capacity for real-time target detection. We also consider the extension of HMM filtering techniques and the application of RER concepts for target heading angle estimation. In this thesis we propose a computer-vision based detection solution, due to the commercial-off-the-shelf (COTS) availability of camera hardware and the hardware's relatively low cost, power, and size requirements. The proposed target detection algorithm adopts a two-stage processing paradigm that begins with an image enhancement pre-processing stage followed by a track-before-detect (TBD) temporal processing stage that has been shown to be effective in dim target detection. We compare the performance of two candidate morphological filters for the image pre-processing stage, and propose a multiple hidden Markov model (MHMM) filter for the TBD temporal processing stage. The role of the morphological pre-processing stage is to exploit the spatial features of potential collision threats, while the MHMM filter serves to exploit the temporal characteristics or dynamics. The problem of optimising our proposed MHMM filter has been examined in detail. Our investigation has produced a novel design process for the MHMM filter that exploits information theory and entropy related concepts. The filter design process is posed as a mini-max optimisation problem based on a joint RER cost criterion. 
We provide proof that this joint RER cost criterion provides a bound on the conditional mean estimate (CME) performance of our MHMM filter, and this in turn establishes a strong theoretical basis connecting our filter design process to filter performance. Through this connection we can intelligently compare and optimise candidate filter models at the design stage, rather than having to resort to time-consuming Monte Carlo simulations to gauge the relative performance of candidate designs. Moreover, the underlying entropy concepts are not constrained to any particular model type. This suggests that the RER concepts established here may be generalised to provide a useful design criterion for multiple model filtering approaches outside the class of HMM filters. In this thesis we also evaluate the performance of our proposed target detection algorithm under realistic operating conditions, and give consideration to the practical deployment of the detection algorithm onboard a UAV platform. Two fixed-wing UAVs were engaged to recreate various collision-course scenarios to capture highly realistic vision (from an onboard camera perspective) of the moments leading up to a collision. Based on this collected data, our proposed detection approach was able to detect targets out to distances ranging from about 400 m to 900 m. These distances (with some assumptions about closing speeds and aircraft trajectories) translate to an advance warning ahead of impact that approaches the 12.5 second response time recommended for human pilots. Furthermore, readily available graphics processing unit (GPU) based hardware is exploited for its parallel computing capabilities to demonstrate the practical feasibility of the proposed target detection algorithm. A prototype hardware-in-the-loop system has been found to be capable of achieving data processing rates sufficient for real-time operation. There is also scope for further improvement in performance through code optimisations. 
Overall, our proposed image-based target detection algorithm offers UAVs a cost-effective real-time target detection capability that is a step forward in addressing the collision avoidance issue, currently one of the most significant obstacles preventing widespread civilian applications of uninhabited aircraft. We also highlight that the algorithm development process has led to the discovery of a powerful multiple HMM filtering approach and a novel RER-based multiple filter design process. The utility of our multiple HMM filtering approach and RER concepts, however, extends beyond the target detection problem. This is demonstrated by our application of HMM filters and RER concepts to a heading angle estimation problem.
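The two-stage paradigm described above (morphological pre-processing followed by temporal track-before-detect) can be caricatured in a few lines. This sketch substitutes a common close-minus-open (CMO) morphological filter for the unnamed candidate filters, and a simple dynamic-programming accumulation for the MHMM filtering stage; the scene, target amplitude, and all sizes are invented:

```python
import numpy as np
from scipy.ndimage import grey_closing, grey_opening, maximum_filter

rng = np.random.default_rng(1)

def cmo(img, size=3):
    """Close-minus-open morphological filter: enhances small bright features
    (a stand-in for the thesis's candidate morphological pre-filters)."""
    return grey_closing(img, size=size) - grey_opening(img, size=size)

def tbd_accumulate(frames):
    """Dynamic-programming track-before-detect: each pixel inherits the best
    accumulated score from its 3x3 neighbourhood in the previous frame
    (a crude stand-in for the HMM filtering stage)."""
    V = np.zeros_like(frames[0])
    for frame in frames:
        V = cmo(frame) + maximum_filter(V, size=3)
    return V

# Synthetic scene: 20 noisy frames with a dim point target drifting right.
H, W, T = 32, 32, 20
noise_only = rng.normal(0.0, 1.0, (T, H, W))
frames = noise_only.copy()
for t in range(T):
    frames[t, 16, 5 + t] += 4.0        # dim target, ~4 sigma above the noise

V = tbd_accumulate(frames)
peak = np.unravel_index(np.argmax(V), V.shape)   # should sit near (16, 24)
```

The point of the temporal stage is visible here: a target too dim to detect reliably in any single frame accumulates score along a consistent track, while noise does not.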


Abstract:

The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of the stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean square algorithm. Among these advantages are a modular structure, easily guaranteed stability, less sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of the filter coefficients (normally called reflection coefficients). We then characterise the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. The optimal lattice filter is then derived for frequency modulated signals, by computing the optimal values of the residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals, by computing the average tracking model of these coefficients for the stochastic gradient lattice algorithm. 
The second-order convergence of the adaptive coefficients is investigated by modelling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using the previous analytical results, we show a new property, the polynomial order reducing property of adaptive lattice filters. This property may be used to reduce the order of the polynomial phase of input frequency modulated signals. Considering two examples, we show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that using this technique a better probability of detection is obtained for the reduced-order phase signals compared to that of the traditional energy detector. Also, it is empirically shown that the distribution of the gradient noise in the first adaptive reflection coefficients approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that by using this technique a lower mean square error is achieved for the estimated frequencies at high signal-to-noise ratios, in comparison to that of the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signals, i.e., impulsive autoregressive processes with alpha-stable distributions. The concept of alpha-stable distributions is first introduced. We discuss that the stochastic gradient algorithm, which yields desirable results for finite variance input signals (like frequency modulated signals in noise), does not converge quickly for infinite variance stable processes (due to its use of the minimum mean-square error criterion). 
To deal with such problems, the concept of the minimum dispersion criterion, fractional lower order moments, and recently developed algorithms for stable processes are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean p-norm lattice algorithm and its normalized version, are proposed for lattice filters based on fractional lower order moments. Simulation results show that, using the proposed algorithms, faster convergence speeds are achieved for parameter estimation of autoregressive stable processes with low to moderate degrees of impulsiveness, in comparison to many other algorithms. We also discuss the effect of the impulsiveness of stable processes on the misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is investigated using extensive computer simulations only.
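For illustration, a minimal real-valued gradient adaptive lattice predictor is sketched below. The update shown is the textbook stochastic gradient step on the joint forward/backward squared-error power, not necessarily the exact variant analysed in the thesis, and the AR(1) test signal is invented. For an AR(1) process x(n) = 0.8 x(n-1) + w(n), the optimal first reflection coefficient is k1 = -r(1)/r(0) = -0.8:

```python
import numpy as np

def gal_predict(x, order=1, mu=0.005):
    """Gradient adaptive lattice predictor (real-valued sketch).
    k holds the reflection coefficients; f/b are forward/backward errors."""
    k = np.zeros(order)
    b_prev = np.zeros(order + 1)       # backward errors delayed one sample
    f_out = np.empty(len(x))
    for n, xn in enumerate(x):
        f = np.empty(order + 1)
        b = np.empty(order + 1)
        f[0] = b[0] = xn               # stage 0 is the input itself
        for m in range(1, order + 1):
            f[m] = f[m - 1] + k[m - 1] * b_prev[m - 1]
            b[m] = b_prev[m - 1] + k[m - 1] * f[m - 1]
            # stochastic gradient step on f[m]**2 + b[m]**2
            k[m - 1] -= mu * (f[m] * b_prev[m - 1] + b[m] * f[m - 1])
        b_prev = b.copy()
        f_out[n] = f[order]            # final-stage forward prediction error
    return k, f_out

# AR(1) test signal: x(n) = 0.8 x(n-1) + w(n)
rng = np.random.default_rng(4)
w = rng.standard_normal(20000)
x = np.empty(20000)
x[0] = 0.0
for n in range(1, len(x)):
    x[n] = 0.8 * x[n - 1] + w[n]

k, f = gal_predict(x, order=1, mu=0.005)
```

After convergence the reflection coefficient hovers near -0.8 and the prediction-error variance falls toward the innovation variance, well below the input variance.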


Abstract:

Speaker verification is the process of verifying the identity of a person by analysing their speech. There are several important applications for automatic speaker verification (ASV) technology, including suspect identification, tracking terrorists and detecting a person’s presence at a remote location in the surveillance domain, as well as person authentication for phone banking and credit card transactions in the private sector. Telephones and telephony networks provide a natural medium for these applications. The aim of this work is to improve the usefulness of ASV technology for practical applications in the presence of adverse conditions. In a telephony environment, background noise, handset mismatch, channel distortions, room acoustics and restrictions on the available testing and training data are common sources of errors for ASV systems. Two research themes were pursued to overcome these adverse conditions: modelling mismatch and modelling uncertainty. To address the performance degradation incurred through mismatched conditions, it was proposed to model this mismatch directly. Feature mapping was evaluated for combating handset mismatch, and was extended through the use of a blind clustering algorithm to remove the need for accurate handset labels in the training data. Mismatch modelling was then generalised by explicitly modelling the session conditions as a constrained offset of the speaker model means. This session variability modelling approach enabled the modelling of arbitrary sources of mismatch, including handset type, and halved the error rates in many cases. Methods to model the uncertainty in speaker model estimates and verification scores were developed to address the difficulties of limited training and testing data. The Bayes factor was introduced to account for the uncertainty of the speaker model estimates in testing by applying Bayesian theory to the verification criterion, with improved performance in matched conditions. 
Modelling the uncertainty in the verification score itself met with significant success. Estimating a confidence interval for the "true" verification score enabled an order of magnitude reduction in the average quantity of speech required to make a confident verification decision based on a threshold. The confidence measures developed in this work may also have significant applications for forensic speaker verification tasks.
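The early-decision idea, stopping as soon as a confidence interval for the verification score clears the decision threshold, can be sketched generically. The per-frame score distributions, threshold, and confidence level below are all invented for illustration:

```python
import numpy as np

def sequential_verify(scores, threshold, z=2.576):
    """Accept/reject as soon as a ~99% confidence interval for the mean
    verification score excludes the threshold; otherwise keep listening."""
    for n in range(5, len(scores) + 1):        # need a few frames for a CI
        s = scores[:n]
        half = z * s.std(ddof=1) / np.sqrt(n)  # CI half-width
        if s.mean() - half > threshold:
            return "accept", n
        if s.mean() + half < threshold:
            return "reject", n
    return "undecided", len(scores)

# Invented per-frame log-likelihood-ratio-like scores.
rng = np.random.default_rng(3)
target = rng.normal(+1.5, 1.0, 500)      # true speaker
impostor = rng.normal(-1.5, 1.0, 500)    # impostor

decision_t, n_t = sequential_verify(target, threshold=0.0)
decision_i, n_i = sequential_verify(impostor, threshold=0.0)
```

With well-separated score distributions the decision is typically reached after a handful of frames rather than the full 500, which is the mechanism behind the reported order-of-magnitude reduction in required speech.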


Abstract:

This paper describes a process for evolving a stable humanoid walking gait that is based around parameterised loci of motion. The parameters of the loci are chosen by an evolutionary process based on the criterion that the robot's ZMP (zero moment point) follows a desirable path. The paper illustrates the evolution of a straight-line walking gait. The gait has been tested on a 1.2 m tall humanoid robot (GuRoo). The results, apart from demonstrating a successful walk, illustrate the effectiveness of the ZMP path criterion in not only ensuring a stable walk, but also in achieving efficient use of the actuators.
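A cart-table caricature of the approach: a parameterised lateral CoM locus is evolved so that the resulting ZMP, p(t) = y(t) - (z_c/g) ÿ(t), tracks a desired reference over the support foot. The two-parameter locus family, reference path, and GA settings are invented; GuRoo's actual gait parameterisation is far richer:

```python
import numpy as np

G, ZC = 9.81, 0.6                 # gravity, assumed CoM height (cart-table model)
rng = np.random.default_rng(2)
t = np.linspace(0.0, 2.0, 200)
zmp_ref = 0.05 * np.sign(np.sin(np.pi * t))   # desired lateral ZMP: over support foot

def zmp_of(params):
    """ZMP of the lateral CoM locus y(t) = a1 sin(pi t) + a3 sin(3 pi t)."""
    a1, a3 = params
    y = a1 * np.sin(np.pi * t) + a3 * np.sin(3 * np.pi * t)
    ydd = -a1 * np.pi**2 * np.sin(np.pi * t) - a3 * (3 * np.pi)**2 * np.sin(3 * np.pi * t)
    return y - (ZC / G) * ydd

def fitness(params):
    """Negative mean-squared ZMP tracking error (higher is better)."""
    return -np.mean((zmp_of(params) - zmp_ref) ** 2)

# Simple (mu + lambda) evolution over the locus parameters, with elitism.
pop = rng.normal(0.0, 0.05, (30, 2))
for _ in range(150):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-10:]]               # keep top 10
    children = parents[rng.integers(0, 10, 20)] + rng.normal(0.0, 0.01, (20, 2))
    pop = np.vstack([parents, children])
best = pop[np.argmax([fitness(p) for p in pop])]
```

The square-wave reference cannot be matched exactly by two sinusoids, but the evolved locus drives the tracking error well below that of a motionless CoM, which is the essence of the ZMP-path criterion.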


Abstract:

This study was designed to derive central and peripheral oxygen transmissibility (Dk/t) thresholds for soft contact lenses to avoid hypoxia-induced corneal swelling (increased corneal thickness) during open eye wear. Central and peripheral corneal thicknesses were measured in a masked and randomized fashion for the left eye of each of seven subjects before and after 3 h of afternoon wear of five conventional hydrogel and silicone hydrogel contact lens types offering a range of Dk/t from 2.4 units to 115.3 units. Curve fitting for plots of change in corneal thickness versus central and peripheral Dk/t found threshold values of 19.8 and 32.6 units to avoid corneal swelling during open eye contact lens wear for a typical wearer. Although some conventional hydrogel soft lenses are able to achieve this criterion for either central or peripheral lens areas (depending on lens power), in general, no conventional hydrogel soft lenses meet both the central and peripheral thresholds. Silicone hydrogel contact lenses typically meet both the central and peripheral thresholds and use of these lenses therefore avoids swelling in all regions of the cornea. © 2009 Wiley Periodicals, Inc. J Biomed Mater Res Part B: Appl Biomater 92B: 361–365, 2010


Abstract:

Interactional competence has emerged as a focal point for language testing researchers in recent years. In spoken communication involving two or more interlocutors, the co-construction of discourse is central to successful interaction. The acknowledgement of co-construction has led to concern over the impact of the interlocutor and the separability of performances in speaking tests involving interaction. The purpose of this article is to review recent studies of direct relevance to the construct of interactional competence and its operationalisation by raters in the context of second language speaking tests. The review begins by tracing the emergence of interaction as a criterion in speaking tests from a theoretical perspective, and then focuses on research salient to interactional effectiveness that has been carried out in the context of language testing interviews and group and paired speaking tests.


Abstract:

Background / context: The ALTC WIL Scoping Study identified a need to develop innovative assessment methods for work integrated learning (WIL) that encourage reflection and integration of theory and practice within the constraints that result from the level of engagement of workplace supervisors and the ability of academic supervisors to become involved in the workplace. Aims: The aim of this paper is to examine how poster presentations can be used to authentically assess student learning during WIL. Method / Approach: The paper uses a case study approach to evaluate the use of poster presentations for assessment in two internship units at the Queensland University of Technology. The first is a unit in the Faculty of Business where students majoring in advertising, marketing and public relations are placed in a variety of organisations. The second unit is a law unit where students complete placements in government legal offices. Results / Discussion: While poster presentations are commonly used for assessment in the sciences, they are an innovative approach to assessment in the humanities. This paper argues that posters are one way that universities can overcome the substantial challenges of assessing work integrated learning. The two units involved in the case study adopt different approaches to the poster assessment; the Business unit is non-graded and the poster assessment task requires students to reflect on their learning during the internship. The Law unit is graded and requires students to present on a research topic that relates to their internship. In both units the posters were presented during a poster showcase which was attended by students, workplace supervisors and members of faculty. The paper evaluates the benefits of poster presentations for students, workplace supervisors and faculty and proposes some criteria for poster assessment in WIL. 
Conclusions / Implications: The paper concludes that posters can effectively and authentically assess various learning outcomes in WIL across different disciplines, while at the same time offering a means to engage workplace supervisors with academic staff and with the other students and supervisors participating in the unit. Posters can demonstrate reflection in learning and are an excellent vehicle for experiential learning and authentic assessment. Keywords: work integrated learning, assessment, poster presentations, industry engagement.


Abstract:

Design teams are confronted with the quandary of choosing apposite building control systems to suit the needs of particular intelligent building projects, due to the availability of innumerable ‘intelligent’ building products and a dearth of inclusive evaluation tools. This paper develops a model for facilitating the selection evaluation of intelligent HVAC control systems for commercial intelligent buildings. To achieve this, systematic research activities were conducted: first, to develop, test and refine the general conceptual model using consecutive surveys; then, to convert the developed conceptual framework into a practical model; and, finally, to evaluate the effectiveness of the model by means of expert validation. The surveys found that ‘total energy use’ is perceived as the top selection criterion, followed by ‘system reliability and stability’, ‘operating and maintenance costs’, and ‘control of indoor humidity and temperature’. This research not only presents a systematic and structured approach to evaluating candidate intelligent HVAC control systems against the critical selection criteria (CSC), but also suggests a benchmark for the selection of one control system candidate against another.
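The benchmark idea, scoring one candidate control system against another on the critical selection criteria, can be illustrated with a toy weighted-sum model. The weights merely echo the survey's ranking order and, like the candidate ratings, are entirely invented; the paper's actual model is more elaborate:

```python
# Hypothetical weights echoing the surveyed ranking of the critical
# selection criteria (CSC); the paper's model uses different values.
criteria = {
    "total energy use": 0.35,
    "system reliability and stability": 0.30,
    "operating and maintenance costs": 0.20,
    "control of indoor humidity and temperature": 0.15,
}

# Hypothetical 1-5 expert ratings for two candidate HVAC control systems,
# listed in the same order as the criteria above.
candidates = {
    "system A": [4, 3, 4, 3],
    "system B": [5, 4, 2, 4],
}

def score(ratings):
    """Weighted-sum score of a candidate against the criteria."""
    return sum(w * r for w, r in zip(criteria.values(), ratings))

ranked = sorted(candidates, key=lambda k: score(candidates[k]), reverse=True)
print({k: round(score(candidates[k]), 2) for k in candidates})
# {'system A': 3.55, 'system B': 3.95} -> system B ranks first
```

The design choice worth noting is that an explicit weight vector makes the trade-off between energy use and, say, maintenance cost auditable, which is what a selection benchmark requires.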


Abstract:

Purpose: We compared subjective blur limits for defocus and the higher-order aberrations of coma, trefoil, and spherical aberration.

Methods: Spherical aberration was presented in both Zernike and Seidel forms. Black letter targets (0.1, 0.35, and 0.6 logMAR) on white backgrounds were blurred using an adaptive optics system for six subjects under cycloplegia with 5 mm artificial pupils. Three blur criteria of just noticeable, just troublesome, and just objectionable were used.

Results: When expressed as wave aberration coefficients, the just noticeable blur limits for coma and trefoil were similar to those for defocus, whereas the just noticeable limits for Zernike spherical aberration and Seidel spherical aberration (the latter given as an “rms equivalent”) were considerably smaller and larger, respectively, than the defocus limits.

Conclusions: Blur limits increased more quickly for the higher-order aberrations than for defocus as the criterion changed from just noticeable to just troublesome and then to just objectionable.


Abstract:

International statistics indicate that occupational, or work-related, driving crashes are the most common cause of workplace injury, death, and absence from work. The majority of research examining unsafe driver behavior in the workplace has relied on general road safety questionnaires. However, past research has failed to consider the organizational context in the use of these questionnaires, and as such, there is ambiguity about the dimensions constituting occupational driving. Using a theoretical model developed by Hockey (1993, 1997), this article proposes and validates a new scale of occupational driver behavior. This scale incorporates four dimensions of driver behavior that are influenced by demanding workplace conditions: speeding, rule violation, inattention, and driving while tired. Following a content validation process, three samples of occupational drivers in Australia were used to assess the scale. Data from the first sample (n=145) were used to reduce the number of scale items and to provide an assessment of the factorial validity of the scale. Data from the second sample (n=645) were then used to confirm the factor structure and psychometric properties of the scale, including reliability and construct validity. Finally, data from the third sample (n=248) were used to establish criterion validity. The results indicated that the scale is a reliable and valid measure of occupational driver behavior.