100 results for Competency-Based Approach


Relevance: 100.00%

Abstract:

Measuring the degree of inconsistency of a belief base is an important issue in many real-world applications. It has been increasingly recognized that deriving syntax-sensitive inconsistency measures for a belief base from its minimal inconsistent subsets is a natural way forward. Most current proposals along this line do not take into account the impact of the size of each minimal inconsistent subset. However, as illustrated by the well-known Lottery Paradox, the degree of inconsistency of a minimal inconsistent subset decreases as its size increases. Another gap in current studies in this area concerns the role of the free formulas of a belief base in measuring the degree of inconsistency, which has not yet been well characterized. Adding free formulas to a belief base enlarges the set of consistent subsets of that base, and consistent subsets also affect the syntax-sensitive normalized measures of the degree of inconsistency: each consistent subset can be considered a distinctive plausible perspective reflected by the belief base, whilst each minimal inconsistent subset projects a distinctive view of the inconsistency. To address these two issues, we propose a normalized framework for measuring the degree of inconsistency of a belief base which unifies the impact of both consistent subsets and minimal inconsistent subsets. We also show that this normalized framework satisfies all the properties deemed necessary by common consent to characterize an intuitively satisfactory measure of the degree of inconsistency for belief bases. Finally, we use a simple but illustrative example from requirements engineering to demonstrate the application of the normalized framework.
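The size-sensitive idea can be made concrete with a small sketch. The brute-force satisfiability check, the toy belief base {a, ¬a, a → b, ¬b}, and the particular measure Σ 1/|M| over minimal inconsistent subsets M are all illustrative choices from the wider literature, not the authors' normalized framework:

```python
from itertools import product, combinations

def is_consistent(formulas, n_vars):
    """Brute-force satisfiability over n_vars Boolean variables."""
    return any(all(f(v) for f in formulas)
               for v in product([False, True], repeat=n_vars))

def minimal_inconsistent_subsets(base, n_vars):
    """Enumerate subsets smallest-first; an inconsistent subset is minimal
    if it contains no previously found minimal inconsistent subset."""
    muses = []
    for k in range(1, len(base) + 1):
        for idx in combinations(range(len(base)), k):
            if not is_consistent([base[i] for i in idx], n_vars):
                if not any(set(m) < set(idx) for m in muses):
                    muses.append(idx)
    return muses

# Toy belief base over variables (a, b): {a, not a, a -> b, not b}
base = [
    lambda v: v[0],                # a
    lambda v: not v[0],            # not a
    lambda v: (not v[0]) or v[1],  # a -> b
    lambda v: not v[1],            # not b
]
muses = minimal_inconsistent_subsets(base, 2)
# Size-sensitive measure: larger minimal inconsistent subsets contribute
# less inconsistency, reflecting the Lottery Paradox intuition.
score = sum(1.0 / len(m) for m in muses)
```

Here the two minimal inconsistent subsets are {a, ¬a} (size 2) and {a, a → b, ¬b} (size 3), so the measure is 1/2 + 1/3.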

Abstract:

Different economic valuation methodologies can be used to value the non-market benefits of an agri-environmental scheme. In particular, the non-market value can be examined by assessing the public's willingness to pay for the policy outputs as a whole, or by modelling society's preferences for the component attributes of the rural landscape that result from the implementation of the policy. In this article we examine whether the welfare values estimated for an agri-environmental policy differ significantly between a holistic valuation methodology (contingent valuation) and an attribute-based valuation methodology (choice experiment). It is argued that the choice of valuation methodology should be based on whether the overall objective is the valuation of the agri-environmental policy package in its entirety or the valuation of each of the policy's distinct environmental outputs.
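On the attribute-based side, the marginal willingness to pay (implicit price) for each attribute is conventionally recovered from a conditional-logit model as the ratio −β_k/β_cost. A minimal sketch, in which every attribute name and coefficient value is hypothetical rather than taken from the article:

```python
# Hypothetical conditional-logit coefficients for landscape attributes;
# beta_cost is the (negative) marginal utility of annual scheme cost.
coefs = {"hedgerows": 0.42, "stone_walls": 0.31, "wildlife_habitat": 0.55}
beta_cost = -0.018

# Marginal WTP for attribute k: WTP_k = -beta_k / beta_cost.
wtp = {attr: -b / beta_cost for attr, b in coefs.items()}

# A holistic contingent-valuation study would instead elicit one value for
# the whole package; summing attribute WTPs equates the two only under a
# strong additivity assumption, which is part of what the article tests.
package_value = sum(wtp.values())
```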

Abstract:

In polymer extrusion, delivery of a melt which is homogeneous in composition and temperature is important for good product quality. However, the process is inherently prone to temperature fluctuations which are difficult to monitor and control via conventional single-point thermocouples. In this work, the die melt temperature profile was monitored by a thermocouple mesh, and the data obtained were used to generate a model to predict the die melt temperature profile. A novel nonlinear model was then proposed and demonstrated to be in good agreement with both training and unseen data. Furthermore, the proposed model was used to select optimal process settings to achieve the desired average melt temperature across the die while improving temperature homogeneity. The simulation results indicate a reduction in melt temperature variations of up to 60%.
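A rough sense of the settings-selection step can be given with a static least-squares surrogate. The synthetic profile generator, the quadratic dependence on radial position, and all numbers below are assumptions for illustration only; the model in the paper is a nonlinear dynamic one identified from thermocouple-mesh data:

```python
import numpy as np

rng = np.random.default_rng(0)
r = np.linspace(-1.0, 1.0, 11)               # normalised radial position in the die
speeds = np.array([30.0, 50.0, 70.0, 90.0])  # screw speed in rpm (assumed range)

def synthetic_profile(r, n):
    # Stand-in for mesh measurements: hotter near the die wall at high
    # screw speed (viscous heating), plus measurement noise.
    return 200.0 + 0.2 * n + 0.05 * n * r**2 + rng.normal(0.0, 0.2, r.shape)

# Fit T(r, N) ~ c0 + c1*N + c2*N*r^2 by linear least squares.
rows, temps = [], []
for n in speeds:
    for ri, ti in zip(r, synthetic_profile(r, n)):
        rows.append([1.0, n, n * ri**2])
        temps.append(ti)
coef, *_ = np.linalg.lstsq(np.array(rows), np.array(temps), rcond=None)

def predicted_profile(n):
    return coef[0] + coef[1] * n + coef[2] * n * r**2

# Pick the speed whose predicted across-die mean is closest to a 215 C
# target, and report the across-die spread as a homogeneity check.
target = 215.0
candidates = np.linspace(30, 90, 61)
best = min(candidates, key=lambda n: abs(predicted_profile(n).mean() - target))
spread = np.ptp(predicted_profile(best))
```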

Abstract:

Decision making is a fundamental element of any sport, particularly open, fast, dynamic team sports such as football, basketball and rugby. At the elite level, athletes appear to consistently make good decisions in situations that are highly temporally constrained. Understanding how this is done has been the aim of researchers in the perception-action field for several decades. The purpose of this article is to present novel contributions, both theoretical and methodological, that are pushing the boundaries of this area of research. The theoretical framework (ecological psychology) within which the work is posited will be described, followed by a description of Virtual Reality (VR) technology and how it relates to the theoretical aims. Finally, an applied example will be summarised to demonstrate how the theoretical and methodological approaches come together in practice.

Abstract:

An ab initio approach has been applied to study multiphoton detachment rates for the negative hydrogen ion in the lowest nonvanishing order of perturbation theory. The approach is based on the use of B splines allowing an accurate treatment of the electronic repulsion. Total detachment rates have been determined for two- to six-photon processes as well as partial rates for detachment into the different final symmetries. It is shown that B-spline expansions can yield accurate continuum and bound-state wave functions in a very simple manner. The calculated total rates for two- and three-photon detachment are in good agreement with other perturbative calculations. For more than three-photon detachment little information has been available before now. While the total cross sections show little structure, a fair amount of structure is predicted in the partial cross sections. In the two-photon process, it is shown that the detached electrons mainly have s character. For four- and six-photon processes, the contribution from the d channel is the most important. For three- and five-photon processes p electrons dominate the electron emission spectrum. Detachment rates for s and p electrons show minima as a function of photon energy. © 1994 The American Physical Society.
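The basis construction underlying such calculations can be sketched with the standard Cox-de Boor recursion; expanding the hydrogenic 1s radial function u(r) = 2r e^{-r} on a finite box is only a toy analogue of the bound-state expansions used in the paper (the knot grid, spline degree and box size below are arbitrary choices, not those of the calculation):

```python
import numpy as np

def bspline_basis(x, knots, degree):
    """Values of all B-spline basis functions at points x (Cox-de Boor)."""
    x = np.asarray(x, float)
    n = len(knots) - degree - 1
    # Degree 0: indicator functions of the knot intervals.
    B = np.array([(knots[i] <= x) & (x < knots[i + 1])
                  for i in range(len(knots) - 1)], float)
    for d in range(1, degree + 1):
        Bn = []
        for i in range(len(knots) - d - 1):
            left = knots[i + d] - knots[i]
            right = knots[i + d + 1] - knots[i + 1]
            term = np.zeros_like(x)
            if left > 0:
                term += (x - knots[i]) / left * B[i]
            if right > 0:
                term += (knots[i + d + 1] - x) / right * B[i + 1]
            Bn.append(term)
        B = np.array(Bn)
    return B[:n]

# Clamped quintic knot vector on [0, rmax]: endpoints repeated degree+1 times.
degree, rmax = 5, 30.0
interior = np.linspace(0.0, rmax, 40)
knots = np.concatenate([[0.0] * degree, interior, [rmax] * degree])

r = np.linspace(0.0, rmax, 600, endpoint=False)
B = bspline_basis(r, knots, degree)      # shape: (n_basis, n_points)

# Least-squares expansion of the 1s radial function in the B-spline basis.
u = 2.0 * r * np.exp(-r)
coef, *_ = np.linalg.lstsq(B.T, u, rcond=None)
max_err = np.max(np.abs(B.T @ coef - u))
```

Even this crude grid reproduces the bound-state function to small absolute error, which is the point made in the abstract about the simplicity and accuracy of B-spline expansions.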

Abstract:

Temporal dynamics and speaker characteristics are two important features of speech that distinguish speech from noise. In this paper, we propose a method to maximally extract these two features of speech for speech enhancement. We demonstrate that this can reduce the requirement for prior information about the noise, which can be difficult to estimate for fast-varying noise. Given noisy speech, the new approach estimates clean speech by recognizing long segments of the clean speech as whole units. In the recognition, clean speech sentences, taken from a speech corpus, are used as examples. Matching segments are identified between the noisy sentence and the corpus sentences. The estimate is formed by using the longest matching segments found in the corpus sentences. Longer speech segments as whole units contain more distinct dynamics and richer speaker characteristics, and can be identified more accurately from noise than shorter speech segments. Therefore, estimation based on the longest recognized segments increases the noise immunity and hence the estimation accuracy. The new approach consists of a statistical model to represent up to sentence-long temporal dynamics in the corpus speech, and an algorithm to identify the longest matching segments between the noisy sentence and the corpus sentences. The algorithm is made more robust to noise uncertainty by introducing missing-feature based noise compensation into the corpus sentences. Experiments have been conducted on the TIMIT database for speech enhancement from various types of nonstationary noise including song, music, and crosstalk speech. The new approach has shown improved performance over conventional enhancement algorithms in both objective and subjective evaluations.
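The segment-matching step can be illustrated, in a much-simplified form, as a longest-common-substring search over vector-quantised frame labels. The labels and sentences below are hypothetical, and the paper's system uses a statistical model over corpus dynamics with missing-feature noise compensation rather than the exact matching shown here:

```python
def longest_matching_segment(noisy, corpus_sentence):
    """Longest common contiguous run of frame labels, by dynamic programming.

    dp[i][j] = length of the longest match ending at noisy[i-1] and
    corpus_sentence[j-1]; the overall maximum locates the segment."""
    n, m = len(noisy), len(corpus_sentence)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    best_len, best_end = 0, 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if noisy[i - 1] == corpus_sentence[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
                if dp[i][j] > best_len:
                    best_len, best_end = dp[i][j], j
    return corpus_sentence[best_end - best_len:best_end]

# Toy quantised frame labels (hypothetical): the estimate favours the
# longest segment found anywhere in the corpus, since longer segments
# carry more temporal dynamics and speaker detail.
corpus = [["sil", "s", "iy", "sil"],
          ["sil", "dh", "ax", "s", "iy", "ch", "sil"]]
noisy = ["sil", "dh", "ax", "s", "iy", "sil"]
best = max((longest_matching_segment(noisy, s) for s in corpus), key=len)
```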