73 results for "cumulative error"


Relevance:

20.00%

Publisher:

Abstract:

An algorithm based on the concept of combining Kalman filter and Least Error Square (LES) techniques is proposed in this paper. The algorithm is intended to estimate signal attributes such as amplitude, frequency and phase angle in online mode. This technique can be used in protection relays, digital AVRs, DGs, DSTATCOMs, FACTS and other power electronics applications. The Kalman filter is modified to operate on a fictitious input signal and provides precise estimation results insensitive to noise and other disturbances. At the same time, the LES system has been arranged to operate in critical transient cases to compensate for the delay and inaccuracy arising from the response of the standard Kalman filter. Practical considerations such as the effect of noise, higher-order harmonics, and computational issues of the algorithm are considered and tested in the paper. Several computer simulations and a laboratory test are presented to highlight the usefulness of the proposed method. Simulation results show that the proposed technique can simultaneously estimate the signal attributes, even if the signal is highly distorted due to the presence of non-linear loads and noise.
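The paper's modified Kalman/LES hybrid is not reproduced here, but the underlying idea of estimating a sinusoid's amplitude and phase online with a linear Kalman filter can be sketched as follows. The state parametrisation, noise settings, and function names are illustrative assumptions; in particular, the frequency is taken as known here, which the paper's method does not require:

```python
import numpy as np

def kalman_sinusoid(z, t, omega, q=1e-6, r=1e-2):
    """Track amplitude and phase of z[k] ~ A*cos(omega*t[k] + phi) + noise
    with a linear Kalman filter on the state x = [a, b], where the clean
    signal is s(t) = a*cos(omega*t) + b*sin(omega*t)."""
    x = np.zeros(2)          # state estimate [a, b]
    P = np.eye(2)            # state covariance
    Q = q * np.eye(2)        # process noise: allows slow parameter drift
    for zk, tk in zip(z, t):
        P = P + Q                                  # predict step (F = I)
        H = np.array([np.cos(omega * tk), np.sin(omega * tk)])
        S = H @ P @ H + r                          # innovation variance
        K = P @ H / S                              # Kalman gain
        x = x + K * (zk - H @ x)                   # measurement update
        P = P - np.outer(K, H @ P)
    amp = np.hypot(x[0], x[1])                     # A = sqrt(a^2 + b^2)
    phase = np.arctan2(-x[1], x[0])                # phi = atan2(-b, a)
    return amp, phase

# usage: noisy 50 Hz signal, amplitude 2, phase 0.5 rad
rng = np.random.default_rng(0)
t = np.arange(0, 0.2, 1 / 5000)
omega = 2 * np.pi * 50
z = 2 * np.cos(omega * t + 0.5) + 0.05 * rng.standard_normal(t.size)
amp, phase = kalman_sinusoid(z, t, omega)
```

With the frequency fixed, the measurement model is linear in [a, b], so a plain Kalman filter applies; estimating frequency as well, as the paper does, makes the problem non-linear.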

Abstract:

As order dependencies between process tasks can get complex, it is easy to make mistakes in process model design, especially behavioral ones such as deadlocks. Notions such as soundness formalize behavioral errors, and tools exist that can identify such errors. However, these tools do not provide assistance with the correction of the process models. Error correction can be very challenging as the intentions of the process modeler are not known and there may be many ways in which an error can be corrected. We present a novel technique for automatic error correction in process models based on simulated annealing. Via this technique, a number of process model alternatives are identified that resolve one or more errors in the original model. The technique is implemented and validated on a sample of industrial process models. The tests show that at least one sound solution can be found for each input model and that the response times are short.
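The paper's encoding of process-model edits is not given in the abstract, but the simulated-annealing search it builds on can be sketched generically. The `errors` and `neighbor` callbacks stand in for the soundness checker and model-repair operators, and all parameter values are hypothetical:

```python
import math
import random

def simulated_annealing(model, errors, neighbor, t0=1.0, cooling=0.95,
                        steps=500, seed=0):
    """Generic simulated-annealing repair loop: 'errors(m)' counts the
    errors remaining in candidate m, 'neighbor(m, rng)' returns a
    slightly edited copy.  A worse candidate is still accepted with
    probability exp(-delta / T), which lets the search escape local
    minima; T is cooled geometrically."""
    rng = random.Random(seed)
    cur, cur_err = model, errors(model)
    best, best_err = cur, cur_err
    temp = t0
    for _ in range(steps):
        cand = neighbor(cur, rng)
        delta = errors(cand) - cur_err
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            cur, cur_err = cand, cur_err + delta
        if cur_err < best_err:
            best, best_err = cur, cur_err
        if best_err == 0:        # all errors resolved
            break
        temp *= cooling
    return best, best_err

# toy check: a "model" is a bit tuple, every set bit is an error,
# and an edit flips one randomly chosen bit
def flip_one(m, rng):
    i = rng.randrange(len(m))
    return m[:i] + (m[i] ^ 1,) + m[i + 1:]

best, err = simulated_annealing((1, 1, 1, 1), sum, flip_one, steps=200)
```

Because accepted edits can temporarily add errors, the loop tracks the best candidate seen so far; returning several near-best candidates instead of one would match the paper's goal of offering alternative corrections.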

Abstract:

Gay community media functions as a system with three nodes, in which the flows of information and capital theoretically benefit all parties: the gay community gains a sense of cohesion and citizenship through media; the gay media outlets profit from advertisers’ capital; and advertisers recoup their investments in lucrative ‘pink dollar’ revenue. But if a necessary corollary of all communication systems is error or noise, where—and what—are the errors in this system? In this paper we argue that the ‘error’ in the gay media system is Queerness, and that the gay media system ejects (in a process of Kristevan abjection) these Queer identities in order to function successfully. We examine the ways in which Queer identities are excluded from representation in such media through a discourse and content analysis of The Sydney Star Observer (Australia’s largest gay and lesbian paper). First, we analyse the way Queer bodies are excluded from the discourses that construct and reinforce both the ideal gay male body and the notions of homosexual essence required for that body to be meaningful. We then argue that abject Queerness returns in the SSO’s discourses of public health through the conspicuous absence of the AIDS-inflicted body (which we read as the epitome of the abject Queer), since this absence paradoxically conjures up a trace of that which the system tries to expel. We conclude by arguing that because the ‘Queer error’ is integral to the SSO, gay community media should practise a politics of Queer inclusion rather than exclusion.

Abstract:

Regardless of the benefits of technology, safety planners still face difficulties explaining errors related to the use of different technologies and evaluating how those errors impact the performance of safety decision making. This paper presents a preliminary error impact analysis testbed to model object identification and tracking errors caused by image-based devices and algorithms and to analyze the impact of the errors on spatial safety assessment of earthmoving and surface mining activities. More specifically, this research designed a testbed to model workspaces for earthmoving operations, to simulate safety-related violations, and to apply different object identification and tracking errors to the data collected and processed for spatial safety assessment. Three different cases were analyzed based on actual earthmoving operations conducted at a limestone quarry. Using the testbed, the impacts of the errors were investigated for safety planning purposes.
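As an illustration of the kind of analysis such a testbed performs, the sketch below injects hypothetical tracking noise and missed detections into object positions and measures how a simple proximity-based violation rate shifts. All function names, error rates, and the hazard-radius check are assumptions, not the paper's testbed:

```python
import numpy as np

def inject_tracking_error(pos, sigma, miss_rate, rng):
    """Perturb true (x, y) object positions with Gaussian tracking noise
    and drop a fraction of detections, mimicking identification and
    tracking errors of an image-based device."""
    noisy = pos + rng.normal(0.0, sigma, pos.shape)
    kept = rng.random(len(pos)) >= miss_rate
    return noisy[kept]

def violation_rate(workers, equipment, radius):
    """Fraction of worker positions inside the hazard radius of the
    nearest piece of equipment (a simple spatial safety check)."""
    if len(workers) == 0 or len(equipment) == 0:
        return 0.0
    d = np.linalg.norm(workers[:, None, :] - equipment[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) < radius))

# usage: one worker inside the 2 m hazard radius, one far outside
rng = np.random.default_rng(1)
workers = np.array([[1.0, 0.0], [10.0, 0.0]])
equipment = np.array([[0.0, 0.0]])
true_rate = violation_rate(workers, equipment, radius=2.0)
noisy = inject_tracking_error(workers, sigma=0.3, miss_rate=0.2, rng=rng)
noisy_rate = violation_rate(noisy, equipment, radius=2.0)
```

Repeating the error injection many times and comparing `noisy_rate` against `true_rate` gives a Monte Carlo picture of how sensor errors bias the safety assessment.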

Abstract:

We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
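The label-flipping computation of the maximal discrepancy penalty can be sketched as follows. This is a Monte Carlo estimate; the `fit`/`err` interface is an illustrative assumption, and the equivalence to discrepancy maximization holds exactly only when `fit` performs true empirical risk minimization over the class:

```python
import numpy as np

def maximal_discrepancy(X, y, fit, err, n_mc=20, seed=0):
    """Monte Carlo estimate of the expected maximal discrepancy penalty
    for binary labels y in {0, 1}.  Each round flips the labels of a
    random half of the sample and runs empirical risk minimization
    ('fit') on the flipped data; minimizing the flipped error is
    equivalent to maximizing the error gap between the two halves."""
    rng = np.random.default_rng(seed)
    n = len(y)
    vals = []
    for _ in range(n_mc):
        half = rng.permutation(n)[: n // 2]
        y_flip = y.copy()
        y_flip[half] = 1 - y_flip[half]           # flip a random half
        f = fit(X, y_flip)                        # ERM on flipped labels
        e1 = err(f, X[half], y[half])             # error on flipped half, true labels
        mask = np.ones(n, dtype=bool)
        mask[half] = False
        e2 = err(f, X[mask], y[mask])             # error on untouched half
        vals.append(e1 - e2)                      # the achieved discrepancy
    return float(np.mean(vals))

# usage with the tiny class of two constant classifiers {0, 1}
rng = np.random.default_rng(1)
X = rng.standard_normal((40, 1))
y = (rng.random(40) < 0.5).astype(int)
fit = lambda X, y: int(np.mean(y) > 0.5)          # exact ERM over constants
err = lambda f, X, y: float(np.mean(y != f))
pen = maximal_discrepancy(X, y, fit, err)
```

This is why the abstract calls the penalty appealing for pattern classification: no extra machinery is needed beyond the learner already used for training.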

Abstract:

We consider complexity penalization methods for model selection. These methods aim to choose a model to optimally trade off estimation and approximation errors by minimizing the sum of an empirical risk term and a complexity penalty. It is well known that if we use a bound on the maximal deviation between empirical and true risks as a complexity penalty, then the risk of our choice is no more than the approximation error plus twice the complexity penalty. There are many cases, however, where complexity penalties like this give loose upper bounds on the estimation error. In particular, if we choose a function from a suitably simple convex function class with a strictly convex loss function, then the estimation error (the difference between the risk of the empirical risk minimizer and the minimal risk in the class) approaches zero at a faster rate than the maximal deviation between empirical and true risks. In this paper, we address the question of whether it is possible to design a complexity penalized model selection method for these situations. We show that, provided the sequence of models is ordered by inclusion, in these cases we can use tight upper bounds on estimation error as a complexity penalty. Surprisingly, this is the case even in situations when the difference between the empirical risk and true risk (and indeed the error of any estimate of the approximation error) decreases much more slowly than the complexity penalty. We give an oracle inequality showing that the resulting model selection method chooses a function with risk no more than the approximation error plus a constant times the complexity penalty.
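A minimal sketch of complexity-penalized selection over a nested, inclusion-ordered sequence of models, here polynomial regressions. The penalty shape below is a hypothetical placeholder; the paper's penalties are different and data-dependent:

```python
import numpy as np

def select_model(x, y, degrees, penalty):
    """Pick, among nested polynomial models (ordered by inclusion), the
    degree minimizing empirical risk plus a complexity penalty."""
    best_deg, best_score = None, np.inf
    for d in degrees:
        coef = np.polyfit(x, y, d)                       # ERM within the class
        risk = float(np.mean((np.polyval(coef, x) - y) ** 2))
        score = risk + penalty(d, len(y))                # penalized criterion
        if score < best_score:
            best_deg, best_score = d, score
    return best_deg

# usage: quadratic truth with noise; the penalty grows with model size,
# so larger degrees must earn their extra parameters
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 200)
y = 1 - 2 * x + 3 * x**2 + 0.1 * rng.standard_normal(200)
deg = select_model(x, y, range(0, 8), lambda d, n: 0.02 * (d + 1) / np.sqrt(n))
```

The empirical risk alone is non-increasing along the nested sequence, so without the penalty term the largest model would always win; the penalty is what trades estimation error against approximation error.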

Abstract:

We study Krylov subspace methods for approximating the matrix-function vector product φ(tA)b where φ(z) = [exp(z) - 1]/z. This product arises in the numerical integration of large stiff systems of differential equations by the Exponential Euler Method, where A is the Jacobian matrix of the system. Recently, this method has found application in the simulation of transport phenomena in porous media within mathematical models of wood drying and groundwater flow. We develop an a posteriori upper bound on the Krylov subspace approximation error and provide a new interpretation of a previously published error estimate. This leads to an alternative Krylov approximation to φ(tA)b, the so-called Harmonic Ritz approximant, which we find does not exhibit oscillatory behaviour of the residual error.

Abstract:

This paper uses an aggregate quantity space to decompose the temporal changes in nitrogen use efficiency and cumulative exergy use efficiency into Moorsteen–Bjurek (MB) Total Factor Productivity (TFP) changes and changes in the aggregate nitrogen and cumulative exergy contents. Changes in productivity can be broken into technical change and changes in various efficiency measures such as technical efficiency, scale efficiency and residual mix efficiency. Changes in the aggregate nitrogen and cumulative exergy contents can be driven by changes in the quality of inputs and outputs and changes in the mixes of inputs and outputs. Also, with cumulative exergy content analysis, changes in the efficiency of input production can increase or decrease the cumulative exergy transformity of agricultural production. The empirical study of 30 member countries of the Organisation for Economic Co-operation and Development from 1990 to 2003 yielded some important findings. The production technology progressed, but there were reductions in technical efficiency, scale efficiency and residual mix efficiency levels. This result suggests that the production frontier had shifted up but there existed lags in the responses of member countries to the technological change. Given TFP growth, improvements in nutrient use efficiency and cumulative exergy use efficiency were counteracted by reductions in the changes of the aggregate nitrogen contents ratio and aggregate cumulative exergy contents ratio. The empirical results also confirmed that different combinations of inputs and outputs, as well as the quality of inputs and outputs, could have more influence on the growth of nutrient and cumulative exergy use efficiency than factors that had driven productivity change.

Keywords: Nutrient use efficiency; Cumulative exergy use efficiency; Thermodynamic efficiency change; Productivity growth; OECD agriculture; Sustainability

Abstract:

The measurement error model is a well established statistical method for regression problems in medical sciences, although rarely used in ecological studies. While the situations in which it is appropriate may be less common in ecology, there are instances in which there may be benefits in its use for prediction and estimation of parameters of interest. We have chosen to explore this topic using a conditional independence model in a Bayesian framework using a Gibbs sampler, as this gives a great deal of flexibility, allowing us to analyse a number of different models without losing generality. Using simulations and two examples, we show how the conditional independence model can be used in ecology, and when it is appropriate.
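A minimal sketch of the kind of Gibbs sampler described, for a deliberately simplified measurement error model with known error variances. The model structure, priors, and all parameter values are illustrative assumptions, not the paper's specification:

```python
import numpy as np

def gibbs_measurement_error(y, w, tau2, sigma2, mu_x=0.0, s2=1.0,
                            n_iter=2000, burn=500, seed=0):
    """Gibbs sampler for a simplified Bayesian measurement error model:
        y_i = beta * x_i + eps_i,  eps_i ~ N(0, sigma2)   (outcome)
        w_i = x_i + u_i,           u_i  ~ N(0, tau2)      (error-prone covariate)
        x_i ~ N(mu_x, s2),         beta ~ N(0, 100)       (priors)
    Conditional independence: y and w are independent given the true x,
    so both full conditionals below are Gaussian."""
    rng = np.random.default_rng(seed)
    n = len(y)
    x = w.copy()                     # initialise latent covariates at w
    beta = 0.0
    draws = []
    for it in range(n_iter):
        # x_i | beta, y, w: product of three Gaussian terms
        prec = 1 / tau2 + beta**2 / sigma2 + 1 / s2
        mean = (w / tau2 + beta * y / sigma2 + mu_x / s2) / prec
        x = mean + rng.standard_normal(n) / np.sqrt(prec)
        # beta | x, y: conjugate normal update
        prec_b = x @ x / sigma2 + 1 / 100
        mean_b = (x @ y / sigma2) / prec_b
        beta = mean_b + rng.standard_normal() / np.sqrt(prec_b)
        if it >= burn:
            draws.append(beta)
    return np.array(draws)

# usage: true beta = 2; regressing y directly on the noisy w would
# attenuate the slope toward 2 / (1 + tau2 / s2) = 1.6
rng = np.random.default_rng(1)
x_true = rng.standard_normal(300)
w = x_true + rng.normal(0.0, 0.5, 300)         # tau2 = 0.25
y = 2.0 * x_true + rng.normal(0.0, 0.5, 300)   # sigma2 = 0.25
draws = gibbs_measurement_error(y, w, tau2=0.25, sigma2=0.25)
```

The posterior for beta concentrates near the true slope rather than the attenuated naive estimate, which is the benefit for parameter estimation the abstract refers to.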