985 results for "error estimate".
Abstract:
An algorithm that combines Kalman filter and Least Error Square (LES) techniques is proposed in this paper. The algorithm estimates signal attributes such as amplitude, frequency and phase angle online. This technique can be used in protection relays, digital AVRs, DGs, DSTATCOMs, FACTS and other power electronics applications. The Kalman filter is modified to operate on a fictitious input signal and provides precise estimation results that are insensitive to noise and other disturbances. At the same time, the LES system is arranged to operate in critical transient cases to compensate for the delay and inaccuracy caused by the response of the standard Kalman filter. Practical considerations such as the effect of noise, higher-order harmonics, and computational issues of the algorithm are considered and tested in the paper. Several computer simulations and a laboratory test are presented to highlight the usefulness of the proposed method. Simulation results show that the proposed technique can simultaneously estimate the signal attributes even when the signal is highly distorted by non-linear loads and noise.
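As an illustrative sketch only (not the paper's method, which adds a fictitious-input modification and an LES fallback): a plain linear Kalman filter can track the in-phase/quadrature phasor of a sinusoid of known frequency, from which amplitude and phase follow. All parameter values here are assumptions for the demo.

```python
import numpy as np

def kalman_phasor(samples, omega, dt, q=1e-6, r=1e-2):
    """Track the phasor [a, b] of z(t) = a*cos(w t) + b*sin(w t) + noise
    with a linear Kalman filter; the frequency omega is assumed known."""
    x = np.zeros(2)                  # state estimate [a, b]
    P = np.eye(2)                    # state covariance
    for k, z in enumerate(samples):
        t = k * dt
        H = np.array([[np.cos(omega * t), np.sin(omega * t)]])
        P = P + q * np.eye(2)        # predict: phasor modelled as (nearly) constant
        S = float(H @ P @ H.T) + r   # innovation variance
        K = (P @ H.T) / S            # Kalman gain, shape (2, 1)
        x = x + (K * (z - float(H @ x))).ravel()
        P = (np.eye(2) - K @ H) @ P
    amplitude = float(np.hypot(x[0], x[1]))
    phase = float(np.arctan2(x[1], x[0]))  # z(t) = A*cos(w t - phase)
    return amplitude, phase
```

Frequency estimation, which the paper also addresses, would require an extended (non-linear) state model and is omitted here.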
Abstract:
We consider complexity penalization methods for model selection. These methods aim to choose a model to optimally trade off estimation and approximation errors by minimizing the sum of an empirical risk term and a complexity penalty. It is well known that if we use a bound on the maximal deviation between empirical and true risks as a complexity penalty, then the risk of our choice is no more than the approximation error plus twice the complexity penalty. There are many cases, however, where complexity penalties like this give loose upper bounds on the estimation error. In particular, if we choose a function from a suitably simple convex function class with a strictly convex loss function, then the estimation error (the difference between the risk of the empirical risk minimizer and the minimal risk in the class) approaches zero at a faster rate than the maximal deviation between empirical and true risks. In this paper, we address the question of whether it is possible to design a complexity penalized model selection method for these situations. We show that, provided the sequence of models is ordered by inclusion, in these cases we can use tight upper bounds on estimation error as a complexity penalty. Surprisingly, this is the case even in situations when the difference between the empirical risk and true risk (and indeed the error of any estimate of the approximation error) decreases much more slowly than the complexity penalty. We give an oracle inequality showing that the resulting model selection method chooses a function with risk no more than the approximation error plus a constant times the complexity penalty.
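A toy illustration of complexity-penalized model selection over a sequence of models ordered by inclusion (nested polynomial classes); the penalty form c*(d+1)/n and the constant c are arbitrary choices for the demo, not the tight estimation-error bounds derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(-1, 1, n)
y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(0, 0.1, n)  # true model has degree 2

def empirical_risk(deg):
    """Training MSE of the least-squares polynomial fit of a given degree."""
    X = np.vander(x, deg + 1)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.mean((y - X @ coef) ** 2)

c = 0.2  # penalty constant: an illustrative tuning choice
penalized = {d: empirical_risk(d) + c * (d + 1) / n for d in range(8)}
selected = min(penalized, key=penalized.get)  # degree minimizing risk + penalty
```

The empirical risk alone is non-increasing in the model index, so without the penalty the largest model always wins; the penalty restores the trade-off the abstract describes.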
Abstract:
Bayesian networks (BNs) are graphical probabilistic models used for reasoning under uncertainty. These models are becoming increasingly popular in a range of fields including ecology, computational biology, medical diagnosis, and forensics. In most of these cases, the BNs are quantified using information from experts or from user opinions. An interest therefore lies in the way in which multiple opinions can be represented and used in a BN. This paper proposes the use of a measurement error model to combine opinions for use in the quantification of a BN. The multiple opinions are treated as a realisation of measurement error, and the model uses the posterior probabilities ascribed to each node in the BN, which are computed from the prior information given by each expert. The proposed model addresses the issues associated with current methods of combining opinions, such as the absence of a coherent probability model, the failure to maintain the conditional independence structure of the BN, and the provision of only a point estimate for the consensus. The proposed model is applied to an existing Bayesian network and performs well when compared to existing methods of combining opinions.
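The measurement-error idea can be sketched in a much simpler setting than the paper's model: treat each expert's probability, on the logit scale, as a noisy measurement of a latent "true" logit. The flat prior and common noise scale tau are assumptions of this sketch, not the paper's specification.

```python
import numpy as np

def pool_opinions(probs, tau=1.0):
    """Pool expert probabilities by treating their logits as noisy
    measurements of a latent true logit (flat prior, common noise sd tau)."""
    p = np.asarray(probs, dtype=float)
    logits = np.log(p / (1.0 - p))
    post_mean = logits.mean()            # posterior mean of the latent logit
    post_sd = tau / np.sqrt(len(p))      # posterior sd shrinks with more experts
    consensus = 1.0 / (1.0 + np.exp(-post_mean))
    return consensus, post_sd
```

Returning a posterior spread alongside the consensus value mirrors the abstract's criticism of methods that provide only a point estimate.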
Abstract:
Grass (monocot) and non-grass (dicot) proportions in ruminant diets are important nutritionally because the non-grasses are usually higher in nutritive value, particularly protein, than the grasses, especially in tropical pastures. For ruminants grazing tropical pastures, where the grasses are C4 species and most non-grasses are C3 species, the ratio of ¹³C/¹²C in diet and faeces, measured as δ¹³C (‰), is proportional to the dietary non-grass percentage. This paper describes the development of a faecal near infrared (NIR) spectroscopy calibration equation for predicting faecal δ¹³C, from which dietary grass and non-grass proportions can be calculated. Calibration development used cattle faeces derived from diets containing only C3 non-grass and C4 grass components, and a series of expansion and validation steps was employed to develop robustness and predictive reliability. The final calibration equation contained 1637 samples and a faecal δ¹³C range of −27.65 to −12.27‰. Calibration statistics were: standard error of calibration (SEC) of 0.78, standard error of cross-validation (SECV) of 0.80, standard deviation (SD) of reference values of 3.11 and R² of 0.94. Validation statistics for the final calibration equation applied to 60 samples were: standard error of prediction (SEP) of 0.87, bias of −0.15, R² of 0.92 and RPD of 3.16. The calibration equation was also tested on faeces from diets containing C4 non-grass species or temperate C3 grass species. Faecal δ¹³C predictions indicated that the spectral basis of the calibration was not related to ¹³C/¹²C ratios per se but to consistent differences between grasses and non-grasses in chemical composition, and that these differences were modified by photosynthetic pathway.
Thus, although the calibration equation could not be used to make valid faecal δ¹³C predictions when the diet contained either C3 grass or C4 non-grass, it could be used to make useful estimates of dietary non-grass proportions. It could also be used to make useful estimates of non-grass in mixed C3 grass/non-grass diets by applying a modified formula to calculate non-grass from predicted faecal δ¹³C. The development of a robust faecal-NIR calibration equation for estimating non-grass proportions in the diets of grazing cattle demonstrated a novel and useful application of NIR spectroscopy in agriculture.
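The validation statistics reported above (bias, SEP, R², RPD) can be computed from paired reference and predicted values. Definitions vary slightly across the NIR literature (e.g., bias-corrected versus uncorrected SEP); this sketch uses the bias-corrected convention, which is an assumption, not a detail stated in the abstract.

```python
import numpy as np

def validation_stats(reference, predicted):
    """Bias, SEP, R^2 and RPD for a set of validation samples."""
    reference = np.asarray(reference, dtype=float)
    residual = np.asarray(predicted, dtype=float) - reference
    bias = residual.mean()
    sep = residual.std(ddof=1)            # bias-corrected standard error of prediction
    r2 = 1.0 - np.sum(residual**2) / np.sum((reference - reference.mean())**2)
    rpd = reference.std(ddof=1) / sep     # ratio of performance to deviation
    return bias, sep, r2, rpd
```

An RPD of 3.16, as reported for the final equation, indicates that the spread of the reference values is about three times the prediction error, a level usually regarded as adequate for quantitative work.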
Abstract:
A residual-based strategy to estimate the local truncation error in a finite volume framework for steady compressible flows is proposed. This estimator, referred to as the -parameter, is derived from the imbalance arising from the application of an exact operator to the numerical solution of the conservation laws. The behaviour of the residual estimator for linear and non-linear hyperbolic problems is systematically analysed. The relationship of the residual to the global error is also studied. The -parameter is used to derive a target length scale and consequently to devise a suitable criterion for refinement/derefinement. This strategy, devoid of any user-defined parameters, is validated using two standard test cases involving smooth flows. A hybrid adaptive strategy for flows involving shocks, based on both error indicators and the -parameter, is also developed. Numerical studies on several compressible flow cases show that the adaptive algorithm performs very well in both two and three dimensions.
Abstract:
By deriving the equations for an error analysis of modeling inaccuracies for the combined estimation and control problem, it is shown that the optimum estimation error is orthogonal to the actual suboptimum estimate.
Abstract:
This paper proposes a sensorless vector control scheme for general-purpose induction motor drives using a current error space phasor-based hysteresis controller. A new technique for sensorless operation is developed to estimate the rotor voltage, and hence the rotor flux position, from the stator current error during zero-voltage space vectors. It gives performance comparable to a sensor-based vector control drive, especially at very low speeds of operation (less than 1 Hz). Since no voltage sensing is performed, the dead-time effect and the loss of accuracy in voltage sensing at low speed are avoided, with the inherent advantages of the current error space phasor-based hysteresis controller. However, appropriate device on-state voltage drops are compensated to achieve steady-state operation below 1 Hz. Moreover, using a parabolic boundary for the current error, the switching frequency of the inverter can be kept constant over the entire operating speed range. A simple σLs estimation is proposed, and the sensitivity of the control scheme to changes in the stator resistance Rs is also investigated. Extensive experimental results at speeds below 1 Hz verify the proposed concept. The same control scheme is further extended from below 1 Hz up to rated 50 Hz six-step operation of the inverter. Magnetic saturation is ignored in the control scheme.
Abstract:
The use of mutagenic drugs to drive HIV-1 past its error threshold presents a novel intervention strategy, as suggested by quasispecies theory, that may be less susceptible to failure via viral mutation-induced emergence of drug resistance than current strategies. The error threshold of HIV-1, μc, however, is not known. Applying quasispecies theory to determine μc poses significant challenges: whereas the theory considers the asexual reproduction of an infinitely large population of haploid individuals, HIV-1 is diploid, undergoes recombination, and is estimated to have a small effective population size in vivo. We performed population genetics-based stochastic simulations of the within-host evolution of HIV-1 and estimated the structure of the HIV-1 quasispecies and μc. We found that with small mutation rates, the quasispecies was dominated by genomes with few mutations. Upon increasing the mutation rate, a sharp error catastrophe occurred where the quasispecies became delocalized in sequence space. Using parameter values that quantitatively captured data on viral diversification in HIV-1 patients, we estimated μc to be 7 × 10⁻⁵ to 1 × 10⁻⁴ substitutions/site/replication, roughly 2- to 6-fold higher than the natural mutation rate of HIV-1, suggesting that HIV-1 survives close to its error threshold and may be readily susceptible to mutagenic drugs. The latter estimate was weakly dependent on the within-host effective population size of HIV-1. With large population sizes and in the absence of recombination, our simulations converged to quasispecies theory, bridging the gap between quasispecies theory and population genetics-based approaches to describing HIV-1 evolution. Further, μc increased with the recombination rate, rendering HIV-1 less susceptible to error catastrophe and thus elucidating an added benefit of recombination to HIV-1.
Our estimate of μc may serve as a quantitative guideline for the use of mutagenic drugs against HIV-1.
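The error-threshold phenomenon itself can be illustrated with Eigen's deterministic single-peak quasispecies model, a far simpler setting than the paper's stochastic, recombining simulations; the values of sigma and L below are illustrative, not the paper's HIV-1 parameters.

```python
import numpy as np

sigma = 10.0   # replicative advantage of the master sequence (illustrative)
L = 100        # number of sites in the genome (illustrative)

def master_fraction(mu):
    """Equilibrium frequency of the master sequence in the single-peak
    quasispecies model, neglecting back mutation."""
    Q = (1.0 - mu) ** L                          # probability of error-free copying
    return max((sigma * Q - 1.0) / (sigma - 1.0), 0.0)

mu_c = np.log(sigma) / L   # classical error-threshold estimate: mu_c = ln(sigma) / L
```

Below mu_c the master sequence retains a finite frequency; above it the population delocalizes over sequence space, the sharp transition the abstract calls the error catastrophe.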
Abstract:
We consider channel estimation for a classical linear intersymbol-interference channel with additive Gaussian noise, in the context of designing a linear equalizer with a finite number of coefficients. Previous literature has shown that Minimum Bit Error Rate (MBER)-based detection outperforms Minimum Mean Squared Error (MMSE)-based detection. We pose the channel estimation problem as a detection problem and propose a novel algorithm to estimate the channel within the MBER framework for BPSK signals. It is shown that the proposed algorithm reduces the BER compared to MMSE-based channel estimation when used in either MMSE or MBER detection.
Abstract:
Bilateral filters perform edge-preserving smoothing and are widely used for image denoising. The denoising performance is sensitive to the choice of the bilateral filter parameters. We propose an optimal parameter selection for bilateral filtering of images corrupted with Poisson noise. We employ the Poisson Unbiased Risk Estimate (PURE), an unbiased estimate of the mean squared error (MSE). It does not require a priori knowledge of the ground truth and is therefore useful in practical scenarios where there is no access to the original image. Experimental results show that the quality of denoising obtained with PURE-optimal bilateral filters is almost indistinguishable from that of oracle-MSE-optimal bilateral filters.
Abstract:
This study considers linear filtering methods for minimising the end-to-end average distortion of a fixed-rate source quantisation system. For the source encoder, both scalar and vector quantisation are considered. The codebook index output by the encoder is sent over a noisy discrete memoryless channel whose statistics may be unknown at the transmitter. At the receiver, the code vector corresponding to the received index is passed through a linear receive filter, whose output is an estimate of the source instantiation. Under this setup, an approximate expression for the average weighted mean-square error (WMSE) between the source instantiation and the reconstructed vector at the receiver is derived using high-resolution quantisation theory. A closed-form expression for the linear receive filter that minimises the approximate average WMSE is also derived. The generality of the framework is further demonstrated by theoretically analysing the performance of other adaptation techniques that can be employed when the channel statistics are also available at the transmitter, such as joint transmit-receive linear filtering and codebook scaling. Monte Carlo simulation results validate the theoretical expressions and illustrate the improvement in average distortion that can be obtained using linear filtering techniques.
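The benefit of a linear receive filter can be sketched in a stripped-down setting: a generic sample-covariance LMMSE filter applied to noisy reconstructions of a Gaussian source. This is not the paper's high-resolution WMSE derivation; the dimensions and noise level are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
x = rng.normal(0.0, 1.0, (n, 2))         # source vectors
y = x + rng.normal(0.0, 0.5, (n, 2))     # noisy reconstructions at the receiver

# Linear MMSE receive filter from sample covariances: W = C_xy @ inv(C_yy)
C_xy = (x.T @ y) / n
C_yy = (y.T @ y) / n
W = C_xy @ np.linalg.inv(C_yy)
x_hat = y @ W.T                          # filtered estimate of the source

mse_raw = np.mean((y - x) ** 2)          # distortion without the receive filter
mse_filtered = np.mean((x_hat - x) ** 2) # distortion with the receive filter
```

For this i.i.d. setting the theoretical gain is a shrinkage from 0.25 to 0.25/1.25 = 0.2 per component; the sample-covariance filter recovers essentially that.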
Abstract:
We address the problem of designing an optimal pointwise shrinkage estimator in the transform domain, based on the minimum probability of error (MPE) criterion. We assume an additive model for the noise corrupting the clean signal. The proposed formulation is general in the sense that it can handle various noise distributions. We consider several noise distributions (Gaussian, Student's-t, and Laplacian) and compare the denoising performance of the resulting estimator with that of mean-squared error (MSE)-based estimators. The MSE optimization is carried out using an unbiased estimate of the MSE, namely Stein's Unbiased Risk Estimate (SURE). Experimental results show that the MPE estimator outperforms the SURE estimator in terms of output SNR for low (0-10 dB) and medium (10-20 dB) values of the input SNR.
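The SURE baseline mentioned above can be sketched for the classic case of soft-threshold shrinkage under unit-variance Gaussian noise (the paper's MPE criterion is not reproduced here); the sparse signal model and threshold grid are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000
theta = np.where(rng.random(n) < 0.1, rng.normal(0.0, 5.0, n), 0.0)  # sparse clean signal
x = theta + rng.normal(0.0, 1.0, n)                                  # noisy observations

def soft(x, t):
    """Soft-threshold shrinkage at threshold t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sure_soft(t, x, sigma=1.0):
    """Stein's Unbiased Risk Estimate of the total MSE of soft thresholding at t."""
    return (n * sigma**2
            - 2.0 * sigma**2 * np.sum(np.abs(x) <= t)
            + np.sum(np.minimum(np.abs(x), t) ** 2))

ts = np.linspace(0.0, 4.0, 81)
t_sure = ts[np.argmin([sure_soft(t, x) for t in ts])]  # data-driven threshold
```

Because SURE is computed from the noisy data alone, the chosen threshold requires no access to theta, yet its risk is close to that of the oracle threshold that minimizes the true MSE.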