985 results for Error estimate.


Relevance:

30.00%

Publisher:

Abstract:

Regression problems are concerned with predicting the values of one or more continuous quantities, given the values of a number of input variables. For virtually every application of regression, however, it is also important to have an indication of the uncertainty in the predictions. Such uncertainties are expressed in terms of error bars, which specify the standard deviation of the distribution of predictions about the mean. Accurate estimation of error bars is of practical importance, especially when safety and reliability are at issue. The Bayesian view of regression leads naturally to two contributions to the error bars. The first arises from the intrinsic noise on the target data, while the second comes from the uncertainty in the values of the model parameters, which manifests itself in the finite width of the posterior distribution over the space of these parameters. The Hessian matrix, which contains the second derivatives of the error function with respect to the weights, is needed for implementing the Bayesian formalism in general and for estimating the error bars in particular. A study of different methods for evaluating this matrix is given, with special emphasis on the outer product approximation method. The contribution of the uncertainty in the model parameters to the error bars is a finite-data-size effect, which becomes negligible as the number of data points in the training set increases. A study of this contribution is given in relation to the distribution of data in input space. It is shown that the addition of data points to the training set can only reduce the local magnitude of the error bars or leave it unchanged. Using the asymptotic limit of an infinite data set, it is shown that the error bars have an approximate relation to the density of data in input space.
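
To make the two contributions concrete, here is a minimal numpy sketch of Bayesian error bars for a model that is linear in its parameters, where the outer product approximation to the Hessian is exact. The RBF features, the hyperparameters alpha and beta, and the synthetic data are illustrative assumptions, not the setup used in the work above.

```python
import numpy as np

# Toy sketch of Bayesian error bars with the outer-product Hessian
# approximation, for a model linear in its parameters (RBF features).
# alpha (prior precision) and beta (noise precision) are illustrative.

def rbf_features(x, centers, width=0.3):
    """Gaussian basis functions; phi(x) is also grad_w y for this model."""
    return np.exp(-0.5 * ((x[:, None] - centers[None, :]) / width) ** 2)

rng = np.random.default_rng(0)
x_train = rng.uniform(0, 1, 40)
t_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, 40)

centers = np.linspace(0, 1, 12)
alpha, beta = 1e-2, 100.0          # assumed hyperparameters

Phi = rbf_features(x_train, centers)
# Outer-product approximation to the Hessian:
# A = alpha*I + beta * sum_n g_n g_n^T, exact here because the model is linear.
A = alpha * np.eye(len(centers)) + beta * Phi.T @ Phi
w_map = beta * np.linalg.solve(A, Phi.T @ t_train)

x_test = np.linspace(0, 1, 200)
G = rbf_features(x_test, centers)               # rows are g(x) = grad_w y
y_mean = G @ w_map
# Two contributions to the predictive variance: intrinsic noise 1/beta
# plus parameter uncertainty g^T A^{-1} g from the finite posterior width.
var = 1.0 / beta + np.einsum('ij,ij->i', G, np.linalg.solve(A, G.T).T)
error_bars = np.sqrt(var)
```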

Relevance:

30.00%

Publisher:

Abstract:

In this paper, the authors use an exponential generalized autoregressive conditional heteroscedastic (EGARCH) error-correction model (ECM), that is, EGARCH-ECM, to estimate the pass-through effects of foreign exchange (FX) rates and producers’ prices for 20 U.K. export sectors. The long-run adjustment of export prices to FX rates and producers’ prices is within the range of -1.02% (for the Textiles sector) and -17.22% (for the Meat sector). The contemporaneous pricing-to-market (PTM) coefficient is within the range of -72.84% (for the Fuels sector) and -8.05% (for the Textiles sector). Short-run FX rate pass-through is not complete even after several months. Rolling EGARCH-ECMs show that the short- and long-run effects of FX rates and producers’ prices fluctuate substantially, as do the asymmetry and volatility estimates, before equilibrium is achieved.
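
As a rough guide to reproducing this kind of specification, the sketch below implements a two-step EGARCH-ECM in Python with statsmodels and the arch package: a long-run cointegrating regression, followed by a short-run equation with the lagged equilibrium error and EGARCH(1,1) innovations. The file name, column names, and lag structure are hypothetical placeholders; the paper's exact specification may differ.

```python
# A hedged sketch of a two-step EGARCH-ECM, not the paper's exact model.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from arch import arch_model

df = pd.read_csv("sector_prices.csv")          # hypothetical input file
p_x = np.log(df["export_price"])               # log export price
fx = np.log(df["fx_rate"])                     # log FX rate
p_d = np.log(df["producer_price"])             # log producers' price

# Step 1: long-run (cointegrating) regression; its residual is the ECM term.
long_run = sm.OLS(p_x, sm.add_constant(pd.concat([fx, p_d], axis=1))).fit()
ecm = long_run.resid.shift(1).dropna()

# Step 2: short-run equation in first differences with the lagged ECM term
# and EGARCH(1,1) errors (o=1 adds the asymmetry/leverage term).
d_px = p_x.diff().dropna()
X = pd.concat([fx.diff(), p_d.diff(), ecm], axis=1).dropna()
y, X = d_px.align(X, join="inner", axis=0)

model = arch_model(y, x=X, mean="LS", vol="EGARCH", p=1, o=1, q=1)
res = model.fit(disp="off")
print(res.summary())               # ECM coefficient = speed of adjustment
```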

Relevance:

30.00%

Publisher:

Abstract:

In this letter, we experimentally study the statistical properties of a received QPSK modulated signal and compare various bit error rate (BER) estimation methods for coherent optical orthogonal frequency division multiplexing transmission. We show that the statistical BER estimation method based on the probability density function of the received QPSK symbols offers the most accurate estimate of the system performance.
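
A minimal sketch of a PDF-based BER estimate of this kind, under an additive circular Gaussian noise assumption that is ours (the letter's empirical PDFs need not be Gaussian):

```python
# Fit a Gaussian to the in-phase and quadrature components of received
# QPSK symbols (folded into one quadrant) and integrate the tail that
# crosses the decision boundary. AWGN is assumed for illustration.
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, (10000, 2))
tx = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
rx = tx + (rng.normal(0, 0.25, 10000) + 1j * rng.normal(0, 0.25, 10000))

# Fold all symbols into the first quadrant so one fit covers both rails.
folded = np.abs(rx.real) + 1j * np.abs(rx.imag)
samples = np.concatenate([folded.real, folded.imag])
mu, sigma = samples.mean(), samples.std(ddof=1)

# Probability that noise pushes a component across the I=0 (or Q=0) boundary.
ber_est = 0.5 * erfc(mu / (sigma * np.sqrt(2)))

# Direct error count for comparison (Gray-mapped QPSK: sign decision per rail).
ber_count = np.mean([(rx.real > 0) != (bits[:, 0] == 1),
                     (rx.imag > 0) != (bits[:, 1] == 1)])
print(f"PDF-based estimate: {ber_est:.4f}, counted: {ber_count:.4f}")
```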

Relevance:

30.00%

Publisher:

Abstract:

A statistical approach to numerically evaluating transmission distances in optical communication systems was described. The proposed systems were subjected to strong patterning effects and strong intersymbol interference. The dependence of the transmission distance on the total number of bits was described. Gaussian (normal) distributions were used to derive the error probability.
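
For illustration, the sketch below derives an error probability from fitted Gaussian statistics using the standard Q-factor formula BER = 0.5·erfc(Q/√2); the two-level signal model and the sample values are our assumptions, not the paper's patterning-dependent analysis.

```python
# Fit normal distributions to the received "1" and "0" levels and compute
# the Q-factor BER estimate. The sample values are synthetic.
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(2)
ones = rng.normal(1.0, 0.12, 5000)    # received levels for marks
zeros = rng.normal(0.1, 0.08, 5000)   # received levels for spaces

q = (ones.mean() - zeros.mean()) / (ones.std(ddof=1) + zeros.std(ddof=1))
ber = 0.5 * erfc(q / np.sqrt(2))
print(f"Q = {q:.2f}, estimated BER = {ber:.2e}")
```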

Relevance:

30.00%

Publisher:

Abstract:

Coherent optical orthogonal frequency division multiplexing (CO-OFDM) is an attractive transmission technique to virtually eliminate intersymbol interference caused by chromatic dispersion and polarization-mode dispersion. Design, development, and operation of CO-OFDM systems require simple, efficient, and reliable methods of their performance evaluation. In this paper, we demonstrate an accurate bit error rate estimation method for QPSK CO-OFDM transmission based on the probability density function of the received QPSK symbols. By comparing with other known approaches, including data-aided and nondata-aided error vector magnitude, we show that the proposed method offers the most accurate estimate of the system performance for both single channel and wavelength division multiplexing QPSK CO-OFDM transmission systems. © 2014 IEEE.
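
To illustrate the comparison the paper draws, here is a hedged sketch of data-aided versus nondata-aided EVM for QPSK, together with the standard EVM-to-BER conversion for QPSK, BER ≈ 0.5·erfc(1/(EVM·√2)); the AWGN setup is an assumption made for the example.

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(3)
constellation = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
tx = rng.choice(constellation, 20000)
rx = tx + (rng.normal(0, 0.2, 20000) + 1j * rng.normal(0, 0.2, 20000))

# Data-aided EVM: reference is the actually transmitted symbol.
evm_da = np.sqrt(np.mean(np.abs(rx - tx) ** 2) / np.mean(np.abs(tx) ** 2))

# Nondata-aided EVM: reference is the nearest constellation point, so
# symbols already in error bias the estimate optimistically.
nearest = constellation[np.argmin(np.abs(rx[:, None] - constellation), axis=1)]
evm_nda = np.sqrt(np.mean(np.abs(rx - nearest) ** 2) / np.mean(np.abs(tx) ** 2))

for name, evm in [("data-aided", evm_da), ("nondata-aided", evm_nda)]:
    ber = 0.5 * erfc(1 / (evm * np.sqrt(2)))
    print(f"{name}: EVM = {evm:.3f}, BER estimate = {ber:.2e}")
```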

Relevance:

30.00%

Publisher:

Abstract:

We demonstrate an accurate BER estimation method for QPSK CO-OFDM transmission based on the probability density function of the received QPSK symbols. Using a 112 Gb/s QPSK CO-OFDM transmission as an example, we show that this method offers the most accurate estimate of the system's performance in comparison with other known approaches.

Relevance:

30.00%

Publisher:

Abstract:

Performing experiments on small-scale quantum computers is certainly a challenging endeavor. Many parameters need to be optimized to achieve high-fidelity operations. This can be done efficiently for operations acting on single qubits, as errors can be fully characterized. For multiqubit operations, though, this is no longer the case, as in the most general case, analyzing the effect of the operation on the system requires a full state tomography for which resources scale exponentially with the system size. Furthermore, in recent experiments, additional electronic levels beyond the two-level system encoding the qubit have been used to enhance the capabilities of quantum-information processors, which additionally increases the number of parameters that need to be controlled. For the optimization of the experimental system for a given task (e.g., a quantum algorithm), one has to find a satisfactory error model and also efficient observables to estimate the parameters of the model. In this manuscript, we demonstrate a method to optimize the encoding procedure for a small quantum error correction code in the presence of unknown but constant phase shifts. The method, which we implement here on a small-scale linear ion-trap quantum computer, is readily applicable to other AMO platforms for quantum-information processing.
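
As a loose toy analogue (not the paper's encoding-optimization protocol), the sketch below estimates an unknown constant phase shift from a cheap observable, by scanning an analysis phase and fitting the resulting fringe rather than performing full state tomography:

```python
# Toy illustration: scan an analysis phase theta, record excitation
# probabilities, and fit P(theta) = (1 + C*cos(theta + phi))/2 for the
# unknown offset phi. All values are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)
true_phi = 0.37                      # unknown systematic phase (radians)
thetas = np.linspace(0, 2 * np.pi, 12, endpoint=False)
shots = 200

# Simulated measurement: binomial sampling of the ideal probability.
p_ideal = 0.5 * (1 + np.cos(thetas + true_phi))
p_meas = rng.binomial(shots, p_ideal) / shots

def model(theta, phi, contrast):
    return 0.5 * (1 + contrast * np.cos(theta + phi))

(phi_hat, contrast_hat), _ = curve_fit(model, thetas, p_meas, p0=[0.0, 1.0])
print(f"estimated phase: {phi_hat:.3f} rad (true: {true_phi})")
```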

Relevance:

30.00%

Publisher:

Abstract:

Reliability has emerged as a critical design constraint, especially in memories. Designers go to great lengths to guarantee fault-free operation of the underlying silicon by adopting redundancy-based techniques, which essentially try to detect and correct every single error. However, such techniques come at the cost of large area, power, and performance overheads, leading many researchers to doubt their efficiency, especially for error-resilient systems where 100% accuracy is not always required. In this paper, we present an alternative method focusing on confining the output error induced by reliability issues. Focusing on memory faults, the proposed method, rather than correcting every single error, exploits the statistical characteristics of the target application and replaces any erroneous data with the best available estimate of that data. To realize the proposed method, a RISC processor is augmented with custom instructions and special-purpose functional units. We apply the method to the enhanced processor by studying the statistical characteristics of the various algorithms involved in a popular multimedia application. Our experimental results show that, in contrast to state-of-the-art fault tolerance approaches, we are able to reduce runtime and area overhead by 71.3% and 83.3%, respectively.
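
A minimal software sketch of the error-confinement idea follows: a word flagged as faulty is replaced with a statistical estimate derived from surrounding data rather than corrected exactly. The image data, fault model, and median-of-neighbors estimator are our illustrative assumptions, not the paper's custom instructions or functional units.

```python
import numpy as np

def confine_error(image, row, col):
    """Replace a pixel flagged as faulty with the median of its neighbors."""
    r0, r1 = max(row - 1, 0), min(row + 2, image.shape[0])
    c0, c1 = max(col - 1, 0), min(col + 2, image.shape[1])
    patch = image[r0:r1, c0:c1].astype(float)
    # Drop the faulty pixel itself from the flattened neighborhood.
    neighbors = np.delete(patch.flatten(), (row - r0) * (c1 - c0) + (col - c0))
    image[row, col] = int(np.median(neighbors))
    return image

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
img[3, 4] = 255                       # simulated stuck-at fault in memory
img = confine_error(img, 3, 4)        # best-available estimate, not exact repair
print(img[3, 4])                      # close to the surrounding values
```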

Relevance:

30.00%

Publisher:

Abstract:

Excess nutrient loads carried by streams and rivers are a great concern for environmental resource managers. In agricultural regions, excess loads are transported downstream to receiving water bodies, potentially causing algal blooms, which can lead to numerous ecological problems. To better understand nutrient load transport, and to develop appropriate water management plans, it is important to have accurate estimates of annual nutrient loads. This study used a Monte Carlo sub-sampling method and error-corrected statistical models to estimate annual nitrate-N loads from two watersheds in central Illinois. The performance of three load estimation methods (the seven-parameter log-linear model, the ratio estimator, and the flow-weighted averaging estimator) applied at one-, two-, four-, six-, and eight-week sampling frequencies was compared. Five error correction techniques (the existing composite method and four new techniques developed in this study) were applied to each combination of sampling frequency and load estimation method. On average, the most accurate error correction technique (proportional rectangular) produced load estimates 15% and 30% more accurate than the most accurate uncorrected load estimation method (the ratio estimator) for the two watersheds. Using error correction methods, it is possible to design more cost-effective monitoring plans by achieving the same load estimation accuracy with fewer observations. Finally, the optimum combinations of monitoring threshold and sampling frequency that minimize the number of samples required to achieve specified levels of accuracy in load estimation were determined. For one- to three-week sampling frequencies, combined threshold/fixed-interval monitoring approaches produced the best outcomes, while fixed-interval-only approaches produced the most accurate results for four- to eight-week sampling frequencies.
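
As an illustration of one of the compared methods, here is a sketch of the ratio estimator (without the Beale bias correction) under fixed-interval sub-sampling; the synthetic flow and concentration series are stand-ins for the watersheds' daily records.

```python
import numpy as np

rng = np.random.default_rng(5)
days = 365
flow = rng.lognormal(mean=2.0, sigma=0.6, size=days)        # daily discharge
conc = 5 + 2 * np.log(flow) + rng.normal(0, 0.5, days)      # nitrate-N conc.
daily_load = flow * conc                                     # true daily loads

def ratio_estimator(sample_idx):
    """Annual load = annual flow * (sampled load / sampled flow)."""
    ratio = daily_load[sample_idx].sum() / flow[sample_idx].sum()
    return flow.sum() * ratio

true_load = daily_load.sum()
for weeks in (1, 2, 4, 8):                                   # sampling interval
    idx = np.arange(0, days, 7 * weeks)
    est = ratio_estimator(idx)
    print(f"every {weeks} wk: error = {100 * (est - true_load) / true_load:+.1f}%")
```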

Relevance:

30.00%

Publisher:

Abstract:

In this paper we show how to accurately perform a quasi-a priori estimation of the truncation error of steady-state solutions computed by a discontinuous Galerkin spectral element method. We estimate the spatial truncation error using the τ-estimation procedure. While most works in the literature rely on fully time-converged solutions on grids with different spacing to perform the estimation, we use non-time-converged solutions on one grid with different polynomial orders. The quasi-a priori approach estimates the error while the residual of the time-iterative method is not yet negligible. Furthermore, the method permits one to decouple the surface and the volume contributions of the truncation error, and provides information about the anisotropy of the solution as well as its rate of convergence in polynomial order. First, we focus on the analysis of one-dimensional scalar conservation laws to examine the accuracy of the estimate. Then, we extend the analysis to two-dimensional problems. We demonstrate that this quasi-a priori approach yields a spectrally accurate estimate of the truncation error.
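
The following toy is a finite-difference analogue of τ-estimation (our simplification; the paper uses a discontinuous Galerkin spectral element method with varying polynomial order): the solution of a low-order discretization is inserted into a higher-order one, and the resulting residual serves as a truncation-error estimate.

```python
import numpy as np

n, h = 65, 1.0 / 64
x = np.linspace(0, 1, n)
f = -(np.pi ** 2) * np.sin(np.pi * x)          # u'' = f, exact u = sin(pi x)

# Low-order solve: standard 2nd-order stencil with u(0) = u(1) = 0.
A2 = (np.diag(-2 * np.ones(n - 2)) + np.diag(np.ones(n - 3), 1)
      + np.diag(np.ones(n - 3), -1)) / h ** 2
u_low = np.zeros(n)
u_low[1:-1] = np.linalg.solve(A2, f[1:-1])

# tau estimate: apply the 4th-order stencil to the low-order solution and
# measure its residual against f (interior points, away from boundaries).
i = np.arange(2, n - 2)
d2_high = (-u_low[i - 2] + 16 * u_low[i - 1] - 30 * u_low[i]
           + 16 * u_low[i + 1] - u_low[i + 2]) / (12 * h ** 2)
tau_est = d2_high - f[i]

# Exact truncation error of the 2nd-order scheme for comparison: (h^2/12) u''''.
tau_exact = (h ** 2 / 12) * (np.pi ** 4) * np.sin(np.pi * x[i])
print(f"max |tau_est| = {np.abs(tau_est).max():.2e}, "
      f"max |tau_exact| = {np.abs(tau_exact).max():.2e}")
```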

Relevance:

30.00%

Publisher:

Abstract:

Pesticide application has been described by many researchers as a very inefficient process. In some cases, there are reports that only 0.02% of the applied product contributes to effective control of the target problem. The main factor influencing pesticide application is the droplet size formed at the spray nozzles. Many parameters affect the dynamics of the droplets, such as wind, temperature, and relative humidity. Small droplets are biologically more active, but they are affected by evaporation and drift; large droplets, on the other hand, do not promote a good distribution of the product on the target. Given the risk of contaminating non-target areas and the high costs involved in applications, knowledge of the droplet size is therefore of fundamental importance in application technology.

When sophisticated droplet-analysis technology is unavailable, it is common to sample droplets with artificial targets such as water-sensitive paper. In field sampling, water-sensitive papers are placed in the trial areas where the product will be applied. When droplets impinge on this paper, its yellow surface is stained dark blue, making the droplets easy to recognize. The droplets collected on these papers span a range of sizes, and determining the droplet size distribution yields the mass distribution of the material and hence the efficiency of the application. The stains produced by the droplets show a spread factor proportional to their respective initial sizes. One methodology for analyzing the droplets is to count and measure them under a microscope, using a Porton N-G12 graticule, whose class intervals are equally spaced in a geometric progression of √2, coupled to the lens. The droplet size parameters most frequently used are the Volume Median Diameter (VMD) and the Number Median Diameter (NMD). The VMD divides a representative droplet sample into two parts of equal volume, such that one part contains droplets smaller than the VMD and the other contains droplets larger than it. The NMD is obtained by the same process, dividing the sample into two equal parts with respect to the number of droplets. The ratio between VMD and NMD allows the uniformity of the droplets to be evaluated. The cumulative probabilities of droplet volume and size are then plotted on log-scale paper (cumulative probability versus the median diameter of each size class), and the NMD is read off the x-axis at the point corresponding to 50% on the y-axis. This whole process is very slow and subject to operator error. To reduce the difficulty involved in measuring droplets, a numerical model was therefore developed, implemented in a simple and accessible computational language, which yields approximate VMD and NMD values with good precision. The inputs to this model are the frequencies of the droplet sizes collected on the water-sensitive paper, observed through the Porton N-G12 graticule fitted to the microscope. From these data, the cumulative distributions of droplet volume and size are evaluated. The graphs obtained by plotting these distributions allow the VMD and NMD to be obtained by linear interpolation, since the curves are approximately linear in the middle of the distributions. These values are essential for evaluating the uniformity of the droplets and for estimating the volume deposited on the sampled paper from the droplet density (droplets/cm²).

This methodology for estimating droplet volume was developed by Project 11.0.94.224 of CNPMA/EMBRAPA. Observed data from aerial herbicide spraying samples, collected by the Project in the county of Pelotas/RS, were used for comparison: the model reproduced, with great precision, the VMD and NMD values obtained by the manual graphical method for each sampled collector, allowing the quantity of deposited product, and consequently the quantity lost to drift, to be estimated. The plots of VMD and NMD variability showed that the number of droplets reaching the collectors had a small dispersion, while the deposited volume varied over a wide interval, probably because of the strong action of air turbulence on the droplet distribution, emphasizing the need for a deeper study to verify this influence on drift.
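
A sketch of the numerical model described above, under our illustrative assumptions: droplet counts per Porton-graticule size class (class limits in a geometric progression of √2) are turned into cumulative number and volume distributions, and the 50% points are found by linear interpolation.

```python
import numpy as np

# Class limits (um) in sqrt(2) progression, and droplet counts per class.
limits = 20 * np.sqrt(2) ** np.arange(9)            # 20, 28.3, 40, ... um
counts = np.array([5, 18, 42, 77, 96, 70, 31, 9])   # hypothetical sample

mids = 0.5 * (limits[:-1] + limits[1:])              # class midpoints
volumes = counts * mids ** 3                         # volume ~ d^3 per class

cum_num = np.cumsum(counts) / counts.sum()           # cumulative number fraction
cum_vol = np.cumsum(volumes) / volumes.sum()         # cumulative volume fraction

# Linear interpolation of the 50% crossing (the curves are near-linear there).
nmd = np.interp(0.5, cum_num, limits[1:])
vmd = np.interp(0.5, cum_vol, limits[1:])
print(f"NMD = {nmd:.1f} um, VMD = {vmd:.1f} um, VMD/NMD = {vmd / nmd:.2f}")
```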