814 results for Error correction methods


Relevance:

100.00%

Publisher:

Abstract:

Integration of biometrics is considered an attractive solution to the issues associated with password-based human authentication, as well as to the secure storage and release of cryptographic keys, one of the critical issues in modern cryptography. However, the widespread adoption of bio-cryptographic solutions is somewhat restricted by the fuzziness associated with biometric measurements. Error control mechanisms must therefore be adopted to ensure that the fuzziness of biometric inputs is sufficiently countered. In this paper, we outline the existing error correction techniques used in bio-cryptography and explain how they are deployed in different types of solutions. Finally, we elaborate on the important factors to consider when choosing an appropriate error correction mechanism for a particular biometric-based solution.
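
A common way to counter this fuzziness is the fuzzy commitment construction, in which a random key is bound to the biometric template through an error correcting code. The sketch below illustrates the idea with a simple repetition code; the code choice, bit lengths, and function names are illustrative assumptions, not details taken from the paper.

```python
import hashlib
import secrets

# Illustrative parameters (assumptions, not from the paper).
KEY_BITS = 16          # length of the bound cryptographic key
REPEAT = 5             # repetition rate: corrects up to 2 flipped bits per block
N = KEY_BITS * REPEAT  # required length of the biometric template, in bits

def encode(key_bits):
    """Repetition-encode: repeat each key bit REPEAT times."""
    return [b for b in key_bits for _ in range(REPEAT)]

def decode(code_bits):
    """Majority-vote decode each block of REPEAT bits."""
    return [int(sum(code_bits[i:i + REPEAT]) > REPEAT // 2)
            for i in range(0, len(code_bits), REPEAT)]

def xor(a, b):
    return [u ^ v for u, v in zip(a, b)]

def commit(template):
    """Enrollment: bind a fresh random key to a biometric template."""
    key = [secrets.randbelow(2) for _ in range(KEY_BITS)]
    helper = xor(encode(key), template)           # public helper data
    tag = hashlib.sha256(bytes(key)).hexdigest()  # commitment to the key
    return helper, tag

def release(helper, tag, template_noisy):
    """Release: recover the key from a noisy re-measurement, if close enough."""
    key = decode(xor(helper, template_noisy))
    return key if hashlib.sha256(bytes(key)).hexdigest() == tag else None
```

Only `helper` and `tag` are stored; as long as the re-measured template differs from the enrolled one in at most two bits per five-bit block, `release` reproduces the key exactly, which is precisely the role the error correcting code plays.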

Relevance:

100.00%

Publisher:

Abstract:

Excess nutrient loads carried by streams and rivers are a great concern for environmental resource managers. In agricultural regions, excess loads are transported downstream to receiving water bodies, potentially causing algal blooms, which can lead to numerous ecological problems. To better understand nutrient load transport, and to develop appropriate water management plans, it is important to have accurate estimates of annual nutrient loads. This study used a Monte Carlo sub-sampling method and error-corrected statistical models to estimate annual nitrate-N loads from two watersheds in central Illinois. The performance of three load estimation methods (the seven-parameter log-linear model, the ratio estimator, and the flow-weighted averaging estimator) applied at one-, two-, four-, six-, and eight-week sampling frequencies was compared. Five error correction techniques (the existing composite method and four new techniques developed in this study) were applied to each combination of sampling frequency and load estimation method. On average, the most accurate error correction technique (proportional rectangular) produced load estimates 15% and 30% more accurate than those of the most accurate uncorrected load estimation method (the ratio estimator) for the two watersheds. Using error correction methods, it is possible to design more cost-effective monitoring plans by achieving the same load estimation accuracy with fewer observations. Finally, the optimum combinations of monitoring threshold and sampling frequency that minimize the number of samples required to achieve specified levels of accuracy in load estimation were determined. For one- to three-week sampling frequencies, combined threshold/fixed-interval monitoring approaches produced the best outcomes, while fixed-interval-only approaches produced the most accurate results for four- to eight-week sampling frequencies.
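
For readers unfamiliar with the estimators compared above, the following is a minimal sketch of the flow-weighted ratio estimator, assuming daily discharge is available year-round while concentration is measured only on sampled days; the variable names and the simplified (bias-uncorrected) form are assumptions, not the study's exact implementation.

```python
import numpy as np

def ratio_estimator_annual_load(flow_all, flow_sampled, conc_sampled):
    """Flow-weighted ratio estimator of annual load.

    flow_all     : daily discharge for every day of the year
    flow_sampled : discharge on the sampled days (a subset of flow_all)
    conc_sampled : concentration measured on those sampled days

    The daily load on a sampled day is flow * concentration; the estimator
    scales the mean sampled load by the ratio of annual-mean to sampled-mean
    flow, partly correcting for a flow-biased sampling schedule.
    """
    load_sampled = flow_sampled * conc_sampled
    ratio = load_sampled.mean() / flow_sampled.mean()
    return ratio * flow_all.mean() * len(flow_all)
```

In practice, Beale's bias correction factor is often applied on top of this basic form.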

Relevance:

100.00%

Publisher:

Abstract:

There is a strong relation between sparse signal recovery and error control coding. It is known that burst errors are block sparse in nature, so we attempt to solve the burst error correction problem using block sparse signal recovery methods. We construct partial Fourier based encoding and decoding matrices using results on difference sets. These constructions offer guaranteed and efficient error correction when used in conjunction with reconstruction algorithms that exploit block sparsity.
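
The difference-set-based matrix constructions of the paper are not reproduced here, but the following sketch shows the general recipe: a partial Fourier sensing matrix combined with a recovery algorithm that exploits block sparsity (Block Orthogonal Matching Pursuit in this illustration). All sizes and the burst location are assumptions made for the demo.

```python
import numpy as np

def block_omp(A, y, block_size, max_blocks):
    """Block OMP: recover a block-sparse x from y = A x."""
    m, n = A.shape
    blocks = [np.arange(b, b + block_size) for b in range(0, n, block_size)]
    support, r = [], y.copy()
    for _ in range(max_blocks):
        # pick the block whose columns correlate most strongly with the residual
        scores = [np.linalg.norm(A[:, blk].conj().T @ r) for blk in blocks]
        support.append(int(np.argmax(scores)))
        cols = np.concatenate([blocks[b] for b in support])
        x_s, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
        r = y - A[:, cols] @ x_s
    x = np.zeros(n, dtype=complex)
    x[cols] = x_s
    return x

rng = np.random.default_rng(0)
n, m, d = 64, 32, 4
F = np.fft.fft(np.eye(n)) / np.sqrt(n)
A = F[rng.choice(n, size=m, replace=False), :]   # partial Fourier matrix
e = np.zeros(n, dtype=complex)
e[8:12] = rng.standard_normal(d)                 # one burst of length d
e_hat = block_omp(A, A @ e, d, 1)                # recover the burst (noiseless demo)
```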

Relevance:

100.00%

Publisher:

Abstract:

This study addresses the problem of obtaining reliable velocities and displacements from accelerograms, a concern which often arises in earthquake engineering. A closed-form acceleration expression with random parameters is developed to test any strong-motion accelerogram processing method. Integration of this analytical time history yields the exact velocities, displacements and Fourier spectra. Noise and truncation can also be added. A two-step testing procedure is proposed and the original Volume II routine is used as an illustration. The main sources of error are identified and discussed. Although these errors may be reduced, it is impossible to extract the true time histories from an analog or digital accelerogram because of the uncertain noise level and missing data. Based on these uncertainties, a probabilistic approach is proposed as a new accelerogram processing method. A most probable record is presented as well as a reliability interval which reflects the level of error-uncertainty introduced by the recording and digitization process. The data is processed in the frequency domain, under assumptions governing either the initial value or the temporal mean of the time histories. This new processing approach is tested on synthetic records. It induces little error and the digitization noise is adequately bounded. Filtering is intended to be kept to a minimum and two optimal error-reduction methods are proposed. The "noise filters" reduce the noise level at each harmonic of the spectrum as a function of the signal-to-noise ratio. However, the correction at low frequencies is not sufficient to significantly reduce the drifts in the integrated time histories. The "spectral substitution method" uses optimization techniques to fit spectral models of near-field, far-field or structural motions to the amplitude spectrum of the measured data. The extremes of the spectrum of the recorded data where noise and error prevail are then partly altered, but not removed, and statistical criteria provide the choice of the appropriate cutoff frequencies. This correction method has been applied to existing strong-motion far-field, near-field and structural data with promising results. Since this correction method maintains the whole frequency range of the record, it should prove to be very useful in studying the long-period dynamics of local geology and structures.
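
The probabilistic machinery of the thesis is not reproduced here, but the "noise filter" idea (attenuating each harmonic as a function of its signal-to-noise ratio, then integrating in the frequency domain) can be sketched as follows. The Wiener-style gain and the assumption that a noise power estimate is available, for example from a pre-event segment of the record, are illustrative choices.

```python
import numpy as np

def denoise_and_integrate(acc, dt, noise_psd):
    """Per-harmonic noise attenuation followed by frequency-domain integration.

    acc       : digitized acceleration record
    dt        : sampling interval in seconds
    noise_psd : estimated noise power at each rfft harmonic,
                length len(acc)//2 + 1, assumed strictly positive
    """
    A = np.fft.rfft(acc)
    f = np.fft.rfftfreq(len(acc), dt)
    signal_psd = np.maximum(np.abs(A) ** 2 - noise_psd, 0.0)
    A *= signal_psd / (signal_psd + noise_psd)  # Wiener-style gain in [0, 1]
    A[0] = 0.0                # remove the mean before integrating
    w = 2j * np.pi * f
    w[0] = 1.0                # avoid division by zero at DC (A[0] is zero)
    vel = np.fft.irfft(A / w, n=len(acc))        # one integration: divide by iw
    disp = np.fft.irfft(A / w ** 2, n=len(acc))  # two integrations
    return vel, disp
```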

Relevance:

100.00%

Publisher:

Abstract:

Nonlinear adjustment toward long-run price equilibrium relationships in the sugar-ethanol-oil nexus in Brazil is examined. We develop generalized bivariate error correction models that allow for cointegration between sugar, ethanol, and oil prices, where the dynamic adjustments are potentially nonlinear functions of the disequilibrium errors. A range of models is estimated using Bayesian Markov chain Monte Carlo algorithms and compared using Bayesian model selection methods. The results suggest that the long-run driver of Brazilian sugar prices is the oil price, that there are nonlinearities in the adjustment of sugar and ethanol prices to the oil price, but that adjustment between ethanol and sugar prices is linear.
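
As a reference for the model class described above, a generic bivariate error correction equation for the sugar price s_t driven by the oil price o_t can be written as below; the adjustment function g and the lag structure actually used in the paper may differ.

```latex
\Delta s_t = \alpha\, g(z_{t-1})
           + \sum_{i=1}^{p} \gamma_i\, \Delta s_{t-i}
           + \sum_{i=1}^{p} \delta_i\, \Delta o_{t-i}
           + \varepsilon_t,
\qquad
z_{t-1} = s_{t-1} - \beta_0 - \beta_1\, o_{t-1}.
```

Linear adjustment corresponds to g(z) = z; nonlinear adjustment replaces g with, for example, a cubic or threshold function of the disequilibrium error z_{t-1}.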

Relevance:

100.00%

Publisher:

Abstract:

Heart rate variability (HRV) analysis uses time series of the intervals between successive heartbeats to assess autonomic regulation of the cardiovascular system. These series are obtained from analysis of the electrocardiogram (ECG) signal, which can be affected by different types of artifacts, leading to incorrect interpretations of the HRV signals. The classic approach to dealing with these artifacts is to apply correction methods, some of them based on interpolation, substitution, or statistical techniques. However, few studies have assessed the accuracy and performance of these correction methods on real HRV signals. This study aims to determine the performance of several linear and nonlinear correction methods on HRV signals with induced artifacts by quantifying their linear and nonlinear HRV parameters. ECG signals from rats, measured by telemetry, were used to generate real heart rate variability signals free of errors. Missing points (beats) were then simulated in these series in different quantities to emulate a real experimental situation as accurately as possible. To compare recovery efficiency, deletion (DEL), linear interpolation (LI), cubic spline interpolation (CI), moving average window (MAW), and nonlinear predictive interpolation (NPI) were applied as correction methods to the series with induced artifacts. The accuracy of each correction method was assessed by measuring the mean of the series (AVNN), the standard deviation (SDNN), the root mean square of successive differences between heartbeats (RMSSD), Lomb's periodogram (LSP), detrended fluctuation analysis (DFA), multiscale entropy (MSE), and symbolic dynamics (SD) on each HRV signal with and without artifacts. The results show that at low levels of missing points the performance of all correction techniques is very similar, with close values for each HRV parameter. However, at higher levels of loss, only the NPI method yields HRV parameters with low error values and few significant differences relative to the values calculated for the same signals without missing points.
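
The nonlinear predictive interpolation method evaluated in the study is not reproduced here, but the simpler corrections it is compared against can be sketched directly. The function below fills flagged beats by linear or cubic spline interpolation over the beat times; the interface and variable names are illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def correct_missing_beats(t, rr, missing_idx, method="linear"):
    """Fill missing RR intervals by interpolation over beat times.

    t           : occurrence time of each beat, in seconds
    rr          : RR interval series, in seconds
    missing_idx : indices of the beats flagged (or simulated) as missing
    """
    keep = np.setdiff1d(np.arange(len(rr)), missing_idx)
    if method == "linear":
        return np.interp(t, t[keep], rr[keep])
    if method == "cubic":
        return CubicSpline(t[keep], rr[keep])(t)
    raise ValueError(f"unknown method: {method}")
```

Deletion, by contrast, simply drops the flagged beats (`rr[keep]`), shortening the series instead of reconstructing it.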

Relevance:

100.00%

Publisher:

Abstract:

As order dependencies between process tasks can become complex, it is easy to make mistakes in process model design, especially behavioral ones such as deadlocks. Notions such as soundness formalize behavioral errors, and tools exist that can identify such errors. However, these tools do not provide assistance with the correction of the process models. Error correction can be very challenging, as the intentions of the process modeler are not known and there may be many ways in which an error can be corrected. We present a novel technique for automatic error correction in process models based on simulated annealing. With this technique, a number of alternative process models are identified that resolve one or more errors in the original model. The technique is implemented and validated on a sample of industrial process models. The tests show that at least one sound solution can be found for each input model and that response times are short.
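
The model-repair moves themselves are specific to process models, but the simulated annealing loop that drives the search can be sketched generically. Here `perturb` and `count_errors` are hypothetical stand-ins for the paper's edit operations and soundness checker.

```python
import math
import random

def simulated_annealing(model, perturb, count_errors,
                        t0=1.0, cooling=0.95, steps=1000):
    """Search for a low-error model variant.

    `perturb` proposes a modified process model; `count_errors` scores it
    (e.g. the number of soundness violations). Worse candidates are accepted
    with probability exp(-delta / temperature), which shrinks as the search
    cools, letting the search escape local minima early on.
    """
    current, cost = model, count_errors(model)
    best, best_cost = current, cost
    temperature = t0
    for _ in range(steps):
        candidate = perturb(current)
        delta = count_errors(candidate) - cost
        if delta <= 0 or random.random() < math.exp(-delta / temperature):
            current, cost = candidate, cost + delta
            if cost < best_cost:
                best, best_cost = current, cost
        temperature *= cooling
    return best, best_cost
```

Keeping several distinct low-cost candidates instead of a single `best` would yield the set of alternative corrected models the abstract describes.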

Relevance:

100.00%

Publisher:

Abstract:

Error correction is perhaps the most widely used method for responding to student writing. While various studies have investigated the effectiveness of providing error correction, there has been relatively little research incorporating teachers' beliefs and practices and students' preferences in written error correction. The current study adopted features of an ethnographic research design in order to explore the beliefs and practices of ESL teachers, and to investigate the preferences of L2 students regarding written error correction, in the context of a language institute situated in the Brisbane metropolitan district. In this study, two ESL teachers and two groups of adult intermediate L2 students were interviewed and observed. The beliefs and practices of the teachers were elicited through interviews and classroom observations. The preferences of L2 students were elicited through focus group interviews. Responses of the participants were coded and analysed. Results of the teacher interviews showed that teachers believe that providing written error correction has both advantages and disadvantages. Teachers believe that written error correction helps students improve their proof-reading skills so that they can revise their writing more efficiently. However, results also indicate that providing written error correction is very time-consuming. Furthermore, teachers prefer to provide explicit written feedback strategies during the early stages of the language course and to move to a more implicit strategy of written error correction later in order to facilitate language learning. On the other hand, results of the focus group interviews suggest that students regard their teachers' practice of written error correction as important in helping them locate their errors and revise their writing. However, students also feel that the process of providing written error correction is time-consuming. Nevertheless, students want and expect their teachers to provide written feedback because they believe that the benefits they gain from receiving feedback on their writing outweigh the apparent disadvantages of their teachers' written error correction strategies.

Relevance:

100.00%

Publisher:

Abstract:

A simple error detecting and correcting procedure is described for nonbinary symbol words; here, the error position is located using the Hamming method and the correct symbol is substituted using a modulo-check procedure.
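
The exact Hamming-plus-modulo-check procedure of the paper is not reproduced here, but the two-step idea (locate the error position, then substitute the correct symbol via modular arithmetic) can be illustrated with a closely related single-error-correcting scheme over Z_p, in which two modulo-p check equations yield the position and the magnitude of the error. The construction and parameters below are illustrative.

```python
P = 11  # prime symbol alphabet Z_11; codeword length must stay below P

def encode(message):
    """Append check symbols c1, c2 so that both syndromes vanish:
    S0 = sum(a_i) = 0 (mod P) and S1 = sum(i * a_i) = 0 (mod P)."""
    k = len(message)
    s0 = sum(message) % P
    s1 = sum(i * a for i, a in enumerate(message, start=1)) % P
    # checks sit at positions k+1 and k+2; solving
    #   c1 + c2 = -s0 and (k+1)*c1 + (k+2)*c2 = -s1  (mod P)
    c2 = (-s1 + (k + 1) * s0) % P
    c1 = (-s0 - c2) % P
    return message + [c1, c2]

def correct(word):
    """Locate a single symbol error and substitute the correct symbol in place."""
    s0 = sum(word) % P
    s1 = sum(i * a for i, a in enumerate(word, start=1)) % P
    if s0 == 0 and s1 == 0:
        return word                           # no detectable error
    pos = (s1 * pow(s0, -1, P)) % P           # error position (1-indexed)
    word[pos - 1] = (word[pos - 1] - s0) % P  # substitute the correct symbol
    return word
```

For a single error of magnitude e at position j, the syndromes are S0 = e and S1 = j*e, so S1/S0 recovers the position j and subtracting S0 restores the symbol.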

Relevance:

100.00%

Publisher:

Abstract:

This paper is concerned with using the bootstrap to obtain improved critical values for the error correction model (ECM) cointegration test in dynamic models. We investigate the effects of dynamic specification on the size and power of the ECM cointegration test with bootstrap critical values. The results from a Monte Carlo study show that the size of the bootstrap ECM cointegration test is close to the nominal significance level. We find that overspecification of the lag length results in a loss of power, while underspecification results in size distortion; the performance of the bootstrap ECM cointegration test thus deteriorates if the correct lag length is not used in the ECM. The bootstrap ECM cointegration test is therefore not robust to model misspecification.
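
A compressed sketch of the procedure being studied: fit the ECM, then resample its residuals to generate pseudo-data under the null of no cointegration, and take a quantile of the recomputed test statistics as the bootstrap critical value. The regression layout and the shortcut of resampling the unrestricted residuals are assumptions made for illustration, not the paper's exact design.

```python
import numpy as np

def ecm_tstat(y, x, lags=1):
    """t-statistic on the error correction (lagged level) term in
    dy[t] = a + g*y[t-1] + b*x[t-1] + lagged differences + c*dx[t] + e[t]."""
    dy, dx = np.diff(y), np.diff(x)
    rows = []
    for t in range(lags, len(dy)):
        row = [1.0, y[t], x[t]]          # levels one period before dy[t]
        for j in range(1, lags + 1):
            row += [dy[t - j], dx[t - j]]
        row.append(dx[t])
        rows.append(row)
    X, z = np.array(rows), dy[lags:]
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    s2 = resid @ resid / (len(z) - X.shape[1])
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1]), resid

def bootstrap_critical_value(y, x, lags=1, B=999, alpha=0.05, seed=0):
    """Quantile of the bootstrap null distribution of the ECM t-statistic;
    under the null, y is generated as a pure random walk."""
    _, resid = ecm_tstat(y, x, lags)
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(B):
        e = rng.choice(resid, size=len(y) - 1, replace=True)
        y_star = np.concatenate(([y[0]], y[0] + np.cumsum(e)))
        stats.append(ecm_tstat(y_star, x, lags)[0])
    return np.quantile(stats, alpha)  # reject the null if the observed t is below this
```

The paper's finding can be probed directly with such a harness by deliberately passing a `lags` value above or below the one used to generate the data.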

Relevance:

100.00%

Publisher:

Abstract:

It is well known that n-length stabilizer quantum error correcting codes (QECCs) can be obtained via n-length classical error correction codes (CECCs) over GF(4) that are additive and self-orthogonal with respect to the trace Hermitian inner product. However, most CECCs have been studied with respect to the Euclidean inner product. In this paper, it is shown that n-length stabilizer QECCs can be constructed via n-length linear CECCs over GF(2) that are self-orthogonal with respect to the Euclidean inner product. This facilitates usage of the widely studied self-orthogonal CECCs to construct stabilizer QECCs. Moreover, classical binary self-orthogonal cyclic codes have been used to obtain stabilizer QECCs with guaranteed quantum error correcting capability. This is facilitated by the facts that (i) self-orthogonal binary cyclic codes are easily identified using the transform approach and (ii) lower bounds on the minimum Hamming distance of such codes are known. Several explicit codes are constructed, including two pure MDS QECCs.
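
The precondition the paper exploits, self-orthogonality with respect to the Euclidean inner product, is easy to verify computationally: a binary code C with generator matrix G satisfies C ⊆ C⊥ exactly when G·Gᵀ = 0 over GF(2). A minimal check, using the self-dual [8,4] extended Hamming code as the example:

```python
import numpy as np

def is_self_orthogonal(G):
    """True when every pair of rows of G (each row with itself included)
    has even overlap, i.e. G @ G.T vanishes over GF(2)."""
    return not np.any((G @ G.T) % 2)

# The [8,4] extended Hamming code is self-dual, hence self-orthogonal.
G = np.array([[1, 0, 0, 0, 0, 1, 1, 1],
              [0, 1, 0, 0, 1, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 0, 1],
              [0, 0, 0, 1, 1, 1, 1, 0]])
assert is_self_orthogonal(G)
```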

Relevance:

100.00%

Publisher:

Abstract:

In this work, we introduce convolutional codes for network-error correction in the context of coherent network coding. We give a construction of convolutional codes that correct a given set of error patterns, as long as consecutive errors are separated by a certain interval. We also give some bounds on the field size and the number of errors that can be corrected in a certain interval. Compared to previous network error correction schemes, using convolutional codes is seen to have advantages in field size and decoding technique. Some examples are discussed which illustrate the several possible situations that arise in this context.
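
The network-coding constructions themselves are not reproduced here, but as a reminder of the underlying machinery, a minimal rate-1/2 binary convolutional encoder (the classic (7,5) generator pair, chosen purely for illustration) looks like this:

```python
def conv_encode(bits, gens=(0b111, 0b101)):
    """Rate-1/2 binary convolutional encoder with constraint length 3.

    Each input bit is shifted into a 3-bit register; for every generator
    polynomial the encoder emits the parity of the tapped register bits,
    so each input bit produces len(gens) output bits.
    """
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & 0b111
        for g in gens:
            out.append(bin(state & g).count("1") % 2)
    return out

assert conv_encode([1, 0, 1, 1]) == [1, 1, 1, 0, 0, 0, 0, 1]
```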