855 results for SYSTEMATIC-ERROR CORRECTION


Relevance: 100.00%

Abstract:

This study investigates the impact of a combined treatment of Systematic Error Correction and Repeated Reading on reading rate and errors for 18-year-olds with undiagnosed reading difficulties on a Caribbean island. In addition to direct daily measures of reading accuracy, the Reading Self-Perception Scale was administered to determine whether the intervention was associated with changes in the way students perceive themselves as readers.

Relevance: 100.00%

Abstract:

Infrared Earth sensors are used in spacecraft for attitude sensing. Their accuracy is limited by systematic and random errors. Dominant sources of systematic error are analyzed for a typical scanning infrared Earth sensor used in a remote-sensing satellite in a 900-km sun-synchronous orbit. The errors considered arise from 1) seasonal variation of infrared radiation, 2) the oblate shape of the Earth, 3) the ambient temperature of the sensors, 4) changes in the spin/scan period, and 5) misalignment of the sensor axis. Simple relations are derived using least-squares curve fitting for onboard correction of these errors. With these corrections, it is possible to improve the accuracy of attitude determination eightfold and achieve performance comparable to ground-based post-facto attitude computation.
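
A minimal sketch of the kind of least-squares correction the abstract describes, assuming (purely for illustration) that one systematic component, the roll-angle error, varies smoothly with the sensor's ambient temperature. The calibration data, the quadratic form of the error model, and the function names are hypothetical; the paper's actual fitted relations cover all five error sources listed above.

```python
import numpy as np

# Hypothetical calibration data: sensor ambient temperature (deg C) and the
# observed roll-angle error (deg) relative to a ground-based reference.
temperature = np.array([-10.0, 0.0, 10.0, 20.0, 30.0, 40.0])
roll_error = np.array([0.121, 0.098, 0.081, 0.070, 0.064, 0.063])

# Least-squares fit of a quadratic error model: error ~ c2*T**2 + c1*T + c0.
coeffs = np.polyfit(temperature, roll_error, deg=2)

def corrected_roll(measured_roll_deg, temp_deg_c):
    """Subtract the modelled systematic error from a raw roll measurement."""
    return measured_roll_deg - np.polyval(coeffs, temp_deg_c)

print(corrected_roll(1.500, 25.0))
```

Once the coefficients have been fitted on the ground, only the polynomial evaluation needs to run onboard, which is what makes this style of correction attractive for flight software.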

Relevance: 100.00%

Abstract:

An alternate combinational approach of genetic algorithm and neural network (AGANN) is presented to correct the systematic error of density functional theory (DFT) calculations. It treats the DFT as a black box and models the error from external statistical information. As a demonstration, the AGANN method is applied to correct the lattice energies from DFT calculations for 72 metal halides and hydrides. With the AGANN correction, the mean absolute relative error of the calculated lattice energies with respect to the experimental values decreases from 4.93% to 1.20% in the test set. For comparison, a neural network alone reduces the mean value to 2.56%, and the common combinational approach of genetic algorithm and neural network reduces it to 2.15%. Multiple linear regression has almost no corrective effect here.
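
The abstract does not give the details of the AGANN algorithm, so the sketch below only illustrates the plain neural-network baseline it is compared against: a small regressor (scikit-learn's MLPRegressor) is trained to predict the signed DFT lattice-energy error from compound descriptors, and the prediction is subtracted from the raw DFT value. All descriptors, data, and names here are hypothetical; the genetic-algorithm stage of AGANN is not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical training set: descriptor vectors (e.g. ionic radii,
# electronegativity difference) and the signed error of the DFT lattice
# energy versus experiment (kJ/mol).
X_train = rng.normal(size=(72, 4))
dft_error = 10.0 * X_train[:, 0] - 5.0 * X_train[:, 2] + rng.normal(scale=1.0, size=72)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(X_train, dft_error)

def correct_lattice_energy(e_dft, descriptors):
    """Subtract the predicted systematic error from a raw DFT lattice energy."""
    return e_dft - model.predict(np.asarray(descriptors).reshape(1, -1))[0]

print(correct_lattice_energy(-780.0, rng.normal(size=4)))
```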

Relevance: 100.00%

Abstract:

Assimilation of temperature observations into an ocean model near the equator often results in a dynamically unbalanced state with unrealistic overturning circulations. The way in which these circulations arise from systematic errors in the model or its forcing is discussed. A scheme is proposed, based on the theory of state augmentation, which uses the departures of the model state from the observations to update slowly evolving bias fields. Results are summarized from an experiment applying this bias correction scheme to an ocean general circulation model. They show that the method produces more balanced analyses and a better fit to the temperature observations.
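
As a rough illustration of the state-augmentation idea, the sketch below carries a slowly evolving bias field alongside the model state: each cycle the bias is nudged by the observation-minus-model departures and then removed from the forecast before a simple analysis step. The gains, the scalar analysis, and the toy "persistent cold error" are assumptions made for illustration; the actual scheme operates on full ocean general circulation model fields.

```python
import numpy as np

n = 100                      # grid points along a model section (illustrative)
bias = np.zeros(n)           # augmented, slowly evolving bias estimate
gamma = 0.05                 # small gain: the bias field evolves slowly
k_gain = 0.5                 # analysis gain for the state update

def assimilation_cycle(forecast, obs, bias):
    """One cycle: update the bias from the departures w.r.t. the debiased
    forecast, then perform a simple scalar analysis on the debiased state."""
    departure = obs - (forecast - bias)        # observation-minus-model departure
    bias = bias - gamma * departure            # bias absorbs persistent departures
    debiased = forecast - bias
    analysis = debiased + k_gain * (obs - debiased)
    return analysis, bias

true_temp = 20.0 + 0.5 * np.sin(np.linspace(0.0, 2.0 * np.pi, n))
for _ in range(100):
    forecast = true_temp - 0.3                 # forecast with a persistent cold error
    obs = true_temp                            # observations, taken as unbiased here
    analysis, bias = assimilation_cycle(forecast, obs, bias)

print(round(float(bias.mean()), 2))            # approaches -0.3, the systematic error
```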

Relevance: 100.00%

Abstract:

As order dependencies between process tasks can get complex, it is easy to make mistakes in process model design, especially behavioral errors such as deadlocks. Notions such as soundness formalize behavioral errors, and tools exist that can identify such errors. However, these tools do not provide assistance with correcting the process models. Error correction can be very challenging, as the intentions of the process modeler are not known and there may be many ways in which an error can be corrected. We present a novel technique for automatic error correction in process models based on simulated annealing. Using this technique, a number of process model alternatives are identified that resolve one or more errors in the original model. The technique is implemented and validated on a sample of industrial process models. The tests show that at least one sound solution can be found for each input model and that the response times are short.
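
The sketch below illustrates the simulated-annealing search on a deliberately tiny stand-in problem: the "process model" is just a directed task graph, the cost counts tasks that are unreachable from the start or cannot reach the end, and an edit toggles a single edge. The real technique works on full process models with formal soundness checking; everything concrete here (tasks, cost function, parameters) is hypothetical.

```python
import math, random

TASKS = ["start", "a", "b", "c", "end"]

def reachable(edges, source, reverse=False):
    """Set of tasks reachable from `source`, optionally following reversed edges."""
    adj = {}
    for u, v in edges:
        if reverse:
            u, v = v, u
        adj.setdefault(u, []).append(v)
    seen, stack = {source}, [source]
    while stack:
        for w in adj.get(stack.pop(), []):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def cost(edges):
    """Count behavioural defects: unreachable tasks and tasks that cannot reach the end."""
    fwd = reachable(edges, "start")
    bwd = reachable(edges, "end", reverse=True)
    return sum(t not in fwd for t in TASKS) + sum(t not in bwd for t in TASKS)

def random_edit(edges):
    """Toggle one edge between two distinct tasks."""
    edges = set(edges)
    u, v = random.sample(TASKS, 2)
    edges.symmetric_difference_update({(u, v)})
    return frozenset(edges)

def anneal(edges, steps=5000, temp=1.0, cooling=0.995):
    current, best = edges, edges
    for _ in range(steps):
        candidate = random_edit(current)
        delta = cost(candidate) - cost(current)
        # Accept improvements always; accept worse models with a probability
        # that shrinks as the temperature decreases.
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            current = candidate
        if cost(current) < cost(best):
            best = current
        if cost(best) == 0:          # all defects resolved
            break
        temp *= cooling
    return best

# A broken model: task "c" is disconnected, so it is unreachable and cannot reach "end".
broken = frozenset({("start", "a"), ("a", "b"), ("b", "end")})
repaired = anneal(broken)
print(cost(repaired), sorted(repaired))
```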

Relevance: 100.00%

Abstract:

Error correction is perhaps the most widely used method for responding to student writing. While various studies have investigated the effectiveness of providing error correction, there has been relatively little research incorporating teachers' beliefs and practices and students' preferences in written error correction. The current study adopted features of an ethnographic research design to explore the beliefs and practices of ESL teachers and to investigate the preferences of L2 students regarding written error correction in the context of a language institute situated in the Brisbane metropolitan district. Two ESL teachers and two groups of adult intermediate L2 students were interviewed and observed. The beliefs and practices of the teachers were elicited through interviews and classroom observations; the preferences of the L2 students were elicited through focus group interviews. Participants' responses were coded and analysed. Results of the teacher interviews showed that teachers believe providing written error correction has both advantages and disadvantages. Teachers believe that written error correction helps students improve their proofreading skills so that they can revise their writing more efficiently, but they also indicate that providing it is very time-consuming. Furthermore, teachers prefer to provide explicit written feedback during the early stages of the language course and to move to a more implicit written error correction strategy later in order to facilitate language learning. Results of the focus group interviews suggest that students regard their teachers' written error correction as important in helping them locate their errors and revise their writing, although they also feel that the process is time-consuming. Nevertheless, students want and expect their teachers to provide written feedback because they believe the benefits of receiving feedback on their writing outweigh the apparent disadvantages of their teachers' written error correction strategies.

Relevance: 100.00%

Abstract:

Black et al. (2004) identified a systematic difference between LA–ICP–MS and TIMS measurements of 206Pb/238U in zircons, which they correlated with the incompatible trace element content of the zircon. We show that the offset between the LA–ICP–MS and TIMS measured 206Pb/238U correlates more strongly with the total radiogenic Pb than with any incompatible trace element. This suggests that the cause of the 206Pb/238U offset is related to differences in radiation damage (alpha dose) between the reference and the unknowns. We test this hypothesis in two ways. First, we show that there is a strong correlation between the difference in the LA–ICP–MS and TIMS measured 206Pb/238U and the difference in the alpha dose received by the unknown and reference zircons. The LA–ICP–MS ages for the zircons we have dated range from as much as 5.1% younger than their TIMS age to 2.1% older, depending on whether the unknown or the reference received the higher alpha dose. Second, we show that by annealing both reference and unknown zircons at 850 °C for 48 h in air we can eliminate the alpha-dose-induced differences in measured 206Pb/238U. This was achieved by analyzing six reference zircons a minimum of 16 times each in two round-robin experiments, the first consisting of unannealed zircons and the second of annealed grains. The maximum offset between the LA–ICP–MS and TIMS measured 206Pb/238U for the unannealed zircons was 2.3%, which reduced to 0.5% for the annealed grains, as predicted by the within-session precision based on counting statistics. Annealing unknown and reference zircons to the same state prior to analysis holds the promise of reducing the 3% external error for LA–ICP–MS measurement of 206Pb/238U in zircon, indicated by Klötzli et al. (2009), to better than 1%, but more analyses of annealed zircons by other laboratories are required to evaluate the true potential of the annealing method.
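
For readers unfamiliar with the quantity involved, the sketch below evaluates a commonly used expression for the alpha dose a zircon accumulates from its U and Th contents over a given time; the difference in this dose between reference and unknown is what the abstract argues drives the 206Pb/238U offset. The concentrations and ages in the example are hypothetical, and the formula is the standard dose equation rather than anything specific to this paper.

```python
import math

N_A = 6.02214076e23          # Avogadro's number
LAMBDA_238 = 1.55125e-10     # decay constants, 1/yr
LAMBDA_235 = 9.8485e-10
LAMBDA_232 = 4.9475e-11

def alpha_dose(u_ppm, th_ppm, age_ma):
    """Alpha events per gram accumulated over `age_ma` million years."""
    t = age_ma * 1e6
    n_u = u_ppm * 1e-6 * N_A / 238.03       # U atoms per gram (natural U)
    n_238 = n_u * 137.88 / 138.88           # present-day 238U atoms
    n_235 = n_u * 1.0 / 138.88              # present-day 235U atoms
    n_232 = th_ppm * 1e-6 * N_A / 232.04    # present-day 232Th atoms
    return (8 * n_238 * (math.exp(LAMBDA_238 * t) - 1.0)
            + 7 * n_235 * (math.exp(LAMBDA_235 * t) - 1.0)
            + 6 * n_232 * (math.exp(LAMBDA_232 * t) - 1.0))

# Hypothetical reference vs. unknown: this dose difference is the quantity that
# correlates with the LA-ICP-MS vs. TIMS 206Pb/238U offset.
print(alpha_dose(500.0, 150.0, 560.0) - alpha_dose(900.0, 300.0, 1100.0))
```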

Relevance: 100.00%

Abstract:

A simple error-detecting and error-correcting procedure is described for words of nonbinary symbols: the error position is located using the Hamming method, and the correct symbol is substituted using a modulo-check procedure.
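
The paper's exact code is not given in the abstract, but the sketch below shows one simple way such a scheme can work for words over Z_q: modulo-q checks over Hamming-style position subsets locate the erroneous symbol, and an overall modulo-q sum check yields the error value to subtract. It assumes at most one corrupted data symbol and intact check symbols.

```python
Q = 10  # symbol alphabet Z_10 (e.g. decimal digits)

def encode(data):
    """Return (data, positional checks, overall modulo check)."""
    nbits = len(data).bit_length()
    pos_checks = [sum(d for i, d in enumerate(data, 1) if (i >> j) & 1) % Q
                  for j in range(nbits)]
    return list(data), pos_checks, sum(data) % Q

def decode(data, pos_checks, overall):
    """Correct a single corrupted data symbol and return the word."""
    data = list(data)
    e = (sum(data) - overall) % Q                 # error value from the modulo check
    if e == 0:
        return data                               # no single-symbol error detected
    # Hamming-style location: bit j of the position is set iff check j disagrees.
    pos = 0
    for j, c in enumerate(pos_checks):
        recomputed = sum(d for i, d in enumerate(data, 1) if (i >> j) & 1) % Q
        if recomputed != c:
            pos |= 1 << j
    data[pos - 1] = (data[pos - 1] - e) % Q       # substitute the correct symbol
    return data

word, checks, overall = encode([3, 1, 4, 1, 5, 9, 2])
received = word.copy()
received[4] = (received[4] + 7) % Q               # corrupt one symbol
assert decode(received, checks, overall) == word
```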

Relevance: 100.00%

Abstract:

This paper is concerned with using the bootstrap to obtain improved critical values for the error correction model (ECM) cointegration test in dynamic models. We investigate the effects of dynamic specification on the size and power of the ECM cointegration test with bootstrap critical values. The results from a Monte Carlo study show that the size of the bootstrap ECM cointegration test is close to the nominal significance level. We find that overspecification of the lag length results in a loss of power, while underspecification results in size distortion. The performance of the bootstrap ECM cointegration test deteriorates if the correct lag length is not used in the ECM, so the test is not robust to model misspecification.
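
A minimal sketch of the residual-bootstrap recipe the abstract refers to, for a bivariate ECM with one lagged difference: the test statistic is the t-ratio on the error-correction (lagged-level) coefficient, and its critical value is obtained by recomputing the statistic on pseudo-data generated under the no-cointegration null. The particular test variant, lag choices, and data-generating processes studied in the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def ecm_tstat(y, x):
    """t-ratio of g in  dy_t = a + g*y_{t-1} + b*x_{t-1} + c*dy_{t-1} + d*dx_{t-1} + u_t."""
    dy, dx = np.diff(y), np.diff(x)
    Y = dy[1:]
    X = np.column_stack([np.ones_like(Y), y[1:-1], x[1:-1], dy[:-1], dx[:-1]])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    sigma2 = resid @ resid / (len(Y) - X.shape[1])
    return beta[1] / np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])

def bootstrap_critical_value(y, x, n_boot=499, alpha=0.05):
    """Left-tail bootstrap critical value under the no-cointegration null."""
    dy, dx = np.diff(y), np.diff(x)
    Y = dy[1:]
    X0 = np.column_stack([np.ones_like(Y), dy[:-1], dx[:-1]])   # null: no level terms
    b0, *_ = np.linalg.lstsq(X0, Y, rcond=None)
    u0 = Y - X0 @ b0
    stats = []
    for _ in range(n_boot):
        u_star = rng.choice(u0, size=len(u0), replace=True)     # resample residuals
        dy_star = np.empty(len(Y) + 1)
        dy_star[0] = dy[0]
        for t in range(len(Y)):                                  # rebuild dy under the null
            dy_star[t + 1] = b0[0] + b0[1] * dy_star[t] + b0[2] * dx[t] + u_star[t]
        y_star = y[0] + np.concatenate([[0.0], np.cumsum(dy_star)])
        stats.append(ecm_tstat(y_star, x))
    return np.quantile(stats, alpha)

# Hypothetical data: x is a random walk and y is cointegrated with x.
T = 200
x = np.cumsum(rng.normal(size=T))
y = 0.8 * x + rng.normal(size=T)
print(ecm_tstat(y, x), bootstrap_critical_value(y, x))  # reject the null if the first is below the second
```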

Relevance: 100.00%

Abstract:

It is well known that n-length stabilizer quantum error correcting codes (QECCs) can be obtained via n-length classical error correction codes (CECCs) over GF(4) that are additive and self-orthogonal with respect to the trace Hermitian inner product. However, most CECCs have been studied with respect to the Euclidean inner product. In this paper, it is shown that n-length stabilizer QECCs can be constructed via n-length linear CECCs over GF(2) that are self-orthogonal with respect to the Euclidean inner product. This facilitates the use of the widely studied self-orthogonal CECCs to construct stabilizer QECCs. Moreover, classical, binary, self-orthogonal cyclic codes have been used to obtain stabilizer QECCs with guaranteed quantum error correcting capability. This is facilitated by the fact that (i) self-orthogonal binary cyclic codes are easily identified using a transform approach and (ii) lower bounds on the minimum Hamming distance of such codes are known. Several explicit codes are constructed, including two pure MDS QECCs.
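
The classical ingredient the construction relies on is easy to check in code: a binary linear code C is self-orthogonal with respect to the Euclidean inner product exactly when G·Gᵀ ≡ 0 (mod 2) for a generator matrix G. The sketch below verifies this for the [7,3] simplex code (the rows of the Hamming [7,4] parity-check matrix); the mapping from such codes to stabilizer QECCs is the subject of the paper and is not reproduced here.

```python
import numpy as np

# Generator matrix of the [7,3] simplex code: columns are all nonzero
# length-3 binary vectors.
G = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]], dtype=int)

def is_self_orthogonal(G):
    """True iff every pair of rows (including a row with itself) has even overlap,
    i.e. the code is contained in its Euclidean dual."""
    return not np.any((G @ G.T) % 2)

print(is_self_orthogonal(G))   # True: the simplex code is Euclidean self-orthogonal
```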

Relevance: 100.00%

Abstract:

In this work, we introduce convolutional codes for network-error correction in the context of coherent network coding. We give a construction of convolutional codes that correct a given set of error patterns, as long as consecutive errors are separated by a certain interval. We also give bounds on the field size and on the number of errors that can be corrected within a certain interval. Compared to previous network error correction schemes, using convolutional codes is seen to have advantages in field size and decoding technique. Some examples are discussed that illustrate the various situations that arise in this context.
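
As background for readers who have not met convolutional codes, the sketch below implements a standard rate-1/2 binary convolutional encoder (generators 7 and 5 in octal, constraint length 3). It is only the classical building block: the network-coding construction, the error-pattern sets, and the decoding procedure from the paper are not reproduced here.

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    """Encode an iterable of {0,1} bits; produces two output bits per input bit."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111          # shift the new bit into a 3-bit register
        out.append(bin(state & g1).count("1") % 2)  # parity against generator 1
        out.append(bin(state & g2).count("1") % 2)  # parity against generator 2
    return out

print(conv_encode([1, 0, 1, 1]))
```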